The Two Rings of (50000) Quaoar
Astronomy and Astrophysics (2023)
Observ Nacl MCTI | Univ Paris | UCF | Space Telescope Sci Inst | Fed Univ Technol Parana UTFPR Curitiba | Lab Interinst Eastron LIneA | CSIC | Aix Marseille Univ | Inst Polytech Sci Avancees IPSA | Akdeniz Univ | Univ Montreal | Harvard Smithsonian Ctr Astrophys | Gemini Observ | Cabrillo Coll Astron | Southwest Res Inst | Int Occultat Timing Assoc IOTA | CALTECH | Univ Nacl Autonoma Mexico | Univ Oregon | NASA Ames Res Ctr | Univ New Haven | Canada France Hawaii Telescope | Univ Colorado | Tohoku University | Univ Victoria | Institute of Astronomy and Astrophysics | Unistellar | Wesleyan Univ | Purdue Univ Northwest | Univ Virginia | Private Observ | Univ La Laguna | Univ Occupat & Environm Hlth | Naylor Observ
- Pretraining has recently driven great progress in natural language processing (NLP)
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the large-scale M6 with 10 billion parameters reaches even better performance
- We propose M6, a model that processes information from multiple modalities and performs both single-modal and cross-modal understanding and generation; a minimal sketch of this unified-sequence idea follows the list
- The model is scaled to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that the proposed M6 outperforms the baselines on a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on more data to explore the limits of their performance
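
The bullets above describe a single model that ingests both visual and textual inputs and handles understanding and generation across modalities. As a rough illustration only (not the authors' M6 code), the sketch below shows one common way to realize such a unified multimodal transformer: project image-patch features and embed text tokens into a shared space, then feed them as one sequence to a joint self-attention encoder. All class names, dimensions, and the tiny configuration are illustrative assumptions.

```python
# Minimal sketch (assumed, not the M6 implementation) of treating image
# patches and text tokens as one sequence in a shared transformer encoder.
import torch
import torch.nn as nn

class TinyMultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=1000, patch_dim=768, d_model=256,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)  # text tokens
        self.patch_proj = nn.Linear(patch_dim, d_model)        # image patch features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, patch_feats, token_ids):
        # Map both modalities into the same embedding space, concatenate
        # into a single sequence, and run joint self-attention over it.
        img = self.patch_proj(patch_feats)   # (B, P, d_model)
        txt = self.token_embed(token_ids)    # (B, T, d_model)
        seq = torch.cat([img, txt], dim=1)   # (B, P+T, d_model)
        return self.encoder(seq)

# Usage: 2 samples, each with 16 image patches and 8 text tokens.
model = TinyMultimodalEncoder()
patches = torch.randn(2, 16, 768)
tokens = torch.randint(0, 1000, (2, 8))
out = model(patches, tokens)
print(out.shape)  # torch.Size([2, 24, 256])
```

Because both modalities live in one sequence, the same encoder output can back single-modal tasks (attend only to text positions) and cross-modal tasks (attend across image and text positions), which is the behavior the summary attributes to M6.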
