Cold Atoms in Space: Community Workshop Summary and Proposed Road-Map
EPJ Quantum Technology (2022) | SCI Q2 / Q3
University of the Balearic Islands | University of Washington | University of South Carolina | Imperial College London | CERN | Aarhus University | King’s College London | University of Belgrade | Università di Firenze | California Institute of Technology | NCSR Demokritos | University of Pisa | Kitzbühel Centre for Physics | University of Trieste | Université Bordeaux-IOGS-CNRS: UMR 5298 | Université PSL | Université Paris-Saclay | University of Valencia | Rutherford Appleton Laboratory | Universitat Autònoma de Barcelona | University of Birmingham | German Aerospace Center (DLR) | Ulm University | Johannes Gutenberg-Universität Mainz | Instituto Superior Técnico | University of Liverpool | European Space Agency | Physikalisch-Technische Bundesanstalt | University of Sussex | Istituto Nazionale di Ricerca Metrologica | Institute of Space Science | RHEA for European Space Agency | University of St Andrews | Leibniz Universität Hannover | Fermilab | Indian Institute of Technology | Peking University | University of Science and Technology of China | University of California | Antwerp University | Oak Ridge National Laboratory | University of Nevada | Italian Space Agency | University of Nis | Istituto Nazionale di Fisica Nucleare | Foundation for Research and Technology-Hellas | The University of Tokyo | Los Alamos National Laboratory | Mohammed V University in Rabat | The Johns Hopkins University | University College London | University of New South Wales | Technical University of Denmark | University of Nottingham | Northwestern University | The Pennsylvania State University | University of Cambridge | National Physical Laboratory | Wayne State University | Stanford University | University of Strathclyde | Sorbonne Université | Humboldt-Universität zu Berlin | Okinawa Institute of Science and Technology | Universität Bremen | University of Oxford | University of Bergen | Jožef Stefan Institute | University of Zurich | Vilnius University | University of Ljubljana | National Technical University of Athens | University of Latvia | Teledyne e2v | University of Wisconsin | Warsaw University of Technology | University of the Witwatersrand | Indian Institute of Science Education and Research | Brown University | National Institute of Standards and Technology | Centre National d’Etudes Spatiales | University of Warsaw | Purdue University | Autonomous University of Aguascalientes | Potsdam Institute for Climate Impact Research | Universitat de Lleida | Bates College | Fayoum University | Politecnico di Milano | University of Manchester | University of Niš | University of Warwick | Koç University | University of Glasgow | ColdQuanta | AMOLF | University of Crete | Queen’s University Belfast | Center for Theoretical Physics PAS | University College Cork | Castle Point on the Hudson | Energy Efficient AI On-Chip | University of Bologna | The University of Arizona | Scuola Internazionale Superiore di Studi Avanzati | ETH Zurich | University of Delaware | Institute of Physics of the Czech Academy of Sciences | University of Amsterdam | Heinrich-Heine-Universität | Observatoire de Paris-PSL | University of Vienna | Polish Academy of Sciences | INO-CNR | Institut de Ciències de l’Espai (ICE) | National and Kapodistrian University of Athens | University of Sydney | Florida State University | Universität Basel | Thales Alenia Space | The Barcelona Institute of Science and Technology | University of Urbino | Technical University of Darmstadt | Johannes Gutenberg-University Mainz | Münster University of Applied Sciences | University of Malta | International University of Sarajevo | National Taiwan University | Instituto de Telecomunicações | Chinese Academy of Sciences | University of Cincinnati
- Pretraining has recently driven major advances in natural language processing (NLP)
- We show that M6 outperforms the baselines in multimodal downstream tasks, and that the large M6 with 10 billion parameters reaches even better performance
- We propose a method called M6 that can process information from multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on more data to explore the limits of their performance
