Multiwavelength Observations of the Blazar PKS 0735+178 in Spatial and Temporal Coincidence with an Astrophysical Neutrino Candidate IceCube-211208A
The Astrophysical Journal (2023)
Univ Alabama | Columbia Univ | Depauw Univ | Univ Delaware | Univ Utah | DESY | Harvard & Smithsonian | NASA | Washington Univ | Calif Polytech State Univ San Luis Obispo | Penn State Univ | Univ Minnesota | Calif State Univ East Bay | Ball State Univ | McGill Univ | Univ Calif Santa Cruz | Univ Maryland | Univ Iowa | Univ Galway | Univ Coll Dublin | Univ Calif Los Angeles | Univ Potsdam | Munster Technol Univ | Purdue Univ | Indiana Univ Purdue Univ Indianapolis | Iowa State Univ | Dublin Inst Adv Studies | Univ Groningen | Univ Namibia | Univ Paris | Max Planck Inst Kernphys | Univ Tubingen | Northwest Univ | Univ PSL | Sorbonne Univ | Univ Savoie Mont Blanc | Humboldt Univ | Univ Paris Saclay | Friedrich Alexander Univ Erlangen Nurnberg | Univ Warsaw | Instytut Fizyki Jadrowej PAN | Univ Hamburg | Univ Witwatersrand | Univ Oxford | Univ Western Sydney | Univ Adelaide | Aix Marseille Univ | Ecole Polytech | Heidelberg Univ | Leopold Franzens Univ Innsbruck | Uniwersytet Jagiellonski | Nicolaus Copernicus Univ | Polish Acad Sci | Univ Montpellier | Univ Leicester | Univ Amsterdam | Yerevan Phys Inst | Konan Univ | Univ Tokyo
- Pretraining has recently driven major advances in natural language processing (NLP)
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the large M6 with 10 billion parameters achieves even better performance
- We propose M6, a model that can process information from multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines on a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on increasing amounts of data to explore the limits of their performance
