Determination of the Pseudoscalar Decay Constant f_{D_s^+} via D_s^+ → μ^+ ν_μ
Physical Review Letters (2019), SCI Q1
Institute of High Energy Physics | G.I. Budker Institute of Nuclear Physics SB RAS (BINP) | Helmholtz Institute Mainz | Bochum Ruhr-University | University of Turin | Southeast University | Joint Institute for Nuclear Research | INFN Laboratori Nazionali di Frascati | Peking University | Institute of Physics and Technology | Indiana University | Carnegie Mellon University | Johannes Gutenberg University of Mainz | INFN Sezione di Ferrara | Wuhan University | Ankara University | Istanbul Bilgi University | INFN | University of South China | Nanjing University | Shanghai Jiao Tong University | Liaoning University | Nankai University | Zhengzhou University | Tsinghua University | University of Eastern Piedmont | Central China Normal University | University of Minnesota | GSI Helmholtzcentre for Heavy Ion Research GmbH | Guangxi University | Nanjing Normal University | Shandong Normal University | KVI-CART | Henan Normal University | University of Hawaii | Shandong University | University of the Punjab | Uppsala University | Huangshan College | University of Jinan | University of Muenster | Soochow University | Beijing Institute of Petrochemical Technology | China Center of Advanced Science and Technology | Sun Yat-Sen University | Sichuan University | Guangxi Normal University | Indian Institute of Technology Madras | Shanxi University | Henan University of Science and Technology | University of Chinese Academy of Sciences | Lanzhou University | Zhejiang University | INFN and University of Perugia | COMSATS University Islamabad | Seoul National University | Beihang University | University of Ferrara | Hunan Normal University | Uludag University | Near East University | Hunan University | Hangzhou Normal University | Xinyang Normal University | University of Science and Technology of China | University of Science and Technology Liaoning
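For context on the title: in the standard model, the decay constant named there is conventionally extracted from the measured branching fraction through the leptonic partial width below. This is the textbook formula, not quoted from this listing; G_F is the Fermi constant, |V_cs| the relevant CKM matrix element, and m_ℓ, m_{D_s^+} the lepton and meson masses.

```latex
% Standard-model partial width for D_s^+ -> l^+ nu_l (textbook relation,
% stated here for context; not quoted from this listing).
\Gamma(D_s^+ \to \ell^+ \nu_\ell) =
  \frac{G_F^2}{8\pi}\, f_{D_s^+}^2\, |V_{cs}|^2\,
  m_\ell^2\, m_{D_s^+} \left(1 - \frac{m_\ell^2}{m_{D_s^+}^2}\right)^{2}
```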
- Pretraining has recently driven substantial progress in natural language processing (NLP)
- We show that M6 outperforms the baselines in multimodal downstream tasks, and that the large M6 with 10 billion parameters reaches even better performance
- We propose M6, a method able to process information from multiple modalities and to perform both single-modal and cross-modal understanding and generation (see the sketch after this list)
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks concerning both single modality and multiple modalities. We will continue the pretraining of extremely large models on more data to explore the limits of their performance
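The bullets above do not spell out the architecture, so the following is only a minimal, generic sketch of the single-stream idea they describe: image patch features and text tokens are embedded into one sequence and processed by a shared transformer, which is what lets self-attention mix modalities. Everything here (the class name, `PATCH_DIM`, `D_MODEL`, the toy vocabulary and sizes) is an illustrative assumption, not the authors' code.

```python
# Minimal single-stream multimodal transformer sketch (assumed, illustrative).
import torch
import torch.nn as nn

VOCAB, D_MODEL, PATCH_DIM, N_PATCHES = 1000, 128, 768, 16

class TinyUnifiedTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_embed = nn.Embedding(VOCAB, D_MODEL)    # token ids -> vectors
        self.patch_proj = nn.Linear(PATCH_DIM, D_MODEL)   # patch features -> same space
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patch_feats, token_ids):
        # Concatenate both modalities into one sequence so self-attention
        # can attend across them (the "cross-modal understanding" claimed above).
        seq = torch.cat([self.patch_proj(patch_feats),
                         self.text_embed(token_ids)], dim=1)
        return self.encoder(seq)

model = TinyUnifiedTransformer()
out = model(torch.randn(2, N_PATCHES, PATCH_DIM),    # fake image patch features
            torch.randint(0, VOCAB, (2, 12)))        # fake token ids
print(out.shape)  # torch.Size([2, 28, 128]): 16 patches + 12 tokens, fused
```

Scaling such a model to billions of parameters, as the bullets claim for M6-large, changes the layer count, width, and deployment machinery but not this basic fused-sequence structure.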
