Long-term Temporal Stability of the DarkSide-50 Dark Matter Detector
JOURNAL OF INSTRUMENTATION (2024)
Royal Holloway Univ London | Univ Sao Paulo | Pacific Northwest Natl Lab | Augustana Univ | INFN Pisa | Fermilab Natl Accelerator Lab | INFN | INFN Cagliari | Univ Genoa | INFN Roma Tre | Princeton Univ | INFN Genova | Kings Coll London | Univ Federico II Napoli | Lomonosov Moscow State Univ | Univ Milan | St Petersburg Nucl Phys Inst | Univ Massachusetts | Univ Sassari | Univ Paris | Univ Paris Diderot | Natl Res Ctr Kurchatov Inst | Inst High Energy Phys | Aix Marseille Univ | Nicolaus Copernicus Astron Ctr | Univ Houston | Black Hills State Univ | Joint Inst Nucl Res | Belgorod Natl Res Univ | Univ Calif Los Angeles | Univ Hawaii | Univ Paris Saclay | Univ Perugia | Univ Calif Davis | Univ Manchester | Virginia Tech | Univ Cagliari | Univ Calif Riverside | Jagiellonian Univ
- Pretraining has recently greatly advanced the development of natural language processing (NLP).
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the large M6 with 10 billion parameters reaches even better performance.
- We propose a method called M6 that can process information from multiple modalities and perform both single-modal and cross-modal understanding and generation.
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese.
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on more data to explore the limits of their performance.
