Status and Prospects of Discovery of 0νββ Decay with the CUORE Detector
Nuovo Cimento C - Colloquia and Communications in Physics (2023)
Univ South Carolina | Virginia Polytech Inst & State Univ | INFN | Sapienza Univ Roma | Univ Calif Berkeley | Fudan Univ | Lawrence Berkeley Natl Lab | Univ Milano Bicocca | Univ Paris Saclay | Calif Polytech State Univ San Luis Obispo | Shanghai Jiao Tong Univ | Yale Univ | Univ Calif Los Angeles | MIT | Johns Hopkins Univ | Lawrence Livermore Natl Lab
Abstract
In this contribution we present the achievements of the CUORE experiment so far. It is the first tonne-scale bolometric detector, and it has been in stable data taking since 2018. We have collected about 1800 kg·yr of exposure, of which more than 1 tonne·yr has been analysed. The CUORE detector is designed to search for the neutrinoless double-beta decay (0νββ) of the 130Te isotope. This is a beyond-Standard-Model process which could establish whether the neutrino is a Dirac or a Majorana particle. It is an alternative mode to the two-neutrino double-beta decay (2νββ), a rare decay which has been precisely measured by CUORE in 130Te. We found no evidence of 0νββ and set a Bayesian lower limit of 2.2 × 10^25 yr on its half-life. The expertise achieved by CUORE sets a milestone for any future bolometric detector, including CUPID, the planned next-generation experiment searching for 0νββ with scintillating bolometers.
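The Bayesian lower limit quoted above comes from a likelihood analysis of events in the region of interest around the 130Te Q-value. As a rough, self-contained illustration of how such a bound arises, the Python sketch below computes a 90% credible lower limit on the half-life from a toy counting experiment with a flat prior on the decay rate. All numerical inputs (exposure, efficiency, background, observed counts) are placeholders, not CUORE's published analysis values.

import numpy as np

# Toy Bayesian half-life limit for a counting experiment.
# ALL numbers below are illustrative placeholders, not CUORE's
# published analysis inputs.

ln2 = np.log(2.0)
N_A = 6.022e23              # Avogadro's number [1/mol]

exposure_kg_yr = 1000.0     # TeO2 exposure [kg*yr] (placeholder)
molar_mass_kg  = 159.6e-3   # TeO2 molar mass [kg/mol]
abundance      = 0.342      # natural 130Te isotopic abundance
efficiency     = 0.9        # total signal efficiency (placeholder)

# Number of candidate 130Te nuclei times live time [nuclei*yr]
N_t = exposure_kg_yr / molar_mass_kg * N_A * abundance * efficiency

n_obs = 5                   # observed events in the ROI (placeholder)
b_exp = 5.0                 # expected background events (placeholder)

# Flat prior on the decay rate Gamma >= 0, Poisson likelihood:
# P(Gamma | n) ~ (b + Gamma*N_t)^n * exp(-(b + Gamma*N_t))
gamma = np.linspace(0.0, 20.0 / N_t, 200_000)
mu = b_exp + gamma * N_t
log_post = n_obs * np.log(mu) - mu
post = np.exp(log_post - log_post.max())

# 90% credible upper limit on Gamma -> lower limit on T_1/2 = ln2/Gamma
cdf = np.cumsum(post)
cdf /= cdf[-1]
gamma_90 = gamma[np.searchsorted(cdf, 0.90)]
print(f"T_1/2 > {ln2 / gamma_90:.2e} yr (90% credible, toy inputs)")

CUORE's published limit comes from a more sophisticated simultaneous fit across its datasets; the sketch only illustrates the logic of turning an observed event count, a background expectation, and an exposure into a half-life bound.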