Precision Analysis of the ^{136}Xe Two-Neutrino ββ Spectrum in KamLAND-Zen and Its Impact on the Quenching of Nuclear Matrix Elements
Physical Review Letters (2019) | SCI Zone 1
Tohoku Univ | Univ Tokyo | Kyoto Univ | Osaka Univ | Tokushima Univ | Univ Hawaii Manoa | MIT | Triangle Univ Nucl Lab | Virginia Polytech Inst & State Univ | Comenius Univ
Abstract
We present a precision analysis of the ^{136}Xe two-neutrino ββ electron spectrum above 0.8 MeV, based on high-statistics data obtained with the KamLAND-Zen experiment. An improved formalism for the two-neutrino ββ rate allows us to measure the ratio of the leading and subleading 2νββ nuclear matrix elements (NMEs), ξ_{31}^{2ν}=-0.26_{-0.25}^{+0.31}. Theoretical predictions from the nuclear shell model and the majority of the quasiparticle random-phase approximation (QRPA) calculations are consistent with the experimental limit. However, part of the ξ_{31}^{2ν} range allowed by the QRPA is excluded by the present measurement at the 90% confidence level. Our analysis reveals that predicted ξ_{31}^{2ν} values are sensitive to the quenching of NMEs and the competing contributions from low- and high-energy states in the intermediate nucleus. Because these aspects are also at play in neutrinoless ββ decay, ξ_{31}^{2ν} provides new insights toward reliable neutrinoless ββ NMEs.
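The "improved formalism" referenced above is a Taylor-expansion description of the 2νββ rate, in which subleading distortions of the electron spectrum are controlled by ratios of Gamow-Teller NMEs. The sketch below is a hedged reconstruction of that formalism (following the Taylor-expansion approach of Šimkovic et al.); the overall normalization factors and the precise definitions of the phase-space factors G_0^{2ν}, G_2^{2ν}, G_{22}^{2ν}, and G_4^{2ν} are assumptions here and should be verified against the paper and its references.

% Hedged sketch of the Taylor-expansion 2nu-beta-beta formalism.
% Normalization conventions (e.g., the 4 m_e^3 factor) are assumptions,
% not quoted from the KamLAND-Zen paper itself.
\begin{align}
  \left[T_{1/2}^{2\nu}\right]^{-1}
    &= \left(g_A^{\mathrm{eff}}\right)^{4} \bigl|M_{GT-1}^{2\nu}\bigr|^{2}
       \Bigl\{ G_0^{2\nu} + \xi_{31}^{2\nu}\, G_2^{2\nu}
             + \tfrac{1}{3}\bigl(\xi_{31}^{2\nu}\bigr)^{2} G_{22}^{2\nu}
             + \Bigl[\tfrac{1}{3}\bigl(\xi_{31}^{2\nu}\bigr)^{2}
                     + \xi_{51}^{2\nu}\Bigr] G_4^{2\nu} \Bigr\}, \\
  \xi_{31}^{2\nu} &= \frac{M_{GT-3}^{2\nu}}{M_{GT-1}^{2\nu}},
  \qquad
  \xi_{51}^{2\nu} = \frac{M_{GT-5}^{2\nu}}{M_{GT-1}^{2\nu}}, \\
  % Sums run over the 1^+ states of the intermediate nucleus ^{136}Cs;
  % M_n is the product of Gamow-Teller matrix elements through state n,
  % and \varepsilon_n is its energy denominator.
  M_{GT-1}^{2\nu} &= \sum_n \frac{m_e\, M_n}{\varepsilon_n},
  \qquad
  M_{GT-3}^{2\nu} = \sum_n \frac{4\, m_e^{3}\, M_n}{\varepsilon_n^{3}},
  \qquad
  \varepsilon_n = E_n - \tfrac{1}{2}\bigl(E_i + E_f\bigr).
\end{align}

Because M_{GT-3}^{2ν} weights each intermediate 1^+ state by 1/ε_n^3 rather than 1/ε_n, ξ_{31}^{2ν} is far more sensitive to the lowest-lying intermediate states, which is why its measurement probes the competing low- and high-energy contributions described in the abstract.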