Exclusive Semileptonic Bs→Kℓν Decays on the Lattice
Physical Review D (2023)
University of Southampton | University of Edinburgh | Brookhaven National Laboratory | CERN | University of Siegen
Abstract
Semileptonic Bs → Kℓν decays provide an alternative b-decay channel to determine the Cabibbo-Kobayashi-Maskawa (CKM) matrix element |Vub| and to obtain an R ratio with which to investigate lepton-flavor-universality violations. Results for the CKM matrix element may also shed light on the discrepancies seen between analyses of inclusive and exclusive decays. We calculate the decay form factors using lattice QCD with domain-wall light quarks and a relativistic b quark. We analyze data at three lattice spacings with unitary pion masses down to 268 MeV. Our numerical results are interpolated/extrapolated to physical quark masses and to the continuum to obtain the vector and scalar form factors f+(q²) and f0(q²) with full error budgets at q² values spanning the range accessible in our simulations. We provide a possible explanation of tensions found between results for the form factors from different lattice collaborations. Model- and truncation-independent z-parametrization fits following a recently proposed Bayesian-inference approach extend our results to the entire allowed kinematic range. Our results can be combined with experimental measurements of Bs → Ds and Bs → K semileptonic decays to determine |Vub| = 3.8(6) × 10⁻³. The error is currently dominated by experiment. We compute differential branching fractions and two types of R ratios: the one commonly used, as well as a variant better suited to testing lepton-flavor universality.
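For orientation, the sketch below illustrates the ingredients the abstract refers to: the conformal map z(q², t0) underlying any z-parametrization, a form-factor expansion in z, and the tree-level differential rate that enters the branching fractions. It is a minimal illustration under stated assumptions, not the paper's method: it uses a conventional BCL-type truncation with a single B* pole rather than the truncation-independent Bayesian approach of the paper, the coefficients passed to f_plus are placeholders rather than fitted values, and the masses are rounded PDG numbers.

import numpy as np

# Meson masses in GeV (rounded PDG values, for illustration only)
M_BS, M_K, M_BSTAR = 5.36692, 0.493677, 5.32471

T_PLUS  = (M_BS + M_K) ** 2                                 # pair-production threshold (M_Bs + M_K)^2
T_MINUS = (M_BS - M_K) ** 2                                 # q^2_max, the zero-recoil point
T_ZERO  = T_PLUS * (1.0 - np.sqrt(1.0 - T_MINUS / T_PLUS))  # common choice minimizing max |z|

def z(q2, t0=T_ZERO):
    """Conformal map sending the cut q^2 > t+ onto the unit circle;
    the semileptonic region 0 <= q^2 <= t- lands at small |z|."""
    a, b = np.sqrt(T_PLUS - q2), np.sqrt(T_PLUS - t0)
    return (a - b) / (a + b)

def f_plus(q2, coeffs):
    """BCL-style vector form factor: a B* pole factor times a truncated
    z series, with the standard term enforcing threshold behavior.
    'coeffs' are placeholder expansion coefficients, not fit results."""
    N = len(coeffs)
    zz = z(q2)
    series = sum(
        b_n * (zz ** n - (-1) ** (n - N) * (n / N) * zz ** N)
        for n, b_n in enumerate(coeffs)
    )
    return series / (1.0 - q2 / M_BSTAR ** 2)

def dGamma_dq2(q2, coeffs, Vub):
    """Differential rate in the massless-lepton limit (e, mu), where only
    f+ contributes: dGamma/dq^2 = G_F^2 |Vub|^2 / (24 pi^3) |p_K|^3 |f+|^2."""
    GF = 1.1663787e-5  # Fermi constant in GeV^-2
    # Kaon momentum in the Bs rest frame from the Kallen function:
    pK = np.sqrt((T_PLUS - q2) * (T_MINUS - q2)) / (2.0 * M_BS)
    return GF ** 2 * Vub ** 2 / (24.0 * np.pi ** 3) * pK ** 3 * f_plus(q2, coeffs) ** 2

# Evaluate across the full kinematic range with placeholder inputs:
q2_grid = np.linspace(0.0, T_MINUS, 5)
print(f_plus(q2_grid, coeffs=[0.33, -0.7, 0.3]))
print(dGamma_dq2(q2_grid, coeffs=[0.33, -0.7, 0.3], Vub=3.8e-3))

For the tau mode entering the R ratios, the scalar form factor f0(q²) also contributes and the lepton-mass terms in the rate must be restored; the massless-lepton expression above is the simplest case.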