Reduction of Couplings and Finite Unified Theories
CORFU SUMMER INSTITUTE 2022 SCHOOL AND WORKSHOPS ON ELEMENTARY PARTICLE PHYSICS AND GRAVITY (2023)
Univ Autonoma Madrid | Univ Warsaw | Natl Ctr Nucl Res | Univ Nacl Autonoma Mexico | Univ Lisbon | Natl Tech Univ Athens
Abstract
We review the basic idea of the reduction of couplings method, both in the dimensionless sector and in the dimension-one and dimension-two sectors. We then show how the method applies to $N=1$ supersymmetric GUTs and, in particular, to the construction of finite theories. We present results for two phenomenologically viable finite models: an all-loop finite $SU(5)$ SUSY GUT and a two-loop finite $SU(3)^3$ model. For each model we select three representative benchmark scenarios. In both models the supersymmetric spectrum lies beyond the reach of the 14 TeV HL-LHC. For the $SU(5)$ model, the lower part of the parameter space will be within reach of the FCC-hh, although the heavier part will remain unobservable. For the two-loop finite $SU(3)^3$ model, larger parts of the spectrum would be accessible at the FCC-hh, although the highest possible masses would escape the searches.
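For orientation, the reduction of couplings amounts to expressing all couplings of a theory in terms of a single primary coupling (here the gauge coupling $g$) in a renormalization-group invariant way. A minimal sketch of the standard defining relations, added here for illustration and not reproduced from the paper itself, is

$$ \beta_g \, \frac{d\lambda_a}{dg} = \beta_{\lambda_a}\,, \qquad a = 1, \dots, n\,, $$

whose solutions $\lambda_a(g)$ are sought as power series in $g$. In the $N=1$ supersymmetric case, one-loop finiteness of a gauge theory with gauge group $G$ and chiral superfields in representations $R_i$ requires

$$ \sum_i T(R_i) = 3\, C_2(G)\,, \qquad \gamma_i^{(1)} = 0\,, $$

where $T(R_i)$ is the Dynkin index of $R_i$, $C_2(G)$ is the quadratic Casimir of the adjoint representation, and $\gamma_i^{(1)}$ are the one-loop anomalous dimensions of the chiral superfields.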
Keywords
Supersymmetry