
Reduction of Couplings and Finite Unified Theories

Corfu Summer Institute 2022 School and Workshops on Elementary Particle Physics and Gravity (2023)

Universidad Autónoma de Madrid | University of Warsaw | National Centre for Nuclear Research | Universidad Nacional Autónoma de México | University of Lisbon | National Technical University of Athens

Abstract
We review the basic idea of the reduction of couplings method, both in the dimensionless sector and in the dimension-1 and dimension-2 sectors. We then show how the method applies to $N=1$ supersymmetric GUTs and, in particular, to the construction of finite theories. We present the results for two phenomenologically viable finite models: an all-loop finite $SU(5)$ SUSY GUT and a two-loop finite $SU(3)^3$ one. For each model we select three representative benchmark scenarios. In both models, the supersymmetric spectrum lies beyond the reach of the 14 TeV HL-LHC. For the $SU(5)$ model, the lower part of the parameter space will be within reach of the FCC-hh, although the heavier part will remain unobservable. For the two-loop finite $SU(3)^3$ model, larger parts of the spectrum would be accessible at the FCC-hh, although the highest possible masses would escape the searches.
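For orientation, here is a minimal sketch of the equations behind the method, written in the standard conventions of the reduction-of-couplings and finite-unified-theories literature; the symbols and normalizations below ($C^{ijk}$, $\rho^{ijk}_{(n)}$, $T(R_i)$, $C_2$) are taken from that general literature and are assumptions here, not quotations from this paper.

% Reduction of the dimensionless (Yukawa-type) couplings C^{ijk} in favor of
% the primary gauge coupling g, with the usual power-series solution
% (standard conventions, assumed):
\[
\beta_g\,\frac{dC^{ijk}}{dg} = \beta_C^{\,ijk},
\qquad
C^{ijk}(g) = g \sum_{n=0}^{\infty} \rho^{ijk}_{(n)}\, g^{2n}.
\]
% One-loop finiteness conditions of an N=1 gauge theory: vanishing one-loop
% gauge beta function and vanishing one-loop anomalous dimensions:
\[
\sum_i T(R_i) = 3\,C_2(G),
\qquad
\tfrac{1}{2}\,C_{ipq}\,C^{jpq} = 2\,\delta_i^{\,j}\,g^2\,C_2(R_i).
\]

Finite models of the kind discussed in the abstract are obtained by choosing the matter content and the reduced Yukawa couplings so that these conditions (and their higher-order counterparts) are satisfied.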
Key words
Supersymmetry

Key points: This paper reviews the reduction of couplings method and applies it to the construction of finite unified theories, presenting two phenomenologically viable finite models and analyzing their observability at current and future colliders.

Methods: The reduction of couplings method is applied, in the dimensionless sector as well as in the dimension-1 and dimension-2 sectors (sketched after this summary), to $N=1$ supersymmetric grand unified theories (SUSY GUTs), from which finite theories are constructed.

Experiments: Two models are presented: an all-loop finite $SU(5)$ SUSY GUT and a two-loop finite $SU(3)^3$ model, with three representative benchmark scenarios selected for each. In both models the supersymmetric spectrum lies beyond the reach of the 14 TeV HL-LHC. For the $SU(5)$ model, the lower part of the parameter space is expected to be within reach of the FCC-hh, although the heavier part remains unobservable; for the two-loop finite $SU(3)^3$ model, larger parts of the spectrum would be accessible at the FCC-hh, although the highest possible masses would still escape the searches.
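As a reminder of what the dimension-1 and dimension-2 sectors above refer to, the finite SUSY GUT literature also reduces the soft supersymmetry-breaking parameters; the one-loop relations usually quoted are sketched below, with $M$ the unified gaugino mass, $h^{ijk}$ the soft trilinear couplings and $m_i^2$ the soft scalar masses. These relations are assumptions taken from that general literature, not statements extracted from this paper.

% One-loop reduction of the dimension-1 (trilinear) and dimension-2 (scalar
% mass) soft terms associated with each superpotential coupling C^{ijk}
% (standard literature result, assumed):
\[
h^{ijk} = -\,M\,C^{ijk},
\qquad
m_i^{2} + m_j^{2} + m_k^{2} = M M^{\dagger}.
\]

In the finite-model literature it is this sum rule, rather than strict universality of the soft scalar masses, that constrains the sparticle spectra entering benchmark scenarios such as those described above.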