Performance of a Spaghetti Calorimeter Prototype with Tungsten Absorber and Garnet Crystal Fibres
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment (2023)
European Org Nucl Res CERN | Univ Barcelona | Imperial Coll London | Kurchatov Inst | Univ Valencia | Natl Univ Sci & Technol | Peking Univ
Abstract
A spaghetti calorimeter (SPACAL) prototype with scintillating crystal fibres was assembled and tested with electron beams of energies from 1 to 5 GeV. The prototype comprised radiation-hard cerium-doped Gd3Al2Ga3O12 (GAGG:Ce) and Y3Al5O12 (YAG:Ce) crystal fibres embedded in a pure tungsten absorber. The energy resolution was studied as a function of the incidence angle of the beam and found to be of the order of 10%/√E ⊕ 1%, in line with the LHCb Shashlik technology. The time resolution was measured with metal-channel-dynode photomultipliers placed in contact with the fibres or coupled via a light guide, additionally testing an optical tape to glue the components. A time resolution of a few tens of picoseconds was achieved at all energies, reaching down to (18.5 ± 0.2) ps at 5 GeV.
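The abstract quotes the resolution in the conventional form σ(E)/E = a/√E ⊕ b, where ⊕ denotes addition in quadrature of the stochastic term a ≈ 10% and the constant term b ≈ 1%. A minimal sketch of how this scales over the tested beam energies, assuming that standard quadrature combination (the function name and exact term values are illustrative, not taken from the paper's fit):

```python
import math

def energy_resolution(e_gev, a=0.10, b=0.01):
    """Fractional energy resolution sigma_E/E at energy e_gev (GeV),
    combining stochastic (a/sqrt(E)) and constant (b) terms in quadrature."""
    return math.sqrt((a / math.sqrt(e_gev)) ** 2 + b ** 2)

# Resolution improves with energy, dominated by the stochastic term here.
for e in (1, 2, 3, 4, 5):
    print(f"E = {e} GeV: sigma_E/E = {100 * energy_resolution(e):.2f}%")
```

At these parameter values the constant term contributes little below 5 GeV, which is why the abstract's single-number summary 10%/√E ⊕ 1% is a compact description of the whole scan.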
Key words
Calorimetry, High energy physics (HEP), Particle detectors, Spaghetti calorimeter (SPACAL), Fibres, Scintillating crystals
Related Papers
- Ceramics (Switzerland), 2023 (cited 3 times)
- Scintillating Sampling ECAL Technology for the LHCb ECAL Upgrade II — Journal of Instrumentation, 2024
- Latest Feasibility Studies of LAPPD As a Timing Layer for the LHCb Upgrade 2 ECAL — Journal of Instrumentation, 2024
- Development of a MCP-based Timing Layer for the Upgrade 2 of the LHCb Experiment — Nuovo Cimento C, 2024
- Electroweak Measurements at LHCb with the Calorimeter Upgrade — Nuovo Cimento C, 2024