
Performance of Optically Readout GEM-based TPC with a 55Fe Source

Journal of Instrumentation (2019)

Istituto Nazionale di Fisica Nucleare (INFN) | Gran Sasso Science Institute | Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi | Università Roma Tre

Abstract
Optical readout of large Time Projection Chambers (TPCs) with multiple Gas Electron Multiplier (GEM) amplification stages has been shown to provide very interesting performance for high-energy particle tracking. Proposed applications to low-energy and rare-event studies, such as Dark Matter searches, demand good performance in the keV energy range. The performance of such a readout was studied in detail as a function of the electric field configuration and the GEM gain by using a 55Fe source within a 7 litre sensitive-volume detector developed as part of the R&D for the CYGNUS project. The results reported in this paper show that the low noise level of the sensor allows operation with a 2 keV threshold while keeping the rate of fake events below 10 per year. In this configuration, a detection efficiency well above 95%, together with an energy resolution (σ) of 18%, is obtained for the 5.9 keV photons, demonstrating the very promising capabilities of this technique.
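As a reader's back-of-envelope check (not part of the paper), the quoted relative resolution can be converted into an absolute peak width at the 5.9 keV line; the sketch below assumes a Gaussian peak shape, for which FWHM ≈ 2.355 σ.

```python
# Back-of-envelope conversion of the quoted energy resolution (sigma/E = 18%)
# at the 5.9 keV 55Fe line into absolute sigma and FWHM.
# Assumes a Gaussian peak; the numbers are those quoted in the abstract above.

E_LINE_KEV = 5.9    # 55Fe photon energy quoted in the abstract (keV)
REL_SIGMA = 0.18    # relative energy resolution (sigma/E) quoted in the abstract

sigma_kev = REL_SIGMA * E_LINE_KEV   # absolute sigma in keV
fwhm_kev = 2.355 * sigma_kev         # FWHM for a Gaussian peak, 2*sqrt(2*ln2) ~ 2.355

print(f"sigma ~ {sigma_kev:.2f} keV, FWHM ~ {fwhm_kev:.2f} keV at {E_LINE_KEV} keV")
# prints: sigma ~ 1.06 keV, FWHM ~ 2.50 keV at 5.9 keV
```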
Key words
Dark Matter detectors (WIMPs, axions, etc.), Charge transport and multiplication in gas, Optical detector readout concepts, Micropattern gaseous detectors (MSGC, GEM, THGEM, RETHGEM, MHSP, MICROPIC, MICROMEGAS, InGrid, etc.)