ATOMS: ALMA Three-millimeter Observations of Massive Star-forming Regions - XXI. A Large-sample Observational Study of Ethanol and Dimethyl Ether in Hot Cores
arXiv · Astrophysics of Galaxies (2025)
Abstract
Hot cores, a stage of massive star formation, exhibit abundant emission lines of complex organic molecules (COMs). We present a deep line survey of two isomers of C_2H_6O, ethanol (C_2H_5OH; EA) and dimethyl ether (CH_3OCH_3; DE), as well as their possible precursor CH_3OH, toward 60 hot cores using ALMA 3 mm line observations. EA is detected in 40 hot cores and DE in 59 hot cores; EA and DE are detected simultaneously in 39 hot cores. We calculate the rotation temperatures and column densities of EA and DE using the XCLASS software. The average rotation temperature of EA is higher than that of DE, whereas the average column density of EA is lower than that of DE. Combining our results with previous studies of hot cores and hot corinos, we find strong column density correlations between EA and DE (ρ = 0.92), EA and CH_3OH (ρ = 0.82), and DE and CH_3OH (ρ = 0.80). The EA/DE column density ratios remain nearly constant, within one order of magnitude, across the range of CH_3OH column densities. These strong correlations and stable ratios suggest that EA, DE, and CH_3OH could be chemically linked, with CH_3OH potentially serving as a precursor of EA and DE. Compared with our observations, chemical models with three different warm-up timescales systematically overproduce EA and underproduce DE. Our large-sample observations therefore provide crucial constraints on chemical models.
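To illustrate the correlation analysis summarized above, the minimal sketch below computes Spearman rank correlations and EA/DE column density ratios from a small set of hypothetical column densities. The source values and the use of scipy.stats.spearmanr are illustrative assumptions, not the paper's actual data or pipeline (the column densities themselves come from XCLASS fits).

```python
# Illustrative sketch only: hypothetical column densities (cm^-2) for a few
# example hot cores, showing how Spearman correlations between species and
# EA/DE ratios could be computed. Values are made up, not from the paper.
import numpy as np
from scipy.stats import spearmanr

N_EA   = np.array([2.1e16, 8.5e15, 3.4e16, 1.2e16, 5.6e16])   # C2H5OH (EA)
N_DE   = np.array([4.8e16, 1.9e16, 7.1e16, 2.6e16, 1.1e17])   # CH3OCH3 (DE)
N_MeOH = np.array([6.3e18, 2.4e18, 9.0e18, 3.1e18, 1.5e19])   # CH3OH

# Spearman rank correlations between species; in the abstract, strong
# positive rho values are taken to suggest a chemical link.
rho_ea_de, _   = spearmanr(N_EA, N_DE)
rho_ea_meoh, _ = spearmanr(N_EA, N_MeOH)
rho_de_meoh, _ = spearmanr(N_DE, N_MeOH)

# EA/DE column density ratio; the paper reports this stays within about
# one order of magnitude across the sampled CH3OH column densities.
ratio_ea_de = N_EA / N_DE

print(f"rho(EA, DE)    = {rho_ea_de:.2f}")
print(f"rho(EA, CH3OH) = {rho_ea_meoh:.2f}")
print(f"rho(DE, CH3OH) = {rho_de_meoh:.2f}")
print("EA/DE ratios:", np.round(ratio_ea_de, 2))
```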