Shapes of Galaxies Hosting Radio-Loud AGNs with z ≤ 1
Astronomy and Astrophysics (2022), SCI Q2
Leiden Univ | Univ Ghent | Royal Observ
Abstract
Links between the properties of radio-loud active galactic nuclei (RLAGNs) and the morphology of their hosts may provide important clues to our understanding of how RLAGNs are triggered. In this work, focusing on passive galaxies, we study the shapes of the hosts of RLAGNs selected from the Karl G. Jansky Very Large Array Cosmic Evolution Survey (VLA-COSMOS) 3 GHz Large Project and compare them with previous results based on the first data release (DR1) of the LOFAR Two-metre Sky Survey (LoTSS). We find that, at redshifts between 0.6 and 1, high-luminosity ($L_{\rm 1.4\,GHz} \gtrsim 10^{24}\,\rm W\,Hz^{-1}$) RLAGNs have a wider range of optical projected axis ratios than their low-redshift counterparts, which are found almost exclusively in round galaxies with axis ratios higher than 0.7. We construct control samples and show that although the hosts of the most luminous high-redshift RLAGNs are still rounder than non-RLAGNs, they are on average more elongated (smaller axis ratio) than local RLAGNs with similar stellar masses and radio luminosities. This evolution can be interpreted as a byproduct of radio luminosity evolution, namely that galaxies at fixed stellar mass are more radio luminous at high redshift: artificially increasing the radio luminosities of local galaxies ($z \leq 0.3$) by a factor of 2 to 4 removes the observed evolution of the axis ratio distribution. If this interpretation is correct, the implication is that the link between AGN radio luminosity and host galaxy shape at $z \simeq 1$ is similar to that in the present-day Universe.
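The luminosity-scaling test at the end of the abstract can be illustrated with a short simulation. The Python sketch below is purely illustrative and not the authors' pipeline: it generates synthetic local and high-redshift samples with an assumed toy relation between host axis ratio and radio luminosity, boosts the local 1.4 GHz luminosities by a trial factor, reselects RLAGNs above the 10^24 W Hz^-1 threshold, and compares the resulting axis-ratio distributions with a two-sample Kolmogorov-Smirnov test. All variable names, the selection threshold's application, and the synthetic data are assumptions for demonstration.

```python
# Hypothetical sketch of the luminosity-boost test described in the abstract.
# The data here are synthetic; only the test's logic mirrors the paper's idea.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
n = 5000

# Toy samples: projected axis ratio q and 1.4 GHz luminosity L (W/Hz),
# with an assumed trend that rounder hosts harbor more luminous RLAGNs.
local_q = rng.uniform(0.2, 1.0, n)
local_L = 10 ** (22.5 + 2.5 * local_q + rng.normal(0, 0.4, n))
highz_q = rng.uniform(0.2, 1.0, n)
# Assumed luminosity evolution: high-z galaxies are ~3x more radio luminous.
highz_L = 10 ** (22.5 + 2.5 * highz_q + rng.normal(0, 0.4, n)) * 3.0

L_cut = 1e24  # RLAGN selection threshold, L_1.4GHz >= 10^24 W/Hz

def rlagn_axis_ratios(q, L, boost=1.0):
    """Axis ratios of galaxies selected as RLAGNs after boosting L by `boost`."""
    return q[L * boost >= L_cut]

highz_sel = rlagn_axis_ratios(highz_q, highz_L)
for boost in (1.0, 2.0, 3.0, 4.0):
    local_sel = rlagn_axis_ratios(local_q, local_L, boost)
    stat, p = ks_2samp(local_sel, highz_sel)
    print(f"boost={boost:.0f}x  KS stat={stat:.3f}  p={p:.3f}")
```

In this toy setup, a boost of roughly 3x brings the local axis-ratio distribution into agreement with the high-redshift one (high KS p-value), mimicking how a factor of 2 to 4 removes the observed evolution in the paper's analysis.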
Key words
galaxies: active, galaxies: fundamental parameters, galaxies: structure, galaxies: high-redshift