GLP1R Drives T Cell Negative Costimulation Through an rRNA Gene Repressor Interactome
OpenAlex (2023)
University of Milan | Magna Graecia University | Humanitas University | ASST Fatebenefratelli Sacco | University of Milano-Bicocca | University of Insubria | Massachusetts General Hospital | University of Parma | Huazhong University of Science and Technology
Abstract
Glucagon-like peptide-1 receptor (GLP1R) is a key regulator of glucose metabolism, known to be expressed by pancreatic β-cells, the gastric mucosa and the hypothalamus. Recently, GLP1R mRNA has been detected in T lymphocytes; however, its function in these cells remains poorly understood. We therefore investigated the role of GLP1R in T lymphocytes during the immune response. Our data showed that a subset of naïve CD4+ and CD8+ T lymphocytes expresses GLP1R in humans and mice, and that GLP1R displays a unique interactome pattern in T lymphocytes, distinct from that of pancreatic β-cells. Interestingly, the presence of GLP1R delineates a population of activated T cells that is more prone to cell death. GLP1R is upregulated in vitro and in vivo during the alloimmune response, similarly to PD-1 and CTLA4. When C57BL/6 mice received islet or cardiac allotransplants from fully mismatched BALB/c donors, GLP1R+ CD4+ and CD8+ T cells expanded in the spleen and infiltrated the graft. Importantly, signaling through GLP1R with an agonist upregulated T cell apoptosis and senescence pathways, significantly prolonged cardiac and islet allograft survival, and reduced graft T lymphocyte infiltration. The gene-repression protein Baz2a appeared to drive this GLP1R-dependent T cell negative costimulation. Genetic GLP1R gain of function was associated with T cell activation and cell death, while GLP1R knockout accelerated chronic allograft rejection. GLP1R thus acts as a T cell negative costimulatory molecule, and GLP1R signaling prolongs allograft survival, mitigates the alloimmune response and reduces T lymphocyte graft infiltration.
Key words
Glucose Control