Aberrant Activation of TCL1A Promotes Stem Cell Expansion in Clonal Haematopoiesis
Nature (2023) | SCI Q1
Univ Michigan | Department of Pathology | Department of Human Genetics | Program in Medical and Population Genetics | Department of Genetics | Division of Genetic Medicine | Department of Pediatrics | Department of Biochemistry | Department of Biostatistics | Department of Medicine I | Department of Medicine | Human Genome Sequencing Center | Division of Biomedical Informatics and Personalized Medicine | The Charles Bronfman Institute of Personalized Medicine | Institute for Genomic Health | University of Texas Health at Houston | Department of Preventive Medicine | University of Washington | Cardiovascular Health Research Unit | Brigham and Women’s Hospital | Channing Division of Network Medicine | Department of Internal Medicine | National Heart Lung and Blood Institute | Department of Epidemiology | Univ Groningen | Department of Epidemiology and Population Health | College of Public Health | University of Alabama at Birmingham | Intermountain Heart Institute | Department of Quantitative Health Sciences | Department of Public Health Sciences | Univ Vermont | Department of Cardiology | Genome Science | Lund Univ | Vitalant Research Institute | Brown Univ | Center for Individualized and Genomic Medicine Research (CIGMA) | Division of Biostatistics | Department of Medical Research | Division of Cardiology | Univ Calif San Francisco | Cardiovascular Research Institute | Vanderbilt Univ | Division of Public Health Sciences | Division of Hematology and Oncology | New York Genome Center | Regeneron Pharmaceut | Univ N Carolina | Broad Institute
- Pretraining has recently greatly advanced the development of natural language processing (NLP)
- We show that M6 outperforms the baselines in multimodal downstream tasks, and that the large M6 with 10 billion parameters reaches better performance
- We propose a method called M6 that is able to process information of multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baseline in a number of downstream tasks concerning both single modality and multiple modalities. We will continue the pretraining of extremely large models on more data to explore the limits of their performance
