LBP-026 The Global Prospective Observational Study to Evaluate the Role of Liver Transplantation in Patients with Cirrhosis and Severe Acute-on-Chronic Liver Failure (CHANCE): Study Design, Demographics and Overall Outcome
Journal of Hepatology (2024)
Univ Libre Bruxelles | Kings Coll Hosp London | European Fdn Study Chron Liver Failure EF CLIF | Univ Sao Paulo | Cedars Sinai Med Ctr | Hosp Italiano Buenos Aires | Royal Free Hosp | Rela Inst Liver Dis & Transplantat | Dr Rela Inst & Med Ctr | Amrita Inst Med Sci | Hosp Clin Barcelona | Univ Barcelona | Liver Transplant Ctr GOM Niguarda | Univ Toronto | Univ Utah | TGMG | Tampa Gen Hosp | Hosp Univ Austral | Cleveland Clin | Hop Trousseau | Med Univ Warsaw | Hosp Aleman | Univ Hosp Tubingen | Alma Mater Studiorum Univ Bologna | Fdn IRCCS Ca Granda Osped Maggiore Policlin | Hosp Univ & Politecn La Fe | Univ Colorado | Baylor St Lukes Med Ctr | Strasbourg Univ Hosp | Natl Inst Med Sci & Nutr Salvador Zubiran | Hop Pontchaillou | Univ Maryland | Univ Fed Ceara | Inonu Univ | Fed Univ Hlth Sci Porto Alegre | Univ Munster | Kyushu Univ Hosp | Hop Beaujon | Hop Univ Pitie Salpetriere | Univ Chicago Med | Hosp Univ Vall dHebron | Hiroshima Univ Hosp | Nagasaki Univ | Charite Univ Med Berlin | Univ Med Ctr Hamburg Eppendorf | Austin Hlth | Montpellier Univ | 12 Octubre Univ Hosp | Queen Elizabeth Hosp Birmingham | Medanta Medic | Royal Prince Alfred Hosp | Ghent Univ Hosp | Leeds Teaching Hosp | Royal Infirm | Univ Tokyo | Kaohsiung Chang Gung Mem Hosp | Med Univ Vienna | Hosp Pablo Tobon Uribe | Piedmont Atlanta Hosp | Int Liver Ctr | Michael E DeBakey VA Med Ctr | Paris Est Univ | Hop Paul Brousse | Hosp La Paz | Univ Paris Saclay | Univ Alberta | European Fdn Study Chron Liver Failure
- Pretraining has recently greatly advanced the development of natural language processing (NLP)
- We show that M6 outperforms the baselines in multimodal downstream tasks, and the large M6 with 10 billion parameters achieves better performance
- We propose a method called M6 that is able to process information of multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks concerning both single modality and multiple modalities. We will continue the pretraining of extremely large models by increasing the data to explore the limit of its performance
