EFFICACY AND SAFETY OF FRONT-LINE TREATMENT WITH IBRUTINIB AND RITUXIMAB IN UNFIT PATIENTS WITH CHRONIC LYMPHOCYTIC LEUKEMIA (CLL). FIRST REPORT OF THE GIMEMA LLC1114 STUDY
Haematologica (2019) | SCI Q1 | SCI Q2
Sapienza Univ | Pugliese Ciaccio Hosp | Italian Grp Adult Hematol Dis GIMEMA | Fdn IRCCS Ca Granda Osped Maggiore Policlin | Univ Padua | Osped Cardinal Massaia | SS Antonio & Biagio & Cesare Arrigo Hosp | Univ Torino | Univ Modena & Reggio Emilia | Univ Perugia | Univ Piemonte Orientale | Azienda Osped Univ Citta Salute & Sci | Azienda Osped Bianchi Melacrino Morelli | PO V Fazzi | AOU S Giovanni Battista | CTMO Univ | AUSL IRCCS S Maria Nuova Reggio Emilia | Seragnoli Univ Bologna | Azienda Osped Papardo | Univ Siena | Infermi Hosp | Cosenza Hosp | Azienda Policlin OVE | Fdn Policlin Univ A Gemelli | Fdn IRCCS Policlin San Matteo | Azienda Osped Brotzu | Santa Maria delle Croci Hosp | Univ Bari | St Anna Univ Hosp
- Pretraining has recently driven substantial progress in natural language processing (NLP)
- We show that M6 outperforms the baselines in multimodal downstream tasks, and the large M6 with 10 billion parameters achieves even better performance
- We propose a method called M6 that can process information from multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled to a large model with 10 billion parameters through sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks involving both single modality and multiple modalities. We will continue pretraining extremely large models on more data to explore the limits of their performance
