Phase Ib study of avadomide (CC‐122) in combination with rituximab in patients with relapsed/refractory diffuse large B‐cell lymphoma and follicular lymphoma
eJHaem (2022)
Department of Lymphoma and Myeloma, Division of Cancer Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA | Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada | H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida, USA | Institut Bergonié, Bordeaux Cedex, France | Mayo Clinic, Rochester, Minnesota, USA | Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy; IRCCS Humanitas Research Hospital, Humanitas Cancer Center, Rozzano, Milan, Italy | Sarah Cannon Research Institute, Nashville, Tennessee, USA | SC Ematologia, AOU Città della Salute e della Scienza di Torino, Turin, Italy | Division of Hematology and Oncology, University of Wisconsin, Madison, Wisconsin, USA | IRCCS Istituto Nazionale dei Tumori, University of Milano, Milan, Italy | Yale Cancer Center, New Haven, Connecticut, USA | Rocky Mountain Cancer Centers, The US Oncology Network, Boulder, Colorado, USA | Illinois Cancer Specialists, The US Oncology Network, Niles, Illinois, USA | Cancer Center of Santa Barbara, Santa Barbara, California, USA | Cross Cancer Institute, Edmonton, Alberta, Canada | Bristol Myers Squibb, Princeton, New Jersey, USA | Centre for Innovation and Translational Research Europe (CITRE), Bristol Myers Squibb, Seville, Spain | Institut Gustave Roussy, Villejuif, France
- Pretraining has recently driven major advances in natural language processing (NLP).
- We show that M6 outperforms the baselines in multimodal downstream tasks, and that the large M6 with 10 billion parameters reaches even better performance.
- We propose a method called M6 that can process information from multiple modalities and perform both single-modal and cross-modal understanding and generation.
- The model is scaled to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese.
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models with more data to explore the limits of their performance.
