Dixon Techniques for Water and Fat Imaging
Journal of Magnetic Resonance Imaging (2008), SCI Q2
University of Texas MD Anderson Cancer Center
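For readers unfamiliar with the technique named in the title: the basic two-point Dixon idea combines an in-phase echo (water and fat signals add) with an opposed-phase echo (they subtract) to yield separate water and fat images. The sketch below illustrates only that combination step, assuming phase errors from B0 inhomogeneity have already been corrected; the function and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def two_point_dixon(in_phase: np.ndarray, opposed_phase: np.ndarray):
    """Basic two-point Dixon water/fat separation (illustrative sketch).

    Assumes phase errors (B0 inhomogeneity, coil phase) are already removed,
    so voxel-wise: in_phase ~ W + F and opposed_phase ~ W - F.
    """
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat

# Illustrative usage with synthetic signals (not real image data)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.uniform(0.0, 1.0, size=(4, 4))  # hypothetical water signal
    F = rng.uniform(0.0, 0.5, size=(4, 4))  # hypothetical fat signal
    water_est, fat_est = two_point_dixon(W + F, W - F)
    print(np.allclose(water_est, W), np.allclose(fat_est, F))
```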
- Pretraining has recently greatly advanced natural language processing (NLP)
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the large M6 with 10 billion parameters reaches even better performance
- We propose a method called M6 that can process information from multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines on a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on more data to explore the limits of their performance

Cited by 298
Cited by 103
Flow Compensation for the Fast Spin Echo Triple-Echo Dixon Sequence
Cited by 1
Cited by 19
Cited by 31
Cited by 88
Gadofosveset Trisodium-Enhanced Abdominal Perforator MRA
Cited by 15
Cited by 70
Abdominal MRI at 3.0 T: LAVA‐flex Compared with Conventional Fat Suppression T1‐weighted Images
Cited by 35
Hepatic MR Imaging Techniques, Optimization, and Artifacts
Cited by 18
Cited by 13
Cited by 32
Cited by 17
Anatomy Detection and Localization in 3D Medical Images
Cited by 21
Neuromuscular Imaging in Muscular Dystrophies and Other Muscle Diseases
Cited by 10
Cited by 19
Cited by 25
Clinical Applications of Advanced Magnetic Resonance Imaging Techniques for Arthritis Evaluation
Cited by 15
Mapping Brown Adipose Tissue Based on Fat Water Fraction Provided by Z‐spectral Imaging
Cited by 16
Cited by 15
Cited by 11
Robust Water Fat Separated Dual‐echo MRI by Phase‐sensitive Reconstruction
Cited by 13