Canadian Surgery Forum: Abstracts of Presentations to the Annual Meetings of the Canadian Association of Bariatric Physicians and Surgeons, Canadian Association of General Surgeons, Canadian Association of Thoracic Surgeons, Canadian Hepato-Pancreato-Biliary Society, Canadian Society of Surgical Oncology, Canadian Society of Colon and Rectal Surgeons, London, Ont. Sept. 15-18, 2011.
Canadian Journal of Surgery / Journal canadien de chirurgie (2011)
From the University of Alberta | From Canadian Surgical Technologies and Advanced Robotics | From McGill University | From the University of Toronto | From the University Health Network | From the McGill University Health Centre | From St. Michael's Hospital | From McMaster University | From Sunnybrook Health Sciences Centre | From the University of Manitoba | From the University of Western Ontario | From The Ottawa Hospital | From the University of Calgary | From the University of Saskatchewan | From the Canadian Forces Medical Service and University of Western Ontario | From McGill University and the Jewish General Hospital | From the University of British Columbia | From the Foothills Medical Centre and Tom Baker Cancer Centre | From the Surrey Memorial Hospital | From the Jewish General Hospital | From the Chatham Kent Health Alliance | From the Brown University School of Medicine | From the London Health Sciences Centre and CSTAR | From Queen's University | From the Toronto General Hospital | From the Roswell Park Cancer Institute | From the University of Ottawa | From Hôpital Maisonneuve-Rosemont | From the McGill Medical School | From the Juravinski Cancer Centre | From McMaster University, Hamilton | From the Winnipeg Regional Health Authority
- Pretraining has recently driven rapid progress in natural language processing (NLP)
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the large M6 with 10 billion parameters achieves even stronger performance
- We propose M6, a model able to process information from multiple modalities and perform both single-modal and cross-modal understanding and generation (a minimal sketch of this unified design appears after this list)
- The model is scaled to 10 billion parameters with sophisticated deployment techniques, and this 10-billion-parameter M6-large is the largest pretrained model in Chinese (a back-of-envelope parameter count follows the sketch below)
- Experimental results show that our proposed M6 outperforms the baselines on a number of downstream tasks involving both single and multiple modalities. We will continue pretraining even larger models on more data to explore the limits of their performance
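The bullets above describe a single transformer that handles both image and text inputs. Below is a minimal PyTorch sketch of that general idea, in which image patches and text tokens are embedded into a shared space and processed as one sequence. Everything here (the class name `UnifiedMultimodalEncoder`, the hyperparameters, the patch size) is an illustrative assumption, not the authors' released M6 implementation.

```python
import torch
import torch.nn as nn

class UnifiedMultimodalEncoder(nn.Module):
    """Sketch of a unified encoder: image patches + text tokens in one transformer."""

    def __init__(self, vocab_size=30000, d_model=512, n_heads=8, n_layers=6,
                 patch_dim=3 * 16 * 16, max_len=512):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)   # text tokens -> d_model
        self.patch_embed = nn.Linear(patch_dim, d_model)      # flattened patches -> d_model
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, patches, token_ids):
        # patches: (batch, n_patches, patch_dim); token_ids: (batch, n_tokens)
        img = self.patch_embed(patches)
        txt = self.text_embed(token_ids)
        x = torch.cat([img, txt], dim=1)                      # one shared sequence
        x = x + self.pos_embed[:, : x.size(1)]
        return self.encoder(x)                                # contextualized multimodal states

# Usage: a toy batch of 2 images (196 patches of 16x16x3) with 10-token captions.
model = UnifiedMultimodalEncoder()
out = model(torch.randn(2, 196, 3 * 16 * 16), torch.randint(0, 30000, (2, 10)))
print(out.shape)  # torch.Size([2, 206, 512])
```

Because the two modalities share one sequence and one set of attention layers, the same backbone can be reused for single-modal tasks (feed only one modality) and cross-modal tasks (feed both), which is the property the second bullet claims.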

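To make the 10-billion-parameter claim concrete, here is a back-of-envelope check using the standard estimate that a transformer block holds roughly 12·d² parameters (4·d² for the attention projections, 8·d² for a 4×-wide feed-forward network). The configuration below is my own hypothetical example, not M6's actual architecture, and it ignores embeddings and biases.

```python
def transformer_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a decoder/encoder stack, excluding embeddings."""
    attention = 4 * d_model ** 2   # Q, K, V and output projections
    ffn = 8 * d_model ** 2         # two linear layers with 4x hidden width
    return n_layers * (attention + ffn)

# One hypothetical configuration that lands near 10B parameters:
print(f"{transformer_params(48, 4096) / 1e9:.2f}B")  # ~9.66B
```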