
Extracting Structured Information from Unstructured Histopathology Reports Using Generative Pre-Trained Transformer 4 (GPT-4)

The Journal of Pathology (2024)

Univ Hosp RWTH Aachen | Tech Univ Dresden | Univ Hosp Erlangen | Charite Univ Med Berlin | Univ Med Ctr Mainz

Abstract
Deep learning applied to whole-slide histopathology images (WSIs) has the potential to enhance precision oncology and alleviate the workload of experts. However, developing these models necessitates large amounts of data with ground truth labels, which can be both time-consuming and expensive to obtain. Pathology reports are typically unstructured or poorly structured texts, and efforts to implement structured reporting templates have been unsuccessful, as these efforts lead to perceived extra workload. In this study, we hypothesised that large language models (LLMs), such as the generative pre-trained transformer 4 (GPT-4), can extract structured data from unstructured plain language reports using a zero-shot approach without requiring any re-training. We tested this hypothesis by utilising GPT-4 to extract information from histopathological reports, focusing on two extensive sets of pathology reports for colorectal cancer and glioblastoma. We found a high concordance between LLM-generated structured data and human-generated structured data. Consequently, LLMs could potentially be employed routinely to extract ground truth data for machine learning from unstructured pathology reports in the future. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
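To make the zero-shot idea described above concrete, the following is a minimal, illustrative sketch of how a single free-text pathology report could be turned into a small JSON record with GPT-4. It is not the authors' pipeline: the OpenAI Python SDK (>= 1.0), the prompt wording, and the field names (tumour_type, grade, resection_margin) are assumptions made for illustration only.

```python
# Illustrative sketch of zero-shot structured extraction from one report.
# Assumes the openai Python SDK >= 1.0 and a hypothetical three-field schema.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Extract the following fields from the pathology report below and "
    "return them as JSON with exactly these keys: tumour_type, grade, "
    "resection_margin. Use null if a field is not mentioned.\n\n"
    "Report:\n{report}"
)

def extract_structured_fields(report_text: str) -> dict:
    """Ask GPT-4 to convert one free-text report into a small JSON record."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output helps reproducibility
        messages=[
            {"role": "user", "content": PROMPT_TEMPLATE.format(report=report_text)}
        ],
    )
    # The reply is expected to be a JSON object; parse it into a dict.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example = "Moderately differentiated colorectal adenocarcinoma, margins free of tumour."
    print(extract_structured_fields(example))
```

In practice the model's reply may need light post-processing (for example, stripping markdown code fences) before json.loads succeeds; the sketch omits such guards for brevity.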
Key words
artificial intelligence, large language models, natural language processing, named entity recognition, text mining, pathology report

Key points: This study proposes using the large language model GPT-4, via a zero-shot approach, to extract structured information directly from unstructured pathology reports, reducing expert workload and improving the effectiveness of precision oncology.

Methods: The study uses GPT-4, a pre-trained language model that performs zero-shot extraction without any re-training.

Experiments: Experiments were conducted on two extensive sets of pathology reports, for colorectal cancer and glioblastoma; the results show that the structured data generated by GPT-4 are highly concordant with structured data generated by humans.
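The concordance reported above could, in principle, be quantified per field once both LLM-extracted and human-extracted records are available. The sketch below is a hypothetical illustration, not the paper's evaluation code; the field names mirror the earlier sketch, and Cohen's kappa is taken from scikit-learn.

```python
# Illustrative per-field concordance between LLM- and human-extracted records.
# Field names are hypothetical; records are paired lists of dicts.
from sklearn.metrics import cohen_kappa_score

FIELDS = ["tumour_type", "grade", "resection_margin"]

def concordance(llm_records: list[dict], human_records: list[dict]) -> dict:
    """Per-field exact-match rate and Cohen's kappa across paired records."""
    results = {}
    for field in FIELDS:
        llm_vals = [str(r.get(field)) for r in llm_records]
        human_vals = [str(r.get(field)) for r in human_records]
        matches = sum(a == b for a, b in zip(llm_vals, human_vals))
        results[field] = {
            "exact_match": matches / len(human_vals),
            "kappa": cohen_kappa_score(llm_vals, human_vals),
        }
    return results
```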