Unsupervised Detection and Correction of Model Calibration Shift at Test-Time

2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC(2023)

Abstract
The wide adoption of predictive models into clinical practice requires generalizability across hospitals and maintenance of consistent performance over time. Model calibration shift, caused by factors such as changes in prevalence rates or data distribution shift, can affect the generalizability of such models. In this work, we propose a model calibration detection and correction (CaDC) method, specifically designed to use only unlabeled data at a target hospital. The proposed method is highly flexible and can be used alongside any deep learning-based clinical predictive model. As a case study, we focus on the problem of detecting and correcting model calibration shift in the context of early prediction of sepsis. Three patient cohorts consisting of 545,089 adult patients admitted to the emergency departments of three geographically diverse healthcare systems in the United States were used to train and externally validate the proposed method. We show that the CaDC model can assist the sepsis prediction model in achieving a predefined positive predictive value (PPV). For instance, when trained to achieve a PPV of 20%, the performance of the sepsis prediction model with and without the calibration shift estimation model was 18.0% vs 12.9% and 23.1% vs 13.4% on the two external validation cohorts, respectively. As such, the proposed CaDC method has potential applications in maintaining performance claims of predictive models deployed across hospital systems.
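The abstract's notion of "trained to achieve a predefined PPV" can be made concrete with a small sketch. The following is not the paper's CaDC method (which operates on unlabeled target-hospital data); it only illustrates the standard preliminary step of choosing a decision threshold on a labeled development set so that the model's positive predictions meet a target PPV. The helper name and data are hypothetical.

```python
import numpy as np

def threshold_for_target_ppv(scores, labels, target_ppv):
    """Return the lowest score threshold whose PPV (precision) on a
    labeled validation set still meets target_ppv, or None if no
    threshold attains it. Illustrative helper, not from the paper."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)[::-1]          # rank patients by risk, highest first
    tp = np.cumsum(labels[order])             # true positives among top-k predictions
    ppv = tp / np.arange(1, len(scores) + 1)  # precision at every cutoff depth k
    ok = np.nonzero(ppv >= target_ppv)[0]
    if len(ok) == 0:
        return None                           # target PPV unattainable on this set
    k = ok[-1]                                # deepest cutoff still meeting the target
    return scores[order][k]                   # flag patients with score >= threshold

# Hypothetical example: 5 risk scores with sepsis labels
thr = threshold_for_target_ppv(
    scores=[0.9, 0.8, 0.7, 0.6, 0.2],
    labels=[1, 1, 0, 1, 0],
    target_ppv=0.8,
)
print(thr)  # → 0.8 (predicting score >= 0.8 yields 2 TPs in 2 alerts, PPV = 1.0)
```

Under calibration shift at a new hospital, a threshold chosen this way no longer delivers the claimed PPV, which is the gap the unsupervised CaDC correction is designed to close.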