A Closer Look at AUROC and AUPRC under Class Imbalance
CoRR (2024)
Abstract
In machine learning (ML), a widespread adage is that the area under the
precision-recall curve (AUPRC) is a superior metric for model comparison to the
area under the receiver operating characteristic (AUROC) for binary
classification tasks with class imbalance. This paper challenges this notion
through novel mathematical analysis, illustrating that AUROC and AUPRC can be
concisely related in probabilistic terms. We demonstrate that AUPRC, contrary
to popular belief, is not superior in cases of class imbalance and might even
be a harmful metric, given its inclination to unduly favor model improvements
in subpopulations with more frequent positive labels. This bias can
inadvertently heighten algorithmic disparities. Prompted by these insights, a
thorough review of existing ML literature was conducted, utilizing large
language models to analyze over 1.5 million papers from arXiv. Our
investigation focused on the prevalence and substantiation of the purported
AUPRC superiority. The results expose a significant deficit in empirical
backing and a trend of misattributions that have fuelled the widespread
acceptance of AUPRC's supposed advantages. Our findings represent a dual
contribution: a significant technical advancement in understanding metric
behaviors and a stark warning about unchecked assumptions in the ML community.
All experiments are accessible at
https://github.com/mmcdermott/AUC_is_all_you_need.
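The core claim, that AUROC is insensitive to class prevalence while AUPRC is not, can be illustrated with a minimal sketch. This is not code from the paper; the synthetic Gaussian scores and function names are our own. We hold the score distributions fixed (so the ranking quality is unchanged) and only vary the ratio of negatives to positives:

```python
import random

def auroc(pos, neg):
    # AUROC equals the probability that a randomly drawn positive
    # outscores a randomly drawn negative (ties count half).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(pos, neg):
    # Average precision: mean of precision measured at each positive,
    # scanning scores from highest to lowest.
    scored = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg],
                    reverse=True)
    tp = fp = 0
    precisions = []
    for _, label in scored:
        if label:
            tp += 1
            precisions.append(tp / (tp + fp))
        else:
            fp += 1
    return sum(precisions) / len(precisions)

random.seed(0)
# Identical score distributions in both settings; only prevalence changes:
# ~50% positives (balanced) vs. ~2% positives (imbalanced).
pos_scores = [random.gauss(1.0, 1.0) for _ in range(500)]
neg_bal = [random.gauss(0.0, 1.0) for _ in range(500)]
neg_imb = [random.gauss(0.0, 1.0) for _ in range(24500)]

print(f"AUROC balanced:   {auroc(pos_scores, neg_bal):.3f}")
print(f"AUROC imbalanced: {auroc(pos_scores, neg_imb):.3f}")
print(f"AUPRC balanced:   {auprc(pos_scores, neg_bal):.3f}")
print(f"AUPRC imbalanced: {auprc(pos_scores, neg_imb):.3f}")
```

Under these assumptions AUROC stays essentially the same across the two settings, while AUPRC drops sharply as positives become rare, which is why changes in prevalence across subpopulations can move AUPRC even when discrimination is unchanged.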