Aligning Multiclass Neural Network Classifier Criterion with Task Performance via F_β-Score

Nathan Tsoi, Deyuan Li, Taesoo Daniel Lee, Marynel Vázquez

CoRR (2024)

Abstract
Multiclass neural network classifiers are typically trained using cross-entropy loss. Following training, the performance of this same neural network is evaluated using an application-specific metric based on the multiclass confusion matrix, such as the Macro F_β-Score. It is questionable whether the use of cross-entropy will yield a classifier that aligns with the intended application-specific performance criteria, particularly in scenarios where there is a need to emphasize one aspect of classifier performance. For example, if greater precision is preferred over recall, the β value in the F_β evaluation metric can be adjusted accordingly, but the cross-entropy objective remains unaware of this preference during training. We propose a method that addresses this training-evaluation gap for multiclass neural network classifiers such that users can train these models informed by the desired final F_β-Score. Following prior work in binary classification, we utilize the concepts of soft-set confusion matrices and a piecewise-linear approximation of the Heaviside step function. Our method extends the 2 × 2 binary soft-set confusion matrix to a multiclass d × d confusion matrix and proposes dynamic adaptation of the threshold value τ, which parameterizes the piecewise-linear Heaviside approximation at run time. We present a theoretical analysis showing that our method can be used to optimize a soft-set-based approximation of Macro-F_β that is a consistent estimator of Macro-F_β, and our extensive experiments show the practical effectiveness of our approach.
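
The core idea described in the abstract, a differentiable loss built from soft-set confusion-matrix counts and a piecewise-linear surrogate of the Heaviside step, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact formulation: the particular piecewise-linear form of the surrogate, the softmax-based class memberships, and the fixed threshold `tau` are choices made here for brevity, whereas the paper adapts τ dynamically at run time.

```python
import torch

def linear_heaviside(p, tau):
    """Illustrative piecewise-linear surrogate for the Heaviside step:
    0 at p = 0, 0.5 at p = tau, 1 at p = 1, linear on each side.
    (The paper's exact surrogate may differ.)"""
    return torch.where(
        p <= tau,
        0.5 * p / tau,
        0.5 + 0.5 * (p - tau) / (1.0 - tau),
    )

def soft_macro_fbeta_loss(logits, targets, beta=1.0, tau=0.5, eps=1e-7):
    """Soft-set Macro-F_beta loss sketch for a d-class classifier.

    logits:  (N, d) raw classifier outputs
    targets: (N,)   integer class labels in [0, d)
    """
    probs = torch.softmax(logits, dim=1)           # (N, d) class probabilities
    memberships = linear_heaviside(probs, tau)     # soft "predicted class k" membership
    onehot = torch.nn.functional.one_hot(targets, probs.shape[1]).float()

    # Per-class soft-set counts (entries of a soft d x d confusion matrix).
    tp = (memberships * onehot).sum(dim=0)
    fp = (memberships * (1.0 - onehot)).sum(dim=0)
    fn = ((1.0 - memberships) * onehot).sum(dim=0)

    # F_beta = (1 + beta^2) * TP / ((1 + beta^2) * TP + beta^2 * FN + FP), per class.
    b2 = beta ** 2
    fbeta = (1.0 + b2) * tp / ((1.0 + b2) * tp + b2 * fn + fp + eps)

    # Macro-average over classes; minimize 1 - Macro-F_beta.
    return 1.0 - fbeta.mean()
```

Because every step is piecewise differentiable, this surrogate can be minimized directly with gradient descent in place of cross-entropy, letting the chosen β (precision vs. recall emphasis) influence training rather than only evaluation.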