BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts
CoRR (2024)
Abstract
Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge -
encoding, e.g., safety constraints - can be affected by Reasoning Shortcuts
(RSs): They learn concepts consistent with the symbolic knowledge by exploiting
unintended semantics. RSs compromise reliability and generalization and, as we
show in this paper, they are linked to NeSy models being overconfident about
the predicted concepts. Unfortunately, the only trustworthy mitigation strategy
requires collecting costly dense supervision over the concepts. Rather than
attempting to avoid RSs altogether, we propose to ensure NeSy models are aware
of the semantic ambiguity of the concepts they learn, thus enabling their users
to identify and distrust low-quality concepts. Starting from three simple
desiderata, we derive bears (BE Aware of Reasoning Shortcuts), an ensembling
technique that calibrates the model's concept-level confidence without
compromising prediction accuracy, thus encouraging NeSy architectures to be
uncertain about concepts affected by RSs. We show empirically that bears
improves RS-awareness of several state-of-the-art NeSy models, and also
facilitates acquiring informative dense annotations for mitigation purposes.
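The abstract only names the mechanism (an ensemble whose disagreement calibrates concept-level confidence). As a rough, hypothetical sketch of that idea rather than the authors' implementation, the snippet below averages per-concept probabilities over ensemble members and uses predictive entropy to flag concepts the members disagree on; the function name, array shapes, and toy numbers are all assumptions.

```python
import numpy as np

def ensemble_concept_confidence(member_probs):
    """Average per-concept probabilities across ensemble members and
    report the entropy of the averaged distribution as an uncertainty score.

    member_probs: array of shape (n_members, n_concepts, n_values),
    where each row over n_values is a probability distribution.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    # Calibrated concept distribution: mean over ensemble members.
    mean_probs = member_probs.mean(axis=0)
    # Per-concept predictive entropy: high entropy flags concepts whose
    # semantics the members disagree on (candidate reasoning shortcuts).
    entropy = -(mean_probs * np.log(np.clip(mean_probs, 1e-12, None))).sum(axis=-1)
    return mean_probs, entropy

# Toy example: two members agree on concept 0 but disagree on concept 1.
probs = [
    [[0.90, 0.10], [0.95, 0.05]],
    [[0.85, 0.15], [0.05, 0.95]],
]
mean_probs, entropy = ensemble_concept_confidence(probs)
print(mean_probs)  # concept 1 ends up near [0.5, 0.5]
print(entropy)     # concept 1 has much higher entropy than concept 0
```

In this toy run the ambiguous concept receives a near-uniform averaged distribution and a high entropy score, which is the kind of signal a user could rely on to distrust low-quality concepts and decide where dense annotations are worth collecting.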