On the Independence Assumption in Neurosymbolic Learning
arXiv (2024)
Abstract
State-of-the-art neurosymbolic learning systems use probabilistic reasoning
to guide neural networks towards predictions that conform to logical
constraints over symbols. Many such systems assume that the probabilities of
the considered symbols are conditionally independent given the input to
simplify learning and reasoning. We study and criticise this assumption,
highlighting how it can hinder optimisation and prevent uncertainty
quantification. We prove that loss functions bias conditionally independent
neural networks to become overconfident in their predictions. As a result, they
are unable to represent uncertainty over multiple valid options. Furthermore,
we prove that these loss functions are difficult to optimise: they are
non-convex, and their minima are usually highly disconnected. Our theoretical
analysis gives the foundation for replacing the conditional independence
assumption and designing more expressive neurosymbolic probabilistic models.
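The expressivity problem the abstract describes can be illustrated with a minimal sketch (not from the paper's code, and the XOR-style constraint is chosen here purely for illustration): under the independence assumption, the probability that two Bernoulli symbols satisfy "exactly one of `s1`, `s2` is true" is maximised only at deterministic corner points, so a model cannot place all its mass on the valid worlds while remaining uncertain about which one holds.

```python
def wmc(p1, p2):
    """Weighted model count P(constraint) for the constraint
    "exactly one of s1, s2 is true", assuming s1 ~ Bernoulli(p1)
    and s2 ~ Bernoulli(p2) are conditionally independent."""
    return p1 * (1 - p2) + (1 - p1) * p2

# A semantic-loss-style objective -log P(constraint) is minimised
# exactly when wmc == 1. Scanning a grid shows this is only
# attained at the deterministic corners (1,0) and (0,1):
grid = [i / 100 for i in range(101)]
best = max(wmc(p1, p2) for p1 in grid for p2 in grid)
print(best)  # 1.0, at (p1, p2) = (1, 0) or (0, 1)

# The uniform belief over the two valid worlds has marginals
# P(s1) = P(s2) = 0.5, but the independent model with those
# marginals leaks half its mass onto the invalid worlds
# (0,0) and (1,1):
print(wmc(0.5, 0.5))  # 0.5, not 1.0
```

In other words, the only independent distributions achieving zero loss are overconfident point predictions, matching the abstract's claim that such models cannot represent uncertainty over multiple valid options.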