Uncertainty in latent representations of variational autoencoders optimized for visual tasks
arXiv (2024)
Abstract
Deep learning methods are increasingly becoming instrumental as modeling
tools in computational neuroscience, employing optimality principles to build
bridges between neural responses and perception or behavior. However, developing
models that adequately represent uncertainty is challenging for deep learning
methods, which often suffer from calibration problems. This is particularly
problematic when modeling cortical circuits in terms of Bayesian inference,
beyond single point estimates such as the posterior mean or the maximum a
posteriori. In this work we systematically studied uncertainty in the latent
representations of variational autoencoders (VAEs), both in a perceptual task
on natural images and in two other canonical computer vision tasks, finding
poor alignment between uncertainty and the informativeness or ambiguity of the
images. We then showed how a novel approach, which we call explaining-away
variational autoencoders (EA-VAEs), fixes these issues, producing meaningful
reports of uncertainty in a variety of scenarios, including interpolation,
image corruption, and even out-of-distribution detection. We show that EA-VAEs
may prove useful both as models of perception in computational neuroscience
and as inference tools in computer vision.
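To make concrete what "uncertainty in latent representations" refers to, the sketch below shows the standard VAE setup the abstract builds on: the encoder maps an image to the parameters of a diagonal Gaussian posterior q(z|x) = N(mu, diag(sigma^2)), and the posterior standard deviations provide a per-input uncertainty report. This is a minimal illustrative sketch in NumPy with an untrained random linear "encoder" standing in for a real network; the function names and dimensions are assumptions for illustration, not the paper's actual EA-VAE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE encoder: random linear maps from
# "pixels" to latent mean and log-variance (illustrative only).
D, K = 64, 8  # input dimension, latent dimension (assumed values)
W_mu = rng.normal(scale=0.1, size=(K, D))
W_lv = rng.normal(scale=0.1, size=(K, D))

def encode(x):
    """Return parameters of the latent posterior q(z|x) = N(mu, diag(sigma^2))."""
    mu = W_mu @ x
    logvar = W_lv @ x
    return mu, logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def latent_uncertainty(logvar):
    """One scalar uncertainty report: mean posterior std across latent dims."""
    return float(np.exp(0.5 * logvar).mean())

x = rng.normal(size=D)            # stand-in for a flattened image
mu, logvar = encode(x)
z = reparameterize(mu, logvar, rng)
u = latent_uncertainty(logvar)    # larger u = more latent uncertainty
```

The paper's finding is that in standard VAEs this per-input sigma correlates poorly with how ambiguous or informative the image actually is, which is the calibration problem EA-VAEs are designed to fix.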