Calibrating Bayesian Generative Machine Learning for Bayesiamplification
arXiv (2024)
Abstract
Recently, combinations of generative and Bayesian machine learning have been
introduced in particle physics for both fast detector simulation and inference
tasks. These neural networks aim to quantify the uncertainty on the generated
distribution originating from limited training statistics. The interpretation
of a distribution-wide uncertainty however remains ill-defined. We show a clear
scheme for quantifying the calibration of Bayesian generative machine learning
models. For a Continuous Normalizing Flow applied to a low-dimensional toy
example, we evaluate the calibration of Bayesian uncertainties from either a
mean-field Gaussian weight posterior or Monte Carlo sampling of network weights,
to gauge their behaviour on unsteady distribution edges. Well-calibrated
uncertainties can then be used to roughly estimate the number of uncorrelated
truth samples that are equivalent to the generated sample and clearly indicate
data amplification for smooth features of the distribution.
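The calibration and amplification checks described above can be illustrated with a small sketch. All names and numbers below are illustrative assumptions, not the paper's setup: a toy ensemble of generated histograms stands in for posterior predictive samples from a Bayesian generative model, calibration is probed via the fraction of bins whose truth value falls within one predicted standard deviation, and the effective number of truth-equivalent samples is estimated from the relative per-bin uncertainty (for Poisson-like counts, sigma/mu ~ 1/sqrt(N), so N_eff ~ (mu/sigma)^2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth" histogram from a Gaussian, and an ensemble of generated
# histograms standing in for posterior predictive draws of a Bayesian
# generator (illustrative stand-in, not the paper's Continuous Normalizing Flow).
n_samples = 10_000
bins = np.linspace(-3.0, 3.0, 21)
truth, _ = np.histogram(rng.normal(size=n_samples), bins=bins, density=True)

ensemble = np.stack([
    np.histogram(rng.normal(size=n_samples), bins=bins, density=True)[0]
    for _ in range(100)
])
mu = ensemble.mean(axis=0)    # per-bin mean prediction
sigma = ensemble.std(axis=0)  # per-bin Bayesian uncertainty estimate

# Calibration check: fraction of bins whose truth value lies within
# mu +/- sigma. For well-calibrated Gaussian uncertainties this should
# be near 0.68; empty edge bins (sigma == 0) are excluded via inf.
pull = (truth - mu) / np.where(sigma > 0, sigma, np.inf)
coverage = float(np.mean(np.abs(pull) < 1.0))

# Amplification estimate: effective number of truth-equivalent samples
# per bin from the relative uncertainty, N_eff ~ (mu / sigma)^2.
n_eff = np.where(sigma > 0, (mu / np.where(sigma > 0, sigma, 1.0)) ** 2, 0.0)

print(f"coverage within 1 sigma: {coverage:.2f}")
print(f"median per-bin N_eff:    {np.median(n_eff):.0f}")
```

In this toy setting the ensemble scatter is purely statistical, so the coverage lands near the Gaussian expectation; for a real Bayesian generator, deviations of the coverage from the nominal quantile signal over- or under-confident uncertainties, exactly the miscalibration the paper's scheme is designed to expose.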