Bayesian Pseudo-Coresets via Contrastive Divergence
arXiv (2023)
Abstract
Bayesian methods provide an elegant framework for estimating parameter
posteriors and quantifying the uncertainty associated with probabilistic
models. However, they often suffer from slow inference times. To address this
challenge, Bayesian Pseudo-Coresets (BPC) have emerged as a promising solution.
BPC methods aim to create a small synthetic dataset, known as a pseudo-coreset,
that approximates the posterior inference achieved with the original dataset.
This approximation is achieved by optimizing a divergence measure between the
This approximation is achieved by optimizing a divergence measure between the
true posterior and the pseudo-coreset posterior. Various divergence measures
have been proposed for constructing pseudo-coresets, with forward
Kullback-Leibler (KL) divergence being the most successful. However, using
forward KL divergence necessitates sampling from the pseudo-coreset posterior,
often accomplished through approximate Gaussian variational distributions.
Alternatively, one could employ Markov Chain Monte Carlo (MCMC) methods for
sampling, but this becomes challenging in high-dimensional parameter spaces due
to slow mixing. In this study, we introduce a novel approach for constructing
pseudo-coresets by utilizing contrastive divergence. Importantly, optimizing
contrastive divergence eliminates the need for approximations in the
pseudo-coreset construction process. Furthermore, it enables the use of
finite-step MCMC methods, alleviating the requirement for extensive mixing to
reach a stationary distribution. To validate our method's effectiveness, we
conduct extensive experiments on multiple datasets, demonstrating its
superiority over existing BPC techniques.
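To make the abstract's claim concrete, below is a minimal sketch of a contrastive-divergence-style pseudo-coreset update. It is not the paper's implementation: the Bayesian logistic-regression model, the SGLD sampler, the names U, yU, log_joint, and sgld, and all hyperparameters (chain lengths, step sizes, Adam) are illustrative assumptions. The sketch only shows the structure the abstract describes: a positive phase evaluated at samples from the full-data posterior, and a negative phase evaluated at samples from a finite-step MCMC chain on the coreset posterior, with no Gaussian variational approximation.

```python
import torch

torch.manual_seed(0)

# Toy problem: Bayesian logistic regression. All sizes and hyperparameters
# are illustrative assumptions, not values from the paper.
N, M, D = 512, 16, 5                       # full data size, coreset size, dim
X = torch.randn(N, D)
y = torch.bernoulli(torch.sigmoid(X @ torch.randn(D)))

def log_joint(theta, inputs, targets, scale):
    """Standard-normal log prior plus a scaled log likelihood; the scale
    reweights the coreset likelihood to match the full-data posterior."""
    ll = -torch.nn.functional.binary_cross_entropy_with_logits(
        inputs @ theta, targets, reduction="sum")
    return -0.5 * (theta ** 2).sum() + scale * ll

def sgld(theta, inputs, targets, scale, k, eps):
    """k steps of unadjusted Langevin dynamics on the given posterior."""
    theta = theta.clone().requires_grad_(True)
    for _ in range(k):
        g = torch.autograd.grad(
            log_joint(theta, inputs, targets, scale), theta)[0]
        theta = theta + 0.5 * eps * g + eps ** 0.5 * torch.randn_like(theta)
        theta = theta.detach().requires_grad_(True)
    return theta.detach()

# Learnable pseudo-coreset: synthetic inputs and relaxed labels in (0, 1).
U = torch.randn(M, D, requires_grad=True)
yU = torch.full((M,), 0.5, requires_grad=True)
opt = torch.optim.Adam([U, yU], lr=1e-2)
scale = N / M                              # likelihood reweighting

for step in range(200):
    soft = yU.clamp(1e-3, 1 - 1e-3)
    # (1) A sample from the full-data posterior (a modest SGLD chain here;
    #     in practice such samples could come from any sampler, or be cached).
    theta_x = sgld(torch.zeros(D), X, y, 1.0, k=50, eps=1e-3)
    # (2) The contrastive-divergence step: a *finite* k-step chain on the
    #     coreset posterior, initialized at theta_x, with no requirement
    #     that it mix to stationarity.
    theta_u = sgld(theta_x, U.detach(), soft.detach(), scale, k=5, eps=1e-3)
    # (3) CD objective for the coreset: positive phase evaluates the coreset
    #     log joint at the full-posterior sample, negative phase at the
    #     finite-chain sample; gradients flow only through these evaluations,
    #     not through the chain itself.
    loss = -(log_joint(theta_x, U, soft, scale)
             - log_joint(theta_u, U, soft, scale))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point the sketch illustrates is the one the abstract emphasizes: only the short k-step chain in step (2) ever touches the pseudo-coreset posterior, so constructing the coreset requires neither an approximate Gaussian variational family nor a fully mixed MCMC chain in the high-dimensional parameter space.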