Adaptive Importance Sampling by Kernel Smoothing

Bernard Delyon, François Portier

arXiv (Cornell University), 2019

Abstract
A key determinant of the success of Monte Carlo simulation is the sampling policy, the sequence of distributions used to generate the particles, and allowing the sampling policy to evolve adaptively during the algorithm provides considerable improvement in practice. The issues related to the adaptive choice of the sampling policy are addressed from a functional estimation point of view. The considered approach consists of modelling the sampling policy as a mixture distribution between a flexible kernel density estimate, based on the whole set of available particles, and a naive heavy-tailed density. When the share of samples generated according to the naive density goes to zero, but not too quickly, two results are established. Uniform convergence rates are derived for the sampling policy estimate. A central limit theorem is obtained for the resulting integral estimates. The fact that the asymptotic variance is the same as the variance of an oracle procedure, in which the sampling policy is chosen as the optimal one, illustrates the benefits of the proposed approach.
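The following is a minimal 1-D sketch of the kind of scheme the abstract describes: at each stage, particles are drawn from a mixture of a heavy-tailed "safe" density and a Gaussian KDE built on all past particles, with the naive share decaying slowly to zero. The target density `f`, integrand `phi`, Student-t choice for `p0`, the `t**-0.5` decay schedule, and the re-weighting of the KDE by `f/q` are illustrative assumptions, not the authors' exact construction (their optimal policy also involves the integrand).

```python
# Hypothetical sketch of adaptive importance sampling with a KDE/heavy-tail mixture policy.
# Not the authors' code; all specific choices below are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

f = stats.norm(loc=0.0, scale=1.0).pdf        # target density f (assumed known)
phi = lambda x: x**2                          # integrand; true value of ∫ phi f = 1
p0 = stats.t(df=3)                            # naive heavy-tailed density (Student t)

n_per_stage, n_stages = 500, 20
particles, weights = [], []

for t in range(1, n_stages + 1):
    lam = t ** -0.5                           # share of naive samples; decays to 0 slowly
    if particles:
        kde = stats.gaussian_kde(np.concatenate(particles),
                                 weights=np.concatenate(weights))
        q_kde = kde.evaluate
        sample_kde = lambda m: kde.resample(m, seed=rng)[0]
    else:
        lam, q_kde, sample_kde = 1.0, None, None  # first stage: naive density only

    # Draw from the mixture q_t = lam * p0 + (1 - lam) * KDE.
    from_naive = rng.random(n_per_stage) < lam
    x = np.empty(n_per_stage)
    x[from_naive] = p0.rvs(size=from_naive.sum(), random_state=rng)
    if (~from_naive).any():
        x[~from_naive] = sample_kde((~from_naive).sum())

    q = lam * p0.pdf(x) + (0.0 if q_kde is None else (1 - lam) * q_kde(x))
    w = f(x) / q                              # importance weights against stage policy
    particles.append(x)
    weights.append(w)

# Standard adaptive importance sampling estimate of ∫ phi f over all particles.
x_all, w_all = np.concatenate(particles), np.concatenate(weights)
estimate = np.mean(w_all * phi(x_all))
print(f"estimate of ∫ phi f: {estimate:.4f}  (true value 1.0)")
```

The point of the vanishing-but-not-too-fast mixture weight is that the heavy-tailed component keeps the importance weights bounded early on, while asymptotically the policy is driven by the KDE built on the full particle history.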
Keywords
Model Selection