Academic Seminar
Title: Optimization-based methods for likelihood-free inference
Speaker: Prof. 何志坚 (Host: 钟柳强)
South China University of Technology
Time: September 18, 16:30-17:30
Venue: Room 112, West Building, School of Mathematical Sciences
About the speaker:
何志坚 is a professor, doctoral supervisor, and vice dean of the School of Mathematics at South China University of Technology, and a recipient of a national-level young talent program. He received his Ph.D. in Statistics from Tsinghua University in 2015. His research interests lie in stochastic computational methods and uncertainty quantification, with a particular focus on the theory and applications of quasi-Monte Carlo methods. His work has appeared in the Journal of the Royal Statistical Society: Series B (one of the four leading statistics journals), in the major computational science journals SIAM Journal on Numerical Analysis, SIAM Journal on Scientific Computing, and Mathematics of Computation, and in the authoritative operations research and management journal European Journal of Operational Research, among others. His doctoral thesis won a Silver Prize of the New World Mathematics Awards. He has led two projects funded by the National Natural Science Foundation of China and two provincial- or ministerial-level projects.
Abstract:
Recently, optimization-based methods have been widely used for Bayesian inference with intractable likelihoods. This talk presents two optimization-based methods for approximating the posterior. First, we propose unbiased multilevel Monte Carlo (MLMC) estimators for the gradient of the Kullback-Leibler divergence between the posterior distribution and the variational distribution when the likelihood is intractable but can be estimated unbiasedly. The MLMC method can greatly speed up the optimization process. Second, we use sequential neural posterior estimation (SNPE) techniques to handle simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequential simulations using neural network-based conditional density estimators. We also improve the efficiency of SNPE through sophisticated sampling strategies. Numerical experiments demonstrate that our methods outperform existing competitors on certain tasks.
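For orientation, here is a minimal sketch of the objective behind the first method, written in standard variational-inference notation rather than taken from the talk itself. With prior p(θ), likelihood p(y|θ), and variational family q_λ, one minimizes

\[
\mathrm{KL}\bigl(q_\lambda \,\|\, p(\cdot \mid y)\bigr)
= \mathbb{E}_{q_\lambda}\bigl[\log q_\lambda(\theta) - \log p(\theta) - \log p(y \mid \theta)\bigr] + \log p(y).
\]

When only an unbiased estimator of p(y|θ) is available, plugging its logarithm into a Monte Carlo gradient estimator introduces bias (by Jensen's inequality); the MLMC construction mentioned in the abstract is aimed at producing unbiased gradient estimates in exactly this setting.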
All faculty members and students are welcome to attend and join the discussion!