Paper Title

Generalised Bayes Updates with $f$-divergences through Probabilistic Classifiers

Authors

Owen Thomas, Henri Pesonen, Jukka Corander

Abstract

A stream of algorithmic advances has steadily increased the popularity of the Bayesian approach as an inference paradigm, both from the theoretical and applied perspective. Even with apparent successes in numerous application fields, a rising concern is the robustness of Bayesian inference in the presence of model misspecification, which may lead to undesirable extreme behavior of the posterior distributions for large sample sizes. Generalized belief updating with a loss function represents a central principle to making Bayesian inference more robust and less vulnerable to deviations from the assumed model. Here we consider such updates with $f$-divergences to quantify a discrepancy between the assumed statistical model and the probability distribution which generated the observed data. Since the latter is generally unknown, estimation of the divergence may be viewed as an intractable problem. We show that the divergence becomes accessible through the use of probabilistic classifiers that can leverage an estimate of the ratio of two probability distributions even when one or both of them is unknown. We demonstrate the behavior of generalized belief updates for various specific choices under the $f$-divergence family. We show that for specific divergence functions such an approach can even improve on methods evaluating the correct model likelihood function analytically.
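The classifier trick described in the abstract can be illustrated with a minimal sketch: train a probabilistic classifier to distinguish observed data from model samples; with equal class priors, the classifier's output approximates the density ratio via r(x) ≈ D(x)/(1 − D(x)), which can then be plugged into a Monte Carlo estimate of an f-divergence (KL is used below). This is an assumed, simplified setup with illustrative function names and scikit-learn logistic regression as the classifier, not the paper's exact implementation.

```python
# Sketch of classifier-based f-divergence estimation (KL divergence shown).
# Assumptions: equal-sized samples from both distributions, logistic regression
# as the probabilistic classifier; all names here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_kl_via_classifier(x_data, x_model):
    """Estimate KL(p_data || p_model) with the density-ratio trick.

    With equal class priors, a classifier output D(x) = P(data | x) gives
    r(x) = p_data(x) / p_model(x) ~ D(x) / (1 - D(x)), so
    KL(p || q) = E_p[log r(x)] is Monte-Carlo estimated on the data sample.
    """
    X = np.concatenate([x_data, x_model])
    y = np.concatenate([np.ones(len(x_data)), np.zeros(len(x_model))])
    clf = LogisticRegression().fit(X, y)
    d = clf.predict_proba(x_data)[:, 1]      # P(class = data | x)
    log_ratio = np.log(d) - np.log1p(-d)     # log r(x) = logit of D(x)
    return log_ratio.mean()

rng = np.random.default_rng(0)
x_p = rng.normal(0.5, 1.0, size=(2000, 1))   # stand-in for observed data
x_q = rng.normal(0.0, 1.0, size=(2000, 1))   # stand-in for model samples
kl_hat = estimate_kl_via_classifier(x_p, x_q)
# True KL between N(0.5, 1) and N(0, 1) is 0.5**2 / 2 = 0.125
print(f"estimated KL: {kl_hat:.3f}")
```

Other members of the f-divergence family follow the same pattern: replace the `log` applied to the ratio with the corresponding generator f and average over the appropriate sample.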
