Paper Title

Bregman Deviations of Generic Exponential Families

Paper Authors

Sayak Ray Chowdhury, Patrick Saux, Odalric-Ambrym Maillard, Aditya Gopalan

Paper Abstract

We revisit the method of mixtures, also known as the Laplace method, to study the concentration phenomenon in generic exponential families. Combining the properties of the Bregman divergence associated with the log-partition function of the family with the method of mixtures for supermartingales, we establish a generic bound controlling the Bregman divergence between the parameter of the family and a finite-sample estimate of that parameter. Our bound is time-uniform and involves a quantity extending the classical information gain to exponential families, which we call the Bregman information gain. For the practitioner, we instantiate this novel bound for several classical families, e.g., Gaussian, Bernoulli, Exponential, Weibull, Pareto, Poisson, and Chi-square, yielding explicit forms of the confidence sets and of the Bregman information gain. We further numerically compare the resulting confidence bounds to state-of-the-art alternatives for time-uniform concentration and show that this novel method yields competitive results. Finally, we highlight the benefit of our concentration bounds on some illustrative applications.
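
For context, the central object in the abstract is the Bregman divergence induced by the log-partition function of an exponential family. A standard way to write it (the textbook definition, not a formula taken from this paper) is:

```latex
% Bregman divergence induced by the log-partition function A of a natural
% exponential family with density p_\theta(x) = h(x) \exp(\langle \theta, x \rangle - A(\theta)):
\[
  \mathcal{B}_A(\theta, \theta')
    = A(\theta) - A(\theta') - \langle \theta - \theta', \nabla A(\theta') \rangle ,
\]
% which coincides with the Kullback--Leibler divergence \mathrm{KL}(p_{\theta'} \,\|\, p_{\theta}).
```

As a hedged illustration of the kind of time-uniform confidence bound the method of mixtures yields, the minimal sketch below computes the classical Gaussian instantiation of the Laplace bound. The helper name and the default mixture parameter `c` are assumptions made for illustration; the constants here are those of the well-known Gaussian special case and need not match the general bound derived in the paper.

```python
import math

def gaussian_time_uniform_radius(t: int, sigma: float, delta: float, c: float = 1.0) -> float:
    """Half-width of a confidence interval for the mean of i.i.d. sigma-sub-Gaussian
    observations after t samples, valid simultaneously over all t >= 1 with
    probability at least 1 - delta (Gaussian method-of-mixtures / Laplace bound,
    with mixture parameter c)."""
    # |empirical mean - true mean| <= sigma * sqrt(2 (t + c) log(sqrt((t + c) / c) / delta)) / t
    return sigma * math.sqrt(2.0 * (t + c) * math.log(math.sqrt((t + c) / c) / delta)) / t

# Example: time-uniform 95%-level radius after 100 unit-variance samples.
print(gaussian_time_uniform_radius(100, sigma=1.0, delta=0.05))
```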
