Paper Title

"Calibeating": Beating Forecasters at Their Own Game

Paper Authors

Foster, Dean P., Hart, Sergiu

Paper Abstract

In order to identify expertise, forecasters should not be tested by their calibration score, which can always be made arbitrarily small, but rather by their Brier score. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting into bins with the same forecast is, and thus attests to "expertise." This raises the question of whether one can gain calibration without losing expertise, which we refer to as "calibeating." We provide an easy way to calibeat any forecast, by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
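The decomposition mentioned in the abstract can be illustrated concretely. The following is a minimal sketch (not from the paper) for binary outcomes: periods are sorted into bins by their forecast value, the calibration score is the weighted squared gap between each bin's forecast and its empirical frequency, and the refinement score is the weighted variance of outcomes within each bin; the two sum exactly to the Brier score.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Split the Brier score into calibration + refinement.

    forecasts: probabilities in [0, 1]; outcomes: 0/1 realizations.
    Bins are the sets of periods sharing the same forecast value.
    """
    bins = defaultdict(list)
    for p, a in zip(forecasts, outcomes):
        bins[p].append(a)
    n = len(outcomes)
    calibration = refinement = 0.0
    for p, hits in bins.items():
        w = len(hits) / n                 # bin weight
        freq = sum(hits) / len(hits)      # empirical frequency in the bin
        calibration += w * (p - freq) ** 2
        refinement += w * freq * (1 - freq)
    return calibration, refinement

forecasts = [0.8, 0.8, 0.8, 0.8, 0.3, 0.3]
outcomes = [1, 1, 1, 0, 0, 1]
cal, ref = brier_decomposition(forecasts, outcomes)
brier = sum((p - a) ** 2 for p, a in zip(forecasts, outcomes)) / len(outcomes)
# cal + ref == brier (up to floating-point error)
```

A forecaster can drive the calibration term to zero without any predictive skill (e.g., by always forecasting the running empirical frequency), which is why the abstract argues that the refinement term is what attests to expertise.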
