Title
Introducing a differentiable measure of pointwise shared information
Authors
Abstract
Partial information decomposition (PID) of the multivariate mutual information describes the distinct ways in which a set of source variables contains information about a target variable. The groundbreaking work of Williams and Beer has shown that this decomposition cannot be determined from classic information theory without making additional assumptions, and several candidate measures have been proposed, often drawing on principles from related fields such as decision theory. None of these measures is differentiable with respect to the underlying probability mass function. Here we present a novel measure that satisfies this property, emerges solely from information-theoretic principles, and has the form of a local mutual information. We show how the measure can be understood from the perspective of exclusions of probability mass, a principle that is foundational to Fano's original definition of mutual information. Since our measure is well-defined for individual realizations of the random variables, it lends itself, for example, to local learning in artificial neural networks. We also show that it has a meaningful Möbius inversion on a redundancy lattice and that it obeys a target chain rule. We give an operational interpretation of the measure based on the decisions that an agent should take if given only the shared information.
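To make the "form of a local mutual information" concrete, the sketch below computes the pointwise mutual information i(s; t) = log2 p(t|s)/p(t) for single realizations of a toy joint distribution. The `shared_sx` function is our own illustrative reading of the exclusion-based idea in the abstract (conditioning the target on the union of the source events {S1=s1} ∪ {S2=s2}); it is a hedged sketch, not the paper's verbatim definition, and the toy distribution is invented for illustration.

```python
import math

# Toy joint distribution p(s1, s2, t) over binary variables (uniform,
# so all variables are independent and every local information is 0).
p = {}
for s1 in (0, 1):
    for s2 in (0, 1):
        for t in (0, 1):
            p[(s1, s2, t)] = 1 / 8

def p_t(t):
    """Marginal probability of the target realization t."""
    return sum(p[(a, b, t)] for a in (0, 1) for b in (0, 1))

def local_mi(s1, t):
    """Pointwise (local) mutual information i(s1; t) = log2 p(t|s1)/p(t),
    defined per realization rather than as an average."""
    p_s1 = sum(p[(s1, b, tt)] for b in (0, 1) for tt in (0, 1))
    p_t_given_s1 = sum(p[(s1, b, t)] for b in (0, 1)) / p_s1
    return math.log2(p_t_given_s1 / p_t(t))

def shared_sx(s1, s2, t):
    """Illustrative pointwise shared information: local mutual information
    between t and the union event {S1=s1} OR {S2=s2}. This union-event
    reading is an assumption made for this sketch."""
    p_union = sum(pv for (a, b, _), pv in p.items() if a == s1 or b == s2)
    p_t_and_union = sum(pv for (a, b, tt), pv in p.items()
                        if (a == s1 or b == s2) and tt == t)
    return math.log2((p_t_and_union / p_union) / p_t(t))
```

Because every probability enters only through smooth ratios and logarithms, both quantities are differentiable with respect to the entries of `p`, which is the property the abstract highlights for gradient-based (e.g. local neural-network) learning.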