Title

Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information

Authors

Karl Stratos, Sam Wiseman

Abstract

We propose learning discrete structured representations from unlabeled data by maximizing the mutual information between a structured latent variable and a target variable. Calculating mutual information is intractable in this setting. Our key technical contribution is an adversarial objective that can be used to tractably estimate mutual information assuming only the feasibility of cross entropy calculation. We develop a concrete realization of this general formulation with Markov distributions over binary encodings. We report critical and unexpected findings on practical aspects of the objective such as the choice of variational priors. We apply our model on document hashing and show that it outperforms current best baselines based on discrete and vector quantized variational autoencoders. It also yields highly compressed interpretable representations.
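The abstract rests on a standard identity: I(X;Z) = H(Z) − H(Z|X), and for any variational prior q the cross entropy CE(p_Z, q) = H(Z) + KL(p_Z ∥ q) upper-bounds H(Z), with equality at q = p_Z. Minimizing over q inside a maximization thus yields an adversarial (max-min) estimate of mutual information that only requires cross-entropy calculation. The toy NumPy sketch below (an illustration of this bound, not the paper's implementation; the joint distribution and the prior q are arbitrary examples) checks the inequality numerically:

```python
import numpy as np

# Toy joint distribution p(x, z) over a target X (3 values) and a
# discrete latent Z (2 values); rows index x, columns index z.
# (Hypothetical numbers chosen only for illustration.)
p_xz = np.array([[0.30, 0.05],
                 [0.05, 0.30],
                 [0.15, 0.15]])

p_x = p_xz.sum(axis=1)   # marginal p(x)
p_z = p_xz.sum(axis=0)   # marginal p(z)

def entropy(p):
    return -np.sum(p * np.log(p))

# Exact mutual information via I(X;Z) = H(Z) - H(Z|X).
h_z = entropy(p_z)
h_z_given_x = -np.sum(p_xz * np.log(p_xz / p_x[:, None]))
mi = h_z - h_z_given_x

# Cross entropy against ANY variational prior q upper-bounds H(Z):
# CE(p_Z, q) = H(Z) + KL(p_Z || q) >= H(Z), equality iff q = p_Z.
q = np.array([0.7, 0.3])          # an arbitrary variational prior
ce = -np.sum(p_z * np.log(q))
assert ce >= h_z                  # the bound holds
assert np.isclose(-np.sum(p_z * np.log(p_z)), h_z)  # tight at q = p_Z
```

Because min over q recovers H(Z) exactly, maximizing CE(p_Z, q) − H(Z|X) over the model while minimizing over q tractably estimates the mutual information, which is the min-max structure the abstract refers to.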
