Paper Title

Regularizing Towards Permutation Invariance in Recurrent Models

Paper Authors

Edo Cohen-Karlik, Avichai Ben David, Amir Globerson

Paper Abstract

In many machine learning problems the output should not depend on the order of the input. Such "permutation invariant" functions have been studied extensively in recent years. Here we argue that temporal architectures such as RNNs are highly relevant for such problems, despite the inherent dependence of RNNs on order. We show that RNNs can be regularized towards permutation invariance, and that this can result in compact models compared to non-recurrent architectures. We implement this idea via a novel form of stochastic regularization. Existing solutions mostly suggest restricting the learning problem to hypothesis classes which are permutation invariant by design. Our approach of enforcing permutation invariance via regularization gives rise to models which are "semi permutation invariant" (i.e., invariant to some permutations and not to others). We show that our method outperforms other permutation invariant approaches on synthetic and real-world datasets.
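
The abstract does not spell out the exact form of the stochastic regularizer, so the sketch below is only a minimal illustration of the general idea (regularizing an RNN towards permutation invariance), not the authors' implementation. Everything specific here is an assumption made for the example: the class name PermutationInvarianceRegularizedRNN, the GRU encoder, the squared-distance penalty between the final hidden states of the original and a randomly permuted input, and the regularization weight lam.

```python
import torch
import torch.nn as nn

class PermutationInvarianceRegularizedRNN(nn.Module):
    """GRU encoder with a stochastic permutation-invariance penalty.

    Illustrative sketch only: the penalty samples one random permutation
    of the time steps per call and compares final hidden states; the
    paper's actual regularizer may differ.
    """

    def __init__(self, input_dim: int, hidden_dim: int, output_dim: int):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) -> final hidden state (batch, hidden_dim)
        _, h = self.rnn(x)
        return h[-1]

    def forward(self, x: torch.Tensor):
        h = self.encode(x)
        return self.head(h), h

    def invariance_penalty(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Stochastic regularization: sample a random permutation of the
        # time axis and penalize the gap between final hidden states.
        perm = torch.randperm(x.size(1), device=x.device)
        h_perm = self.encode(x[:, perm, :])
        return ((h - h_perm) ** 2).mean()


# Usage sketch: task loss plus the (assumed) invariance penalty.
model = PermutationInvarianceRegularizedRNN(input_dim=8, hidden_dim=32, output_dim=1)
x = torch.randn(16, 10, 8)   # batch of 16 sequences, each of length 10
y = torch.randn(16, 1)
pred, h = model(x)
lam = 0.1                    # regularization strength (assumed hyperparameter)
loss = nn.functional.mse_loss(pred, y) + lam * model.invariance_penalty(x, h)
loss.backward()
```

As the regularization weight grows, the encoder is pushed to produce the same summary regardless of input order, which is the "semi permutation invariant" regime the abstract describes when the penalty is applied only over a subset of permutations.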
