Paper Title

Structure by Architecture: Structured Representations without Regularization

Paper Authors

Leeb, Felix, Lanzillotta, Giulia, Annadani, Yashas, Besserve, Michel, Bauer, Stefan, Schölkopf, Bernhard

Paper Abstract

We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling. Unlike most methods which rely on matching an arbitrary, relatively unstructured, prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance typically observed in VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, thereby ordering the information without any additional regularization or supervision. We demonstrate how these models learn a representation that improves results in a variety of downstream tasks including generation, disentanglement, and extrapolation using several challenging and natural image datasets.
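The prior-free sampling idea in the abstract can be illustrated with a minimal sketch (this is an illustration of the general principle, not the paper's exact procedure): if the learned latent variables are independent, new codes can be drawn by resampling each latent dimension's empirical marginal independently, for example by permuting each column of the encoded training codes. The function name `independent_resample` and the placeholder codes below are hypothetical; in practice the codes would come from a trained encoder and the resampled codes would be passed through the decoder.

```python
import numpy as np

def independent_resample(latents, rng=None):
    """Draw new latent codes by permuting each dimension independently.

    Assumes the latent variables are (approximately) independent, so
    breaking the joint coupling while keeping every per-dimension
    marginal intact yields valid new codes -- no prior distribution
    needs to be matched.
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(latents)
    # Shuffle rows separately within each column: each marginal is
    # preserved exactly; cross-dimension dependence is destroyed.
    return np.stack(
        [rng.permutation(z[:, j]) for j in range(z.shape[1])], axis=1
    )

# Toy usage with random placeholder codes standing in for encoder output.
codes = np.random.default_rng(0).normal(size=(1000, 8))
new_codes = independent_resample(codes, rng=1)
# new_codes would then be decoded to produce generated samples.
```

Because each column of `new_codes` is a permutation of the corresponding column of `codes`, the per-dimension marginals match the encoder's output exactly, which is why this avoids the reconstruction-versus-generation trade-off that arises from forcing codes toward an arbitrary prior.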
