Paper Title

How do Variational Autoencoders Learn? Insights from Representational Similarity

Paper Authors

Lisa Bonheme, Marek Grzes

Paper Abstract

The ability of Variational Autoencoders (VAEs) to learn disentangled representations has made them popular for practical applications. However, their behaviour is not yet fully understood. For example, the questions of when they can provide disentangled representations or suffer from posterior collapse are still areas of active research. Despite this, there are no layerwise comparisons of the representations learned by VAEs, which would further our understanding of these models. In this paper, we thus look into the internal behaviour of VAEs using representational similarity techniques. Specifically, using the CKA and Procrustes similarities, we found that the encoders' representations are learned long before the decoders', and this behaviour is independent of hyperparameters, learning objectives, and datasets. Moreover, the encoders' representations in all but the mean and variance layers are similar across hyperparameters and learning objectives.
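For readers unfamiliar with the two metrics named in the abstract, below is a minimal NumPy sketch of linear CKA and an orthogonal-Procrustes-based similarity between two layer-activation matrices. This is not the paper's implementation; the function names and the conversion of Procrustes distance into a similarity score are illustrative assumptions.

```python
import numpy as np

def _center(Z):
    """Column-center a (n_examples, n_features) activation matrix."""
    return Z - Z.mean(axis=0, keepdims=True)

def linear_cka(X, Y):
    """Linear CKA between representations X (n, d1) and Y (n, d2).
    Rows must correspond to the same n inputs; returns a value in [0, 1]."""
    X, Y = _center(X), _center(Y)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def procrustes_similarity(X, Y):
    """Similarity derived from the orthogonal Procrustes distance:
    1 means the representations match up to rotation/reflection.
    (Mapping distance to [0, 1] similarity is an assumption here.)"""
    X, Y = _center(X), _center(Y)
    X = X / np.linalg.norm(X, "fro")  # unit Frobenius norm
    Y = Y / np.linalg.norm(Y, "fro")
    # Nuclear norm of Y^T X = sum of its singular values.
    nuc = np.linalg.svd(Y.T @ X, compute_uv=False).sum()
    dist = 2.0 - 2.0 * nuc            # squared Procrustes distance in [0, 2]
    return 1.0 - dist / 2.0
```

Applied as in the paper's analysis, one would extract a layer's activations for a fixed batch at two training checkpoints; a CKA or Procrustes similarity near 1 indicates that the layer's representation has stopped changing, which is how encoder layers can be seen to converge long before decoder layers.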
