Paper Title
Normalizing Flow with Variational Latent Representation
Paper Authors
Paper Abstract
Normalizing flow (NF) has gained popularity over traditional maximum-likelihood-based methods due to its strong capability to model complex data distributions. However, the standard approach, which maps the observed data to a normal distribution, has difficulty handling data distributions with multiple relatively isolated modes. To overcome this issue, we propose a new framework based on variational latent representation to improve the practical performance of NF. The idea is to replace the standard normal latent variable with a more general latent representation, jointly learned via Variational Bayes. For example, by taking the latent representation to be a discrete sequence, our framework can learn a Transformer model that generates the latent sequence and an NF model that generates the continuous data distribution conditioned on the sequence. The resulting method is significantly more powerful than the standard normalizing flow approach for generating data distributions with multiple modes. Extensive experiments demonstrate the advantages of NF with variational latent representation.
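To make the generative path described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' code: a small autoregressive Transformer prior samples a discrete latent sequence z, and a single conditional affine coupling layer maps Gaussian noise to data conditioned on z. All module names, dimensions, and the one-layer flow are illustrative assumptions; the paper's actual architectures, and the encoder q(z|x) used for joint training via Variational Bayes, are omitted here.

```python
# Hypothetical sketch of the sampling path: Transformer prior over discrete
# latent sequences + a conditional normalizing flow for continuous data.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, D_MODEL, X_DIM = 32, 8, 64, 4  # illustrative sizes

class LatentPrior(nn.Module):
    """Autoregressive Transformer over discrete latent tokens, p(z)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, D_MODEL)  # +1 for a BOS token
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)

    @torch.no_grad()
    def sample(self, batch):
        z = torch.full((batch, 1), VOCAB)          # start from BOS
        for _ in range(SEQ_LEN):
            h = self.backbone(self.embed(z))       # causal mask omitted: only
            logits = self.head(h[:, -1])           # the last step is read here
            nxt = torch.distributions.Categorical(logits=logits).sample()
            z = torch.cat([z, nxt.unsqueeze(1)], dim=1)
        return z[:, 1:]                            # drop BOS

class ConditionalAffineFlow(nn.Module):
    """One affine coupling layer for p(x|z); real flows stack many layers."""
    def __init__(self):
        super().__init__()
        # Pool the latent sequence into a single conditioning vector.
        self.cond = nn.Sequential(nn.EmbeddingBag(VOCAB, D_MODEL), nn.ReLU())
        self.net = nn.Linear(D_MODEL + X_DIM // 2, X_DIM)  # -> (scale, shift)

    def forward(self, eps, z):                     # noise -> data, given z
        c = self.cond(z)
        e1, e2 = eps.chunk(2, dim=-1)              # split dims for coupling
        s, t = self.net(torch.cat([c, e1], dim=-1)).chunk(2, dim=-1)
        return torch.cat([e1, e2 * torch.exp(s) + t], dim=-1)

prior, flow = LatentPrior(), ConditionalAffineFlow()
z = prior.sample(batch=3)                          # discrete latent sequences
x = flow(torch.randn(3, X_DIM), z)                 # continuous samples given z
print(x.shape)                                     # torch.Size([3, 4])
```

Each discrete sequence z selects a conditioning vector, so different sequences can place the flow's output in different regions of data space; this is one way to read the abstract's claim about handling multiple relatively isolated modes.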