Paper Title
Unsupervised Audiovisual Synthesis via Exemplar Autoencoders
Paper Authors
Paper Abstract
We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of potentially infinitely many output speakers. Our approach builds on simple autoencoders that project out-of-sample data onto the distribution of the training set. We use Exemplar Autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target exemplar speech. In contrast to existing methods, the proposed approach can be easily extended to an arbitrarily large number of speakers and styles using only 3 minutes of target audio-video data, without requiring any training data for the input speaker. To do so, we learn audiovisual bottleneck representations that capture the structured linguistic content of speech. We outperform prior approaches on both audio and video synthesis, and provide extensive qualitative analysis on our project page: https://www.cs.cmu.edu/~exemplar-ae/.
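The core mechanism the abstract describes, namely an autoencoder trained only on a single target exemplar, whose narrow bottleneck retains linguistic content while the exemplar-fit decoder re-renders that content in the target's voice, can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden toy, not the paper's actual architecture: the layer shapes, the bottleneck width of 16, the mel-spectrogram input format, and the L1 reconstruction loss are all illustrative choices.

```python
# Minimal sketch of an exemplar autoencoder for voice conversion.
# Assumes mel-spectrogram inputs of shape (batch, n_mels, frames);
# all sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class ExemplarAutoencoder(nn.Module):
    """Autoencoder trained only on one target speaker's speech.

    The narrow bottleneck forces the encoder to keep mostly linguistic
    content; the decoder, fit solely to the exemplar, re-renders that
    content in the target voice. Out-of-sample (new-speaker) input is
    thereby projected onto the training distribution.
    """
    def __init__(self, n_mels: int = 80, bottleneck: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, bottleneck, kernel_size=5, padding=2),  # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(bottleneck, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(mel))

# Training: plain reconstruction on a few minutes of the target exemplar.
model = ExemplarAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
target_mels = torch.randn(8, 80, 200)  # placeholder for real exemplar data
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(target_mels), target_mels)
    loss.backward()
    opt.step()

# Inference: feed any unseen speaker's mel-spectrogram; the output is
# rendered in the target voice. A vocoder (not shown) maps mels to audio.
with torch.no_grad():
    converted = model(torch.randn(1, 80, 200))
```

Because the decoder has only ever seen the target speaker's data, passing an arbitrary speaker's speech through the trained model projects it onto the target's distribution, which is the "projection of out-of-sample data onto the training set" the abstract refers to. Extending to a new target speaker or style then only requires training another small autoencoder on that exemplar's clip.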