Paper Title
Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds
Paper Authors
Paper Abstract
Recent progress in deep learning has enabled many advances in sound separation and visual scene understanding. However, extracting sound sources that are apparent in natural videos remains an open problem. In this work, we present AudioScope, a novel audio-visual sound separation framework that can be trained without supervision to isolate on-screen sound sources from real in-the-wild videos. Prior audio-visual separation work assumed artificial limitations on the domain of sound classes (e.g., to speech or music), constrained the number of sources, and required strong sound separation or visual segmentation labels. AudioScope overcomes these limitations, operating on an open domain of sounds, with variable numbers of sources, and without labels or prior visual segmentation. The training procedure for AudioScope uses mixture invariant training (MixIT) to separate synthetic mixtures of mixtures (MoMs) into individual sources, where noisy labels for mixtures are provided by an unsupervised audio-visual coincidence model. Using the noisy labels, along with attention between video and audio features, AudioScope learns to identify audio-visual similarity and to suppress off-screen sounds. We demonstrate the effectiveness of our approach using a dataset of video clips extracted from open-domain YFCC100M video data. This dataset contains a wide diversity of sound classes recorded in unconstrained conditions, making previous methods unsuitable. For evaluation and semi-supervised experiments, we collected human labels for the presence of on-screen and off-screen sounds on a small subset of clips.
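To make the MixIT training step described above concrete, the following is a minimal NumPy sketch of a mixture invariant training loss for a mixture of mixtures (MoM). It assumes a separation model that outputs M estimated sources for the sum of two reference mixtures; the exhaustive binary-assignment search and the function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal MixIT loss sketch (assumed illustration, not AudioScope's code):
# each estimated source is assigned to exactly one of the two reference
# mixtures, and the assignment with the lowest total loss is used.
import itertools
import numpy as np

def neg_snr(ref, est, eps=1e-8):
    """Negative signal-to-noise ratio (in dB) between reference and estimate."""
    err = ref - est
    return -10.0 * np.log10((np.sum(ref ** 2) + eps) / (np.sum(err ** 2) + eps))

def mixit_loss(x1, x2, est_sources):
    """Best-assignment MixIT loss over all 2^M binary partitions of M sources."""
    m = est_sources.shape[0]
    best = np.inf
    for bits in itertools.product([0, 1], repeat=m):
        mask = np.array(bits, dtype=bool)
        mix1_est = est_sources[mask].sum(axis=0) if mask.any() else np.zeros_like(x1)
        mix2_est = est_sources[~mask].sum(axis=0) if (~mask).any() else np.zeros_like(x2)
        best = min(best, neg_snr(x1, mix1_est) + neg_snr(x2, mix2_est))
    return best

# Toy usage: 4 estimated sources for two 1-second mono clips at 16 kHz.
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(16000), rng.standard_normal(16000)
est = rng.standard_normal((4, 16000))
print(mixit_loss(x1, x2, est))
```

Because the loss is computed against the two reference mixtures rather than against isolated sources, no ground-truth source labels are needed; in AudioScope this unsupervised objective is combined with the noisy on-screen labels from the audio-visual coincidence model.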