Paper Title
Deep Audio-Visual Learning: A Survey
Paper Authors
Paper Abstract
Audio-visual learning, aimed at exploiting the relationship between the audio and visual modalities, has drawn considerable attention since the success of deep learning. Researchers tend to leverage these two modalities either to improve the performance of tasks previously considered single-modality or to address new, challenging problems. In this paper, we provide a comprehensive survey of recent developments in audio-visual learning. We divide current audio-visual learning tasks into four subfields: audio-visual separation and localization, audio-visual correspondence learning, audio-visual generation, and audio-visual representation learning. State-of-the-art methods, as well as the remaining challenges of each subfield, are further discussed. Finally, we summarize the commonly used datasets and performance metrics.