Paper Title
Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval
Paper Authors
Paper Abstract
As an increasingly popular task in multimedia information retrieval, video moment retrieval (VMR) aims to localize the target moment in an untrimmed video according to a given language query. Most previous methods depend heavily on numerous manual annotations (i.e., moment boundaries), which are extremely expensive to acquire in practice. In addition, due to the domain gap between datasets, directly applying these pre-trained models to an unseen domain leads to a significant performance drop. In this paper, we focus on a novel task: cross-domain VMR, where a fully annotated dataset is available in one domain (the "source domain") while the domain of interest (the "target domain") contains only unannotated data. To the best of our knowledge, this is the first study of cross-domain VMR. To address this new task, we propose a novel Multi-Modal Cross-Domain Alignment (MMCDA) network that transfers annotation knowledge from the source domain to the target domain. Because of the domain discrepancy between the source and target domains and the semantic gap between videos and queries, directly applying a model trained on the source domain to the target domain generally degrades performance. To solve this problem, we develop three novel modules: (i) a domain alignment module aligns the feature distributions of each modality across domains; (ii) a cross-modal alignment module maps video and query features into a joint embedding space and aligns the feature distributions of the two modalities in the target domain; (iii) a specific alignment module captures the fine-grained similarity between a specific frame and the given query for precise localization. By jointly training these three modules, MMCDA learns domain-invariant and semantically aligned cross-modal representations.
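To make the three-module design more concrete, the snippet below is a minimal PyTorch-style sketch of how such alignment objectives could be combined. The abstract does not specify the actual losses or architectures, so the MMD term for domain alignment, the InfoNCE-style contrastive term for cross-modal alignment, and the cosine frame-query scores for the specific alignment are illustrative assumptions, not the paper's implementation; all function and variable names here are hypothetical.

```python
# Sketch of a three-part alignment objective in the spirit of the MMCDA
# description above. The concrete losses are assumptions for illustration.
import torch
import torch.nn.functional as F


def mmd_loss(source_feat: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
    """Linear-kernel MMD between source- and target-domain features (assumed loss)."""
    return (source_feat.mean(dim=0) - target_feat.mean(dim=0)).pow(2).sum()


def cross_modal_loss(video_emb: torch.Tensor, query_emb: torch.Tensor) -> torch.Tensor:
    """InfoNCE-style loss pulling paired video/query embeddings together (assumed loss)."""
    video_emb = F.normalize(video_emb, dim=-1)
    query_emb = F.normalize(query_emb, dim=-1)
    logits = video_emb @ query_emb.t() / 0.07      # pairwise cosine similarities
    labels = torch.arange(video_emb.size(0))       # i-th video matches i-th query
    return F.cross_entropy(logits, labels)


def specific_alignment_scores(frame_feat: torch.Tensor, query_emb: torch.Tensor) -> torch.Tensor:
    """Fine-grained frame-to-query similarity used to localize the target moment."""
    frame_feat = F.normalize(frame_feat, dim=-1)   # (batch, num_frames, dim)
    query_emb = F.normalize(query_emb, dim=-1)     # (batch, dim)
    return torch.einsum("bfd,bd->bf", frame_feat, query_emb)


if __name__ == "__main__":
    B, T, D = 4, 32, 256                           # batch, frames, feature dim
    src_video, tgt_video = torch.randn(B, T, D), torch.randn(B, T, D)
    src_query, tgt_query = torch.randn(B, D), torch.randn(B, D)

    # (i) domain alignment: match feature distributions per modality across domains
    l_domain = mmd_loss(src_video.mean(1), tgt_video.mean(1)) + mmd_loss(src_query, tgt_query)
    # (ii) cross-modal alignment: joint embedding space on the target domain
    l_cross = cross_modal_loss(tgt_video.mean(1), tgt_query)
    # (iii) specific alignment: frame-level similarity used for localization
    scores = specific_alignment_scores(tgt_video, tgt_query)

    total_alignment_loss = l_domain + l_cross
    best_frame = scores.argmax(dim=1)              # most query-relevant frame per video
    print(total_alignment_loss.item(), best_frame.tolist())
```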