Paper Title
Spatio-Temporal Ranked-Attention Networks for Video Captioning
Paper Authors
Paper Abstract
Generating video descriptions automatically is a challenging task that involves a complex interplay between spatio-temporal visual features and language models. Given that videos consist of spatial (frame-level) features and their temporal evolutions, an effective captioning model should be able to attend to these different cues selectively. To this end, we propose a Spatio-Temporal and Temporo-Spatial (STaTS) attention model which, conditioned on the language state, hierarchically combines spatial and temporal attention to videos in two different orders: (i) a spatio-temporal (ST) sub-model, which first attends to regions that have temporal evolution, then temporally pools the features from these regions; and (ii) a temporo-spatial (TS) sub-model, which first decides a single frame to attend to, then applies spatial attention within that frame. We propose a novel LSTM-based temporal ranking function, which we call ranked attention, for the ST model to capture action dynamics. Our entire framework is trained end-to-end. We provide experiments on two benchmark datasets: MSVD and MSR-VTT. Our results demonstrate the synergy between the ST and TS modules, outperforming recent state-of-the-art methods.
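To make the two attention orders concrete, here is a minimal PyTorch sketch, not the authors' implementation: the tensor shapes, the plain dot-product scoring, and the names attend, st_branch, and ts_branch are all illustrative assumptions, and the paper's LSTM-based ranked attention for the ST branch is replaced by ordinary soft attention for brevity.

```python
"""Illustrative sketch of the ST and TS attention orders (assumptions
throughout, not the paper's code). Region-level video features V have
shape (T, R, D): T frames, R spatial regions per frame, D feature dims.
The decoder's language state h has shape (D,)."""
import torch
import torch.nn.functional as F

def attend(h, feats):
    """Soft attention over the first axis of `feats`, conditioned on h.
    Dot-product scoring is a simplification; the paper uses learned
    scoring (and ranked attention in the ST branch)."""
    scores = feats @ h                      # (N,)
    w = F.softmax(scores, dim=0)            # (N,)
    return w @ feats                        # (D,)

def st_branch(h, V):
    """Spatio-temporal order: attend spatially within each frame first,
    then pool the resulting per-frame vectors over time."""
    per_frame = torch.stack([attend(h, V[t]) for t in range(V.shape[0])])  # (T, D)
    return attend(h, per_frame)             # temporal pooling -> (D,)

def ts_branch(h, V):
    """Temporo-spatial order: (softly) select a frame via temporal
    attention over frame-level summaries, then attend spatially
    within the selected frame."""
    frame_means = V.mean(dim=1)             # (T, D) frame summaries
    w = F.softmax(frame_means @ h, dim=0)   # (T,) temporal weights
    frame = torch.einsum('t,trd->rd', w, V) # soft-selected frame (R, D)
    return attend(h, frame)                 # spatial attention -> (D,)

if __name__ == "__main__":
    T, R, D = 8, 49, 512
    V = torch.randn(T, R, D)                # region-level features
    h = torch.randn(D)                      # language (decoder) state
    context = st_branch(h, V) + ts_branch(h, V)  # combined STaTS context
    print(context.shape)                    # torch.Size([512])
```

The combination by summation at the end is one plausible way to realize the ST/TS synergy; the paper itself conditions both branches on the evolving language state at every decoding step and learns the fusion end-to-end.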