Paper Title
A gaze driven fast-forward method for first-person videos
Paper Authors
Paper Abstract
The growing data-sharing and life-logging cultures are driving an unprecedented increase in the amount of unedited First-Person Videos. In this paper, we address the problem of accessing relevant information in First-Person Videos by creating an accelerated version of the input video that emphasizes the moments important to the recorder. Our method is based on an attention model driven by gaze and visual scene analysis, which provides a semantic score for each frame of the input video. We performed several experimental evaluations on publicly available First-Person Video datasets. The results show that our methodology can fast-forward videos while emphasizing moments when the recorder visually interacts with scene components and avoiding monotonous clips.
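The abstract describes the pipeline only at a high level: a gaze-and-scene-driven score per frame, followed by adaptive acceleration. The sketch below is a minimal illustration of that general idea, not the authors' implementation; the function names, the `alpha` blending weight, and the skip bounds are all hypothetical.

```python
# Illustrative sketch only -- not the method proposed in the paper.
# Assumes hypothetical per-frame inputs: a gaze-fixation weight and a
# visual-scene weight, blended into a semantic score that controls how
# aggressively frames are skipped.
import numpy as np

def semantic_scores(gaze_weights, visual_weights, alpha=0.5):
    """Blend gaze and visual-scene evidence into one score per frame.
    `alpha` (assumed parameter) trades off the two cues."""
    gaze = np.asarray(gaze_weights, dtype=float)
    visual = np.asarray(visual_weights, dtype=float)
    return alpha * gaze + (1.0 - alpha) * visual

def select_frames(scores, min_skip=1, max_skip=10):
    """Greedy adaptive sampling: high-score regions keep more frames,
    low-score (monotonous) regions are skipped faster."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    norm = (scores - lo) / (hi - lo + 1e-8)  # 0 = monotonous, 1 = important
    kept, i = [], 0
    while i < len(scores):
        kept.append(i)
        skip = int(round(max_skip - norm[i] * (max_skip - min_skip)))
        i += max(skip, min_skip)
    return kept

# Toy usage: 100 frames with a simulated burst of visual interaction.
rng = np.random.default_rng(0)
gaze, visual = rng.random(100), rng.random(100)
gaze[40:60] += 2.0  # higher gaze engagement in the middle of the clip
frames = select_frames(semantic_scores(gaze, visual))
print(f"kept {len(frames)} of 100 frames")
```

In this toy setup, the middle segment with the simulated gaze burst is sampled densely while the rest is skipped quickly, which is the qualitative behavior the abstract attributes to the proposed fast-forward approach.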