Paper Title
Pose-Aided Video-based Person Re-Identification via Recurrent Graph Convolutional Network
Paper Authors
Abstract
Existing methods for video-based person re-identification (ReID) mainly learn the appearance feature of a given pedestrian via a feature extractor and a feature aggregator. However, such appearance models fail when different pedestrians have similar appearances. Considering that different pedestrians have distinct walking postures and body proportions, we propose to learn a discriminative pose feature beyond the appearance feature for video retrieval. Specifically, we implement a two-branch architecture that learns the appearance feature and the pose feature separately and then concatenates them for inference. To learn the pose feature, we first detect the pedestrian pose in each frame with an off-the-shelf pose detector and construct a temporal graph from the pose sequence. We then exploit a recurrent graph convolutional network (RGCN) to learn node embeddings of the temporal pose graph; the RGCN devises a global information propagation mechanism that simultaneously achieves neighborhood aggregation among intra-frame nodes and message passing across inter-frame graphs. Finally, we propose a dual-attention method consisting of node attention and time attention to obtain the temporal graph representation from the node embeddings, where a self-attention mechanism learns the importance of each node and each frame. We validate the proposed method on three video-based ReID datasets, i.e., MARS, DukeMTMC and iLIDS-VID; the experimental results demonstrate that the learned pose feature effectively improves the performance of existing appearance models.
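
To make the pose branch concrete, below is a minimal PyTorch sketch of the kind of recurrent graph convolution the abstract describes: neighborhood aggregation over the skeleton adjacency within each frame, plus a recurrent update that carries node states across frames. All names (`RecurrentGCN`, `gcn`, `gru`), the GRU-based recurrence, and the 17-joint layout are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class RecurrentGCN(nn.Module):
    """Sketch: intra-frame neighborhood aggregation + inter-frame recurrence."""
    def __init__(self, in_dim, hid_dim, num_joints=17):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)    # weights for intra-frame aggregation
        self.gru = nn.GRUCell(hid_dim, hid_dim)  # carries node states across frames
        self.hid_dim = hid_dim

    def forward(self, x, adj):
        # x:   (T, J, in_dim)  pose keypoints per frame
        # adj: (J, J)          normalized skeleton adjacency (with self-loops)
        T, J, _ = x.shape
        h = x.new_zeros(J, self.hid_dim)          # recurrent node states
        outputs = []
        for t in range(T):
            m = torch.relu(self.gcn(adj @ x[t]))  # aggregate joint neighbors in frame t
            h = self.gru(m, h)                    # propagate states from frame t-1
            outputs.append(h)
        return torch.stack(outputs)               # (T, J, hid_dim) node embeddings

# Hypothetical usage: an 8-frame tracklet with 17 joints, each (x, y, confidence).
model = RecurrentGCN(in_dim=3, hid_dim=128)
x = torch.randn(8, 17, 3)
adj = torch.eye(17)    # placeholder; real use: normalized skeleton-edge adjacency
node_emb = model(x, adj)  # (8, 17, 128)
```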
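
The dual-attention pooling can be sketched in the same spirit. The abstract specifies a self-attention mechanism for scoring nodes and frames; the toy version below substitutes a simpler learned scoring layer at each level (node, then time) to show the two-stage soft pooling, so it is a sketch under that simplification rather than the authors' module.

```python
class DualAttentionPool(nn.Module):
    """Sketch: node-attention within each frame, then time-attention over frames."""
    def __init__(self, dim):
        super().__init__()
        self.node_score = nn.Linear(dim, 1)  # importance of each joint within a frame
        self.time_score = nn.Linear(dim, 1)  # importance of each frame in the sequence

    def forward(self, h):
        # h: (T, J, dim) node embeddings from the RGCN
        a_node = torch.softmax(self.node_score(h), dim=1)            # (T, J, 1)
        frame_repr = (a_node * h).sum(dim=1)                         # (T, dim)
        a_time = torch.softmax(self.time_score(frame_repr), dim=0)   # (T, 1)
        return (a_time * frame_repr).sum(dim=0)                      # (dim,)

# Continuing the hypothetical example above:
pool = DualAttentionPool(dim=128)
pose_feat = pool(node_emb)  # (128,) pose-branch descriptor
# Inference then concatenates the two branches, as stated in the abstract:
# fused = torch.cat([appearance_feat, pose_feat])  # appearance_feat from the CNN branch
```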