Paper Title

Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking

Authors

Fei Xie, Wankou Yang, Bo Liu, Kaihua Zhang, Wanli Xue, Wangmeng Zuo

Abstract

Existing visual object trackers usually learn a bounding-box-based template to match targets across frames. Such a template cannot accurately capture a pixel-wise representation, which limits these trackers in handling severe appearance variations. To address this issue, much effort has been devoted to segmentation-based tracking, which learns a pixel-wise object-aware template and can achieve higher accuracy than bounding-box-template-based tracking. However, existing segmentation-based trackers are ineffective at learning the spatio-temporal correspondence across frames because they do not exploit the rich temporal information. To overcome this issue, this paper presents a novel segmentation-based tracking architecture equipped with a spatio-appearance memory network that learns accurate spatio-temporal correspondence. In it, an appearance memory network exploits spatio-temporal non-local similarity to learn the dense correspondence between the segmentation masks and the current frame. Meanwhile, a spatial memory network is modeled as a discriminative correlation filter to learn the mapping between the feature map and the spatial map. The appearance memory network helps filter out noisy samples in the spatial memory network, while the latter provides the former with a more accurate geometric center of the target. This mutual promotion greatly boosts tracking performance. Without bells and whistles, our simple yet effective tracking architecture sets new state-of-the-art results on the VOT2016, VOT2018, VOT2019, GOT-10k, TrackingNet, and VOT2020 benchmarks. Moreover, our tracker outperforms the leading segmentation-based trackers SiamMask and D3S on the two video object segmentation benchmarks DAVIS16 and DAVIS17 by a large margin. The source code is available at https://github.com/phiphiphi31/DMB.
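The non-local similarity read described above can be pictured as key-value attention over all memory pixels: every pixel of the current frame soft-attends to every pixel stored from past frames and masks. Below is a minimal NumPy sketch of such a read under assumed shapes; the function and variable names (`memory_read`, `mem_keys`, `mem_vals`, `query_key`) are illustrative and not taken from the paper's code.

```python
import numpy as np

def memory_read(mem_keys, mem_vals, query_key):
    """Non-local memory read: each query pixel soft-attends over all memory pixels.

    mem_keys : (T*H*W, Ck)  keys encoded from past frames and their masks
    mem_vals : (T*H*W, Cv)  values (mask-aware features) at the same pixels
    query_key: (H*W, Ck)    keys encoded from the current frame
    Returns  : (H*W, Cv)    retrieved value per query pixel, to be fused
               with current-frame features for mask decoding.
    """
    # Dense affinity between every query pixel and every memory pixel.
    sim = query_key @ mem_keys.T                 # (H*W, T*H*W)
    # Softmax over the memory axis -> non-local similarity weights.
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)
    return w @ mem_vals                          # (H*W, Cv)

# Toy usage: 2 memory frames at 4x4 resolution, 8-d keys, 16-d values.
rng = np.random.default_rng(0)
keys = rng.standard_normal((2 * 16, 8))
vals = rng.standard_normal((2 * 16, 16))
q = rng.standard_normal((16, 8))
out = memory_read(keys, vals, q)
print(out.shape)  # (16, 16)
```

In a real tracker the keys and values would come from learned CNN embeddings, and the retrieved values would feed a mask decoder; this sketch only shows the attention arithmetic behind the dense correspondence.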
