Paper Title
Joint Disentangling and Adaptation for Cross-Domain Person Re-Identification
Paper Authors
Paper Abstract
Although significant progress has been witnessed in supervised person re-identification (re-id), it remains challenging to generalize re-id models to new domains due to the huge domain gap. Recently, there has been growing interest in using unsupervised domain adaptation to address this scalability issue. Existing methods typically conduct adaptation on a representation space that contains both id-related and id-unrelated factors, thus inevitably undermining the adaptation efficacy of the id-related features. In this paper, we seek to improve adaptation by purifying the representation space to be adapted. To this end, we propose a joint learning framework that disentangles id-related/unrelated features and enforces adaptation to work exclusively on the id-related feature space. Our model involves a disentangling module that encodes cross-domain images into a shared appearance space and two separate structure spaces, and an adaptation module that performs adversarial alignment and self-training on the shared appearance space. The two modules are co-designed to be mutually beneficial. Extensive experiments demonstrate that the proposed joint learning framework outperforms the state-of-the-art methods by clear margins.
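To make the two-module design described above concrete, below is a minimal PyTorch sketch of how a shared appearance (id-related) encoder, two domain-specific structure (id-unrelated) encoders, a reconstruction decoder, and a domain discriminator on the appearance space could fit together. All module names, layer sizes, and the toy convolutional backbones are illustrative assumptions inferred from the abstract, not the authors' actual architecture.

```python
# Hypothetical sketch of the disentangling + adaptation idea from the abstract:
# a shared appearance encoder for both domains, separate structure encoders,
# a decoder that reconstructs from (appearance, structure), and a domain
# discriminator that adversarially aligns the shared appearance space.
import torch
import torch.nn as nn


def conv_encoder(out_dim):
    # Tiny convolutional encoder producing a global feature vector (toy backbone).
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim),
    )


class DisentangleReID(nn.Module):
    def __init__(self, app_dim=128, str_dim=64):
        super().__init__()
        # Shared appearance (id-related) encoder used by both domains.
        self.enc_app = conv_encoder(app_dim)
        # Separate structure (id-unrelated) encoders, one per domain.
        self.enc_str_src = conv_encoder(str_dim)
        self.enc_str_tgt = conv_encoder(str_dim)
        # Decoder reconstructs a flattened image from appearance + structure codes.
        self.decoder = nn.Sequential(
            nn.Linear(app_dim + str_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32),
        )
        # Domain discriminator operating only on the appearance space.
        self.disc = nn.Sequential(
            nn.Linear(app_dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x_src, x_tgt):
        a_src, a_tgt = self.enc_app(x_src), self.enc_app(x_tgt)
        s_src, s_tgt = self.enc_str_src(x_src), self.enc_str_tgt(x_tgt)
        rec_src = self.decoder(torch.cat([a_src, s_src], dim=1))
        rec_tgt = self.decoder(torch.cat([a_tgt, s_tgt], dim=1))
        dom_src, dom_tgt = self.disc(a_src), self.disc(a_tgt)
        return (a_src, a_tgt), (rec_src, rec_tgt), (dom_src, dom_tgt)


# Usage with toy 32x32 inputs: reconstruction keeps both factors informative,
# while the adversarial loss on appearance features encourages source/target
# appearance codes to align.
model = DisentangleReID()
x_s = torch.randn(4, 3, 32, 32)
x_t = torch.randn(4, 3, 32, 32)
(app_s, app_t), (rec_s, rec_t), (d_s, d_t) = model(x_s, x_t)
bce = nn.BCEWithLogitsLoss()
adv_loss = bce(d_s, torch.ones_like(d_s)) + bce(d_t, torch.zeros_like(d_t))
rec_loss = (rec_s - x_s.flatten(1)).pow(2).mean() + (rec_t - x_t.flatten(1)).pow(2).mean()
```

The self-training part of the adaptation module (e.g., pseudo-labeling target samples with their appearance features) is omitted here for brevity; the sketch only illustrates how adaptation can be restricted to the purified, id-related appearance space rather than the full entangled representation.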