Paper Title
Adversarial-Learned Loss for Domain Adaptation
Paper Authors
Paper Abstract
Recently, remarkable progress has been made in learning transferable representations across domains. Previous works in domain adaptation are mainly based on two techniques: domain-adversarial learning and self-training. However, domain-adversarial learning only aligns feature distributions between domains and does not consider whether the target features are discriminative. Self-training, on the other hand, utilizes the model's predictions to enhance the discriminability of target features, but it cannot explicitly align the domain distributions. To combine the strengths of these two methods, we propose a novel method called Adversarial-Learned Loss for Domain Adaptation (ALDA). We first analyze the pseudo-label method, a typical self-training technique. However, there is a gap between pseudo-labels and the ground truth, which can lead to incorrect training. We therefore introduce a confusion matrix, learned in an adversarial manner in ALDA, to reduce this gap and align the feature distributions. Finally, a new loss function is automatically constructed from the learned confusion matrix and serves as the loss for unlabeled target samples. Our ALDA outperforms state-of-the-art approaches on four standard domain adaptation datasets. Our code is available at https://github.com/ZJULearning/ALDA.
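The abstract's core idea, using a learned confusion matrix to correct noisy pseudo-labels before computing the target-domain loss, can be illustrated with a toy NumPy sketch. This is not the paper's exact formulation (ALDA learns the matrix adversarially from a discriminator); the function name, shapes, and identity-matrix baseline below are illustrative assumptions only.

```python
import numpy as np

def corrected_pseudo_label_loss(probs, confusion):
    """Cross-entropy of the model's predictions against a soft label
    distribution corrected by a (hypothetically learned) confusion matrix.

    probs:     (K,) softmax output of the classifier on one target sample.
    confusion: (K, K) row-stochastic matrix; row k estimates the true-label
               distribution given that the hard pseudo-label is class k.
    """
    pseudo = int(np.argmax(probs))      # hard pseudo-label from the model
    corrected = confusion[pseudo]       # corrected soft target label
    return float(-np.sum(corrected * np.log(probs + 1e-12)))

# With an identity confusion matrix, this reduces to ordinary
# pseudo-label cross-entropy: -log p(pseudo-label).
probs = np.array([0.7, 0.2, 0.1])
loss = corrected_pseudo_label_loss(probs, np.eye(3))
print(round(loss, 4))
```

A non-identity confusion matrix spreads the target distribution over classes the pseudo-label is likely confused with, which softens the penalty for plausible mistakes instead of trusting the hard pseudo-label outright.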