Paper Title
Online Meta-Learning for Multi-Source and Semi-Supervised Domain Adaptation
Paper Authors
Paper Abstract
Domain adaptation (DA) is the topical problem of adapting models from labelled source datasets so that they perform well on target datasets where only unlabelled or partially labelled data is available. Many methods have been proposed to address this problem through different ways of minimising the domain shift between source and target datasets. In this paper, we take an orthogonal perspective and propose a framework that further enhances performance by meta-learning the initial conditions of existing DA algorithms. This is challenging compared to the more widely considered setting of few-shot meta-learning, due to the length of the computation graph involved. We therefore propose an online shortest-path meta-learning framework that is both computationally tractable and practically effective for improving DA performance. We present variants for both multi-source unsupervised domain adaptation (MSDA) and semi-supervised domain adaptation (SSDA). Importantly, our approach is agnostic to the base adaptation algorithm and can be applied to improve many techniques. Experimentally, we demonstrate improvements on classic (DANN) and recent (MCD and MME) techniques for MSDA and SSDA, ultimately achieving state-of-the-art results on several DA benchmarks, including the largest-scale DomainNet.
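To make the central idea of meta-learning the initial condition of a base DA algorithm concrete, the sketch below uses a simple first-order (Reptile-style) approximation rather than the paper's shortest-path online meta-learner. The names `inner_da_step`, `meta_learn_init`, and `domain_pairs`, and the source cross-entropy plus target entropy-minimisation loss, are illustrative assumptions and not the authors' code; a real run would plug in DANN, MCD, or MME losses as the inner adaptation step.

```python
import copy
import torch
import torch.nn.functional as F


def inner_da_step(model, optimiser, src_batch, tgt_batch):
    """One step of a stand-in base DA algorithm: supervised loss on labelled
    source data plus entropy minimisation on unlabelled target data
    (an assumption standing in for DANN/MCD/MME losses)."""
    xs, ys = src_batch          # labelled source batch
    xt = tgt_batch              # unlabelled target batch
    src_loss = F.cross_entropy(model(xs), ys)
    tgt_probs = F.softmax(model(xt), dim=1)
    tgt_entropy = -(tgt_probs * tgt_probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss = src_loss + 0.1 * tgt_entropy
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()


def meta_learn_init(meta_model, domain_pairs, meta_lr=0.1, inner_lr=0.01,
                    inner_steps=5, rounds=100):
    """Meta-learn an initial condition by repeatedly (i) cloning it,
    (ii) running a few DA steps on one source/target domain pair, and
    (iii) nudging the initialisation towards the adapted weights
    (a first-order approximation of meta-learning the initial condition)."""
    for _ in range(rounds):
        for src_loader, tgt_loader in domain_pairs:
            adapted = copy.deepcopy(meta_model)
            opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
            for step, (src_batch, tgt_batch) in enumerate(zip(src_loader, tgt_loader)):
                if step >= inner_steps:
                    break
                inner_da_step(adapted, opt, src_batch, tgt_batch)
            # Meta-update: move the initial condition towards the adapted weights.
            with torch.no_grad():
                for p_meta, p_adapt in zip(meta_model.parameters(),
                                           adapted.parameters()):
                    p_meta.add_(meta_lr * (p_adapt - p_meta))
    return meta_model
```

In this sketch, `domain_pairs` is assumed to be a list of `(source_loader, target_loader)` pairs (multiple pairs for the MSDA variant); after meta-learning, `meta_model` would be used as the starting point for a full run of the chosen base DA algorithm on the target task.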