Paper Title
Domain-invariant Feature Exploration for Domain Generalization
Paper Authors
Paper Abstract
Deep learning has achieved great success in the past few years. However, its performance is likely to degrade in the face of non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from both internal and mutual sides. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties within a domain that are agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains (cross-domain) and contain common information, i.e., features transferable w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as internally-invariant features and learns cross-domain correlation alignment for mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
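The two feature types named in the abstract can be illustrated concretely. Below is a minimal numpy sketch (not the authors' implementation): Fourier phase extraction for the internally-invariant side, and a CORAL-style loss that aligns feature covariances across domains for the mutually-invariant side. The function names and the 1-D/toy setting are illustrative assumptions.

```python
import numpy as np

def fourier_phase(x):
    """Extract the Fourier phase of a 1-D signal.

    DIFEX treats the high-level Fourier phase as an internally-invariant
    feature; this sketch shows basic phase extraction on raw data.
    """
    spectrum = np.fft.fft(x)
    return np.angle(spectrum)

def coral_loss(feat_src, feat_tgt):
    """Correlation alignment (CORAL): squared Frobenius distance between
    the feature covariance matrices of two domains.

    feat_src, feat_tgt: (n_samples, d) feature matrices, one per domain.
    """
    d = feat_src.shape[1]
    cov_src = np.cov(feat_src, rowvar=False)
    cov_tgt = np.cov(feat_tgt, rowvar=False)
    return np.sum((cov_src - cov_tgt) ** 2) / (4 * d * d)
```

In training, the CORAL term would be minimized jointly with the task loss so that features from different source domains share second-order statistics, while the distilled phase features preserve within-domain semantics.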