Paper Title
Continual Coarse-to-Fine Domain Adaptation in Semantic Segmentation
Paper Authors
Paper Abstract
Deep neural networks are typically trained in a single shot for a specific task and data distribution, but in real-world settings both the task and the domain of application can change. The problem becomes even more challenging in dense prediction tasks, such as semantic segmentation, and furthermore most approaches tackle the two problems separately. In this paper we introduce the novel task of coarse-to-fine learning of semantic segmentation architectures in the presence of domain shift. We consider subsequent learning stages that progressively refine the task at the semantic level; i.e., the finer set of semantic labels at each learning step is hierarchically derived from the coarser set of the previous step. We propose a new approach (CCDA) to tackle this scenario. First, we employ the maximum squares loss to align source and target domains and, at the same time, to balance the gradients between well-classified and harder samples. Second, we introduce a novel coarse-to-fine knowledge distillation constraint to transfer network capabilities acquired on a coarser set of labels to a finer set of labels. Finally, we design a coarse-to-fine weight initialization rule to spread the importance from each coarse class to its respective finer classes. To evaluate our approach, we design two benchmarks where source knowledge is extracted from the GTA5 dataset and transferred to either the Cityscapes or the IDD dataset, and we show that our approach outperforms the main competitors.
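The abstract names three concrete mechanisms: the maximum squares loss, a coarse-to-fine knowledge distillation constraint, and a coarse-to-fine weight initialization rule. Below is a minimal PyTorch sketch of all three, assuming a per-pixel softmax classifier and a known fine-to-coarse label mapping; the function names, the `coarse_of_fine` mapping, and the cross-entropy form of the distillation term are illustrative assumptions, not the paper's exact formulation. The maximum squares loss itself follows the published definition of Chen et al. (ICCV 2019).

```python
# Illustrative sketch only: shapes, helper names, and the exact form of the
# distillation term are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def maximum_squares_loss(logits):
    """Maximum squares loss on target-domain logits of shape (B, C, H, W).

    Compared with entropy minimization, squared probabilities produce
    gradients that grow more gently with confidence, balancing
    well-classified pixels against harder ones.
    """
    p = F.softmax(logits, dim=1)
    return -(p ** 2).sum(dim=1).mean() / 2

def coarse_to_fine_distillation(fine_logits, coarse_teacher_probs, coarse_of_fine):
    """Hypothetical coarse-to-fine distillation constraint.

    Fine-class probabilities are aggregated into their parent coarse
    classes and matched (here via cross-entropy) against the frozen
    coarse-level teacher's soft predictions.
    fine_logits: (B, C_fine, H, W); coarse_teacher_probs: (B, C_coarse, H, W)
    coarse_of_fine: list mapping each fine index to its parent coarse index.
    """
    p_fine = F.softmax(fine_logits, dim=1)
    b, _, h, w = p_fine.shape
    p_agg = p_fine.new_zeros(b, coarse_teacher_probs.shape[1], h, w)
    for fine_idx, coarse_idx in enumerate(coarse_of_fine):
        p_agg[:, coarse_idx] += p_fine[:, fine_idx]
    return -(coarse_teacher_probs * torch.log(p_agg + 1e-8)).sum(dim=1).mean()

def coarse_to_fine_init(fine_classifier, coarse_classifier, coarse_of_fine):
    """Hypothetical coarse-to-fine weight initialization: each fine class's
    1x1-conv classifier weights start from those of its parent coarse class."""
    with torch.no_grad():
        for fine_idx, coarse_idx in enumerate(coarse_of_fine):
            fine_classifier.weight[fine_idx].copy_(coarse_classifier.weight[coarse_idx])
            fine_classifier.bias[fine_idx].copy_(coarse_classifier.bias[coarse_idx])
```

In this reading, the distillation term keeps the fine-level student consistent with the coarse-level teacher wherever fine probabilities are summed over a coarse class, while the initialization rule gives every new fine class a starting point inherited from the coarse class it refines.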