Paper title
SceneAdapt: Scene-based domain adaptation for semantic segmentation using adversarial learning
Paper authors
Paper abstract
Semantic segmentation methods have achieved outstanding performance thanks to deep learning. Nevertheless, when such algorithms are deployed to new contexts not seen during training, it is necessary to collect and label scene-specific data in order to adapt them to the new domain using fine-tuning. This process is required whenever an already installed camera is moved or a new camera is introduced in a camera network, due to the different scene layouts induced by the different viewpoints. To limit the amount of additional training data to be collected, it would be ideal to train a semantic segmentation method using the labeled data already available and only unlabeled data coming from the new camera. We formalize this problem as a domain adaptation task and introduce a novel dataset of urban scenes with the related semantic labels. As a first approach to address this challenging task, we propose SceneAdapt, a method for scene adaptation of semantic segmentation algorithms based on adversarial learning. Experiments and comparisons with state-of-the-art approaches to domain adaptation highlight that promising performance can be achieved using adversarial learning both when the two scenes have different but related points of view, and when they comprise images of completely different scenes. To encourage research on this topic, we made our code available at our web page: https://iplab.dmi.unict.it/ParkSmartSceneAdaptation/.
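The abstract describes adversarial learning for adapting a model trained on a labeled source scene to an unlabeled target scene. SceneAdapt's actual architecture is not given in this listing, so the following is only a minimal numpy sketch of the general idea: a domain discriminator is trained to tell source features from target features, while an adaptation layer on the target branch is updated adversarially to fool it, pulling the two feature distributions together without any target labels. All names (`Xs`, `Xt`, the affine adaptation map `A`, `b`) are hypothetical stand-ins for CNN feature maps and layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for CNN features: labeled source scene vs. unlabeled
# target scene, deliberately offset to simulate a domain shift.
Xs = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # source features
Xt = rng.normal(loc=3.0, scale=1.0, size=(200, 2))  # target features

# Hypothetical adaptation layer applied to target features: z = A x + b.
A = np.eye(2)
b = np.zeros(2)

# Domain discriminator: logistic regression, P(domain = target).
w = np.zeros(2)
c = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

lr_d, lr_f = 0.1, 0.01
for step in range(500):
    Zt = Xt @ A.T + b                       # adapted target features
    X = np.vstack([Xs, Zt])
    y = np.concatenate([np.zeros(len(Xs)), np.ones(len(Zt))])

    # 1) Discriminator step: one gradient step on binary cross-entropy.
    p = sigmoid(X @ w + c)
    w -= lr_d * ((p - y) @ X) / len(X)
    c -= lr_d * np.mean(p - y)

    # 2) Adversarial step: update the adaptation layer so the
    # discriminator labels adapted target features as "source"
    # (the label-flip trick; gradient reversal is used similarly).
    pt = sigmoid(Zt @ w + c)
    grad_Zt = pt[:, None] * w[None, :] / len(Zt)  # d(-log(1-pt))/dZt
    A -= lr_f * (grad_Zt.T @ Xt)
    b -= lr_f * grad_Zt.sum(axis=0)

# The domain gap (distance between feature means) should shrink.
gap_before = np.linalg.norm(Xs.mean(0) - Xt.mean(0))
gap_after = np.linalg.norm(Xs.mean(0) - (Xt @ A.T + b).mean(0))
print(gap_before, gap_after)
```

In the paper's setting the features would come from a segmentation network rather than a Gaussian, and the segmentation loss on labeled source data would be minimized jointly with this adversarial term; the alternating two-step update above is the common training scheme for such methods.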