Paper Title
Synthetic-to-Real Domain Adaptation for Lane Detection
Paper Authors
Paper Abstract
Accurate lane detection, a crucial enabler for autonomous driving, currently relies on obtaining a large and diverse labeled training dataset. In this work, we instead explore learning from abundant, randomly generated synthetic data together with unlabeled or partially labeled target-domain data. Randomly generated synthetic data has the advantage of controlled variability in lane geometry and lighting, but it is limited in terms of photo-realism. This poses the challenge of adapting models learned on the unrealistic synthetic domain to real images. To this end, we develop a novel autoencoder-based approach that uses synthetic labels unaligned with particular images for adapting to target-domain data. In addition, we explore existing domain adaptation approaches, such as image translation and self-supervision, and adjust them to the lane detection task. We test all approaches in the unsupervised domain adaptation setting, in which no target-domain labels are available, and in the semi-supervised setting, in which a small portion of the target images is labeled. In extensive experiments using three different datasets, we demonstrate the possibility of saving costly target-domain labeling effort. For example, using our proposed autoencoder approach on the LLAMAS and TuSimple lane datasets, we can almost recover the fully supervised accuracy with only 10% of the labeled data. In addition, our autoencoder approach outperforms all other methods in the semi-supervised domain adaptation scenario.
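
To make the adaptation setting more concrete, the following is a minimal, hypothetical sketch (in PyTorch) of one way to combine a supervised lane objective on labeled synthetic images with an unsupervised autoencoder-style reconstruction objective on unlabeled target-domain images, sharing a single encoder. All module sizes, loss weights, and the binary lane-mask format are illustrative assumptions; the sketch does not reproduce the paper's exact architecture or its handling of synthetic labels that are unaligned with particular images.

# Illustrative sketch only: shared encoder with (i) a reconstruction decoder
# trained on unlabeled target-domain images and (ii) a lane-segmentation head
# trained on labeled synthetic images. Shapes and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LaneAutoencoder(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Shared convolutional encoder used by both domains.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructing the input image (autoencoder branch).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )
        # Lane head predicting a per-pixel lane probability map (as logits).
        self.lane_head = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.lane_head(z)


def adaptation_step(model, optimizer, synth_img, synth_mask, target_img, recon_weight=1.0):
    """One training step: supervised lane loss on synthetic data plus an
    unsupervised reconstruction loss on unlabeled target-domain images."""
    optimizer.zero_grad()
    _, synth_lane = model(synth_img)
    lane_loss = F.binary_cross_entropy_with_logits(synth_lane, synth_mask)
    target_recon, _ = model(target_img)
    recon_loss = F.mse_loss(target_recon, target_img)
    loss = lane_loss + recon_weight * recon_loss
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = LaneAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    synth_img = torch.rand(2, 3, 64, 64)                       # synthetic images
    synth_mask = torch.randint(0, 2, (2, 1, 64, 64)).float()   # synthetic lane masks
    target_img = torch.rand(2, 3, 64, 64)                      # unlabeled real images
    print(adaptation_step(model, opt, synth_img, synth_mask, target_img))

In the semi-supervised variant described in the abstract, the small labeled portion of target images would simply contribute to the supervised lane loss in the same way as the synthetic data; this extension is left out of the sketch for brevity.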