Paper Title
Decoder Denoising Pretraining for Semantic Segmentation
Paper Authors
Paper Abstract
Semantic segmentation labels are expensive and time consuming to acquire. Hence, pretraining is commonly used to improve the label-efficiency of segmentation models. Typically, the encoder of a segmentation model is pretrained as a classifier and the decoder is randomly initialized. Here, we argue that random initialization of the decoder can be suboptimal, especially when few labeled examples are available. We propose a decoder pretraining approach based on denoising, which can be combined with supervised pretraining of the encoder. We find that decoder denoising pretraining on the ImageNet dataset strongly outperforms encoder-only supervised pretraining. Despite its simplicity, decoder denoising pretraining achieves state-of-the-art results on label-efficient semantic segmentation and offers considerable gains on the Cityscapes, Pascal Context, and ADE20K datasets.
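The core idea can be illustrated with a minimal sketch of a denoising pretraining objective: corrupt an image with scaled Gaussian noise and train the decoder to predict the added noise. This is a hypothetical illustration, not the paper's implementation; the scaling scheme and loss shown here are assumptions, and a real setup would pass the noisy image through the full encoder-decoder model.

```python
import numpy as np

def add_noise(x, gamma, rng):
    """Corrupt an image with scaled additive Gaussian noise:
    x_noisy = sqrt(gamma) * x + sqrt(1 - gamma) * eps.
    gamma in (0, 1) controls how much signal is preserved."""
    eps = rng.standard_normal(x.shape)
    x_noisy = np.sqrt(gamma) * x + np.sqrt(1.0 - gamma) * eps
    return x_noisy, eps

def denoising_loss(pred_eps, eps):
    """L2 objective: the model is trained to predict the noise eps."""
    return float(np.mean((pred_eps - eps) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 8, 8))  # toy batch of images (N, C, H, W)
x_noisy, eps = add_noise(x, gamma=0.8, rng=rng)

# A real pipeline would compute pred_eps = decoder(encoder(x_noisy));
# a zero predictor stands in here just to show the loss computation.
loss = denoising_loss(np.zeros_like(eps), eps)
```

After this pretraining stage, the decoder weights (together with the supervised-pretrained encoder) would be used to initialize the segmentation model before fine-tuning on labeled masks.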