Paper Title

Self-supervised Vision Transformers for Joint SAR-optical Representation Learning

Authors

Yi Wang, Conrad M. Albrecht, Xiao Xiang Zhu

Abstract


Self-supervised learning (SSL) has attracted much interest in remote sensing and earth observation due to its ability to learn task-agnostic representations without human annotation. While most of the existing SSL works in remote sensing utilize ConvNet backbones and focus on a single modality, we explore the potential of vision transformers (ViTs) for joint SAR-optical representation learning. Based on DINO, a state-of-the-art SSL algorithm that distills knowledge from two augmented views of an input image, we combine SAR and optical imagery by concatenating all channels to a unified input. Subsequently, we randomly mask out channels of one modality as a data augmentation strategy. During training, the model gets fed optical-only, SAR-only, and SAR-optical image pairs, learning both inner- and intra-modality representations. Experimental results employing the BigEarthNet-MM dataset demonstrate the benefits of both the ViT backbones and the proposed multimodal SSL algorithm DINO-MM.
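The channel-concatenation and modality-masking augmentation described in the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch, not the paper's implementation: the channel counts (2 SAR channels, 12 optical bands), the function name `mask_modality`, and the uniform 1/3 probabilities are assumptions for demonstration.

```python
import numpy as np

def mask_modality(x, n_sar=2, rng=None):
    """Randomly zero out either the SAR or the optical channels of a
    channel-concatenated SAR-optical patch x of shape (C, H, W).

    With probability 1/3 each: keep both modalities, drop SAR, or drop
    optical, so the model sees joint, SAR-only, and optical-only views.
    (The exact probabilities in the paper may differ; this is a sketch.)
    """
    rng = rng or np.random.default_rng()
    x = x.copy()
    choice = rng.integers(3)
    if choice == 1:          # SAR-only view: zero the optical channels
        x[n_sar:] = 0.0
    elif choice == 2:        # optical-only view: zero the SAR channels
        x[:n_sar] = 0.0
    return x                 # choice == 0: joint SAR-optical view

# Example: assumed 2 SAR channels + 12 optical bands, 120x120 patch,
# concatenated along the channel axis into one unified input.
patch = np.concatenate([np.ones((2, 120, 120)),       # SAR part
                        2 * np.ones((12, 120, 120))], # optical part
                       axis=0)
view = mask_modality(patch, n_sar=2)
```

The masked tensor keeps its full channel count, so a single ViT backbone can process all three kinds of views with one patch-embedding layer.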
