Paper Title
Unsupervised Contrastive Learning of Image Representations from Ultrasound Videos with Hard Negative Mining
Paper Authors
Paper Abstract
Rich temporal information and variations in viewpoint make video data an attractive choice for learning image representations using unsupervised contrastive learning (UCL) techniques. State-of-the-art (SOTA) contrastive learning techniques consider frames within a video as positives in the embedding space, whereas frames from other videos are considered negatives. We observe that, unlike the multiple views of an object in natural-scene videos, an ultrasound (US) video captures different 2D slices of an organ. Hence, there is almost no similarity between temporally distant frames, even within the same US video. In this paper, we instead propose to utilize such frames as hard negatives. We advocate mining both intra-video and cross-video negatives in a hardness-sensitive negative mining curriculum within a UCL framework to learn rich image representations. We deploy our framework to learn representations of gallbladder (GB) malignancy from US videos. We also construct the first large-scale US video dataset, containing 64 videos and 15,800 frames, for learning GB representations. We show that a standard ResNet50 backbone trained with our framework improves accuracy on the GB malignancy detection task by 2-6% over models pretrained with SOTA UCL techniques as well as models with supervised ImageNet pretraining. We further validate the generalizability of our method on a publicly available lung US image dataset of COVID-19 pathologies, showing an improvement of 1.5% over SOTA. Source code, dataset, and models are available at https://gbc-iitd.github.io/usucl.
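To make the negative-mining idea concrete, the sketch below shows an InfoNCE-style contrastive objective in which temporally distant frames of the same US video are appended to the usual cross-video negative set. This is an illustrative sketch only, not the authors' implementation: the function name, tensor shapes, and fixed temperature are assumptions, and the hardness-sensitive curriculum (which would schedule or reweight negatives by difficulty) is not modeled here.

    import torch
    import torch.nn.functional as F

    def info_nce_with_intra_video_negatives(anchor, positive,
                                            cross_video_negs, intra_video_negs,
                                            temperature=0.1):
        # anchor, positive:   (B, D) embeddings of two temporally close frames
        # cross_video_negs:   (B, K, D) embeddings of frames from other videos
        # intra_video_negs:   (B, M, D) embeddings of temporally distant frames
        #                     of the SAME video, used as hard negatives
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        cross_video_negs = F.normalize(cross_video_negs, dim=-1)
        intra_video_negs = F.normalize(intra_video_negs, dim=-1)

        # Similarity to the single positive per anchor: (B, 1)
        pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)

        # Similarities to all negatives, cross-video and intra-video: (B, K+M)
        negs = torch.cat([cross_video_negs, intra_video_negs], dim=1)
        neg_sim = torch.einsum('bd,bnd->bn', anchor, negs)

        # Standard InfoNCE: the positive occupies index 0 of the logits
        logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
        labels = torch.zeros(anchor.size(0), dtype=torch.long,
                             device=anchor.device)
        return F.cross_entropy(logits, labels)

In the actual method, which negatives count as intra-video versus cross-video, and how hard they are allowed to be at each stage of training, would be driven by frame sampling from the US videos and by the curriculum; in this sketch the negatives are simply passed in as precomputed embeddings.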