Paper Title
Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation
Paper Authors
Paper Abstract
The success of deep convolutional neural networks is partially attributed to the massive amount of annotated training data. However, in practice, medical data annotations are usually expensive and time-consuming to obtain. Considering that multi-modality data with the same anatomic structures are widely available in clinical routine, in this paper we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (aka the assistant modality) to improve the segmentation performance on another modality (aka the target modality) and thereby compensate for annotation scarcity. To alleviate the learning difficulties caused by modality-specific appearance discrepancy, we first present an Image Alignment Module (IAM) to narrow the appearance gap between assistant- and target-modality data. We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge and facilitate target-modality segmentation. Specifically, we formulate our framework as an integration of two individual segmentors. Each segmentor not only explicitly extracts one modality's knowledge from the corresponding annotations, but also implicitly explores the other modality's knowledge from its counterpart in a mutual-guided manner. The ensemble of the two segmentors further integrates the knowledge from both modalities and generates reliable segmentation results on the target modality. Experimental results on the public multi-class cardiac segmentation dataset, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods.
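To make the mutual-guidance idea concrete, below is a minimal PyTorch sketch of what a mutual distillation objective between two segmentors could look like: each segmentor is supervised by its own modality's annotations and additionally distills the softened prediction of its counterpart via a KL term, with an averaged ensemble at inference. This is an illustrative implementation of generic KL-based mutual distillation, not the paper's exact formulation; `temperature` and `alpha` are hypothetical hyper-parameters.

```python
import torch
import torch.nn.functional as F

def mkd_losses(logits_a, logits_t, labels_a, labels_t, temperature=2.0, alpha=0.5):
    """Sketch of a mutual knowledge distillation objective for two segmentors.

    logits_a / logits_t: (N, C, H, W) outputs of the assistant- and
    target-modality segmentors on appearance-aligned inputs.
    labels_a / labels_t: (N, H, W) ground-truth class maps.
    `temperature` and `alpha` are assumed hyper-parameters (not from the paper).
    """
    # Explicit supervision: each segmentor learns from its own annotations.
    sup_a = F.cross_entropy(logits_a, labels_a)
    sup_t = F.cross_entropy(logits_t, labels_t)

    # Implicit mutual guidance: each segmentor matches the softened
    # prediction of its counterpart (counterpart detached so gradients
    # only update the student side of each term).
    T = temperature
    kd_a = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                    F.softmax(logits_t.detach() / T, dim=1),
                    reduction="batchmean") * T * T
    kd_t = F.kl_div(F.log_softmax(logits_t / T, dim=1),
                    F.softmax(logits_a.detach() / T, dim=1),
                    reduction="batchmean") * T * T

    loss_a = sup_a + alpha * kd_a
    loss_t = sup_t + alpha * kd_t
    return loss_a, loss_t

def ensemble_prediction(logits_a, logits_t):
    # At inference on the target modality, average the two segmentors'
    # softmax maps and take the per-pixel argmax.
    probs = (F.softmax(logits_a, dim=1) + F.softmax(logits_t, dim=1)) / 2
    return probs.argmax(dim=1)
```

In such a scheme, the explicit cross-entropy terms keep each segmentor anchored to its own modality's labels, while the symmetric KL terms let shape and structure knowledge flow between the two modalities during training.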