Paper Title


Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation via Semi-supervised Learning and Label Fusion

Authors

Han Liu, Yubo Fan, Can Cui, Dingjie Su, Andrew McNeil, Benoit M. Dawant

Abstract


Automatic methods to segment the vestibular schwannoma (VS) tumor and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning. Although supervised methods have achieved satisfactory performance in VS segmentation, they require full annotations by experts, which are laborious and time-consuming. In this work, we aim to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting. Our proposed method leverages both image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost performance. Furthermore, we propose to fuse the labels predicted by multiple models via noisy label correction. On the final evaluation leaderboard of the MICCAI 2021 crossMoDA challenge, our proposed method achieved promising segmentation performance, with mean Dice scores of 79.9% and 82.5% and ASSDs of 1.29 mm and 0.18 mm for the VS tumor and the cochlea, respectively. The cochlea ASSD achieved by our method outperformed all other competing methods as well as the supervised nnU-Net.
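The abstract's label-fusion step combines segmentations predicted by multiple models. The paper's actual approach additionally applies noisy label correction, which is not reproduced here; the sketch below shows only the simpler underlying idea of per-voxel fusion of several models' hard label maps by majority vote, with the function name and class encoding (0 = background, 1 = VS, 2 = cochlea) being illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fuse_labels(predictions):
    """Fuse hard label maps from multiple models by per-voxel majority vote.

    predictions: list of integer label arrays of identical shape, one per
                 model. Here 0 = background, 1 = VS, 2 = cochlea (assumed
                 encoding for illustration only).
    Returns the fused label map; ties resolve to the lowest class index.
    """
    stacked = np.stack(predictions, axis=0)  # shape: (n_models, *volume_shape)
    n_classes = int(stacked.max()) + 1
    # Count votes for each class at every voxel, then pick the winner.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0).astype(stacked.dtype)

# Toy 2x2 "volume": three models disagree on two voxels.
p1 = np.array([[0, 1], [2, 2]])
p2 = np.array([[0, 1], [2, 0]])
p3 = np.array([[0, 0], [2, 2]])
fused = fuse_labels([p1, p2, p3])  # majority wins at each voxel
```

In the toy example the fused map is `[[0, 1], [2, 2]]`: at each voxel the class predicted by at least two of the three models is kept.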
