Paper Title

Knowledge Distillation for Brain Tumor Segmentation

Paper Authors

Dmitrii Lachinov, Elena Shipunova, Vadim Turlapov

Paper Abstract

The segmentation of brain tumors in multimodal MRIs is one of the most challenging tasks in medical image analysis. The recent state-of-the-art algorithms solving this task are based on machine learning approaches and deep learning in particular. The amount of data used for training such models and its variability is a keystone for building an algorithm with high representation power. In this paper, we study the relationship between the performance of the model and the amount of data employed during the training process. Using the brain tumor segmentation challenge as an example, we compare the model trained with the labeled data provided by the challenge organizers against the same model trained in an omni-supervised manner using additional unlabeled data annotated with an ensemble of heterogeneous models. As a result, a single model trained with additional data achieves performance close to the ensemble of multiple models and outperforms individual methods.
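
The omni-supervised setup described in the abstract amounts to ensemble pseudo-labeling followed by ordinary supervised training of a single student network. The sketch below is only an illustration of that general idea, assuming PyTorch and hypothetical objects (a list of teacher models, a student segmentation network, and batches of MRI volumes); it is not the authors' exact training pipeline.

```python
# Minimal sketch of omni-supervised training via ensemble pseudo-labeling.
# Assumptions (not from the paper): `teachers` is a list of trained segmentation
# models, `student` is the network being trained, and each batch is a pair
# (volume, target) where target is None for unlabeled volumes.
import torch
import torch.nn.functional as F


def pseudo_label(teachers, volume):
    """Average the softmax predictions of heterogeneous teacher models."""
    with torch.no_grad():
        probs = [F.softmax(t(volume), dim=1) for t in teachers]
    return torch.stack(probs).mean(dim=0)  # soft ensemble label, shape [N, C, ...]


def train_step(student, optimizer, batch, teachers=None):
    """One optimization step on either a labeled or a pseudo-labeled batch."""
    volume, target = batch
    logits = student(volume)
    if target is not None:
        # Supervised loss on annotated challenge data (class-index targets).
        loss = F.cross_entropy(logits, target)
    else:
        # Distillation loss against the ensemble's soft pseudo-labels.
        soft = pseudo_label(teachers, volume)
        loss = F.kl_div(F.log_softmax(logits, dim=1), soft, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the same student update rule is used for both data sources; only the target changes, which is what lets the additional unlabeled volumes expand the effective training set.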
