Paper Title

An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization

Authors

Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Kangning Liu, Sudarshini Tyagi, Laura Heacock, S. Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras

Abstract

Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we extend the globally-aware multiple instance classifier, a framework we proposed to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a final prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, consisting of more than one million images, our model achieves an AUC of 0.93 in classifying breasts with malignant findings, outperforming ResNet-34 and Faster R-CNN. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11. The proposed model is available online: https://github.com/nyukat/GMIC.
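
To make the three-stage design concrete, below is a minimal PyTorch sketch of the global-local pipeline the abstract describes: a low-capacity global network scans the full-resolution image to produce a coarse saliency map, the most salient locations are cropped into patches for a higher-capacity local network, and a fusion layer combines both streams into the final prediction. All module architectures, channel widths, the patch count, and the top-k patch-selection heuristic here are illustrative assumptions, not the authors' implementation; the official code is available at https://github.com/nyukat/GMIC.

```python
# Illustrative sketch of a global-local classifier with weakly supervised
# localization. Module sizes and the patch-selection heuristic are
# assumptions for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, num_patches: int = 6, patch_size: int = 256):
        super().__init__()
        self.num_patches = num_patches
        self.patch_size = patch_size
        # Stage 1: low-capacity, memory-efficient network applied to the whole
        # high-resolution image; its single-channel output serves as a coarse
        # saliency map over candidate findings.
        self.global_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )
        # Stage 2: higher-capacity network that inspects only the selected
        # patches in detail.
        self.local_net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Stage 3: fusion of pooled global saliency and aggregated local
        # features into the image-level prediction.
        self.fusion = nn.Linear(128 + 1, num_classes)

    def select_patches(self, image: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        """Crop the most salient locations from the full image (a simple
        top-k stand-in for the paper's retrieval procedure). Assumes the
        image is at least patch_size in both spatial dimensions."""
        b, _, h, w = image.shape
        up = F.interpolate(saliency, size=(h, w), mode="bilinear", align_corners=False)
        patches = []
        for i in range(b):
            flat = up[i, 0].flatten()
            # Take the top-k salient pixels as patch centers.
            for j in flat.topk(self.num_patches).indices:
                j = int(j)
                cy, cx = j // w, j % w
                y0 = min(max(cy - self.patch_size // 2, 0), h - self.patch_size)
                x0 = min(max(cx - self.patch_size // 2, 0), w - self.patch_size)
                patches.append(image[i:i + 1, :, y0:y0 + self.patch_size,
                                     x0:x0 + self.patch_size])
        return torch.cat(patches)  # (b * num_patches, 1, patch_size, patch_size)

    def forward(self, image: torch.Tensor):
        saliency = self.global_net(image)               # coarse saliency map
        patches = self.select_patches(image, saliency)  # informative crops
        local = self.local_net(patches)                 # per-patch features
        local = local.view(image.size(0), self.num_patches, -1).mean(dim=1)
        global_score = saliency.mean(dim=(1, 2, 3)).unsqueeze(1)
        logits = self.fusion(torch.cat([local, global_score], dim=1))
        # The saliency map doubles as a pixel-level localization of findings,
        # even though training uses only image-level labels.
        return logits, saliency


if __name__ == "__main__":
    model = GlobalLocalClassifier()
    x = torch.randn(1, 1, 2944, 1920)  # a full-resolution screening image
    logits, saliency = model(x)
    print(logits.shape, saliency.shape)
```

Because only the small global network ever sees the full-resolution image, and the expensive network runs on a handful of fixed-size crops, this design is what lets the model cut inference time and GPU memory relative to running a ResNet-34 over the entire image.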
