Title
HookNet: multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images
Authors
Abstract
We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view are used to feed different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for achieving high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions as well as with a recently published multi-resolution model for histopathology image segmentation.
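The hooking mechanism described above combines intermediate representations from the context branch and the target (high-resolution) branch by spatially aligning their feature maps before concatenation. The following is a minimal NumPy sketch of this idea, not the authors' implementation: the function names (`center_crop`, `hook`) and the assumption that the two feature maps already share the same pixel spacing at the hooking depth are illustrative.

```python
import numpy as np

def center_crop(fmap, target_hw):
    """Center-crop a (C, H, W) feature map to the target spatial size.

    For exact pixel-wise alignment the margins (h - th) and (w - tw)
    should be even, mirroring the alignment constraint in the paper.
    """
    _, h, w = fmap.shape
    th, tw = target_hw
    top = (h - th) // 2
    left = (w - tw) // 2
    return fmap[:, top:top + th, left:left + tw]

def hook(context_fmap, target_fmap):
    """Crop the context-branch feature map so it overlays the concentric
    target-branch feature map, then concatenate along the channel axis."""
    cropped = center_crop(context_fmap, target_fmap.shape[1:])
    return np.concatenate([cropped, target_fmap], axis=0)

# Toy example: a 32-channel 64x64 context map hooked into a 16x16 target map.
context = np.zeros((32, 64, 64))
target = np.zeros((32, 16, 16))
combined = hook(context, target)
print(combined.shape)  # (64, 16, 16)
```

In the actual model the hook connects a decoder level of the context branch to the target branch, so that the concatenated channels carry both wide-field context and fine detail for the same physical region.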