Paper Title
Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
Paper Authors
Paper Abstract
Structural magnetic resonance imaging (MRI) has been widely utilized for the analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to the low tissue contrast in tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework based on a convolutional 3D-to-2D MR patch conversion model that predicts the class label of the central pixel in each input sliding patch. Precisely, we first extract a 3D patch from each modality and calibrate its slices through a squeeze-and-excitation (SE) block. The output of the SE block is then fed directly into a subsequent bottleneck layer to reduce the number of channels. Finally, the calibrated 2D slices are concatenated, and multimodal features are obtained through a 2D convolutional neural network (CNN) to predict the central pixel. In our architecture, both local inter-slice and global intra-slice features within a given patch are jointly exploited by the 2D CNN classifier to predict the class label of the central voxel. Through trainable parameters, we implicitly weight the contribution of each modality to the segmentation. Experimental results on brain tumor segmentation in multimodal MRI scans (BraTS'19) demonstrate that our proposed method can efficiently segment the tumor regions.
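The 3D-to-2D conversion described above (per-modality SE recalibration of slices, a bottleneck that collapses the slice axis, then channel-wise concatenation of the resulting 2D maps) can be sketched in NumPy. This is a minimal illustration under assumed shapes, not the paper's implementation: the patch size (4 slices of 17×17), the reduction ratio `r = 2`, and the random weights stand in for trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(patch, w1, w2):
    """Squeeze-and-excitation over the slice (depth) axis of a 3D patch.

    patch: (D, H, W); w1: (D//r, D) and w2: (D, D//r) are hypothetical
    excitation weights (random here, trained in the actual model).
    """
    z = patch.mean(axis=(1, 2))               # squeeze: one scalar per slice
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0)) # excitation: per-slice gates in (0, 1)
    return patch * s[:, None, None]           # recalibrate each slice

def bottleneck(patch, w):
    """1x1 bottleneck: weighted sum over the slice axis, (D, H, W) -> (H, W)."""
    return np.tensordot(w, patch, axes=(0, 0))

def patch_3d_to_2d(patch, rng, r=2):
    D = patch.shape[0]
    w1 = rng.standard_normal((D // r, D)) * 0.1
    w2 = rng.standard_normal((D, D // r)) * 0.1
    wmix = rng.standard_normal(D) * 0.1
    return bottleneck(se_block(patch, w1, w2), wmix)

rng = np.random.default_rng(0)
# Four MRI modalities (e.g., T1, T1ce, T2, FLAIR), each a 3D patch
# of 4 slices centered on the voxel to classify.
patches = [rng.standard_normal((4, 17, 17)) for _ in range(4)]
# Each modality collapses to one calibrated 2D slice; stacking them
# yields the multi-channel 2D input for the CNN classifier.
stacked = np.stack([patch_3d_to_2d(p, rng) for p in patches])
print(stacked.shape)  # (4, 17, 17): 4-channel 2D input
```

The per-slice gates let the network suppress uninformative slices before the bottleneck mixes them, which is how the inter-slice context is folded into a 2D input.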