Paper Title
Comparing Normalization Methods for Limited Batch Size Segmentation Neural Networks
Paper Authors
Paper Abstract
The widespread use of Batch Normalization has enabled training deeper neural networks with more stable and faster results. However, Batch Normalization works best with large batch sizes during training, and because state-of-the-art segmentation convolutional neural network architectures are very memory demanding, large batch sizes are often impossible to achieve on current hardware. We evaluate the alternative normalization methods proposed to solve this issue on a problem of binary spine segmentation from 3D CT scans. Our results show the effectiveness of Instance Normalization in the limited batch size neural network training environment. Out of all the compared methods, Instance Normalization achieved the highest result, with a Dice coefficient of 0.96, which is comparable to our previous results achieved with a deeper network and a longer training time. We also show that the Instance Normalization implementation used in this experiment is computationally time efficient compared to the network without any normalization method.
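The key difference between the two normalization methods discussed above is which axes the statistics are computed over: Batch Normalization averages per channel across the whole batch (so its estimates degrade when the batch is small), while Instance Normalization averages per sample and per channel over spatial dimensions only, making it independent of batch size. The following is a minimal NumPy sketch of this distinction for 3D feature maps, together with the Dice coefficient used as the evaluation metric; it is an illustrative sketch, not the paper's actual implementation (function names, the `eps` parameter, and the `(N, C, D, H, W)` layout are assumptions here):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # x: (N, C, D, H, W) batch of 3D feature maps (layout assumed).
    # Batch Normalization: statistics per channel, computed across
    # the batch and all spatial dims -- noisy when N is small.
    mean = x.mean(axis=(0, 2, 3, 4), keepdims=True)
    var = x.var(axis=(0, 2, 3, 4), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Instance Normalization: statistics per sample AND per channel,
    # over spatial dims only -- independent of the batch size.
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def dice(pred, target, eps=1e-7):
    # Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Note that with a batch size of 1 the two normalizations compute identical statistics; the methods only diverge, and Batch Normalization only becomes unreliable, as soon as per-channel statistics are pooled over few samples.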