Paper Title
Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond
Paper Authors
Paper Abstract
Quantized Neural Networks (QNNs) use low bit-width fixed-point numbers to represent weight parameters and activations, and are widely used in real-world applications because they save computation resources and produce reproducible results. Batch Normalization (BN) poses a challenge for QNNs because its reciprocal operation requires floating-point arithmetic, so previous QNNs either compute BN at high precision or replace it with heuristic variants. In this work, we propose a novel method to quantize BN by converting its affine transformation of two floating-point parameters into a fixed-point operation with a shared quantized scale, which is friendly to hardware acceleration and model deployment. We confirm that our method maintains the same outputs through rigorous theoretical and numerical analysis. The accuracy and efficiency of our quantization method are verified by layer-level experiments on the CIFAR and ImageNet datasets. We also believe that our method is potentially useful in other problems involving quantization.
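The core idea in the abstract can be illustrated with a minimal sketch: fold inference-time BN into a per-channel affine map y = a*x + b, then quantize the two parameters a and b with one shared power-of-two scale so the whole operation runs in fixed-point arithmetic. The function names (`fold_bn_to_affine`, `quantize_affine_shared_scale`), the 8-bit width, and the power-of-two scale below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fold_bn_to_affine(gamma, beta, running_mean, running_var, eps=1e-5):
    """Fold inference-time BatchNorm into a per-channel affine map y = a*x + b."""
    a = gamma / np.sqrt(running_var + eps)
    b = beta - a * running_mean
    return a, b

def quantize_affine_shared_scale(a, b, bits=8):
    """Illustrative sketch: quantize both affine parameters to fixed-point
    integers that share one power-of-two scale, so on integer hardware
    y ~= (a_q * x + b_q) / 2**shift."""
    max_abs = max(np.abs(a).max(), np.abs(b).max())
    # largest shift such that the scaled values still fit in `bits`-bit signed ints
    shift = int(np.floor(np.log2((2 ** (bits - 1) - 1) / max_abs)))
    scale = 2.0 ** shift
    a_q = np.round(a * scale).astype(np.int32)
    b_q = np.round(b * scale).astype(np.int32)
    return a_q, b_q, shift

# Usage: compare the float affine map with its shared-scale fixed-point version
# (in a real deployment x would itself be a quantized activation).
gamma = np.array([1.2, 0.8]); beta = np.array([0.1, -0.3])
mu = np.array([0.05, -0.02]); var = np.array([0.9, 1.1])
a, b = fold_bn_to_affine(gamma, beta, mu, var)
a_q, b_q, shift = quantize_affine_shared_scale(a, b)
x = np.array([0.5, -1.0])
print(a * x + b)                       # floating-point reference
print((a_q * x + b_q) / 2.0 ** shift)  # fixed-point approximation
```

Sharing a single scale between the multiplier and the bias is what lets the two floating-point BN parameters collapse into one integer multiply-add followed by a single shift, which is the hardware-friendly property the abstract refers to.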