Paper Title
Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator
Paper Authors
Paper Abstract
The spiking neural network (SNN) is a brain-inspired model that offers strong spatio-temporal information processing capacity and computational energy efficiency. However, as SNNs grow deeper, the memory footprint of their weights has gradually attracted attention. Inspired by quantization techniques for artificial neural networks (ANNs), binarized SNNs (BSNNs) have been introduced to address this memory problem. Due to the lack of suitable learning algorithms, a BSNN is usually obtained by ANN-to-SNN conversion, so its accuracy is bounded by that of the trained ANN. In this paper, we propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with an accuracy loss estimator, which dynamically selects the network layers to be binarized by evaluating the error caused by the binarized weights during training, thereby preserving network accuracy. Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy. In addition, to accelerate training, a global average pooling (GAP) layer is introduced to replace the fully connected layers with a combination of convolution and pooling, so that SNNs can achieve better recognition accuracy with a small number of time steps. In the extreme case of using only one time step, we can still achieve 92.92%, 91.63%, and 63.54% testing accuracy on three different datasets: FashionMNIST, CIFAR-10, and CIFAR-100, respectively.
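
To make the layer-selection idea concrete, below is a minimal PyTorch sketch of per-layer weight binarization with an error-based selection rule. The binarization scheme (sign of the weights scaled by their mean absolute value) and the relative-L2 error used as an accuracy-loss proxy are common choices assumed here for illustration; the paper's actual estimator and selection threshold may differ.

import torch
import torch.nn as nn

def binarize(weight: torch.Tensor) -> torch.Tensor:
    # Scaled sign binarization: W_b = alpha * sign(W), with alpha the
    # mean absolute weight (an assumed, XNOR-Net-style scheme).
    alpha = weight.abs().mean()
    return alpha * weight.sign()

def binarization_error(weight: torch.Tensor) -> float:
    # Hypothetical accuracy-loss proxy: relative L2 distance between
    # the full-precision and binarized weights of one layer.
    w_b = binarize(weight)
    return (weight - w_b).norm().item() / (weight.norm().item() + 1e-12)

def select_layers_to_binarize(model: nn.Module, threshold: float = 0.5):
    # Keep a conv layer binary only if the estimated error stays below
    # the threshold; otherwise leave it full-precision (assumed rule).
    selected = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            if binarization_error(module.weight.data) < threshold:
                selected.append(name)
    return selected

Re-running this selection during training lets the set of binarized layers adapt as the weights evolve, which matches the abstract's description of dynamic layer selection.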
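The GAP-based classification head mentioned in the abstract can likewise be sketched in a few lines: a 1x1 convolution produces one feature map per class and global average pooling reduces each map to a single logit, removing the fully connected layers. The channel count and class count below are assumptions for illustration, not values from the paper.

num_classes = 10   # e.g., CIFAR-10 (assumption)
in_channels = 512  # feature channels from the last conv block (assumption)

gap_head = nn.Sequential(
    nn.Conv2d(in_channels, num_classes, kernel_size=1),  # per-class feature maps
    nn.AdaptiveAvgPool2d(output_size=1),                 # global average pooling
    nn.Flatten(),                                        # (N, num_classes) logits
)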