Paper Title
The fine line between dead neurons and sparsity in binarized spiking neural networks
Paper Authors
Paper Abstract
Spiking neural networks can compensate for quantization error either by encoding information in the temporal domain, or by processing discretized quantities in hidden states of higher precision. In theory, a wide dynamic range in the state space enables multiple binarized inputs to be accumulated together, improving the representational capacity of individual neurons. This can be achieved by increasing the firing threshold, but if the threshold is raised too high, sparse spike activity collapses into no spike emission at all. In this paper, we propose 'threshold annealing' as a warm-up method for firing thresholds. We show that it enables the propagation of spikes across multiple layers where neurons would otherwise cease to fire, and in doing so achieves highly competitive results on four diverse datasets, despite using binarized weights. Source code is available at https://github.com/jeshraghian/snn-tha/
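To make the idea concrete, below is a minimal sketch of a leaky integrate-and-fire neuron whose firing threshold is warmed up from a low initial value toward its final value over the first training steps. The linear schedule, the parameter names (theta_init, theta_target, warmup_steps), and the soft-reset mechanism are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch

class LIFNeuron:
    """Leaky integrate-and-fire neuron with a warmed-up ('annealed') firing threshold."""

    def __init__(self, beta=0.9, theta_init=0.1, theta_target=1.0, warmup_steps=1000):
        self.beta = beta                  # membrane potential decay factor
        self.theta_init = theta_init      # low starting threshold: early spikes can propagate
        self.theta_target = theta_target  # final, higher threshold for sparse activity
        self.warmup_steps = warmup_steps  # number of steps over which to anneal
        self.step = 0
        self.mem = torch.tensor(0.0)      # membrane potential (the higher-precision hidden state)

    def threshold(self):
        # Hypothetical linear warm-up from theta_init to theta_target.
        frac = min(self.step / self.warmup_steps, 1.0)
        return self.theta_init + frac * (self.theta_target - self.theta_init)

    def forward(self, x):
        theta = self.threshold()
        self.mem = self.beta * self.mem + x   # leaky integration of binarized input
        spike = (self.mem >= theta).float()   # emit a spike when the threshold is crossed
        self.mem = self.mem - spike * theta   # soft reset: subtract the threshold on firing
        self.step += 1
        return spike

# Usage: drive the neuron with a random binary spike train.
neuron = LIFNeuron()
inputs = torch.bernoulli(torch.full((50,), 0.3))
spikes = torch.stack([neuron.forward(x) for x in inputs])
print(f"spike rate: {spikes.mean():.2f}")
```

The intuition from the abstract carries through in this sketch: because the threshold starts low, even weak binarized inputs trigger spikes early in training, keeping activity propagating through deeper layers; as the threshold anneals upward, spiking becomes sparser without collapsing into silence.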