Paper Title
Improving STDP-based Visual Feature Learning with Whitening
Paper Authors
Paper Abstract
In recent years, spiking neural networks (SNNs) have emerged as an alternative to deep neural networks (DNNs). SNNs offer higher computational efficiency on low-power neuromorphic hardware and require less labeled data when trained with local and unsupervised learning rules such as spike timing-dependent plasticity (STDP). SNNs have proven their effectiveness for image classification on simple datasets such as MNIST. However, processing natural images requires a pre-processing step. Difference-of-Gaussians (DoG) filtering is typically used together with on-center/off-center coding, but it results in a loss of information that is detrimental to classification performance. In this paper, we propose to use whitening as a pre-processing step before learning features with STDP. Experiments on CIFAR-10 show that whitening allows STDP to learn visual features that are closer to the ones learned by standard neural networks, with a significantly improved classification performance compared to DoG filtering. We also propose an approximation of whitening as convolution kernels, which is computationally cheaper to learn and better suited to implementation on neuromorphic hardware. Experiments on CIFAR-10 show that it performs similarly to regular whitening. Cross-dataset experiments on CIFAR-10 and STL-10 also show that it is fairly stable across datasets, making it possible to learn a single whitening transformation to process different datasets.
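For readers unfamiliar with the pre-processing step discussed above, the sketch below shows a standard ZCA whitening transform applied to image patches before feature learning. This is a generic illustration, not the authors' exact pipeline (the paper additionally proposes a convolutional approximation of whitening); the function names (`zca_fit`, `zca_apply`) and the regularization constant `eps` are illustrative assumptions.

```python
import numpy as np

def zca_fit(X, eps=1e-2):
    """Compute a ZCA whitening matrix from X of shape (n_samples, n_features).

    Note: `eps` is an assumed regularization constant, not a value from the paper.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                                # center each feature
    cov = np.cov(Xc, rowvar=False)               # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigendecomposition (symmetric cov)
    # W = U diag(1/sqrt(lambda + eps)) U^T : decorrelates features and equalizes variance
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return W, mean

def zca_apply(X, W, mean):
    """Apply a previously fitted ZCA whitening transform."""
    return (X - mean) @ W

# Usage sketch: whiten 5x5x3 patches (stand-ins for patches extracted from CIFAR-10 images).
rng = np.random.default_rng(0)
patches = rng.random((10000, 5 * 5 * 3)).astype(np.float32)
W, mu = zca_fit(patches)
whitened = zca_apply(patches, W, mu)
```

ZCA is chosen here over plain PCA whitening because it keeps the whitened data as close as possible to the original pixel space, which is why whitened patches still look like (edge-enhanced) image patches; the convolutional approximation mentioned in the abstract would replace the global matrix `W` with local kernels, but its exact construction is described in the paper itself.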