Paper Title

Exploration of Input Patterns for Enhancing the Performance of Liquid State Machines

Authors

Shasha Guo, Lianhua Qu, Lei Wang, Shuo Tian, Shiming Li, Weixia Xu

Abstract


Spiking Neural Networks (SNNs) have gained increasing attention for their low power consumption, but training SNNs is challenging. The Liquid State Machine (LSM), a major type of reservoir computing, is widely recognized for its low training cost among SNNs. Exploring LSM topology to enhance performance often requires hyper-parameter search, which is both resource-expensive and time-consuming. We instead explore the influence of input scale reduction on the LSM. There are two main reasons for studying input reduction in LSMs. One is that the input dimension of large images requires efficient processing. The other is that input exploration is generally more economical than architecture search. To mitigate the difficulty of effectively dealing with the huge input spaces of LSMs, and to determine whether input reduction can enhance LSM performance, we explore several input patterns, namely fullscale, scanline, chessboard, and patch. Several datasets have been used to evaluate the performance of the proposed input patterns, including two spatial image datasets and one spatio-temporal image database. The experimental results show that the reduced input under the chessboard pattern improves accuracy by up to 5%, and reduces execution time by up to 50% with up to 75% less input storage than the fullscale input pattern for the LSM.
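To make the four input patterns concrete, the sketch below shows one plausible NumPy interpretation of how each pattern could reduce a 2-D image to the input vector fed to an LSM. The specific masking choices (alternating pixels for chessboard, every other row for scanline, a central crop for patch) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def chessboard_mask(h, w):
    # Boolean mask keeping alternating pixels, like the black squares of a
    # chessboard; retains roughly half of the h*w pixels. (Assumed
    # interpretation of the paper's "chessboard" pattern.)
    ys, xs = np.indices((h, w))
    return (ys + xs) % 2 == 0

def reduce_input(image, pattern="chessboard"):
    """Reduce a 2-D image to a 1-D input vector under a given pattern.

    The per-pattern definitions here are illustrative assumptions.
    """
    h, w = image.shape
    if pattern == "fullscale":
        return image.reshape(-1)                 # all h*w pixels
    if pattern == "chessboard":
        return image[chessboard_mask(h, w)]      # ~h*w/2 pixels
    if pattern == "scanline":
        return image[::2, :].reshape(-1)         # every other row
    if pattern == "patch":
        # central crop covering a quarter of the pixels
        return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].reshape(-1)
    raise ValueError(f"unknown pattern: {pattern}")

# Example on an MNIST-sized 28x28 image:
img = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)
print(reduce_input(img, "fullscale").size)   # 784
print(reduce_input(img, "chessboard").size)  # 392
print(reduce_input(img, "patch").size)       # 196
```

Under these assumed definitions, the chessboard and scanline patterns halve the input dimension and the patch pattern keeps a quarter of it, which is the kind of storage reduction (up to 75% versus fullscale) the abstract reports.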
