Paper Title

Extension of Direct Feedback Alignment to Convolutional and Recurrent Neural Network for Bio-plausible Deep Learning

Authors

Donghyeon Han, Gwangtae Park, Junha Ryu, Hoi-jun Yoo

Abstract

Throughout this paper, we focus on improving the direct feedback alignment (DFA) algorithm and extend its use to convolutional and recurrent neural networks (CNNs and RNNs). Although the DFA algorithm is biologically plausible and has the potential for high-speed training, it has not been considered a substitute for back-propagation (BP) because of its low accuracy in CNN and RNN training. In this work, we propose a new DFA algorithm for BP-level-accurate CNN and RNN training. First, we divide the network into several modules and apply the DFA algorithm within each module. Second, we apply DFA with sparse backward weights, which takes the form of a dilated convolution in the CNN case and of a sparse matrix multiplication in the RNN case. Additionally, group convolution makes the CNN's error propagation simpler. Finally, hybrid DFA raises the accuracy of CNN and RNN training to the BP level while retaining the parallelism and hardware efficiency of the DFA algorithm.
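For readers unfamiliar with DFA itself: the key difference from back-propagation is that the output error is projected directly to every hidden layer through fixed random feedback matrices, instead of being propagated backward through the transposed forward weights. The following minimal NumPy sketch illustrates that core idea on a toy fully connected network; it is an illustrative assumption on our part, not the paper's extended CNN/RNN algorithm (all names, sizes, and hyperparameters are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer MLP trained with Direct Feedback Alignment (DFA).
n_in, n_h, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_in, n_h))
W2 = rng.normal(0.0, 0.1, (n_h, n_h))
W3 = rng.normal(0.0, 0.1, (n_h, n_out))

# Fixed random feedback matrices: they send the output error
# straight to each hidden layer and are never trained.
B1 = rng.normal(0.0, 0.1, (n_out, n_h))
B2 = rng.normal(0.0, 0.1, (n_out, n_h))

X = rng.normal(size=(32, n_in))   # toy regression inputs
T = rng.normal(size=(32, n_out))  # toy regression targets

lr = 0.05
losses = []
for _ in range(200):
    # Forward pass
    h1 = np.tanh(X @ W1)
    h2 = np.tanh(h1 @ W2)
    y = h2 @ W3
    e = y - T                          # output error
    losses.append(float((e ** 2).mean()))

    # DFA: project e directly via fixed B matrices
    # (BP would instead use e @ W3.T, then chain through W2.T).
    d2 = (e @ B2) * (1.0 - h2 ** 2)    # tanh' given tanh output
    d1 = (e @ B1) * (1.0 - h1 ** 2)

    # Weight updates (last layer sees the true error, as in BP)
    W3 -= lr * h2.T @ e / len(X)
    W2 -= lr * h1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)
```

Because each hidden layer's update depends only on the output error and its own fixed feedback matrix, all layer updates can be computed in parallel once the error is known, which is the parallelism and hardware-efficiency property the abstract refers to.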
