Paper Title

Toward Model Parallelism for Deep Neural Network based on Gradient-free ADMM Framework

Authors

Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao

Abstract

The Alternating Direction Method of Multipliers (ADMM) has recently been proposed as a potential alternative optimizer to Stochastic Gradient Descent (SGD) for deep learning problems, because ADMM can solve the gradient vanishing and poor conditioning problems. Moreover, it has shown good scalability in many large-scale deep learning applications. However, a parallel ADMM computational framework for deep neural networks is still lacking because of the layer dependency among variables. In this paper, we propose a novel parallel deep learning ADMM framework (pdADMM) to achieve layer parallelism: the parameters in each layer of a neural network can be updated independently in parallel. The convergence of the proposed pdADMM to a critical point is proven theoretically under mild conditions, and its convergence rate is proven to be $o(1/k)$, where $k$ is the number of iterations. Extensive experiments on six benchmark datasets demonstrate that the proposed pdADMM achieves a more than 10x speedup for training large-scale deep neural networks and outperforms most of the comparison methods. Our code is available at: https://github.com/xianggebenben/pdADMM.
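
To make the layer-parallelism idea concrete, the sketch below is a minimal, illustrative toy, not the authors' pdADMM algorithm (see the linked repository for that). It shows how an ADMM-style splitting can decouple layers: each layer keeps local copies of its input and output, the copies of neighboring layers are tied together by consensus constraints with dual variables, and each layer's subproblem can then be updated from the previous iterate independently of the other layers. The toy network, the variable names `p`, `q`, `u`, the penalty `rho`, and the use of a few gradient steps per subproblem are all assumptions made for illustration.

```python
# A minimal conceptual sketch of ADMM-style layer parallelism, NOT the authors'
# pdADMM implementation (see https://github.com/xianggebenben/pdADMM for that).
# Illustrative assumptions: layer l owns weights W[l], a local copy of its input
# p[l], and an output copy q[l] ~ relu(W[l] @ p[l]); the coupling constraint
# q[l] = p[l+1] is handled with a quadratic penalty rho and dual variable u[l],
# and each layer's subproblem is approximated by a few gradient steps.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# Toy 3-layer network on random regression data.
dims = [8, 16, 16, 4]                        # layer widths
x = rng.standard_normal((dims[0], 32))       # input batch
y = rng.standard_normal((dims[-1], 32))      # targets
L = len(dims) - 1
W = [rng.standard_normal((dims[l + 1], dims[l])) * 0.1 for l in range(L)]
p = [x] + [rng.standard_normal((dims[l], 32)) * 0.1 for l in range(1, L)]
q = [relu(W[l] @ p[l]) for l in range(L)]
u = [np.zeros_like(q[l]) for l in range(L - 1)]  # duals for q[l] = p[l+1]
rho, lr = 1.0, 1e-2

def layer_step(l, W_l, p_l, q_l):
    """One independent (parallelizable) update of layer l's local variables."""
    for _ in range(5):  # a few gradient steps on the layer's subproblem
        a = W_l @ p_l
        r_out = q_l - relu(a)                # local forward residual
        # Pull q_l toward the next layer's input copy (consensus + dual term),
        # or toward the targets if this is the output layer.
        g_q = r_out + (rho * (q_l - p[l + 1]) + u[l] if l < L - 1 else q_l - y)
        g_a = -r_out * (a > 0)               # subgradient through ReLU
        g_W = g_a @ p_l.T
        # Pull p_l toward the previous layer's output copy.
        g_p = W_l.T @ g_a + ((rho * (p_l - q[l - 1]) - u[l - 1]) if l > 0 else 0)
        W_l, q_l = W_l - lr * g_W, q_l - lr * g_q
        if l > 0:                            # p[0] is the fixed network input
            p_l = p_l - lr * g_p
    return W_l, p_l, q_l

for it in range(200):
    # These L updates only read the previous iterate of neighboring layers, so
    # they are independent and could run on separate workers; here we just loop.
    new = [layer_step(l, W[l], p[l], q[l]) for l in range(L)]
    W, p, q = [n[0] for n in new], [n[1] for n in new], [n[2] for n in new]
    # Dual ascent on the layer-coupling constraints q[l] = p[l+1].
    for l in range(L - 1):
        u[l] = u[l] + rho * (q[l] - p[l + 1])

print("final fit error:", np.linalg.norm(q[-1] - y) / np.linalg.norm(y))
```

Because each `layer_step` call reads only the previous iterate of its neighbors, the per-layer updates within one outer iteration are mutually independent and could be dispatched to separate devices, which is the source of the layer parallelism the abstract describes.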
