Paper Title
When deep learning models on GPU can be accelerated by taking advantage of unstructured sparsity
Paper Authors
Paper Abstract
This paper focuses on improving the efficiency of sparse convolutional neural network (CNN) layers on graphics processing units (GPUs). The NVIDIA CUDA Deep Neural Network (cuDNN) library provides the most effective implementations of deep learning (DL) algorithms for GPUs, and GPUs are among the most efficient and most commonly used accelerators for deep learning computations. Modern CNN models need megabytes of coefficients and millions of MAC operations to perform a convolution. One of the most common techniques for compressing CNN models is weight pruning. There are two main types of pruning: structural (removing whole weight channels) and non-structural (removing individual weights). The first enables much easier acceleration, but with this type it is difficult to reach sparsity levels and accuracy as high as those obtained with the second type. In some deep CNN models, non-structural pruning with retraining can produce weight matrices with $\sim90\%$ or more sparsity. This work shows when it is worth using a direct sparse operation to speed up the computation of convolutional layers. The VGG-16 and CNN-non-static models and the 1x1 layers from ResNet were used as benchmarks. In addition, we present the impact of using reduced precision on time efficiency.
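To make the two ideas in the abstract concrete, below is a minimal sketch (not the authors' implementation, and not based on cuDNN) of unstructured magnitude pruning followed by a "direct sparse" convolution, i.e. a sparse weight matrix multiplied with an im2col-unrolled input. The shapes, the 90% pruning ratio, and the helper names (`prune_by_magnitude`, `im2col`, `sparse_conv2d`) are illustrative assumptions, not names from the paper.

```python
import numpy as np
from scipy import sparse


def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)


def im2col(x: np.ndarray, kh: int, kw: int) -> np.ndarray:
    """Unroll a (C, H, W) input into a (C*kh*kw, H_out*W_out) matrix (stride 1, no padding)."""
    c, h, w = x.shape
    h_out, w_out = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, h_out * w_out), dtype=x.dtype)
    idx = 0
    for i in range(h_out):
        for j in range(w_out):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols


def sparse_conv2d(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Convolution expressed as (sparse weight matrix) @ (dense im2col matrix)."""
    k_out, c, kh, kw = weights.shape
    w_mat = sparse.csr_matrix(weights.reshape(k_out, c * kh * kw))  # store only non-zero weights
    cols = im2col(x, kh, kw)
    out = np.asarray(w_mat @ cols)                                   # direct sparse product
    h_out, w_out = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    return out.reshape(k_out, h_out, w_out)


# Toy usage: a VGG-style 3x3 layer pruned to ~90% unstructured sparsity.
rng = np.random.default_rng(0)
weights = prune_by_magnitude(rng.standard_normal((64, 64, 3, 3)).astype(np.float32), 0.9)
x = rng.standard_normal((64, 32, 32)).astype(np.float32)
y = sparse_conv2d(weights, x)
print(y.shape)  # (64, 30, 30)
```

The trade-off the paper investigates follows directly from this formulation: the sparse product only skips the zeroed multiply-accumulates, so it can outperform a dense (cuDNN-style) convolution only when the sparsity is high enough to offset the irregular memory access of the sparse format.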