Paper Title
Term Revealing: Furthering Quantization at Run Time on Quantized DNNs
Paper Authors
Paper Abstract
We present a novel technique, called Term Revealing (TR), for furthering quantization at run time to improve the performance of Deep Neural Networks (DNNs) already quantized with conventional quantization methods. TR operates on the power-of-two terms in the binary expressions of values. When computing a dot product, TR dynamically selects a fixed number of the largest terms to use from the values of the two vectors in the dot product. By exploiting the normal-like weight and data distributions typically present in DNNs, TR has minimal impact on DNN model performance (i.e., accuracy or perplexity). We use TR to facilitate tightly synchronized processor arrays, such as systolic arrays, for efficient parallel processing. We show an FPGA implementation that can switch between conventional quantization and TR-enabled quantization using a small number of control bits, with negligible delay. To further enhance TR efficiency, we use a signed digit representation (SDR), as opposed to classic binary encoding, which has only nonnegative power-of-two terms. To convert from binary to SDR, we develop an efficient encoding method called HESE (Hybrid Encoding for Signed Expressions) that can be performed in a single pass, looking at only two bits at a time. We evaluate TR with HESE-encoded values on an MLP for MNIST, multiple CNNs for ImageNet, and an LSTM for Wikitext-2, and show significant reductions in inference computation (between 3x and 10x) compared to conventional quantization at the same level of model performance.
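To make the term-selection idea concrete, the following is a minimal Python sketch of selecting a fixed number of the largest power-of-two terms across a group of quantized values. The function name `term_reveal`, the per-group `budget` parameter, and the greedy largest-exponent selection are illustrative assumptions, not the paper's exact formulation.

```python
def binary_terms(x):
    """Exponents of the power-of-two terms in the binary expansion of a
    nonnegative integer, largest first; e.g. 13 = 2^3 + 2^2 + 2^0 -> [3, 2, 0]."""
    return [e for e in range(x.bit_length() - 1, -1, -1) if (x >> e) & 1]

def term_reveal(group, budget):
    """Keep only the `budget` largest power-of-two terms across a group of
    quantized nonnegative integer values, dropping all other terms.
    The grouping and per-group `budget` are illustrative assumptions."""
    # Collect (exponent, value-index) pairs for every term in the group.
    terms = [(e, i) for i, v in enumerate(group) for e in binary_terms(v)]
    # Greedily keep the largest terms overall.
    terms.sort(key=lambda t: t[0], reverse=True)
    # Reassemble the truncated values from the kept terms.
    out = [0] * len(group)
    for e, i in terms[:budget]:
        out[i] += 1 << e
    return out

# Example: four 8-bit quantized values truncated to a budget of 6 terms total.
print(term_reveal([13, 200, 7, 96], budget=6))  # -> [8, 200, 0, 96]
```

Small values contribute few (or no) kept terms, which is why a normal-like distribution, with most values small, keeps the truncation error low.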
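For the binary-to-SDR conversion, the non-adjacent form (NAF) is a standard signed-digit recoding that can be computed in one pass while inspecting two bits at a time, in the same spirit as the abstract's description of HESE. The sketch below shows NAF as an illustration; it is not necessarily the authors' exact HESE recoding rules.

```python
def to_naf(x):
    """Convert a nonnegative integer to its non-adjacent form (NAF): a
    signed-digit representation with digits in {-1, 0, +1} and a minimal
    number of nonzero digits. Each step inspects two bits (x mod 4), so the
    whole conversion is one pass over the input -- illustrative of the style
    of scan described for HESE, not the authors' exact method."""
    digits = []  # least-significant digit first
    while x != 0:
        if x & 1:
            d = 2 - (x & 3)  # +1 if x % 4 == 1, -1 if x % 4 == 3
            x -= d
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits

# Example: 7 = -2^0 + 2^3 needs two signed terms instead of three binary ones.
print(to_naf(7))  # -> [-1, 0, 0, 1]
```

Fewer nonzero terms per value means fewer terms for TR to process, which is the motivation for pairing an SDR with term selection.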