Paper Title

Neural Transducer Training: Reduced Memory Consumption with Sample-wise Computation

Paper Authors

Stefan Braun, Erik McDermott, Roger Hsiao

Paper Abstract

The neural transducer is an end-to-end model for automatic speech recognition (ASR). While the model is well-suited for streaming ASR, the training process remains challenging. During training, the memory requirements may quickly exceed the capacity of state-of-the-art GPUs, limiting batch size and sequence lengths. In this work, we analyze the time and space complexity of a typical transducer training setup. We propose a memory-efficient training method that computes the transducer loss and gradients sample by sample. We present optimizations to increase the efficiency and parallelism of the sample-wise method. In a set of thorough benchmarks, we show that our sample-wise method significantly reduces memory usage, and performs at competitive speed when compared to the default batched computation. As a highlight, we manage to compute the transducer loss and gradients for a batch size of 1024, and audio length of 40 seconds, using only 6 GB of memory.
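The memory savings come from never materializing the full padded joint tensor of shape (B, T_max, U_max + 1, V); instead, each sample's joint tensor is built at its true lengths, backpropagated, and freed before the next sample. The sketch below illustrates this idea in PyTorch using torchaudio's RNN-T loss. It is a minimal, hypothetical reconstruction, not the authors' implementation: the `joiner` callable and all tensor names (`encoder_out`, `predictor_out`, the length tensors) are assumptions.

```python
import torch
import torchaudio.functional as TF


def samplewise_rnnt_loss_and_grads(encoder_out, enc_lens,
                                   predictor_out, pred_lens,
                                   targets, joiner, blank_id=0):
    """Compute the transducer loss and gradients one sample at a time.

    encoder_out:   (B, T_max, D) padded encoder activations
    predictor_out: (B, U_max + 1, D) padded predictor activations
    targets:       (B, U_max) padded int label sequences
    joiner:        hypothetical callable mapping a (1, T, D) encoder slice
                   and a (1, U + 1, D) predictor slice to (1, T, U + 1, V) logits
    """
    # Detach the padded network outputs so each per-sample joint graph is
    # freed right after its backward pass, instead of being retained until
    # a single batch-level backward.
    enc = encoder_out.detach().requires_grad_(True)
    pred = predictor_out.detach().requires_grad_(True)
    batch_size = enc.size(0)
    total_loss = 0.0

    for b in range(batch_size):
        T, U = int(enc_lens[b]), int(pred_lens[b])
        # Slice away padding: the joint tensor covers only this sample's
        # true lengths, (1, T, U + 1, V) rather than (B, T_max, U_max + 1, V).
        enc_b = enc[b:b + 1, :T]
        pred_b = pred[b:b + 1, :U + 1]
        joint = joiner(enc_b, pred_b)
        loss_b = TF.rnnt_loss(
            joint,
            targets[b:b + 1, :U].int(),
            torch.tensor([T], dtype=torch.int32, device=joint.device),
            torch.tensor([U], dtype=torch.int32, device=joint.device),
            blank=blank_id,
            reduction="sum",
        )
        # Per-sample backward: gradients accumulate into enc/pred (and the
        # joiner's parameters), and this sample's joint graph is freed.
        (loss_b / batch_size).backward()
        total_loss += loss_b.item()

    # One final backward pass carries the accumulated gradients through the
    # encoder and predictor networks.
    torch.autograd.backward([encoder_out, predictor_out],
                            [enc.grad, pred.grad])
    return total_loss / batch_size
```

Under these assumptions, peak memory for the loss computation scales with a single sample's joint tensor rather than the batch's padded one, which is consistent with the large batch sizes reported in the abstract; the trade-off is a Python-level loop over samples, which the paper addresses with further efficiency and parallelism optimizations.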
