Paper Title

Conformer with dual-mode chunked attention for joint online and offline ASR

Paper Authors

Felix Weninger, Marco Gaudesi, Md Akmal Haidar, Nicola Ferri, Jesús Andrés-Ferrer, Puming Zhan

Paper Abstract

In this paper, we present an in-depth study on online attention mechanisms and distillation techniques for dual-mode (i.e., joint online and offline) ASR using the Conformer Transducer. In the dual-mode Conformer Transducer model, layers can function in online or offline mode while sharing parameters, and in-place knowledge distillation from offline to online mode is applied in training to improve online accuracy. In our study, we first demonstrate accuracy improvements from using chunked attention in the Conformer encoder compared to autoregressive attention with and without lookahead. Furthermore, we explore the efficient KLD and 1-best KLD losses with different shifts between online and offline outputs in the knowledge distillation. Finally, we show that a simplified dual-mode Conformer that only has mode-specific self-attention performs as well as one that also has mode-specific convolution and normalization. Our experiments are based on two very different datasets: the LibriSpeech task and an internal corpus of medical conversations. Results show that the proposed dual-mode system using chunked attention yields 5% and 4% relative WER improvement on the LibriSpeech and medical tasks, compared to the dual-mode system using autoregressive attention with similar average lookahead.
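
The abstract's two key ingredients, chunked self-attention (each frame attends within its own fixed-size chunk and all earlier chunks, so look-ahead never crosses the chunk boundary) and in-place knowledge distillation from the offline to the online branch, can be illustrated with a short sketch. The following is a minimal PyTorch-style sketch under our own assumptions: the function names are hypothetical, the mask follows the standard chunked-attention formulation, and the loss shown is a plain full-distribution KLD rather than the efficient or 1-best KLD variants studied in the paper.

```python
import torch
import torch.nn.functional as F

def chunked_attention_mask(seq_len: int, chunk_size: int) -> torch.Tensor:
    """Boolean self-attention mask for chunked attention (True = may attend).

    Each frame attends to every frame in its own chunk and in all earlier
    chunks, so the average look-ahead is roughly (chunk_size - 1) / 2 frames.
    """
    chunk_idx = torch.arange(seq_len) // chunk_size  # chunk index per frame
    # Query frame i may attend to key frame j iff chunk(j) <= chunk(i).
    return chunk_idx.unsqueeze(1) >= chunk_idx.unsqueeze(0)

def inplace_kld(online_logits: torch.Tensor,
                offline_logits: torch.Tensor) -> torch.Tensor:
    """In-place distillation loss: the offline branch acts as a detached
    teacher for the online branch of the same parameter-shared model."""
    teacher = F.softmax(offline_logits.detach(), dim=-1)
    student_log = F.log_softmax(online_logits, dim=-1)
    return F.kl_div(student_log, teacher, reduction="batchmean")

# Example: a 10-frame sequence with chunks of 4 frames.
mask = chunked_attention_mask(seq_len=10, chunk_size=4)
print(mask.int())
```

In dual-mode training, the shared layers would be run twice per batch, once with this mask (online) and once with full attention (offline), with the distillation term added to the transducer loss; the paper's efficient and 1-best KLD losses are cheaper variants of this divergence, explored with different shifts between the online and offline outputs.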
