Paper Title

Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation

Paper Authors

Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith

Paper Abstract

Much recent effort has been invested in non-autoregressive neural machine translation, which appears to be an efficient alternative to state-of-the-art autoregressive machine translation on modern GPUs. In contrast to the latter, where generation is sequential, the former allows generation to be parallelized across target token positions. Some of the latest non-autoregressive models have achieved impressive translation quality-speed tradeoffs compared to autoregressive baselines. In this work, we reexamine this tradeoff and argue that autoregressive baselines can be substantially sped up without loss in accuracy. Specifically, we study autoregressive models with encoders and decoders of varied depths. Our extensive experiments show that given a sufficiently deep encoder, a single-layer autoregressive decoder can substantially outperform strong non-autoregressive models with comparable inference speed. We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation. Our results establish a new protocol for future research toward fast, accurate machine translation. Our code is available at https://github.com/jungokasai/deep-shallow.
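
To make the layer-allocation idea concrete, here is a minimal sketch using PyTorch's generic nn.Transformer rather than the authors' fairseq implementation; the 12-1 encoder-decoder split, model dimension, and head count here are illustrative assumptions, not the paper's exact configuration. The intuition is that the encoder runs once over the whole source sentence in parallel, while the decoder must run once per generated token, so shifting layers from the decoder to the encoder reduces per-token decoding cost.

```python
import torch
import torch.nn as nn

# Standard "balanced" allocation: 6 encoder layers, 6 decoder layers.
balanced = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=6, num_decoder_layers=6,
)

# Deep-encoder, shallow-decoder allocation in the spirit of the paper:
# the encoder is a one-time cost per source sentence (fully parallel
# over source positions), while the decoder is re-run for every output
# token during autoregressive generation.
deep_shallow = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=12, num_decoder_layers=1,
)

# Dummy inputs with the default (seq_len, batch, d_model) layout.
src = torch.rand(20, 2, 512)  # source sentence representations
tgt = torch.rand(15, 2, 512)  # shifted target representations
out = deep_shallow(src, tgt)  # same interface as the balanced model
print(out.shape)              # torch.Size([15, 2, 512])
```

With this allocation, only the single decoder layer is re-executed for each of the T output tokens at inference time; the 12 encoder layers amortize to a one-time cost, which is why, per the abstract, such a model can match strong non-autoregressive models in inference speed while retaining autoregressive accuracy.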
