Paper Title

A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model

Paper Authors

Xin Sun, Tao Ge, Shuming Ma, Jingjing Li, Furu Wei, Houfeng Wang

Paper Abstract

Synthetic data construction of Grammatical Error Correction (GEC) for non-English languages relies heavily on human-designed and language-specific rules, which produce limited error-corrected patterns. In this paper, we propose a generic and language-independent strategy for multilingual GEC, which can train a GEC system effectively for a new non-English language with only two easy-to-access resources: 1) a pretrained cross-lingual language model (PXLM) and 2) parallel translation data between English and the language. Our approach creates diverse parallel GEC data without any language-specific operations by taking the non-autoregressive translation generated by PXLM and the gold translation as error-corrected sentence pairs. Then, we reuse PXLM to initialize the GEC model and pretrain it with the synthetic data generated by itself, which yields further improvement. We evaluate our approach on three public benchmarks of GEC in different languages. It achieves state-of-the-art results on the NLPCC 2018 Task 2 dataset (Chinese) and obtains competitive performance on Falko-Merlin (German) and RULEC-GEC (Russian). Further analysis demonstrates that our data construction method is complementary to rule-based approaches.
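To make the data-construction idea concrete, here is a minimal, hypothetical sketch of the core step: a pretrained cross-lingual masked LM fills a fully masked target-language slot in a single forward pass (non-autoregressive translation), and the error-prone output is paired with the gold translation as a synthetic error-corrected example. The model choice (`xlm-roberta-base`), the sentence-pair encoding, and taking the target length from the gold sentence are simplifying assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: build a synthetic GEC pair from (English source, gold target translation)
# using a pretrained cross-lingual masked LM as a one-pass non-autoregressive
# "translator". The noisy prediction plays the role of the erroneous sentence.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")  # assumed PXLM stand-in
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
model.eval()

def synth_gec_pair(src_en: str, gold_tgt: str):
    """Return (noisy_tgt, gold_tgt) as a synthetic error-corrected pair."""
    # Simplification: take the masked slot length from the gold translation
    # instead of predicting it with the model.
    tgt_len = len(tokenizer.tokenize(gold_tgt))
    src_ids = tokenizer(src_en, add_special_tokens=False)["input_ids"]
    mask_id = tokenizer.mask_token_id
    # Rough XLM-R pair format: <s> source </s></s> [mask]*tgt_len </s>
    input_ids = (
        [tokenizer.cls_token_id] + src_ids
        + [tokenizer.sep_token_id, tokenizer.sep_token_id]
        + [mask_id] * tgt_len + [tokenizer.sep_token_id]
    )
    with torch.no_grad():
        logits = model(input_ids=torch.tensor([input_ids])).logits[0]
    # Fill every masked position independently in one pass (non-autoregressive).
    mask_positions = [i for i, t in enumerate(input_ids) if t == mask_id]
    pred_ids = logits[mask_positions].argmax(dim=-1).tolist()
    noisy_tgt = tokenizer.decode(pred_ids, skip_special_tokens=True)
    return noisy_tgt, gold_tgt

# The noisy one-pass output is paired with the clean gold translation,
# yielding an error-corrected training example without language-specific rules.
noisy, gold = synth_gec_pair("I ate an apple yesterday.",
                             "Ich habe gestern einen Apfel gegessen.")
print(noisy, "->", gold)
```

Because every masked position is predicted independently in a single pass, the output tends to be locally fluent but globally inconsistent, which mimics learner errors and gives the diverse error patterns that hand-written noising rules struggle to cover.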
