Paper Title
BLEURT: Learning Robust Metrics for Text Generation
Paper Authors
Paper Abstract
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
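The abstract's key idea, pre-training on millions of synthetic (reference, candidate) pairs labeled with cheap automatic signals before fine-tuning on a few thousand human judgments, can be sketched in miniature. The functions below are hypothetical illustrations, not BLEURT's actual pipeline: a toy token-dropping step stands in for its perturbation procedures, and a token-level edit similarity stands in for its automatic pre-training signals.

```python
import random

def perturb(tokens, drop_prob=0.2, seed=0):
    # Toy stand-in for BLEURT's sentence perturbations:
    # randomly drop tokens to produce a degraded candidate sentence.
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > drop_prob]
    return kept or tokens[:1]  # never return an empty sentence

def edit_similarity(a, b):
    # Cheap pseudo-label: 1 minus the normalized token-level Levenshtein
    # distance, standing in for BLEU/ROUGE-style automatic scores.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return 1.0 - d[m][n] / max(m, n, 1)

def make_synthetic_examples(sentences, n_per_sentence=3):
    # Build (reference, candidate, pseudo-score) triples; at scale, such
    # triples would pre-train the BERT-based regressor before it is
    # fine-tuned on human ratings.
    examples = []
    for sent in sentences:
        ref = sent.split()
        for i in range(n_per_sentence):
            cand = perturb(ref, seed=i)
            examples.append((ref, cand, edit_similarity(ref, cand)))
    return examples
```

A real implementation would feed these triples to a BERT model with a regression head; the sketch only illustrates how noisy-but-plentiful supervision can be manufactured when human-labeled data is scarce.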