Paper Title
Unsupervised Paraphrase Generation using Pre-trained Language Models
Authors
Abstract
Large-scale pre-trained language models have proven to be a very powerful approach to various natural language tasks. OpenAI's GPT-2 \cite{radford2019language} is notable for its capability to generate fluent, well-formulated, grammatically consistent text and to complete phrases. In this paper we leverage this generative capability of GPT-2 to generate paraphrases without any supervision from labelled data. We examine how the results compare with other supervised and unsupervised approaches, and the effect of using the paraphrases for data augmentation on downstream tasks such as classification. Our experiments show that paraphrases generated with our model are of good quality, are diverse, and improve downstream task performance when used for data augmentation.
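As a rough illustration of the idea described in the abstract, the sketch below shows how a pre-trained GPT-2 could be prompted to produce paraphrase candidates via sampling, using the Hugging Face transformers library. The prompt format ("In other words,") and the decoding settings are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch: unsupervised paraphrase generation with off-the-shelf GPT-2.
# The prompt framing and sampling hyperparameters are hypothetical choices.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def paraphrase(sentence, num_candidates=3):
    # Frame paraphrasing as text continuation; no fine-tuning or labelled data.
    prompt = f"{sentence} In other words,"
    inputs = tokenizer(prompt, return_tensors="pt")
    prompt_len = inputs["input_ids"].shape[1]
    outputs = model.generate(
        **inputs,
        do_sample=True,                       # nucleus sampling for diverse candidates
        top_p=0.9,
        max_length=prompt_len + 40,
        num_return_sequences=num_candidates,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    # Strip the prompt and keep only the generated continuations.
    return [
        tokenizer.decode(o[prompt_len:], skip_special_tokens=True).strip()
        for o in outputs
    ]

print(paraphrase("Large pre-trained language models are powerful."))
```

In practice, such sampled candidates would still need filtering (e.g., by semantic similarity to the input) before being used for data augmentation.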