Paper Title

Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation

Paper Authors

Yanyang Li, Jianqiao Zhao, Michael R. Lyu, Liwei Wang

Paper Abstract

Recent advances in large-scale pre-training provide large models with the potential to learn knowledge from raw text. It is thus natural to ask whether it is possible to leverage these large models as knowledge bases for downstream tasks. In this work, we answer the aforementioned question in unsupervised knowledge-grounded conversation. We explore various methods that best elicit knowledge from large models. Our human study indicates that, though hallucinations exist, large models possess the unique advantage of being able to output common sense and summarize facts that cannot be directly retrieved from a search engine. To better exploit such generated knowledge in dialogue generation, we treat the generated knowledge as a noisy knowledge source and propose posterior-based reweighing as well as a noisy training strategy. Empirical results on two benchmarks show advantages over state-of-the-art methods.
