Paper Title

Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots

Authors

Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu

Abstract

In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed in order to make the model aware of the speaker change information, which is an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to tackle the entangled dialogues. This strategy selects a small number of most important utterances as the filtered context according to the speakers' information in them. Finally, domain adaptation is performed to incorporate the in-domain knowledge into pre-trained language models. Experiments on five public datasets show that our proposed model outperforms the present models on all metrics by large margins and achieves new state-of-the-art performances for multi-turn response selection.
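The core idea of making BERT "speaker-aware" is to add one extra embedding to the standard BERT input sum (token + segment + position) that encodes which speaker produced each token, so the model can detect speaker changes across turns. A minimal NumPy sketch of this input construction is below; all function names, the two-speaker setup, and the toy dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def speaker_ids_for_context(utterance_lengths, speakers):
    """Expand per-utterance speaker labels to one label per token.

    utterance_lengths: number of tokens in each utterance.
    speakers: speaker id (e.g. 0 or 1) of each utterance.
    """
    ids = []
    for length, spk in zip(utterance_lengths, speakers):
        ids.extend([spk] * length)  # every token inherits its utterance's speaker
    return np.array(ids)

def speaker_aware_input(token_emb, segment_emb, position_emb,
                        speaker_emb_table, speaker_ids):
    """Sum the usual BERT embeddings plus an added speaker embedding
    looked up per token (the SA-BERT-style modification)."""
    speaker_emb = speaker_emb_table[speaker_ids]  # (seq_len, hidden)
    return token_emb + segment_emb + position_emb + speaker_emb

# Toy context: three utterances alternating between speakers 0 and 1.
rng = np.random.default_rng(0)
hidden = 4
lengths = [3, 2, 3]            # tokens per utterance (illustrative)
speakers = [0, 1, 0]           # speaker of each utterance

spk_ids = speaker_ids_for_context(lengths, speakers)
seq_len = sum(lengths)
emb = speaker_aware_input(
    rng.standard_normal((seq_len, hidden)),
    rng.standard_normal((seq_len, hidden)),
    rng.standard_normal((seq_len, hidden)),
    rng.standard_normal((2, hidden)),  # one embedding row per speaker
    spk_ids,
)
print(spk_ids.tolist())  # [0, 0, 0, 1, 1, 0, 0, 0]
print(emb.shape)         # (8, 4)
```

The speaker-aware disentanglement strategy described above can be viewed the same way: the per-token speaker labels make it possible to score and keep only the utterances most relevant to the responding speaker before feeding the filtered context to the model.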
