Paper Title

LAD: Language Models as Data for Zero-Shot Dialog

Authors

Shikib Mehri, Yasemin Altun, Maxine Eskenazi

Abstract

To facilitate zero-shot generalization in task-oriented dialog, this paper proposes Language Models as Data (LAD). LAD is a paradigm for creating diverse and accurate synthetic data which conveys the necessary structural constraints and can be used to train a downstream neural dialog model. LAD leverages GPT-3 to induce linguistic diversity. LAD achieves significant performance gains in zero-shot settings on intent prediction (+15%), slot filling (+31.4 F-1) and next action prediction (+11 F1). Furthermore, an interactive human evaluation shows that training with LAD is competitive with training on human dialogs. LAD is open-sourced, with the code and data available at https://github.com/Shikib/lad.
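
The abstract only sketches the approach at a high level. As a rough illustration of the general idea, and not the authors' actual pipeline, the hypothetical snippet below shows how a GPT-3-style completion API could be prompted to paraphrase a seed utterance while keeping its slot values intact, yielding linguistically diverse but structurally accurate synthetic examples. The prompt wording, function name, model choice, and the slot-preservation filter are all assumptions; see the paper and repository for the real method. The sketch assumes the pre-1.0 openai Python package.

```python
# Illustrative sketch only (not the LAD implementation): prompt a GPT-3-style
# completion model to rewrite a seed utterance while keeping every slot value,
# then keep only rewrites that still contain all slot values.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def paraphrase_with_slots(utterance: str, slots: dict, n: int = 3) -> list:
    """Ask the language model for diverse rewrites that preserve the slot values."""
    slot_desc = ", ".join(f"{name}={value}" for name, value in slots.items())
    prompt = (
        "Rewrite the request in a different way, keeping the same meaning "
        f"and keeping these values unchanged ({slot_desc}):\n"
        f"Request: {utterance}\nRewrite:"
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # GPT-3-era model; any completion model works
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,           # higher temperature encourages diversity
        n=n,
    )
    candidates = [choice["text"].strip() for choice in response["choices"]]
    # Accuracy check: drop rewrites that lost a slot value.
    return [c for c in candidates if all(v.lower() in c.lower() for v in slots.values())]


# Example: one seed utterance with its slot annotation.
print(paraphrase_with_slots("book a table for two at 7pm",
                            {"party_size": "two", "time": "7pm"}))
```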
