Paper Title
Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference
Paper Authors
Paper Abstract
Intent detection is one of the core components of goal-oriented dialog systems, and detecting out-of-scope (OOS) intents is also a practically important skill. Few-shot learning is attracting much attention to mitigate data scarcity, but it makes OOS detection even more challenging. In this paper, we present a simple yet effective approach, discriminative nearest neighbor classification with deep self-attention. Unlike softmax classifiers, we leverage BERT-style pairwise encoding to train a binary classifier that estimates the best-matched training example for a user input. We propose to boost the discriminative ability by transferring a natural language inference (NLI) model. Our extensive experiments on a large-scale multi-domain intent detection task show that our method achieves more stable and accurate in-domain and OOS detection than RoBERTa-based classifiers and embedding-based nearest neighbor approaches. More notably, the NLI transfer enables our 10-shot model to perform competitively with 50-shot or even full-shot classifiers, while we can keep the inference time constant by leveraging a faster embedding retrieval model.
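
To make the pairwise-scoring idea concrete, below is a minimal sketch of discriminative nearest-neighbor intent detection with an NLI cross-encoder. This is not the authors' released code: the `roberta-large-mnli` checkpoint, the use of the entailment probability as the match score, and the `oos_threshold` value are all illustrative assumptions standing in for the trained binary matcher described in the abstract.

```python
# Hypothetical sketch: score a user input against every few-shot training
# example with an NLI-pretrained cross-encoder, take the nearest neighbor,
# and fall back to "oos" when even the best match scores below a threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# NLI transfer: start from a model fine-tuned on MNLI (assumed checkpoint).
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

def best_match(user_input, examples, oos_threshold=0.5):
    """Return the intent of the best-matched training example, or 'oos'."""
    best_score, best_intent = -1.0, "oos"
    for text, intent in examples:
        # BERT-style pairwise encoding: both utterances are encoded jointly,
        # so deep self-attention can compare them token by token.
        enc = tokenizer(user_input, text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits
        # Entailment probability as the match score (index 2 in MNLI heads);
        # the paper's binary matcher is approximated here by the NLI head.
        score = torch.softmax(logits, dim=-1)[0, 2].item()
        if score > best_score:
            best_score, best_intent = score, intent
    # OOS detection: reject inputs whose nearest neighbor is a weak match.
    return best_intent if best_score >= oos_threshold else "oos"

# Usage with a tiny hypothetical 2-shot support set.
support = [
    ("what's my account balance", "check_balance"),
    ("transfer money to savings", "transfer"),
]
print(best_match("how much is in my checking account", support))
```

Because this cross-encoder scores every (input, example) pair, its cost grows with the support set; the abstract's note about a faster embedding retrieval model refers to pre-filtering candidates so inference time stays constant.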