Title

Case-Based Abductive Natural Language Inference

Authors

Marco Valentino, Mokanarangan Thayaparan, André Freitas

Abstract

Most of the contemporary approaches for multi-hop Natural Language Inference (NLI) construct explanations considering each test case in isolation. However, this paradigm is known to suffer from semantic drift, a phenomenon that causes the construction of spurious explanations leading to wrong conclusions. In contrast, this paper proposes an abductive framework for multi-hop NLI exploring the retrieve-reuse-refine paradigm in Case-Based Reasoning (CBR). Specifically, we present Case-Based Abductive Natural Language Inference (CB-ANLI), a model that addresses unseen inference problems by analogical transfer of prior explanations from similar examples. We empirically evaluate the abductive framework on commonsense and scientific question answering tasks, demonstrating that CB-ANLI can be effectively integrated with sparse and dense pre-trained encoders to improve multi-hop inference, or adopted as an evidence retriever for Transformers. Moreover, an empirical analysis of semantic drift reveals that the CBR paradigm boosts the quality of the most challenging explanations, a feature that has a direct impact on robustness and accuracy in downstream inference tasks.
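The retrieve-reuse-refine paradigm described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the case-base layout, the bag-of-words similarity, and the overlap-based refinement step are all simplifying assumptions chosen to keep the example self-contained; CB-ANLI itself works with sparse or dense pre-trained encoders.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Toy bag-of-words representation (stand-in for a real sparse/dense encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cb_anli_sketch(query, case_base, k=2):
    """Hypothetical retrieve-reuse-refine loop:
    1. retrieve: rank solved cases by similarity to the unseen query;
    2. reuse:    pool the explanation facts attached to the top-k cases
                 (the 'analogical transfer' of prior explanations);
    3. refine:   keep only facts with some lexical overlap with the query,
                 discarding transferred facts that do not fit the new problem."""
    q = bow(query)
    # retrieve: most similar prior cases first
    ranked = sorted(case_base,
                    key=lambda c: cosine(q, bow(c["question"])),
                    reverse=True)
    # reuse: candidate explanation facts from the k nearest cases
    candidates = [fact for case in ranked[:k] for fact in case["explanation"]]
    # refine: filter candidates against the query
    return [fact for fact in candidates if cosine(q, bow(fact)) > 0.0]
```

Facts transferred from a dissimilar case are filtered out in the refine step, which is the mechanism the abstract credits with reducing semantic drift: explanations are anchored to previously solved, similar problems rather than constructed from scratch in isolation.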
