Paper Title

Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks

Paper Authors

Ruixiang Cui, Daniel Hershcovich, Anders Søgaard

Abstract

Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today's NLU models still struggle to capture their semantics. We rely on Generalized Quantifier Theory for language-independent representations of the semantics of quantifier words, in order to quantify their contribution to the errors of NLU models. We find that quantifiers are pervasive in NLU benchmarks, and that their occurrence at test time is associated with performance drops. Multilingual models also exhibit unsatisfactory quantifier reasoning abilities, though performance is not necessarily worse for non-English languages. To facilitate directly targeted probing, we present an adversarial generalized quantifier NLI task (GQNLI) and show that pre-trained language models clearly lack robustness in generalized quantifier reasoning.
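As a minimal illustration of the theory the abstract refers to (our own sketch, not code from the paper): Generalized Quantifier Theory models a quantifier as a relation between two sets, the restrictor A (e.g. "dogs") and the scope B (e.g. "things that bark"). The function names and the example sets below are illustrative choices, not anything defined by GQNLI.

```python
# Hedged sketch: quantifiers as relations between set extensions,
# following the standard Generalized Quantifier Theory treatment.

def q_all(A, B):
    """'all A are B' holds iff A is a subset of B."""
    return A <= B

def q_some(A, B):
    """'some A are B' holds iff A and B overlap."""
    return bool(A & B)

def q_no(A, B):
    """'no A are B' holds iff A and B are disjoint."""
    return not (A & B)

def q_most(A, B):
    """'most A are B': more As inside B than outside it
    (a common cardinality-based reading)."""
    return len(A & B) > len(A - B)

# Toy extensions (illustrative data only).
dogs = {"rex", "fido", "spot"}
barkers = {"rex", "fido", "whiskers"}

print(q_all(dogs, barkers))   # False: "spot" does not bark
print(q_some(dogs, barkers))  # True: "rex" and "fido" bark
print(q_most(dogs, barkers))  # True: 2 of 3 dogs bark
```

The language-independence the abstract mentions comes from this set-theoretic formulation: "most", "la plupart", and "大多数" all denote the same cardinality relation, which is what lets the paper probe quantifier reasoning across languages with one representation.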
