Title

ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

Authors

Rujun Han, Xiang Ren, Nanyun Peng

Abstract

While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications. We present a continual pre-training approach that equips PTLMs with targeted knowledge about event temporal relations. We design self-supervised learning objectives to recover masked-out event and temporal indicators and to discriminate sentences from their corrupted counterparts (where event or temporal indicators got replaced). By further pre-training a PTLM with these objectives jointly, we reinforce its attention to event and temporal information, yielding enhanced capability on event temporal reasoning. This effective continual pre-training framework for event temporal reasoning (ECONET) improves the PTLMs' fine-tuning performances across five relation extraction and question answering tasks and achieves new or on-par state-of-the-art performances in most of our downstream tasks.
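The two self-supervised objectives described above can be illustrated with a minimal data-preparation sketch. This is not the authors' implementation: the tiny indicator lexicon and function names below are illustrative assumptions (ECONET uses a much larger curated list of temporal indicators and an extracted event vocabulary), and only the input side of the objectives is shown, not the model or loss.

```python
import random

# Hypothetical tiny lexicon of temporal indicators; ECONET relies on a
# much larger curated list (this small set is an illustrative assumption).
TEMPORAL_INDICATORS = {"before", "after", "during", "while", "until", "since"}

def mask_temporal_indicators(tokens, mask_token="[MASK]"):
    """Replace each temporal indicator with a mask token and record the
    original words as recovery targets (the mask-recovery objective)."""
    masked, targets = [], []
    for tok in tokens:
        if tok.lower() in TEMPORAL_INDICATORS:
            masked.append(mask_token)
            targets.append(tok)
        else:
            masked.append(tok)
    return masked, targets

def corrupt_temporal_indicators(tokens, rng=random):
    """Swap each temporal indicator for a different one, producing the
    corrupted counterpart used by the discrimination objective."""
    corrupted, changed = [], False
    for tok in tokens:
        if tok.lower() in TEMPORAL_INDICATORS:
            alternatives = sorted(TEMPORAL_INDICATORS - {tok.lower()})
            corrupted.append(rng.choice(alternatives))
            changed = True
        else:
            corrupted.append(tok)
    return corrupted, changed

sentence = "The ceasefire was announced before the treaty was signed".split()
masked, targets = mask_temporal_indicators(sentence)
corrupted, changed = corrupt_temporal_indicators(sentence)
```

During continual pre-training, the model would be asked to recover `targets` from the masked sequence, and to classify the original versus the corrupted sequence; training jointly on both signals is what reinforces attention to event and temporal information.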
