Paper Title

PSSAT: A Perturbed Semantic Structure Awareness Transferring Method for Perturbation-Robust Slot Filling

Paper Authors

Guanting Dong, Daichi Guo, Liwen Wang, Xuefeng Li, Zechen Wang, Chen Zeng, Keqing He, Jinzheng Zhao, Hao Lei, Xinyue Cui, Yi Huang, Junlan Feng, Weiran Xu

Paper Abstract

Most existing slot filling models tend to memorize inherent patterns of entities and their corresponding contexts from the training data. However, these models can lead to system failure or undesirable outputs when exposed to spoken language perturbation or variation in practice. We propose a perturbed semantic structure awareness transferring method for training perturbation-robust slot filling models. Specifically, we introduce two MLM-based training strategies to respectively learn contextual semantic structure and word distribution from an unsupervised language perturbation corpus. We then transfer the semantic knowledge learned in this upstream training procedure into the original samples and filter the generated data by consistency processing. These procedures aim to enhance the robustness of slot filling models. Experimental results show that our method consistently outperforms previous baseline methods and gains strong generalization, while preventing the model from memorizing inherent patterns of entities and contexts.
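The abstract's upstream step amounts to continuing masked-language-model (MLM) training on an unlabeled perturbation corpus before transferring the encoder to slot filling. The snippet below is a minimal sketch of that generic step (not the authors' released code or their two specific masking strategies), using Hugging Face `transformers`; the file name `perturbation_corpus.txt`, the base checkpoint, and the 15% masking rate are assumptions for illustration.

```python
# Minimal sketch: continue MLM pretraining on an unlabeled perturbation corpus
# so the encoder is exposed to perturbed contextual structure before fine-tuning
# on slot filling. Corpus path, checkpoint, and hyperparameters are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# One perturbed utterance per line (e.g. typos, ASR errors, paraphrases).
corpus = load_dataset("text", data_files={"train": "perturbation_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Plain random masking; the paper's two strategies would instead choose which
# positions to mask (contextual structure vs. word distribution).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mlm_upstream",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

The resulting encoder weights would then initialize the slot filling model in the downstream transfer stage described in the abstract.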
