Paper Title

MICK: A Meta-Learning Framework for Few-shot Relation Classification with Small Training Data

Paper Authors

Xiaoqing Geng, Xiwen Chen, Kenny Q. Zhu, Libin Shen, Yinggong Zhao

Paper Abstract

Few-shot relation classification seeks to classify incoming query instances after meeting only a few support instances. This ability is gained by training with a large amount of in-domain annotated data. In this paper, we tackle an even harder problem by further limiting the amount of data available at training time. We propose a few-shot learning framework for relation classification, which is particularly powerful when the training data is very small. In this framework, models not only strive to classify query instances, but also seek underlying knowledge about the support instances to obtain better instance representations. The framework also includes a method for aggregating cross-domain knowledge into models by open-source task enrichment. Additionally, we construct a brand-new dataset: the TinyRel-CM dataset, a few-shot relation classification dataset in the health domain with purposely small training data and challenging relation classes. Experimental results demonstrate that our framework brings performance gains for most underlying classification models, outperforms the state-of-the-art results given small training data, and achieves competitive results with sufficiently large training data.
