Paper Title


CCPrefix: Counterfactual Contrastive Prefix-Tuning for Many-Class Classification

Authors

Yang Li, Canran Xu, Guodong Long, Tao Shen, Chongyang Tao, Jing Jiang

Abstract

Recently, prefix-tuning was proposed to efficiently adapt pre-trained language models to a broad spectrum of natural language classification tasks. It leverages soft prefixes as task-specific indicators and language verbalizers as categorical-label mentions to narrow the formulation gap from pre-training language models. However, when the label space grows considerably (i.e., many-class classification), such a tuning technique suffers from a verbalizer ambiguity problem, since the many-class labels are represented by semantically similar verbalizers in short language phrases. To overcome this, inspired by the human decision process, in which the most ambiguous classes are mulled over for each instance, we propose a brand-new prefix-tuning method, Counterfactual Contrastive Prefix-tuning (CCPrefix), for many-class classification. Basically, an instance-dependent soft prefix, derived from fact-counterfactual pairs in the label space, is leveraged to complement the language verbalizers in many-class classification. We conduct experiments on many-class benchmark datasets in both the fully supervised and few-shot settings, and the results show that our model outperforms previous baselines.
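For intuition, below is a minimal PyTorch sketch of the core idea in the abstract: for each instance, the two most confusable labels (a fact-counterfactual pair) are selected and projected into an instance-dependent soft prefix. This is an illustrative assumption, not the authors' released implementation; names such as `CounterfactualPrefix` and `to_prefix` are hypothetical, and the contrastive training objective and the frozen pre-trained encoder are omitted.

```python
# Hypothetical sketch: derive an instance-dependent soft prefix from the
# two most ambiguous classes (fact vs. counterfactual) for one instance.
import torch
import torch.nn as nn


class CounterfactualPrefix(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int, prefix_len: int = 4):
        super().__init__()
        # Learnable label embeddings standing in for verbalizer representations.
        self.label_emb = nn.Embedding(num_labels, hidden_dim)
        # Projects the fact-counterfactual pair into `prefix_len` soft tokens.
        self.to_prefix = nn.Linear(2 * hidden_dim, prefix_len * hidden_dim)
        self.prefix_len = prefix_len
        self.hidden_dim = hidden_dim

    def forward(self, instance_repr: torch.Tensor) -> torch.Tensor:
        # instance_repr: (batch, hidden_dim), e.g. a [CLS] vector from a frozen PLM.
        logits = instance_repr @ self.label_emb.weight.T           # (batch, num_labels)
        top2 = logits.topk(k=2, dim=-1).indices                    # (batch, 2)
        fact = self.label_emb(top2[:, 0])                          # most likely class
        counterfactual = self.label_emb(top2[:, 1])                # closest rival class
        pair = torch.cat([fact, counterfactual], dim=-1)           # (batch, 2*hidden)
        prefix = self.to_prefix(pair)                              # (batch, L*hidden)
        return prefix.view(-1, self.prefix_len, self.hidden_dim)   # (batch, L, hidden)


if __name__ == "__main__":
    torch.manual_seed(0)
    module = CounterfactualPrefix(hidden_dim=16, num_labels=50, prefix_len=4)
    x = torch.randn(3, 16)        # stand-in for encoder outputs
    soft_prefix = module(x)
    print(soft_prefix.shape)      # torch.Size([3, 4, 16])
```

In an actual setup, the resulting soft prefix would be prepended to the model's input representations before classification, so that the prefix carries instance-specific information about which classes are hardest to tell apart.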
