Paper Title
UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP
Paper Authors
Paper Abstract
Transfer learning has yielded state-of-the-art (SoTA) results in many supervised NLP tasks. However, annotated data for every target task in every target language is rare, especially for low-resource languages. We propose UXLA, a novel unsupervised data augmentation framework for zero-resource transfer learning scenarios. In particular, UXLA aims to solve cross-lingual adaptation problems from a source language task distribution to an unknown target language task distribution, assuming no training labels in the target language. At its core, UXLA performs simultaneous self-training with data augmentation and unsupervised sample selection. To show its effectiveness, we conduct extensive experiments on three diverse zero-resource cross-lingual transfer tasks. UXLA achieves SoTA results on all tasks, outperforming the baselines by a good margin. With an in-depth framework dissection, we demonstrate the cumulative contributions of its different components to its success.
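To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of a self-training loop that combines data augmentation with unsupervised sample selection. All names here (self_train, augment, threshold, etc.) are illustrative placeholders, not UXLA's actual API: the paper's framework builds on pretrained multilingual encoders and a more elaborate unsupervised sample-selection scheme than the simple confidence cutoff used below.

```python
# Hypothetical sketch of self-training with augmentation and sample selection.
# Not UXLA's real implementation; all callables are caller-supplied stand-ins.
from typing import Callable, List, Tuple

Example = Tuple[str, int]                 # (text, label)
Model = Callable[[str], List[float]]      # text -> class probabilities

def self_train(
    labeled: List[Example],               # source-language data with gold labels
    unlabeled: List[str],                 # target-language data, no labels
    train: Callable[[List[Example]], Model],
    augment: Callable[[str], str],        # e.g., masked-LM word substitution
    rounds: int = 3,
    threshold: float = 0.9,               # confidence cutoff for selection
) -> Model:
    data = list(labeled)
    for _ in range(rounds):
        model = train(data)               # fit on the current (pseudo-)labeled set
        for text in unlabeled:
            # Data augmentation: score the original and a perturbed copy.
            for variant in (text, augment(text)):
                probs = model(variant)
                label = max(range(len(probs)), key=probs.__getitem__)
                # Unsupervised selection: keep only confident pseudo-labels.
                if probs[label] >= threshold:
                    data.append((variant, label))
    return train(data)
```

The point of pairing the two steps is that augmentation alone adds noisy target-language examples, while the selection step filters out low-confidence pseudo-labels before they are fed back into training, which is the general failure mode naive self-training suffers from in the zero-resource setting.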