Paper Title
Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding
Paper Authors
Paper Abstract
Generalized text representations are the foundation of many natural language understanding tasks. To fully utilize different corpora, models inevitably need to understand the relevance among them. However, many methods ignore this relevance and directly adopt a single-channel model (a coarse paradigm) for all tasks, which lacks sufficient rationality and interpretability. In addition, some existing works learn downstream tasks by stitching together skill blocks (a fine paradigm), which might cause irrational results due to redundancy and noise. In this work, we first analyze task correlation from three different perspectives, i.e., data properties, manual design, and model-based relevance, based on which similar tasks are grouped together. Then, we propose a hierarchical framework with a coarse-to-fine paradigm, with the bottom level shared across all tasks, the mid-level divided into different groups, and the top level assigned to each task. This allows our model to learn basic language properties from all tasks, boost performance on relevant tasks, and reduce the negative impact of irrelevant tasks. Our experiments on 13 benchmark datasets across five natural language understanding tasks demonstrate the superiority of our method.
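To make the three-level structure concrete, below is a minimal PyTorch sketch of a coarse-to-fine hierarchy: a bottom encoder shared by all tasks, a mid-level encoder per task group, and a per-task head on top. The layer counts, hidden size, classification heads, and the task-to-group assignment are all illustrative assumptions, not the configuration from the paper.

```python
# Minimal sketch of a coarse-to-fine hierarchical multi-task model.
# Assumes a Transformer-encoder backbone; all hyperparameters and the
# example task grouping below are hypothetical.
import torch
import torch.nn as nn


def encoder_stack(n_layers, d_model=256, n_heads=4):
    """A stack of Transformer encoder layers used at one level."""
    layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)


class CoarseToFineModel(nn.Module):
    def __init__(self, task_groups, d_model=256, n_classes=2):
        super().__init__()
        # Bottom level: one encoder shared by every task (coarse).
        self.shared = encoder_stack(4, d_model)
        # Mid level: one encoder per group of related tasks.
        self.group_enc = nn.ModuleDict(
            {g: encoder_stack(2, d_model) for g in task_groups}
        )
        # Top level: one lightweight head per individual task (fine).
        self.task_head = nn.ModuleDict(
            {t: nn.Linear(d_model, n_classes)
             for ts in task_groups.values() for t in ts}
        )
        self.task_to_group = {t: g for g, ts in task_groups.items()
                              for t in ts}

    def forward(self, x, task):
        h = self.shared(x)                                # all tasks
        h = self.group_enc[self.task_to_group[task]](h)   # related tasks
        return self.task_head[task](h[:, 0])              # this task only


# Hypothetical grouping; the paper instead derives groups from data
# properties, manual design, and model-based relevance.
groups = {"classification": ["sst2", "mnli"], "similarity": ["stsb", "qqp"]}
model = CoarseToFineModel(groups)
logits = model(torch.randn(8, 16, 256), task="sst2")
print(logits.shape)  # torch.Size([8, 2])
```

Routing each batch through only its own group's mid-level encoder is what lets related tasks share parameters while keeping unrelated tasks from interfering, matching the motivation stated in the abstract.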