Paper Title
VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification
Paper Authors
Paper Abstract
Much progress has been made recently on text classification with methods based on neural networks. In particular, models using attention mechanisms, such as BERT, have been shown to capture the contextual information within a sentence or document. However, their ability to capture global information about the vocabulary of a language is more limited. The latter is the strength of Graph Convolutional Networks (GCN). In this paper, we propose the VGCN-BERT model, which combines the capability of BERT with a Vocabulary Graph Convolutional Network (VGCN). Local and global information interact through different layers of BERT, allowing them to influence each other and to jointly build the final representation for classification. In our experiments on several text classification datasets, our approach outperforms BERT and GCN alone, and achieves higher effectiveness than that reported in previous studies.
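The abstract describes combining a graph convolution over a vocabulary graph with BERT's token embeddings, so that local (contextual) and global (vocabulary-level) information interact inside the transformer. The following is a minimal NumPy sketch of that idea, not the paper's actual implementation: a one-layer graph convolution over a word-relation graph produces a few "graph embedding" vectors that are appended to the document's token embedding sequence before it would enter BERT. All sizes, variable names, and the exact masking/projection choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

V, D, G = 6, 8, 2  # vocab size, embedding dim, number of graph-embedding slots (illustrative)

# Vocabulary graph: word-word relation weights (e.g. co-occurrence statistics); symmetrized.
A = rng.random((V, V))
A = (A + A.T) / 2

X = rng.standard_normal((V, D))   # word embedding table (stand-in for BERT's input embeddings)
W = rng.standard_normal((D, D))   # graph-convolution weight (assumed single layer)
Wg = rng.standard_normal((V, G))  # projects V node states down to G graph-embedding vectors

def vgcn_embed(doc_bow):
    """doc_bow: (V,) bag-of-words vector for one document.

    Masks the vocabulary graph by the document's words, applies one
    graph convolution with a ReLU, then pools the V node states into
    G vectors of dimension D.
    """
    node = np.maximum(0, (A * doc_bow) @ X @ W)  # (V, D) convolved node states
    return (node.T @ Wg).T                       # (G, D) graph-embedding "tokens"

# A toy document containing words 1, 3 and 4 of the vocabulary.
word_ids = [1, 3, 4]
bow = np.zeros(V)
bow[word_ids] = 1.0

tokens = X[word_ids]                                      # (3, D) token embeddings
seq = np.concatenate([tokens, vgcn_embed(bow)], axis=0)   # (3 + G, D), fed to the transformer
print(seq.shape)  # (5, 8)
```

In the sketch, the global graph information rides along as extra positions in the input sequence, so BERT's self-attention layers can let the local token representations and the graph embeddings influence each other, which is the interaction the abstract describes.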