Paper Title
Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function
Paper Authors
Paper Abstract
In this paper, we study bidirectional LSTM networks for the task of text classification using both supervised and semi-supervised approaches. Several prior works have suggested that either complex pretraining schemes using unsupervised methods such as language modeling (Dai and Le 2015; Miyato, Dai, and Goodfellow 2016) or complicated models (Johnson and Zhang 2017) are necessary to achieve high classification accuracy. However, we develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results compared with more complex approaches. Furthermore, by combining cross-entropy loss with entropy minimization, adversarial, and virtual adversarial losses over both labeled and unlabeled data, we report state-of-the-art results for the text classification task on several benchmark datasets. In particular, on the ACL-IMDB sentiment analysis and AG-News topic classification datasets, our method outperforms current approaches by a substantial margin. We also show the generality of the mixed objective function by improving performance on a relation extraction task.
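The mixed objective described above can be sketched as a weighted sum of loss terms. The following is a minimal, illustrative NumPy sketch, not the paper's implementation: the function names and the `lam_*` weights are assumptions, and the adversarial and virtual adversarial terms are taken as precomputed scalars since they require gradient-based input perturbations that are omitted here.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the true class (labeled data)
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def entropy_minimization(logits):
    # mean prediction entropy (unlabeled data); minimizing it
    # pushes the model toward confident predictions
    p = softmax(logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))

def mixed_objective(labeled_logits, labels, unlabeled_logits,
                    l_adv=0.0, l_vadv=0.0,
                    lam_em=1.0, lam_adv=1.0, lam_vadv=1.0):
    # hypothetical combination: cross-entropy plus weighted
    # entropy-minimization, adversarial, and virtual adversarial terms
    return (cross_entropy(labeled_logits, labels)
            + lam_em * entropy_minimization(unlabeled_logits)
            + lam_adv * l_adv
            + lam_vadv * l_vadv)
```

For example, confident, correct logits on both the labeled and unlabeled batches drive the cross-entropy and entropy terms toward zero, so the mixed objective reduces to the weighted adversarial terms.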