Paper Title
A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization
Paper Authors
Paper Abstract
Conventional DNN training paradigms typically rely on one training set and one validation set, obtained by partitioning the annotated dataset available for training, namely the gross training set, in a certain way. The training set is used to train the model, while the validation set is used to estimate the generalization performance of the trained model as training proceeds, so as to avoid over-fitting. This paradigm has two major issues. First, the validation set can hardly guarantee an unbiased estimate of generalization performance due to potential mismatch with the test data. Second, training a DNN corresponds to solving a complex optimization problem, which is prone to getting trapped in inferior local optima and thus leads to undesired training results. To address these issues, we propose a novel DNN training framework. It generates multiple pairs of training and validation sets from the gross training set via random splitting, trains a DNN model of a pre-specified structure on each pair while transferring useful knowledge (e.g., promising network parameters) obtained from one model training process to the other model training processes via multi-task optimization, and outputs, among all trained models, the one with the best overall performance across the validation sets from all pairs. The knowledge transfer mechanism featured in this framework not only enhances training effectiveness by helping a model training process escape from local optima, but also improves generalization performance through the implicit regularization imposed on one model training process by the other model training processes. We implement the proposed framework, parallelize the implementation on a GPU cluster, and apply it to train several widely used DNN models. Experimental results demonstrate the superiority of the proposed framework over the conventional training paradigm.
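A minimal sketch of the workflow described in the abstract is given below, written in PyTorch on synthetic data. The number of splits, the transfer interval, and the "copy the best model's weights into the worst-performing one" rule are illustrative assumptions standing in for the paper's multi-task optimization, not its actual algorithm.

    # Sketch only: synthetic data, a toy classifier, and a simplistic
    # knowledge-transfer rule are assumptions for illustration.
    import copy
    import torch
    import torch.nn as nn
    from torch.utils.data import TensorDataset, DataLoader, random_split

    torch.manual_seed(0)

    # Synthetic "gross training set" (placeholder for the annotated dataset).
    X = torch.randn(1000, 20)
    y = (X.sum(dim=1) > 0).long()
    gross = TensorDataset(X, y)

    def make_model():
        # Pre-specified structure shared by all training tasks.
        return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

    def evaluate(model, loader):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for xb, yb in loader:
                correct += (model(xb).argmax(dim=1) == yb).sum().item()
                total += yb.numel()
        return correct / total

    # 1) Generate K training/validation pairs by randomly splitting the gross set.
    K, epochs, transfer_every = 4, 20, 5
    pairs = []
    for _ in range(K):
        train_set, val_set = random_split(gross, [800, 200])
        pairs.append((DataLoader(train_set, batch_size=64, shuffle=True),
                      DataLoader(val_set, batch_size=256)))
    val_loaders = [v for _, v in pairs]

    # 2) Train one model per pair, periodically transferring promising parameters.
    models = [make_model() for _ in range(K)]
    opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in models]
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        for model, opt, (train_loader, _) in zip(models, opts, pairs):
            model.train()
            for xb, yb in train_loader:
                opt.zero_grad()
                loss_fn(model(xb), yb).backward()
                opt.step()
        if (epoch + 1) % transfer_every == 0:
            # Assumed transfer rule: rank tasks by their own validation accuracy
            # and copy the best model's weights into the worst one.
            scores = [evaluate(m, v) for m, (_, v) in zip(models, pairs)]
            best, worst = scores.index(max(scores)), scores.index(min(scores))
            if best != worst:
                models[worst].load_state_dict(copy.deepcopy(models[best].state_dict()))

    # 3) Output the model with the best average accuracy across ALL validation sets.
    avg_scores = [sum(evaluate(m, v) for v in val_loaders) / K for m in models]
    best_model = models[avg_scores.index(max(avg_scores))]
    print("per-model average validation accuracy:", avg_scores)

In this sketch each training task sees a different random split, so selecting by the average accuracy over all K validation sets approximates the "overall best performance across the validation sets" criterion; a full implementation would additionally parallelize the K training loops across GPUs.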