Paper Title

Learning Compact Features via In-Training Representation Alignment

Paper Authors

Xin Li, Xiangrui Li, Deng Pan, Yao Qiang, Dongxiao Zhu

Paper Abstract

Deep neural networks (DNNs) for supervised learning can be viewed as a pipeline of a feature extractor (i.e., the last hidden layer) and a linear classifier (i.e., the output layer), trained jointly with stochastic gradient descent (SGD) on a loss function (e.g., cross-entropy). In each epoch, the true gradient of the loss function is estimated using mini-batches sampled from the training set, and model parameters are then updated with the mini-batch gradients. Although the latter provide an unbiased estimate of the former, they are subject to substantial variance arising from the size and number of sampled mini-batches, leading to noisy and jumpy updates. To stabilize this undesirable variance in estimating the true gradients, we propose In-Training Representation Alignment (ITRA), which explicitly aligns the feature distributions of two different mini-batches with a matching loss during SGD training. We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning: (1) extracting compact feature representations; (2) reducing over-adaptation to mini-batches via an adaptive weighting mechanism; and (3) accommodating multi-modality. Finally, we conduct large-scale experiments on both image and text classification to demonstrate its superior performance over strong baselines.
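
To make the core idea concrete, below is a minimal PyTorch-style sketch of one training step that adds a distribution-matching term between the features of two mini-batches to the usual cross-entropy loss. The abstract does not specify the exact matching loss, so a kernel-based maximum mean discrepancy (MMD) is used here purely as an illustration; the `model` interface (returning both last-hidden-layer features and logits), `itra_step`, `lam`, and `sigma` are assumptions for this sketch, not names from the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches using a Gaussian kernel.

    x, y: (batch, feature_dim) tensors of hidden representations.
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances -> Gaussian kernel matrix.
        dists = torch.cdist(a, b).pow(2)
        return torch.exp(-dists / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def itra_step(model, optimizer, batch_a, batch_b, lam=1.0):
    """One SGD step: cross-entropy on two mini-batches plus a matching loss
    that aligns their feature distributions (hypothetical interface)."""
    (xa, ya), (xb, yb) = batch_a, batch_b
    feat_a, logits_a = model(xa)   # model returns (features, logits)
    feat_b, logits_b = model(xb)

    # Standard supervised loss on both mini-batches.
    ce = F.cross_entropy(logits_a, ya) + F.cross_entropy(logits_b, yb)

    # Matching loss: penalize divergence between the two feature distributions.
    match = gaussian_mmd(feat_a, feat_b)

    loss = ce + lam * match
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `lam` trades off the classification loss against the alignment term; the intended effect, per the abstract, is that aligning features across mini-batches damps mini-batch-specific noise and encourages more compact representations.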
