Paper Title
The Vanishing Decision Boundary Complexity and the Strong First Component
Paper Authors
Paper Abstract
We show that, unlike classical machine learning classifiers, well-trained deep models have no complex structures in their decision boundaries. However, we found that complicated structures do appear during training, but they vanish shortly after they form. This is pessimistic news if one seeks to capture different levels of decision-boundary complexity to understand generalization, an approach that works well in classical machine learning. Nonetheless, we found that the decision boundaries of predecessor models on the training data are reflective of the final model's generalization. We show how to use predecessor decision boundaries to study the generalization of deep models. We have three major findings: one on the strength of the first principal component of deep models, another on the singularity of optimizers, and a third on the effects of skip connections in ResNets. Code is at https://github.com/hengshu1/decision_boundary_github.
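As a rough illustration of the two quantities the abstract alludes to, the sketch below (not the authors' code; the exact definitions used in the paper may differ) computes (1) the strength of the first principal component of a model's features, taken here as the fraction of variance explained by the top principal direction, and (2) a simple decision-boundary probe that counts predicted-label changes along the line segment between two inputs, which can be evaluated for predecessor checkpoints saved during training. The toy model and random data in the demo are placeholders.

```python
# Minimal sketch of first-PC strength and a decision-boundary probe.
# Assumptions: PyTorch, features as (n_samples, n_dims), any classifier
# mapping inputs to class logits. Not the repository's implementation.

import torch


def first_pc_strength(features: torch.Tensor) -> float:
    """Fraction of variance captured by the first principal component
    of a (n_samples, n_dims) feature matrix, e.g. penultimate activations."""
    centered = features - features.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(centered)   # singular values of centered data
    var = s ** 2                          # proportional to principal variances
    return (var[0] / var.sum()).item()


def boundary_crossings(model: torch.nn.Module,
                       x_a: torch.Tensor,
                       x_b: torch.Tensor,
                       steps: int = 100) -> int:
    """Count predicted-label changes along the straight line from x_a to x_b.
    More crossings suggest a more complex decision boundary on that segment."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x_a.dim()))
    line = (1 - alphas) * x_a.unsqueeze(0) + alphas * x_b.unsqueeze(0)
    with torch.no_grad():
        preds = model(line).argmax(dim=1)
    return int((preds[1:] != preds[:-1]).sum().item())


if __name__ == "__main__":
    # Toy demo with random features and an untrained linear classifier.
    feats = torch.randn(256, 64)
    print("first-PC strength:", first_pc_strength(feats))

    toy_model = torch.nn.Linear(32, 10)
    a, b = torch.randn(32), torch.randn(32)
    print("boundary crossings:", boundary_crossings(toy_model, a, b))
```

Applied to a sequence of saved checkpoints, `boundary_crossings` gives one simple way to watch boundary complexity appear and then vanish over training, in the spirit of the predecessor-model analysis described above.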