Paper Title

Complexity for deep neural networks and other characteristics of deep feature representations

Authors

Janik, Romuald A., Witaszczyk, Przemek

Abstract

We define a notion of complexity, which quantifies the nonlinearity of the computation of a neural network, as well as a complementary measure of the effective dimension of feature representations. We investigate these observables both for networks trained on various datasets and during training itself, uncovering in particular power law scaling. These observables can be understood in a dual way as uncovering hidden internal structure of the datasets themselves as a function of scale or depth. The entropic character of the proposed notion of complexity should allow transferring modes of analysis from neuroscience and statistical physics to the domain of artificial neural networks. The introduced observables can be applied without any change to the analysis of biological neuronal systems.
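As a concrete illustration of an entropy-based effective-dimension measure (a minimal sketch, not necessarily the exact definition used in the paper), one common proxy is the exponential of the Shannon entropy of the normalized eigenvalue spectrum of the feature covariance matrix. The sketch below, using only NumPy, computes this quantity for a batch of feature vectors; the function name `effective_dimension` and the Gaussian test data are illustrative assumptions.

```python
import numpy as np

def effective_dimension(features: np.ndarray) -> float:
    """Entropy-based effective dimension of a feature representation.

    NOTE: illustrative proxy (exp of the Shannon entropy of the normalized
    covariance eigenvalue spectrum), not necessarily the paper's definition.

    features: array of shape (n_samples, n_features).
    """
    centered = features - features.mean(axis=0, keepdims=True)
    # Eigenvalues of the empirical covariance matrix.
    cov = centered.T @ centered / centered.shape[0]
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    # exp(entropy) equals k for k equally weighted directions and
    # approaches 1 when a single direction dominates.
    return float(np.exp(-(p * np.log(p)).sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Isotropic 10-d Gaussian: effective dimension close to 10.
    print(effective_dimension(rng.normal(size=(5000, 10))))
    # Strongly anisotropic data: effective dimension close to 1.
    x = rng.normal(size=(5000, 10)) * np.array([10.0] + [0.1] * 9)
    print(effective_dimension(x))
```

The exponential-of-entropy form matches the "entropic character" the abstract emphasizes: it interpolates smoothly between 1 and the ambient dimension, and the same formula applies unchanged to activity recorded from biological neuronal populations.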
