Paper Title

Transformations between deep neural networks

Paper Authors

Tom Bertalan, Felix Dietrich, Ioannis G. Kevrekidis

Paper Abstract

We propose to test, and when possible establish, an equivalence between two different artificial neural networks by attempting to construct a data-driven transformation between them, using manifold-learning techniques. In particular, we employ diffusion maps with a Mahalanobis-like metric. If the construction succeeds, the two networks can be thought of as belonging to the same equivalence class. We first discuss transformation functions between only the outputs of the two networks; we then also consider transformations that take into account outputs (activations) of a number of internal neurons from each network. In general, Whitney's theorem dictates the number of measurements from one of the networks required to reconstruct each and every feature of the second network. The construction of the transformation function relies on a consistent, intrinsic representation of the network input space. We illustrate our algorithm by matching neural network pairs trained to learn (a) observations of scalar functions; (b) observations of two-dimensional vector fields; and (c) representations of images of a moving three-dimensional object (a rotating horse). The construction of such equivalence classes across different network instantiations clearly relates to transfer learning. We also expect that it will be valuable in establishing equivalence between different Machine Learning-based models of the same phenomenon observed through different instruments and by different research groups.
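
The core tool the abstract names is a diffusion map built on a Mahalanobis-like metric. Below is a minimal NumPy sketch of that idea, under stated assumptions: local covariances are estimated here from nearest neighbours (the paper's construction can obtain them differently, e.g. from local samples of the data), and the function names, the neighbourhood size `k`, and the median-based bandwidth heuristic are all illustrative choices, not taken from the paper.

```python
import numpy as np

def local_covariances(X, k=10):
    """Estimate a local covariance for each sample from its k nearest
    neighbours; the small ridge regularises near-singular estimates."""
    n, d = X.shape
    dist2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    covs = np.empty((n, d, d))
    for i in range(n):
        nbrs = X[np.argsort(dist2[i])[:k]]
        covs[i] = np.cov(nbrs, rowvar=False) + 1e-6 * np.eye(d)
    return covs

def mahalanobis_diffusion_map(X, n_coords=2, eps=None, k=10):
    """Diffusion map whose kernel uses the symmetrised Mahalanobis-like
    distance d2(i,j) = 0.5 (x_i - x_j)^T (C_i^{-1} + C_j^{-1}) (x_i - x_j)."""
    n = X.shape[0]
    inv_covs = np.linalg.inv(local_covariances(X, k=k))   # batched inverse
    D2 = np.empty((n, n))
    for i in range(n):
        diff = X - X[i]                                   # (n, d)
        M = 0.5 * (inv_covs[i][None, :, :] + inv_covs)    # (n, d, d)
        D2[i] = np.einsum('nd,nde,ne->n', diff, M, diff)
    if eps is None:
        eps = np.median(D2)          # ad-hoc kernel bandwidth heuristic
    K = np.exp(-D2 / eps)
    q = K.sum(axis=1)                # density normalisation (alpha = 1)
    K = K / np.outer(q, q)
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_coords + 1]      # drop the trivial constant eigenvector
    return vecs[:, idx].real * vals[idx].real
```

A hypothetical usage, in the spirit of the abstract: embed matched observations from two trained networks and then regress one set of diffusion coordinates on the other; if a consistent transformation exists, the networks can be regarded as members of the same equivalence class.

```python
# outputs_a, outputs_b: the two networks' responses to the same inputs
coords_a = mahalanobis_diffusion_map(outputs_a, n_coords=2)
coords_b = mahalanobis_diffusion_map(outputs_b, n_coords=2)
```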
