Paper Title
Input Hessian Regularization of Neural Networks
Paper Authors
Paper Abstract
Regularizing the input gradient has been shown to be effective in promoting the robustness of neural networks. Regularizing the input Hessian is therefore a natural next step. The key challenge is computational complexity: naively computing the full Hessian with respect to the inputs is infeasible for deep networks. In this paper we propose an efficient algorithm to train deep neural networks with Hessian operator-norm regularization. We analyze the approach theoretically and prove that the Hessian operator norm relates to the ability of a neural network to withstand adversarial attacks. A preliminary experimental evaluation on the MNIST and FMNIST datasets demonstrates that the new regularizer is feasible in practice and, furthermore, that it increases the robustness of neural networks beyond what input gradient regularization achieves.
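The abstract does not spell out the efficient algorithm, but the standard way to make an operator-norm penalty tractable is to avoid materializing the Hessian and instead run power iteration on Hessian-vector products, each of which costs only one extra gradient evaluation. The sketch below illustrates this idea on a toy quadratic with a known Hessian; the finite-difference HVP, the function names, and the toy problem are illustrative assumptions, not the paper's actual implementation (which would typically use automatic differentiation).

```python
import numpy as np

def hvp(grad_fn, x, v, eps=1e-5):
    # Finite-difference Hessian-vector product (assumed approximation):
    # H(x) v ~ (grad f(x + eps*v) - grad f(x)) / eps
    return (grad_fn(x + eps * v) - grad_fn(x)) / eps

def hessian_operator_norm(grad_fn, x, iters=50, seed=0):
    # Power iteration on Hessian-vector products to estimate the
    # spectral (operator) norm of the Hessian at x, never forming H.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)
    est = 0.0
    for _ in range(iters):
        hv = hvp(grad_fn, x, v)
        est = np.linalg.norm(hv)       # Rayleigh-quotient-style estimate
        v = hv / (est + 1e-12)         # renormalize for the next step
    return est

# Toy check: f(x) = 0.5 x^T A x has Hessian A, so the operator norm
# is the largest eigenvalue of A (here 3.0).
A = np.diag([3.0, 1.0, 0.5])
grad_fn = lambda x: A @ x              # analytic gradient of f
x0 = np.ones(3)
norm_est = hessian_operator_norm(grad_fn, x0)
print(round(norm_est, 3))
```

In a training loop, such an estimate would be added to the loss as a penalty term; with automatic differentiation the HVP is exact rather than finite-difference, and a handful of power-iteration steps per batch usually suffices.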