Paper Title
A neuron-wise subspace correction method for the finite neuron method
Paper Authors
Paper Abstract
In this paper, we propose a novel algorithm, the Neuron-wise Parallel Subspace Correction Method (NPSC), for the finite neuron method, which approximates numerical solutions of partial differential equations (PDEs) using neural network functions. Despite extensive research activity on applying neural networks to numerical PDEs, there is still a serious lack of training algorithms that achieve adequate accuracy, even for one-dimensional problems. Building on recent results on the spectral properties of the linear layer and on the landscape analysis of single-neuron problems, we develop a special type of subspace correction method that optimizes the linear layer and each neuron in the nonlinear layer separately. For one-dimensional problems, we present an optimal preconditioner that resolves the ill-conditioning of the linear layer, so that the linear layer is trained in a number of iterations that is uniform with respect to the number of neurons. In each single-neuron problem, a good local minimum that avoids flat energy regions is found by a superlinearly convergent algorithm. Numerical experiments on function approximation problems and PDEs demonstrate that the proposed method outperforms other gradient-based methods.
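To make the alternating structure described in the abstract concrete, below is a minimal NumPy/SciPy sketch of one outer iteration for a one-hidden-layer ReLU network fitted in the least-squares sense. It is an illustration under stated assumptions, not the paper's implementation: an exact least-squares solve stands in for the paper's preconditioned iterative solver for the linear layer, SciPy's BFGS stands in for the superlinearly convergent single-neuron solver, and the helper names (solve_linear_layer, npsc_step) are invented for this sketch.

# A minimal sketch (not the authors' implementation) of a neuron-wise
# subspace correction step for a one-hidden-layer ReLU network
#   u(x) = sum_i a_i * relu(w_i * x + b_i),
# fitted to data (x, y) in the least-squares sense.
import numpy as np
from scipy.optimize import minimize

def relu(t):
    return np.maximum(t, 0.0)

def solve_linear_layer(x, y, w, b):
    # Exact least-squares solve for the outer coefficients a;
    # the paper instead resolves the ill-conditioning of this
    # system with an optimal preconditioner and an iterative solver.
    Phi = relu(np.outer(x, w) + b)          # (n_samples, n_neurons)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return a

def npsc_step(x, y, a, w, b):
    # (1) Linear-layer correction.
    a = solve_linear_layer(x, y, w, b)
    # (2) Neuron-wise corrections: each neuron's (w_i, b_i) is
    # optimized against the residual left by the other neurons.
    # For simplicity this sketch updates neurons sequentially; the
    # parallel variant of the method computes all neuron corrections
    # from a common residual and combines them.
    for i in range(len(w)):
        others = relu(np.outer(x, np.delete(w, i)) + np.delete(b, i)) @ np.delete(a, i)
        r = y - others                       # residual seen by neuron i
        def loss(p):
            return np.sum((a[i] * relu(p[0] * x + p[1]) - r) ** 2)
        # BFGS is used here as a stand-in for the paper's
        # superlinearly convergent single-neuron solver.
        res = minimize(loss, np.array([w[i], b[i]]), method="BFGS")
        w[i], b[i] = res.x
    return a, w, b

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
n = 10
w, b = rng.standard_normal(n), rng.standard_normal(n)
a = solve_linear_layer(x, y, w, b)
for _ in range(20):
    a, w, b = npsc_step(x, y, a, w, b)
print("RMSE:", np.sqrt(np.mean((relu(np.outer(x, w) + b) @ a - y) ** 2)))

The neuron-wise subproblems inside npsc_step depend only on the residual left by the other neurons, which is what makes a parallel treatment of the nonlinear layer possible.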