Paper Title
Finite basis physics-informed neural networks as a Schwarz domain decomposition method
Paper Authors
Paper Abstract
Physics-informed neural networks (PINNs) [4, 10] are an approach for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to use a neural network to approximate the solution to the PDE and to incorporate the residual of the PDE as well as the boundary conditions into its loss function when training it. This provides a simple and mesh-free approach for solving problems relating to PDEs. However, a key limitation of PINNs is their lack of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions. In a more recent approach, finite basis physics-informed neural networks (FBPINNs) [8] use ideas from domain decomposition to accelerate the learning process of PINNs and improve their accuracy. In this work, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. We present numerical experiments on the influence of these different training strategies on convergence and accuracy. Furthermore, we propose and evaluate a preliminary implementation of coarse space correction for FBPINNs.
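To make the PINN construction described in the abstract concrete, the following is a minimal sketch, assuming a simple 1D boundary value problem and an illustrative network; the problem, layer sizes, collocation points, and function names are our own assumptions and are not taken from the paper. It shows a network approximating the PDE solution and a loss combining the PDE residual with the boundary conditions. FBPINNs [8] instead represent the solution as a sum of subdomain networks weighted by overlapping window functions, and the paper studies Schwarz-like strategies for training them; that decomposition is not shown here.

```python
import jax
import jax.numpy as jnp

def init_params(key, layers=(1, 32, 32, 1)):
    # One (weights, bias) pair per layer of a small fully connected network.
    params = []
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def u(params, x):
    # Network approximation of the PDE solution at a scalar point x.
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

# Illustrative 1D boundary value problem (an assumption for this sketch):
# u''(x) + sin(x) = 0 on (0, pi) with u(0) = u(pi) = 0.
def pde_residual(params, x):
    u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)(params, x)
    return u_xx + jnp.sin(x)

def pinn_loss(params, x_interior, x_boundary):
    # PDE residual at interior collocation points plus boundary-condition misfit:
    # the two ingredients of the PINN loss described in the abstract.
    res = jax.vmap(lambda xi: pde_residual(params, xi))(x_interior)
    bc = jax.vmap(lambda xb: u(params, xb))(x_boundary)
    return jnp.mean(res ** 2) + jnp.mean(bc ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
x_interior = jnp.linspace(0.0, jnp.pi, 50)[1:-1]
x_boundary = jnp.array([0.0, jnp.pi])
print(pinn_loss(params, x_interior, x_boundary))  # loss before any training
```

In practice these parameters would be optimized with a gradient-based method; the loss above is only meant to illustrate how the PDE residual and boundary conditions enter the training objective.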