Paper Title

Optimally weighted loss functions for solving PDEs with Neural Networks

Paper Authors

Remco van der Meer, Cornelis Oosterlee, Anastasia Borovykh

Paper Abstract

Recent works have shown that deep neural networks can be employed to solve partial differential equations, giving rise to the framework of physics-informed neural networks. We introduce a generalization of these methods that manifests as a scaling parameter which balances the relative importance of the different constraints imposed by partial differential equations. A mathematical motivation of these generalized methods is provided, which shows that for linear and well-posed partial differential equations, the functional form is convex. We then derive a choice for the scaling parameter that is optimal with respect to a measure of relative error. Because this optimal choice relies on having full knowledge of analytical solutions, we also propose a heuristic method to approximate this optimal choice. The proposed methods are compared numerically to the original methods on a variety of model partial differential equations, with the number of data points being updated adaptively. For several problems, including high-dimensional PDEs, the proposed methods are shown to significantly enhance accuracy.
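To make the idea of a scaling parameter concrete, below is a minimal sketch (not the authors' code) of a PINN-style training loop in which a weight `lam` balances the interior PDE-residual term against the boundary-condition term of the loss, illustrated on the 1-D Poisson problem u''(x) = f(x) with u(0) = u(1) = 0. All names (`Net`, `weighted_loss`, `lam`, the choice of problem and hyperparameters) are illustrative assumptions; the paper's contribution is the optimal and heuristic choice of this weight, which is not reproduced here.

```python
import torch

class Net(torch.nn.Module):
    """Small fully connected network approximating the PDE solution u(x)."""
    def __init__(self, width=32):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(1, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1))

    def forward(self, x):
        return self.layers(x)

def weighted_loss(model, x_int, x_bnd, f, lam):
    """L(lam) = lam * ||PDE residual||^2 + (1 - lam) * ||boundary error||^2."""
    x_int = x_int.requires_grad_(True)
    u = model(x_int)
    du = torch.autograd.grad(u, x_int, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_int, torch.ones_like(du), create_graph=True)[0]
    pde_term = torch.mean((d2u - f(x_int)) ** 2)   # interior PDE residual
    bnd_term = torch.mean(model(x_bnd) ** 2)       # homogeneous Dirichlet data
    return lam * pde_term + (1.0 - lam) * bnd_term

model = Net()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)  # exact solution: sin(pi x)

for step in range(2000):
    x_int = torch.rand(128, 1)               # interior collocation points
    x_bnd = torch.tensor([[0.0], [1.0]])     # boundary points
    loss = weighted_loss(model, x_int, x_bnd, f, lam=0.5)  # lam = 0.5 chosen arbitrarily
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Setting `lam = 0.5` recovers an evenly weighted loss; the abstract's point is that accuracy can depend strongly on this weight, motivating the derived optimal value and its heuristic approximation.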
