Paper Title
Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture
Paper Authors
Paper Abstract
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation. In forward modeling problems, PINNs are meshless partial differential equation (PDE) solvers that can handle irregular, high-dimensional physical domains. Naturally, the neural architecture hyperparameters have a large impact on the efficiency and accuracy of the PINN solver. However, this remains an open and challenging problem because of the large search space and the difficulty of identifying a proper search objective for PDEs. Here, we propose Auto-PINN, the first systematic, automated hyperparameter optimization approach for PINNs, which applies Neural Architecture Search (NAS) techniques to PINN design. Auto-PINN avoids manually or exhaustively searching the hyperparameter space associated with PINNs. A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs. We find that the different hyperparameters can be decoupled, and that the training loss function of PINNs is a good search objective. Comparison experiments with baseline methods demonstrate that Auto-PINN produces neural architectures with superior stability and accuracy over alternative baselines.
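The forward-modeling setup the abstract describes can be illustrated with a minimal PINN sketch. This is not the paper's implementation: the PDE (a 1D Poisson problem with exact solution sin(πx)), the network width/depth, and the training settings below are arbitrary illustrative choices; the width and depth are exactly the kind of architecture hyperparameters Auto-PINN would search over, and the training loss minimized here is the quantity the paper proposes as a search objective.

```python
import torch

torch.manual_seed(0)

# Tiny MLP surrogate u_theta(x). Its width and depth are architecture
# hyperparameters (values here are arbitrary, not from the paper).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    """Residual of u''(x) + pi^2 sin(pi x) = 0 (exact solution: sin(pi x))."""
    x = x.requires_grad_(True)
    u = net(x)
    # Differentiate the network output w.r.t. its input ("meshless": no grid,
    # derivatives come from autograd at scattered collocation points).
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.pi**2 * torch.sin(torch.pi * x)

x_col = torch.rand(128, 1)             # collocation points in (0, 1)
x_bc = torch.tensor([[0.0], [1.0]])    # Dirichlet boundary: u(0) = u(1) = 0

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
losses = []
for step in range(1000):
    opt.zero_grad()
    # Training loss = PDE residual term + boundary-condition term.
    loss = (pde_residual(x_col) ** 2).mean() + (net(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

The scalar `loss` minimized here is what a NAS procedure in the Auto-PINN spirit would evaluate per candidate architecture, since the true solution (and hence the test error) is unavailable in general.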