Paper Title

CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations

Authors

Peter Yichen Chen, Jinxu Xiang, Dong Heon Cho, Yue Chang, G. A. Pershing, Henrique Teles Maia, Maurizio M. Chiaramonte, Kevin Carlberg, Eitan Grinspun

Abstract

The long runtime of high-fidelity partial differential equation (PDE) solvers makes them unsuitable for time-critical applications. We propose to accelerate PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches reduce the dimensionality of discretized vector fields, our continuous reduced-order modeling (CROM) approach builds a low-dimensional embedding of the continuous vector fields themselves, not their discretization. We represent this reduced manifold using continuously differentiable neural fields, which may train on any and all available numerical solutions of the continuous system, even when they are obtained using diverse methods or discretizations. We validate our approach on an extensive range of PDEs with training data from voxel grids, meshes, and point clouds. Compared to prior discretization-dependent ROM methods, such as linear subspace proper orthogonal decomposition (POD) and nonlinear manifold neural-network-based autoencoders, CROM features higher accuracy, lower memory consumption, dynamically adaptive resolutions, and applicability to any discretization. For equal latent space dimension, CROM exhibits 79$\times$ and 49$\times$ better accuracy, and 39$\times$ and 132$\times$ smaller memory footprint, than POD and autoencoder methods, respectively. Experiments demonstrate 109$\times$ and 89$\times$ wall-clock speedups over unreduced models on CPUs and GPUs, respectively. Videos and codes are available on the project page: https://crom-pde.github.io
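
To make the core idea concrete, below is a minimal sketch (in PyTorch, not the authors' released code) of the kind of continuously differentiable neural field the abstract describes: a decoder that maps a latent code z and an arbitrary spatial query point x to the field value u(x; z). Because the decoder is smooth in x, spatial derivatives needed by the PDE can come from automatic differentiation rather than from any fixed mesh or grid. The class name, layer widths, and latent dimension here are all illustrative assumptions.

```python
# Minimal sketch of a CROM-style neural field decoder. All names and
# hyperparameters are illustrative, not the authors' API.
import torch
import torch.nn as nn

class NeuralFieldDecoder(nn.Module):
    """g(x, z) -> u: evaluates the reduced-order field at any spatial point."""
    def __init__(self, spatial_dim=1, latent_dim=16, field_dim=1, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(spatial_dim + latent_dim, width), nn.ELU(),
            nn.Linear(width, width), nn.ELU(),
            nn.Linear(width, field_dim),
        )

    def forward(self, x, z):
        # x: (N, spatial_dim) query points; z: (latent_dim,) latent state.
        z_tiled = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, z_tiled], dim=-1))

decoder = NeuralFieldDecoder()
z = torch.zeros(16, requires_grad=True)    # latent state at one time step
x = torch.rand(32, 1, requires_grad=True)  # arbitrary query points, any resolution

u = decoder(x, z)                          # field values u(x; z)
# The decoder is continuously differentiable, so spatial derivatives for the
# PDE right-hand side come from autograd instead of a fixed discretization.
du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
```

Note the design point this illustrates: the latent state z is the only quantity evolved in time, while x can be sampled anywhere and at any density at inference, which is what gives the method its discretization independence and dynamically adaptive resolution.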
