Paper Title
Learning physics-constrained subgrid-scale closures in the small-data regime for stable and accurate LES
Paper Authors
Paper Abstract
We demonstrate how incorporating physics constraints into convolutional neural networks (CNNs) enables learning subgrid-scale (SGS) closures for stable and accurate large-eddy simulations (LES) in the small-data regime (i.e., when the availability of high-quality training data is limited). Using several setups of forced 2D turbulence as the testbeds, we examine the {\it a priori} and {\it a posteriori} performance of three methods for incorporating physics: 1) data augmentation (DA), 2) CNNs with group convolutions (GCNNs), and 3) loss functions that enforce a global enstrophy-transfer conservation (EnsCon). While the data-driven closures from physics-agnostic CNNs trained in the big-data regime are accurate and stable, and outperform dynamic Smagorinsky (DSMAG) closures, their performance deteriorates substantially when these CNNs are trained with 40x fewer samples (the small-data regime). We show that CNNs with DA and GCNNs address this issue and each produces accurate and stable data-driven closures in the small-data regime. Despite its simplicity, DA, which adds appropriately rotated samples to the training set, performs as well as, or in some cases even better than, GCNN, which uses a sophisticated equivariance-preserving architecture. EnsCon, which combines structural modeling with aspects of functional modeling, also produces accurate and stable closures in the small-data regime. Overall, GCNN+EnsCon, which combines these two physics constraints, shows the best {\it a posteriori} performance in this regime. These results illustrate the power of physics-constrained learning in the small-data regime for accurate and stable LES.
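To make the DA method concrete, below is a minimal sketch of the rotational augmentation the abstract describes, assuming PyTorch and a dataset of (filtered field, SGS forcing) sample pairs on a square grid whose physics is equivariant to 90-degree rotations, as in forced 2D turbulence. The function name and tensor layout are illustrative assumptions, not the authors' exact pipeline.

```python
import torch

def augment_with_rotations(inputs, targets):
    """Append 90-, 180-, and 270-degree rotated copies of each sample.

    Assumes `inputs` and `targets` are (N, C, H, W) tensors of filtered
    fields and SGS terms on a square grid; rotating both by the same
    angle yields another physically valid training pair. This is a
    hypothetical helper illustrating the DA idea, not the paper's code.
    """
    ins = [torch.rot90(inputs, k, dims=(2, 3)) for k in range(4)]
    outs = [torch.rot90(targets, k, dims=(2, 3)) for k in range(4)]
    return torch.cat(ins, dim=0), torch.cat(outs, dim=0)
```

The resulting 4x larger training set lets a plain CNN learn rotational equivariance statistically, which is why DA can match an architecture that enforces the symmetry exactly.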
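By contrast, a GCNN builds the same rotational symmetry into the architecture. The sketch below shows one standard building block for this, a p4 (90-degree rotation) lifting convolution that applies a shared kernel in four rotated orientations; the paper's actual GCNN architecture, symmetry group, and library may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv(nn.Module):
    """Lifting layer: one shared kernel applied in 4 rotated orientations,
    so rotating the input permutes and rotates the 4 output copies
    (a sketch of rotation equivariance, not the paper's exact layer)."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.05)

    def forward(self, x):  # x: (B, in_ch, H, W)
        outs = [F.conv2d(x, torch.rot90(self.weight, k, dims=(2, 3)),
                         padding="same")
                for k in range(4)]
        return torch.stack(outs, dim=2)  # (B, out_ch, 4, H, W)
```

Because the symmetry is exact by construction, no rotated training samples are needed, at the cost of the more complex group-structured feature maps the abstract alludes to.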
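EnsCon, as named, constrains the global enstrophy transfer of the learned closure. One plausible reading is a loss that combines the pointwise MSE with a penalty on the mismatch in domain-integrated enstrophy transfer by the SGS term; the sketch below assumes this form, with the CNN predicting the SGS vorticity forcing Pi from the filtered vorticity, and a hypothetical weighting `lam`. The exact constraint and loss used in the paper may differ.

```python
import torch

def enscon_loss(pi_pred, pi_true, omega_bar, lam=1.0):
    """MSE plus a penalty on the global enstrophy-transfer mismatch.

    pi_pred, pi_true : (B, 1, H, W) predicted / true SGS vorticity forcing
    omega_bar        : (B, 1, H, W) filtered vorticity
    The domain mean of omega_bar * pi approximates the net enstrophy
    transfer by the closure; `lam` is a hypothetical weighting, and the
    whole form is an assumption based on the method's name.
    """
    mse = torch.mean((pi_pred - pi_true) ** 2)
    transfer_pred = torch.mean(omega_bar * pi_pred, dim=(1, 2, 3))
    transfer_true = torch.mean(omega_bar * pi_true, dim=(1, 2, 3))
    return mse + lam * torch.mean((transfer_pred - transfer_true) ** 2)
```

A constraint of this kind couples structural (pointwise) accuracy with a functional (net transfer) target, matching the abstract's description of EnsCon as combining the two modeling viewpoints.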