Title

Learning the Ising Model with Generative Neural Networks

Authors

Francesco D'Angelo and Lucas Böttcher

Abstract

Recent advances in deep learning and neural networks have led to an increased interest in the application of generative models in statistical and condensed matter physics. In particular, restricted Boltzmann machines (RBMs) and variational autoencoders (VAEs) as specific classes of neural networks have been successfully applied in the context of physical feature extraction and representation learning. Despite these successes, however, there is only limited understanding of their representational properties and limitations. To better understand the representational characteristics of RBMs and VAEs, we study their ability to capture physical features of the Ising model at different temperatures. This approach allows us to quantitatively assess learned representations by comparing sample features with corresponding theoretical predictions. Our results suggest that the considered RBMs and convolutional VAEs are able to capture the temperature dependence of magnetization, energy, and spin-spin correlations. The samples generated by RBMs are more evenly distributed across temperature than those generated by VAEs. We also find that convolutional layers in VAEs are important to model spin correlations whereas RBMs achieve similar or even better performances without convolutional filters.
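The abstract evaluates generated samples by comparing the magnetization, energy, and spin-spin correlations of sampled configurations against theoretical predictions. As a minimal illustration of how such observables are measured on spin configurations, the sketch below samples the 2D Ising model with a simple single-flip Metropolis algorithm and computes the three quantities. This is not the paper's RBM/VAE pipeline; the lattice size, coupling `J = 1`, and inverse temperature are illustrative choices.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice (periodic boundaries, J = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours with periodic boundary conditions.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def observables(spins):
    """Absolute magnetization, energy per spin, and nearest-neighbour correlation."""
    L = spins.shape[0]
    right = np.roll(spins, -1, axis=1)  # each spin's right neighbour
    down = np.roll(spins, -1, axis=0)   # each spin's lower neighbour
    energy = -np.sum(spins * (right + down)) / (L * L)
    m = np.abs(spins.mean())
    corr = np.mean(spins * right)  # nearest-neighbour spin-spin correlation
    return m, energy, corr

rng = np.random.default_rng(0)
L, beta = 16, 1.0  # beta well above beta_c ~ 0.44: ordered (low-temperature) phase
spins = np.ones((L, L), dtype=int)  # cold start in the ordered state
for _ in range(500):
    metropolis_sweep(spins, beta, rng)
m, e, c = observables(spins)
```

Deep in the ordered phase the observables approach their ground-state values (`m` near 1, energy per spin near -2, correlation near 1); the same measurement functions can be applied unchanged to configurations drawn from a generative model.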
