Paper Title

Accelerated solving of coupled, non-linear ODEs through LSTM-AI

Paper Authors

Camila Faccini de Lima, Juliano Ferrari Gianlupi, John Metzcar, Juliette Zerick

Paper Abstract

The present project aims to use machine learning, specifically neural networks (NN), to learn the trajectories of a set of coupled ordinary differential equations (ODEs) and decrease compute times for obtaining ODE solutions by using this surrogate model. As an example system of proven biological significance, we use an ODE model of a gene regulatory circuit of cyanobacteria related to photosynthesis \cite{original_biology_Kehoe, Sundus_math_model}. Using data generated by a numeric solution to the exemplar system, we train several long short-term memory (LSTM) neural networks. We stopped training when the networks achieved an accuracy of 3\% on testing data, resulting in networks able to predict values in the ODE time series ranging from 0.25 minutes to 6.25 minutes beyond the input values. We observed computational speed-ups ranging from 9.75 to 197 times when comparing prediction compute time with the compute time for obtaining the numeric solution. Given the success of this proof of concept, we plan to continue this project in the future and will attempt to realize the same computational speed-ups in the context of an agent-based modeling platform.
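
The abstract does not include the authors' code. As a rough illustration of the workflow it describes (generate trajectories with a numeric ODE solver, train an LSTM on windows of the solution, stop near 3% error, then time prediction against the solver), here is a minimal sketch. The two-variable toy ODE, window and horizon sizes, network width, and stopping threshold below are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch (not the authors' implementation): LSTM surrogate trained on
# numerically generated ODE trajectories, with a crude speed-up comparison.
import time
import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Toy coupled, non-linear system standing in for the gene-circuit ODEs.
    x1, x2 = y
    return [x2 - x1 * x2, x1 * x1 - x2]

# Generate training data from the numeric solution.
t_eval = np.linspace(0.0, 50.0, 2001)
sol = solve_ivp(rhs, (0.0, 50.0), [0.5, 1.0], t_eval=t_eval, rtol=1e-8)
series = sol.y.T.astype(np.float32)              # shape (T, 2)

# Build (input window -> future state) pairs: predict the state `horizon`
# steps beyond the last input point.
window, horizon = 32, 10
idx = range(len(series) - window - horizon)
X = np.stack([series[i:i + window] for i in idx])
Y = np.stack([series[i + window + horizon] for i in idx])
X_t, Y_t = torch.from_numpy(X), torch.from_numpy(Y)

class LSTMSurrogate(nn.Module):
    def __init__(self, n_vars=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_vars)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # last hidden state -> prediction

model = LSTMSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train until the mean relative error drops below ~3%, mirroring the stopping
# criterion in the abstract (a proper held-out split is omitted for brevity).
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X_t), Y_t)
    loss.backward()
    opt.step()
    with torch.no_grad():
        rel_err = ((model(X_t) - Y_t).abs().mean() / Y_t.abs().mean()).item()
    if rel_err < 0.03:
        break

# Crude wall-clock comparison: one LSTM prediction vs. re-running the solver.
start = time.perf_counter()
with torch.no_grad():
    model(X_t[:1])
lstm_time = time.perf_counter() - start

start = time.perf_counter()
solve_ivp(rhs, (0.0, 50.0), [0.5, 1.0], t_eval=t_eval, rtol=1e-8)
solver_time = time.perf_counter() - start
print(f"speed-up ~ {solver_time / lstm_time:.1f}x")
```

The timing comparison here is deliberately simple (one forward pass versus one full solve); the paper's reported 9.75x to 197x range presumably comes from its own benchmarking setup, which the abstract does not detail.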
