Paper Title


Automated Deep Abstractions for Stochastic Chemical Reaction Networks

Authors

Tatjana Petrov, Denis Repin

Abstract


Predicting stochastic cellular dynamics as emerging from the mechanistic models of molecular interactions is a long-standing challenge in systems biology: low-level chemical reaction network (CRN) models give rise to a high-dimensional continuous-time Markov chain (CTMC) which is computationally demanding and often prohibitive to analyse in practice. A recently proposed abstraction method uses deep learning to replace this CTMC with a discrete-time continuous-space process, by training a mixture density deep neural network with traces sampled at regular time intervals (which can be obtained either by simulating a given CRN or as time-series data from experiments). The major advantage of such an abstraction is that it produces a computational model that is dramatically cheaper to execute, while preserving the statistical features of the training data. In general, the abstraction accuracy improves with the amount of training data. However, depending on the CRN, the overall quality of the method -- the efficiency gain and abstraction accuracy -- will also depend on the choice of neural network architecture given by hyper-parameters such as the layer types and connections between them. As a consequence, in practice, the modeller would have to find a suitable architecture manually, for each given CRN, through a tedious and time-consuming trial-and-error cycle. In this paper, we propose to further automatise deep abstractions for stochastic CRNs, by learning the optimal neural network architecture along with the transition kernel of the abstract process. Automated search of the architecture makes the method directly applicable to any given CRN, which is time-saving for deep learning experts and crucial for non-specialists. We implement the method and demonstrate its performance on a number of representative CRNs with multi-modal emergent phenotypes.
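The training data the abstract refers to — CTMC traces sampled at regular time intervals — can be produced with Gillespie's stochastic simulation algorithm. Below is a minimal, hedged sketch of that data-generation step for a toy birth-death CRN (∅ → S at rate k1, S → ∅ at rate k2·x); the rate values, grid, and trace count are illustrative, not taken from the paper. Consecutive grid states (x_t, x_{t+Δt}) then form the input/target pairs on which a mixture density network would be trained.

```python
import numpy as np

def ssa_trace(rates, x0, t_grid, rng):
    """Gillespie SSA for a toy birth-death CRN (hypothetical example):
    birth  ∅ -> S  with propensity k1,
    death  S -> ∅  with propensity k2 * x.
    Returns the state recorded at each point of the regular grid t_grid."""
    k1, k2 = rates
    t, x = 0.0, x0
    out = np.empty(len(t_grid), dtype=int)
    i = 0
    while i < len(t_grid):
        a1, a2 = k1, k2 * x              # reaction propensities
        a0 = a1 + a2
        tau = rng.exponential(1.0 / a0) if a0 > 0 else np.inf
        # record the current state at every grid point before the next jump
        while i < len(t_grid) and t_grid[i] <= t + tau:
            out[i] = x
            i += 1
        t += tau
        if a0 > 0:                        # pick which reaction fires
            x += 1 if rng.random() * a0 < a1 else -1
    return out

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 10.0, 21)        # regular sampling times
traces = np.stack([ssa_trace((10.0, 1.0), 0, grid, rng) for _ in range(200)])
# consecutive grid states (x_t, x_{t+dt}) are the MDN training pairs
pairs = np.stack([traces[:, :-1].ravel(), traces[:, 1:].ravel()], axis=1)
```

The abstract model then replaces the jump-by-jump SSA loop with a single learned draw per grid step, which is where the claimed efficiency gain comes from.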
