Title
Connecting Stochastic Optimal Control and Reinforcement Learning
Authors
Abstract
In this paper, the connection between stochastic optimal control and reinforcement learning is investigated. Our main motivation is to apply importance sampling to the sampling of rare events, which can be reformulated as an optimal control problem. By using a parameterised approach, the optimal control problem becomes a stochastic optimization problem, which still raises open questions about how to tackle scalability to high-dimensional problems and how to deal with the intrinsic metastability of the system. To explore new methods, we link the optimal control problem to reinforcement learning, since both share the same underlying framework, namely a Markov decision process (MDP). We show how the MDP can be formulated for the optimal control problem. In addition, we discuss how the stochastic optimal control problem can be interpreted in the framework of reinforcement learning. At the end of the article, we present the application of two different reinforcement learning algorithms to the optimal control problem and compare their advantages and disadvantages.