Paper Title
On Using Hamiltonian Monte Carlo Sampling for Reinforcement Learning Problems in High-dimension
Paper Authors
Paper Abstract
Value function based reinforcement learning (RL) algorithms, for example, $Q$-learning, learn optimal policies from datasets of actions, rewards, and state transitions. However, when the underlying state transition dynamics are stochastic and evolve on a high-dimensional space, generating independent and identically distributed (IID) data samples for creating these datasets poses a significant challenge due to the intractability of the associated normalizing integral. In these scenarios, Hamiltonian Monte Carlo (HMC) sampling offers a computationally tractable way to generate data for training RL algorithms. In this paper, we introduce a framework, called \textit{Hamiltonian $Q$-Learning}, that demonstrates, both theoretically and empirically, that $Q$ values can be learned from a dataset generated by HMC samples of actions, rewards, and state transitions. Furthermore, to exploit the underlying low-rank structure of the $Q$ function, Hamiltonian $Q$-Learning uses a matrix completion algorithm for reconstructing the updated $Q$ function from $Q$ value updates over a much smaller subset of state-action pairs. Thus, by providing an efficient way to apply $Q$-learning in stochastic, high-dimensional settings, the proposed approach broadens the scope of RL algorithms for real-world applications.
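The abstract outlines three ingredients: HMC sampling of next states from an intractable transition density, a Bellman backup over a small subset of state-action pairs, and low-rank matrix completion of the remaining $Q$ entries. The following is a minimal sketch of how these pieces could fit together, assuming a tabular $Q$ matrix over discretized states, a simple hard-impute completion step, and hypothetical transition log-density functions; it is an illustration of the general idea, not the authors' implementation.

```python
# Hypothetical sketch of a Hamiltonian Q-Learning style update; names and the toy
# completion step are illustrative assumptions, not the paper's actual algorithm.
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples, step_size=0.1, n_leapfrog=20):
    """Draw samples from an unnormalized density with Hamiltonian Monte Carlo."""
    samples, x = [], np.asarray(x0, dtype=float)
    for _ in range(n_samples):
        p = np.random.randn(*x.shape)                      # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step_size * grad_log_prob(x_new)    # half step for momentum
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new                     # full step for position
            p_new += step_size * grad_log_prob(x_new)      # full step for momentum
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_prob(x_new)    # final half step
        # Metropolis acceptance based on the change in the Hamiltonian
        h_old = -log_prob(x) + 0.5 * (p @ p)
        h_new = -log_prob(x_new) + 0.5 * (p_new @ p_new)
        if np.log(np.random.rand()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

def low_rank_complete(Q_partial, mask, rank=3, n_iters=100):
    """Fill unobserved Q entries by iterated truncated-SVD projection (a simple
    stand-in for the matrix completion algorithm referenced in the abstract)."""
    Q = np.where(mask, Q_partial, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Q, full_matrices=False)
        Q_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # best rank-`rank` approximation
        Q = np.where(mask, Q_partial, Q_low)               # keep observed entries fixed
    return Q

def hamiltonian_q_update(Q, states, actions, reward_fn, log_prob_fn, grad_fn,
                         subset, gamma=0.9, n_hmc=50):
    """One Bellman backup over a small subset of (state_idx, action_idx) pairs, with the
    expectation over next states approximated by HMC samples, then completed to full size."""
    Q_new, mask = np.zeros_like(Q), np.zeros(Q.shape, dtype=bool)
    for (i, j) in subset:
        s, a = states[i], actions[j]
        # HMC samples from the unnormalized next-state density p(s' | s, a)
        next_states = hmc_sample(lambda x: log_prob_fn(x, s, a),
                                 lambda x: grad_fn(x, s, a), s, n_hmc)
        # nearest-grid lookup of max_a' Q(s', a'), averaged over the HMC samples
        idx = [np.argmin(np.linalg.norm(states - ns, axis=1)) for ns in next_states]
        Q_new[i, j] = reward_fn(s, a) + gamma * np.mean(Q[idx].max(axis=1))
        mask[i, j] = True
    return low_rank_complete(Q_new, mask)
```

In this sketch the HMC samples stand in for the IID draws from the normalized transition density that the abstract identifies as intractable in high dimensions, and the completion step reconstructs the full $Q$ matrix from updates computed on only a small fraction of state-action pairs.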