Paper Title
Borrowing From the Future: Addressing Double Sampling in Model-free Control
Paper Authors
Paper Abstract
In model-free reinforcement learning, the temporal difference method and its variants become unstable when combined with nonlinear function approximation. Bellman residual minimization with stochastic gradient descent (SGD) is more stable, but it suffers from the double sampling problem: given the current state, two independent samples of the next state are required, yet typically only one is available. Recently, the authors of [Zhu et al., 2020] introduced the borrowing from the future (BFF) algorithm to address this issue for the prediction problem. The main idea is to borrow extra randomness from the future to approximately re-sample the next state when the underlying dynamics of the problem are sufficiently smooth. This paper extends the BFF algorithm to model-free control based on action-value functions. We prove that BFF is close to unbiased SGD when the underlying dynamics vary slowly with respect to actions. We confirm our theoretical findings with numerical simulations.
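
To make the borrowing step concrete, below is a minimal Python sketch of a BFF-style update for action-value control under the smoothness condition described above. The toy dynamics, random Fourier features, uniform behavior policy, and step sizes are all illustrative assumptions for this sketch, not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch of a BFF-style gradient for action-value control.
# Environment, features, policy, and hyperparameters are assumptions.

rng = np.random.default_rng(0)

n_actions = 2
gamma = 0.9
dim = 8  # feature dimension (assumed)

def features(s):
    # Random-Fourier-style features of a scalar state (assumed choice).
    freqs = np.arange(1, dim + 1)
    return np.cos(freqs * s) / np.sqrt(dim)

def q_values(theta, s):
    # Linear action-value function: Q(s, a) = theta[a] . phi(s).
    return theta @ features(s)

def step(s, a):
    # Toy smooth stochastic dynamics whose drift depends only mildly on
    # the action, matching the regime where BFF is nearly unbiased.
    drift = 0.1 if a == 0 else -0.1
    s_next = s + drift + 0.05 * rng.normal()
    reward = np.cos(s)  # assumed reward
    return s_next, reward

theta = np.zeros((n_actions, dim))
lr = 0.05

s0 = 0.0
a0 = rng.integers(n_actions)
s1, r0 = step(s0, a0)

for t in range(5000):
    a1 = rng.integers(n_actions)  # uniform behavior policy (assumed)
    s2, r1 = step(s1, a1)

    # TD error at time t, using the actually observed next state s1.
    delta = r0 + gamma * np.max(q_values(theta, s1)) - q_values(theta, s0)[a0]

    # BFF surrogate for a second, approximately independent next state:
    # borrow the future increment (s2 - s1) and apply it to s0.
    s1_tilde = s0 + (s2 - s1)

    # Evaluate the gradient of the target term at the surrogate sample,
    # breaking the correlation that causes the double sampling bias.
    a_star = np.argmax(q_values(theta, s1_tilde))
    grad = np.zeros_like(theta)
    grad[a_star] += gamma * features(s1_tilde)
    grad[a0] -= features(s0)

    theta -= lr * delta * grad

    s0, a0, r0, s1 = s1, a1, r1, s2
```

The key line is the surrogate next state s1_tilde = s0 + (s2 - s1): it reuses the increment observed one step into the future as an approximately independent draw of the next state, which is what removes the need for a second sample from the current state.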