Paper Title
Optimistic Policy Optimization with Bandit Feedback
Paper Authors
Paper Abstract
Policy optimization methods are one of the most widely used classes of Reinforcement Learning (RL) algorithms. Yet, so far, such methods have been mostly analyzed from an optimization perspective, without addressing the problem of exploration, or by making strong assumptions on the interaction with the environment. In this paper we consider model-based RL in the tabular finite-horizon MDP setting with unknown transitions and bandit feedback. For this setting, we propose an optimistic trust region policy optimization (TRPO) algorithm for which we establish $\tilde O(\sqrt{S^2 A H^4 K})$ regret for stochastic rewards. Furthermore, we prove $\tilde O( \sqrt{ S^2 A H^4 } K^{2/3} ) $ regret for adversarial rewards. Interestingly, this result matches previous bounds derived for the bandit feedback case, yet with known transitions. To the best of our knowledge, the two results are the first sub-linear regret bounds obtained for policy optimization algorithms with unknown transitions and bandit feedback.
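To make the high-level approach concrete, below is a minimal, illustrative sketch of optimistic policy optimization in a tabular finite-horizon MDP with bandit feedback: empirical transition estimates plus an exploration bonus yield optimistic Q-value estimates for the current policy, which is then improved with a KL-regularized, mirror-descent (TRPO-style) step. This is not the paper's exact algorithm or bonus; the sizes S, A, H, K, the step size eta, and the bonus constant are illustrative assumptions.

```python
# Illustrative sketch only: optimism via exploration bonuses + mirror-descent policy update.
# All problem sizes and constants below are assumed for demonstration.
import numpy as np

S, A, H, K = 5, 3, 10, 1000                 # states, actions, horizon, episodes (assumed)
eta = np.sqrt(2 * np.log(A) / (H**2 * K))   # mirror-descent step size (illustrative)

rng = np.random.default_rng(0)
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # toy "unknown" transitions
R_true = rng.uniform(size=(S, A))                 # toy mean rewards

policy = np.full((H, S, A), 1.0 / A)              # uniform initial policy
visit = np.ones((S, A))                           # visit counts (init 1 to avoid /0)
trans_count = np.ones((S, A, S)) / S              # transition pseudo-counts
reward_sum = np.zeros((S, A))

for k in range(K):
    # --- Optimistic policy evaluation: backward induction with bonuses ---
    P_hat = trans_count / trans_count.sum(axis=2, keepdims=True)
    r_hat = reward_sum / visit
    bonus = H * np.sqrt(np.log(2 * S * A * H * K) / visit)   # illustrative bonus
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        Q[h] = np.clip(r_hat + bonus + P_hat @ V[h + 1], 0.0, H)
        V[h] = (policy[h] * Q[h]).sum(axis=1)

    # --- TRPO / mirror-descent style improvement (exponential weights) ---
    policy = policy * np.exp(eta * Q)
    policy /= policy.sum(axis=2, keepdims=True)

    # --- Run one episode with bandit feedback and update statistics ---
    s = 0
    for h in range(H):
        a = rng.choice(A, p=policy[h, s])
        r = float(rng.random() < R_true[s, a])    # only the obtained reward is observed
        s_next = rng.choice(S, p=P_true[s, a])
        visit[s, a] += 1
        reward_sum[s, a] += r
        trans_count[s, a, s_next] += 1
        s = s_next
```

The exponential-weights update above is one common way to realize a KL-regularized (trust-region) policy step in the tabular case; the paper's actual update rule, bonus terms, and constants should be taken from the paper itself.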