Paper Title

Revisiting Gaussian mixture critics in off-policy reinforcement learning: a sample-based approach

Paper Authors

Bobak Shahriari, Abbas Abdolmaleki, Arunkumar Byravan, Abe Friesen, Siqi Liu, Jost Tobias Springenberg, Nicolas Heess, Matt Hoffman, Martin Riedmiller

Paper Abstract

Actor-critic algorithms that make use of distributional policy evaluation have frequently been shown to outperform their non-distributional counterparts on many challenging control tasks. Examples of this behavior include the D4PG and DMPO algorithms as compared to DDPG and MPO, respectively [Barth-Maron et al., 2018; Hoffman et al., 2020]. However, both agents rely on the C51 critic for value estimation. One major drawback of the C51 approach is its requirement of prior knowledge about the minimum and maximum values a policy can attain as well as the number of bins used, which fixes the resolution of the distributional estimate. While the DeepMind control suite of tasks utilizes standardized rewards and episode lengths, thus enabling the entire suite to be solved with a single setting of these hyperparameters, this is often not the case. This paper revisits a natural alternative that removes this requirement, namely a mixture of Gaussians, and a simple sample-based loss function to train it in an off-policy regime. We empirically evaluate its performance on a broad range of continuous control tasks and demonstrate that it eliminates the need for these distributional hyperparameters and achieves state-of-the-art performance on a variety of challenging tasks (e.g. the humanoid, dog, quadruped, and manipulator domains). Finally, we provide an implementation in the Acme agent repository.
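
The abstract only names the ingredients (a Gaussian-mixture critic and a simple sample-based loss), so the following is a minimal sketch of one plausible reading: draw bootstrapped return samples z = r + γ·Z'(s', a') from a target critic and minimize their negative log-likelihood under the online critic's mixture. Everything here (function names, shapes, the use of NumPy/SciPy) is our own illustrative assumption, not the Acme implementation referenced in the paper.

```python
# Illustrative sketch only -- NOT the authors' Acme implementation.
# Fit a Gaussian-mixture critic by maximizing the likelihood of
# bootstrapped return samples drawn from a target critic.
import numpy as np
from scipy.special import logsumexp

def mixture_log_prob(z, logits, means, log_stds):
    """Log-density of return samples z under a Gaussian mixture.

    z:        (batch, n) return samples to score
    logits:   (batch, k) unnormalized mixture weights
    means:    (batch, k) component means
    log_stds: (batch, k) component log standard deviations
    """
    log_w = logits - logsumexp(logits, axis=-1, keepdims=True)
    z = z[..., None]                      # (batch, n, 1): broadcast vs. components
    mu, ls = means[:, None, :], log_stds[:, None, :]
    # Per-component Gaussian log-density.
    comp = -0.5 * ((z - mu) / np.exp(ls)) ** 2 - ls - 0.5 * np.log(2 * np.pi)
    return logsumexp(log_w[:, None, :] + comp, axis=-1)  # (batch, n)

def sample_based_critic_loss(rewards, discounts, target_samples,
                             logits, means, log_stds):
    """Negative log-likelihood of targets z = r + gamma * Z'(s', a'),
    where target_samples are hypothetical draws from the target critic."""
    z = rewards[:, None] + discounts[:, None] * target_samples
    return -mixture_log_prob(z, logits, means, log_stds).mean()

# Toy usage: batch of 4 transitions, 3 mixture components, 8 target samples.
rng = np.random.default_rng(0)
batch, k, n = 4, 3, 8
loss = sample_based_critic_loss(
    rewards=rng.normal(size=batch),
    discounts=np.full(batch, 0.99),
    target_samples=rng.normal(size=(batch, n)),  # stand-in for Z' draws
    logits=rng.normal(size=(batch, k)),
    means=rng.normal(size=(batch, k)),
    log_stds=np.zeros((batch, k)),
)
print(loss)  # scalar NLL to minimize w.r.t. the critic parameters
```

Note how this loss needs no minimum/maximum return bounds or bin count: unlike C51's fixed support, the mixture's means and scales are free parameters, which is the hyperparameter advantage the abstract emphasizes.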
