Paper Title

Transformers are Sample-Efficient World Models

Paper Authors

Vincent Micheli, Eloi Alonso, François Fleuret

Paper Abstract

Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, with learning in the imagination of a world model being one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to be accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human normalized score of 1.046, and outperforms humans on 10 out of 26 games, setting a new state of the art for methods without lookahead search. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our code and models at https://github.com/eloialonso/iris.
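The abstract names the two components of the IRIS world model: a discrete autoencoder that compresses each frame into a short sequence of tokens, and an autoregressive Transformer that models that token sequence. The PyTorch sketch below is a minimal, hypothetical illustration of that split only, not the paper's implementation: every size and layer choice (VOCAB_SIZE, EMBED_DIM, the conv stacks, two Transformer layers) is an assumption made for readability, and it omits the reward and termination heads, the actor-critic, and training in imagination. The released code at https://github.com/eloialonso/iris is the reference.

```python
import torch
import torch.nn as nn

# All sizes below are illustrative assumptions, not IRIS's real hyperparameters.
VOCAB_SIZE = 512   # codebook size of the discrete autoencoder (assumed)
EMBED_DIM = 256    # token embedding width (assumed)

class DiscreteAutoencoder(nn.Module):
    """VQ-style discrete autoencoder: turns a 64x64 frame into a 4x4 grid
    of codebook indices (16 tokens) and decodes indices back to pixels."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(64, EMBED_DIM, kernel_size=4, stride=4),
        )
        self.codebook = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(EMBED_DIM, 64, kernel_size=4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=4),
        )

    def encode(self, frames):
        # frames: (B, 3, 64, 64) -> latents: (B, EMBED_DIM, 4, 4)
        z = self.encoder(frames)
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(b, h * w, c)
        # Nearest-neighbour quantization: each spatial position -> one token id.
        dists = torch.cdist(flat, self.codebook.weight.unsqueeze(0).expand(b, -1, -1))
        return dists.argmin(dim=-1)  # (B, 16) integer tokens

    def decode(self, tokens):
        # tokens: (B, 16) -> frames: (B, 3, 64, 64)
        b = tokens.shape[0]
        z = self.codebook(tokens).reshape(b, 4, 4, EMBED_DIM).permute(0, 3, 1, 2)
        return self.decoder(z)

class TransformerWorldModel(nn.Module):
    """Causal Transformer over the token sequence; its next-token logits stand
    in for the next-frame / reward / termination heads a full world model needs."""
    def __init__(self, max_len=256, n_layers=2, n_heads=4):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, EMBED_DIM))
        layer = nn.TransformerEncoderLayer(EMBED_DIM, nhead=n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        t = tokens.shape[1]
        x = self.tok_emb(tokens) + self.pos_emb[:, :t]
        # Causal mask so position i can only attend to positions <= i.
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        return self.head(self.blocks(x, mask=mask))  # (B, t, VOCAB_SIZE)

if __name__ == "__main__":
    ae, wm = DiscreteAutoencoder(), TransformerWorldModel()
    frames = torch.rand(2, 3, 64, 64)   # a batch of two observations
    tokens = ae.encode(frames)          # (2, 16) discrete tokens
    logits = wm(tokens)                 # (2, 16, 512) next-token logits
    recon = ae.decode(tokens)           # (2, 3, 64, 64) reconstructed frames
    print(tokens.shape, logits.shape, recon.shape)
```

Compressing each frame into a handful of discrete tokens is what keeps the Transformer's context short enough to stay accurate over long imagined rollouts, which is exactly the failure mode the abstract highlights for learning in imagination.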
