Title
Efficient Reinforcement Learning Through Trajectory Generation
Authors
Abstract
A key barrier to using reinforcement learning (RL) in many real-world applications is the large number of system interactions required to learn a good control policy. Off-policy and offline RL methods have been proposed to reduce the number of interactions with the physical environment by learning control policies from historical data. However, their performance suffers from a lack of exploration and from distributional shifts in the trajectories once the controller is updated. Moreover, most RL methods require that all states be directly observed, which is difficult to attain in many settings. To overcome these challenges, we propose a trajectory generation algorithm that adaptively generates new trajectories as if the system were being operated and explored under the updated control policy. Motivated by the fundamental lemma for linear systems, we generate new trajectories from linear combinations of historical trajectories, assuming the historical data are sufficiently exciting. For linear feedback control, we prove that the algorithm generates trajectories with exactly the same distribution as if they were sampled from the real system under the updated control policy. In particular, the algorithm extends to systems where the states are not directly observed. Experiments show that the proposed method significantly reduces the amount of sampled data needed by RL algorithms.
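To illustrate the kind of trajectory generation the abstract describes, the following is a minimal sketch of fundamental-lemma-based data-driven simulation: a new input/output trajectory is reconstructed as a linear combination of columns of Hankel matrices built from one persistently exciting historical trajectory. This is not the authors' implementation; the function names (`hankel`, `generate_trajectory`), the noise-free least-squares formulation, and the single-trajectory data layout are assumptions made for illustration only.

```python
import numpy as np

def hankel(w, L):
    """Block-Hankel matrix with L block rows from a signal w of shape (T, dim)."""
    T, dim = w.shape
    cols = T - L + 1
    H = np.zeros((L * dim, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].reshape(-1)
    return H

def generate_trajectory(u_hist, y_hist, u_init, y_init, u_new):
    """
    Illustrative data-driven simulation in the spirit of the fundamental lemma
    (assumed formulation, not the paper's exact algorithm).

    u_hist, y_hist : historical input/output data, shapes (T, m) and (T, p)
    u_init, y_init : recent input/output window fixing the initial condition,
                     shapes (T_ini, m) and (T_ini, p)
    u_new          : input sequence produced by the updated policy, shape (T_f, m)
    returns        : generated output sequence, shape (T_f, p)
    """
    T_ini, m = u_init.shape
    T_f, p = u_new.shape[0], y_hist.shape[1]
    L = T_ini + T_f

    Hu = hankel(u_hist, L)
    Hy = hankel(y_hist, L)
    Up, Uf = Hu[:T_ini * m], Hu[T_ini * m:]   # past / future input blocks
    Yp, Yf = Hy[:T_ini * p], Hy[T_ini * p:]   # past / future output blocks

    # Find a combination g of historical columns that matches the initial
    # window and the new input sequence.
    A = np.vstack([Up, Yp, Uf])
    b = np.concatenate([u_init.reshape(-1), y_init.reshape(-1), u_new.reshape(-1)])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)

    # The generated outputs are the same combination of the future output blocks.
    return (Yf @ g).reshape(T_f, p)
```

For the closed-loop setting sketched in the abstract, a linear feedback policy could supply `u_new` one step (or one window) at a time from the previously generated outputs, so that an entire trajectory under the updated policy is obtained without further interaction with the physical system.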