Paper Title
Safe Reinforcement Learning using Data-Driven Predictive Control
Paper Authors
Paper Abstract
Reinforcement learning (RL) algorithms can achieve state-of-the-art performance in decision-making and continuous control tasks. However, applying RL algorithms to safety-critical systems still needs to be well justified due to the exploratory nature of many RL algorithms, especially when the models of the robot and the environment are unknown. To address this challenge, we propose a data-driven safety layer that acts as a filter for unsafe actions. The safety layer uses a data-driven predictive controller to enforce safety guarantees for RL policies during training and after deployment. The RL agent proposes an action that is verified by computing a data-driven reachability analysis. If the reachable set of the robot under the proposed action intersects the unsafe set, we call the data-driven predictive controller to find the closest safe action to the proposed unsafe action. The safety layer penalizes the RL agent if the proposed action is unsafe and replaces it with the closest safe one. In simulation, we show that our method outperforms state-of-the-art safe RL methods on the robot navigation problem for a Turtlebot 3 in Gazebo and a quadrotor in Unreal Engine 4 (UE4).
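The action-filtering loop the abstract describes (propose, verify by reachability, substitute the closest safe action, penalize) can be sketched as below. This is a minimal illustration, not the paper's method: it assumes single-integrator dynamics, approximates the one-step reachable set as an axis-aligned box rather than the data-driven reachable sets used in the paper, and replaces the predictive controller with a search over a hypothetical candidate-action list.

```python
import numpy as np

def reachable_box(state, action, dt=0.1, disturbance=0.05):
    """One-step reachable set as an axis-aligned box (assumption:
    single-integrator dynamics x' = x + a*dt with bounded disturbance)."""
    center = state + action * dt
    radius = np.full_like(state, disturbance)
    return center - radius, center + radius

def intersects(lo, hi, box):
    """Axis-aligned box intersection test."""
    blo, bhi = box
    return bool(np.all(lo <= bhi) and np.all(blo <= hi))

def is_safe(state, action, unsafe_boxes):
    """Action is safe if its reachable set misses every unsafe region."""
    lo, hi = reachable_box(state, action)
    return not any(intersects(lo, hi, b) for b in unsafe_boxes)

def filter_action(state, proposed, unsafe_boxes, candidates):
    """Return (action, penalty): keep the proposed action if it is safe;
    otherwise substitute the candidate closest to it and flag a penalty
    that the RL agent receives as part of its reward signal."""
    if is_safe(state, proposed, unsafe_boxes):
        return proposed, 0.0
    safe = [a for a in candidates if is_safe(state, a, unsafe_boxes)]
    if not safe:
        return np.zeros_like(proposed), 1.0  # fall back to stopping
    best = min(safe, key=lambda a: np.linalg.norm(a - proposed))
    return best, 1.0
```

In use, an action driving the robot toward an unsafe box is rejected and replaced by the nearest safe candidate, while an action whose reachable set clears all unsafe boxes passes through unchanged with zero penalty.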