Paper Title

Learning Generalized Policies Without Supervision Using GNNs

Authors

Simon Ståhlberg, Blai Bonet, Hector Geffner

Abstract

We consider the problem of learning generalized policies for classical planning domains using graph neural networks from small instances represented in lifted STRIPS. The problem has been considered before, but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at obtaining crisp experimental results and a deeper understanding: either the policy greedy in the learned value function achieves close to 100% generalization over instances larger than those used in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the $C_{2}$ fragment of first-order logic (namely, FOL with 2 variables and counting quantifiers). We find, for example, that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms" encoding role compositions and transitive closures that do not fit into $C_{2}$. The work follows the GNN approach for learning optimal general policies in a supervised fashion (Ståhlberg, Bonet, Geffner, 2022); but the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies do not always yield policies that generalize, as the goals of optimality and generality are in conflict in domains where optimal planning is NP-hard.
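
The two mechanisms named in the abstract, acting greedily on a learned value function and extending states with derived atoms such as transitive closures, can be illustrated with a short sketch. This is not the authors' code: the interfaces `value_net` (a learned GNN value function $V(s)$) and `successors` (a STRIPS successor generator), as well as the encoding of atoms as tuples, are assumptions made only for illustration.

```python
from typing import Callable, FrozenSet, Iterable, Tuple

# Assumed encoding: an atom is a tuple like ("on", "a", "b"),
# and a state is a frozenset of such atoms.
Atom = Tuple[str, ...]
State = FrozenSet[Atom]


def transitive_closure(state: State, role: str) -> State:
    """Extend `state` with derived atoms (role + "*", x, y) for the
    transitive closure of the binary relation `role`; such derived
    atoms lie outside the C2 fragment that bounds GNN expressiveness."""
    edges = {(a[1], a[2]) for a in state if a[0] == role and len(a) == 3}
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in edges:
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    derived = {(role + "*", x, y) for (x, y) in closure}
    return frozenset(state | derived)


def greedy_policy(state: State,
                  value_net: Callable[[State], float],
                  successors: Callable[[State], Iterable[State]]) -> State:
    """Return the successor with the lowest learned value V(s'),
    i.e. the policy that is greedy in the learned value function."""
    return min(successors(state), key=value_net)
```

The greedy rule breaks ties arbitrarily; in the setting described by the abstract, generalization is measured by whether this greedy policy reaches the goal on instances larger than those used in training.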
