Paper Title


The (Un)Scalability of Heuristic Approximators for NP-Hard Search Problems

Authors

Sumedh Pendurkar, Taoan Huang, Sven Koenig, Guni Sharon

Abstract


The A* algorithm is commonly used to solve NP-hard combinatorial optimization problems. When provided with a completely informed heuristic function, A* solves many NP-hard minimum-cost path problems in time polynomial in the branching factor and the number of edges in a minimum-cost path. Thus, approximating their completely informed heuristic functions with high precision is NP-hard. We therefore examine recent publications that propose the use of neural networks for this purpose. We support our claim that these approaches do not scale to large instance sizes both theoretically and experimentally. Our first experimental results for three representative NP-hard minimum-cost path problems suggest that using neural networks to approximate completely informed heuristic functions with high precision might result in network sizes that scale exponentially in the instance sizes. The research community might thus benefit from investigating other ways of integrating heuristic search with machine learning.
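The abstract's claim that A* with a completely informed heuristic runs in polynomial time rests on the fact that such a heuristic steers the search so that only nodes on a minimum-cost path are expanded (with suitable tie-breaking). The following minimal sketch illustrates this on a hypothetical toy graph; the graph, the hand-computed exact cost-to-go table `h_star`, and the expansion counter are illustrative assumptions, not part of the paper's experiments.

```python
import heapq

# Illustrative weighted digraph (assumption, not from the paper).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 1), ("G", 5)],
    "B": [("G", 1)],
    "G": [],
}
# Completely informed heuristic: exact minimum cost-to-go to goal "G",
# precomputed by hand for this tiny graph.
h_star = {"S": 3, "A": 2, "B": 1, "G": 0}

def astar(start, goal, h):
    """A* search; returns (minimum cost, number of node expansions)."""
    open_heap = [(h[start], 0, start)]  # entries: (f = g + h, g, node)
    best_g = {start: 0}
    expansions = 0
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry superseded by a cheaper path
        expansions += 1
        if node == goal:
            return g, expansions
        for succ, w in graph[node]:
            ng = g + w
            if ng < best_g.get(succ, float("inf")):
                best_g[succ] = ng
                heapq.heappush(open_heap, (ng + h[succ], ng, succ))
    return None, expansions

# With h_star, A* finds the cost-3 path S-A-B-G while expanding only the
# four nodes that lie on it.
print(astar("S", "G", h_star))  # → (3, 4)
```

The point of the sketch: the polynomial bound cited in the abstract hinges entirely on `h_star` being exact, which is precisely the quantity that is NP-hard to approximate with high precision.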
