Title
Optimizing Objective Functions from Trained ReLU Neural Networks via Sampling
Authors
Abstract
This paper introduces scalable, sampling-based algorithms that optimize trained neural networks with ReLU activations. We first propose an iterative algorithm that takes advantage of the piecewise linear structure of ReLU neural networks and reduces the initial mixed-integer optimization problem (MIP) to multiple easy-to-solve linear optimization problems (LPs) through sampling. Subsequently, we extend this approach by searching around the neighborhood of the LP solution computed at each iteration. This scheme allows us to devise a second, enhanced algorithm that reduces the initial MIP to smaller, easier-to-solve MIPs. We analytically show the convergence of both methods and provide a sample complexity guarantee. We also validate the performance of our algorithms by comparing them against state-of-the-art MIP-based methods. Finally, we show computationally how the sampling algorithms can be used effectively to warm-start MIP-based methods.
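To illustrate the core idea behind the first algorithm, here is a minimal sketch of the sampling-to-LP reduction the abstract describes. The key fact is that once the ReLU activation pattern at a sampled input is fixed, the network is affine over the corresponding polyhedral region, so optimizing it over that region is a single LP. Everything below is an assumption for illustration only: a hypothetical one-hidden-layer network with random weights, a box input domain, and `scipy.optimize.linprog` as the LP solver; the paper's actual algorithms and trained networks are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical one-hidden-layer ReLU network f(x) = w2 @ relu(W1 @ x + b1) + b2.
# Random stand-in weights; a trained network would be used in practice.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 16
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0

def solve_lp_for_region(x0, lo=-1.0, hi=1.0):
    """Fix the ReLU activation pattern at the sample x0; within that
    region the network is affine, so minimizing it there is one LP."""
    a = (W1 @ x0 + b1 > 0)            # activation pattern at the sample
    # Affine form inside the region: f(x) = c @ x + const.
    c = W1.T @ (w2 * a)
    const = (w2 * a) @ b1 + b2
    # Region constraints: (W1 x + b1)_i >= 0 if unit i is active,
    # <= 0 otherwise; rows are flipped so every row reads "A x <= b".
    A_ub = np.where(a[:, None], -W1, W1)
    b_ub = np.where(a, b1, -b1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi)] * n_in, method="highs")
    return (res.x, c @ res.x + const) if res.success else (None, np.inf)

# Basic sampling loop: draw inputs, solve one LP per activation
# region visited, and keep the best value found.
best_x, best_val = None, np.inf
for _ in range(50):
    x0 = rng.uniform(-1.0, 1.0, size=n_in)
    x_star, val = solve_lp_for_region(x0)
    if val < best_val:
        best_x, best_val = x_star, val
print("best value found:", best_val)
```

The enhanced algorithm mentioned in the abstract would, instead of stopping at each LP solution, search its neighborhood by allowing a small number of ReLU activations to flip, which yields a small MIP per sample rather than an LP; that extension is not shown in this sketch.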