Title
Support Vector Machines with the Hard-Margin Loss: Optimal Training via Combinatorial Benders' Cuts
Authors
Abstract
The classical hinge-loss support vector machine (SVM) model is sensitive to outlier observations due to the unboundedness of its loss function. To circumvent this issue, recent studies have focused on non-convex loss functions, such as the hard-margin loss, which associates a constant penalty with any misclassified or within-margin sample. Applying this loss function yields much-needed robustness for critical applications, but it also leads to an NP-hard model that makes training difficult: current exact optimization algorithms show limited scalability, whereas heuristics cannot consistently find high-quality solutions. Against this background, we propose new integer programming strategies that significantly improve our ability to train the hard-margin SVM model to global optimality. We introduce an iterative sampling and decomposition approach in which smaller subproblems are used to separate combinatorial Benders' cuts. These cuts, used within a branch-and-cut algorithm, permit much faster convergence towards a global optimum. Through extensive numerical analyses on classical benchmark data sets, our solution algorithm solves 117 data sets to optimality for the first time and achieves a 50% reduction in the average optimality gap on the hardest data sets of the benchmark.
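The contrast the abstract draws between the unbounded hinge loss and the bounded hard-margin loss can be illustrated with a minimal sketch (illustrative only, not the paper's algorithm; the function names are our own):

```python
def hinge_loss(margin):
    """Classical hinge loss: grows linearly with the margin violation,
    so a single far-away outlier can incur an arbitrarily large penalty."""
    return max(0.0, 1.0 - margin)

def hard_margin_loss(margin):
    """Hard-margin loss: a constant penalty of 1 for any misclassified or
    within-margin sample (functional margin < 1), and 0 otherwise."""
    return 1.0 if margin < 1.0 else 0.0

# margin = y * (w . x + b) for a labeled sample (x, y)
for m in [-5.0, 0.5, 2.0]:
    print(f"margin={m:5.1f}  hinge={hinge_loss(m):.1f}  "
          f"hard-margin={hard_margin_loss(m):.1f}")
```

For a badly misclassified outlier (margin -5.0), the hinge loss charges 6.0 while the hard-margin loss caps the penalty at 1.0, which is the source of the robustness mentioned above; the price is that the 0/1-style loss makes the training problem combinatorial, hence NP-hard.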