Paper Title

Gradient boosting for convex cone predict and optimize problems

Paper Authors

Butler, Andrew; Kwon, Roy H.

Paper Abstract

Prediction models are typically optimized independently of the downstream decision optimization. The smart 'predict, then optimize' (SPO) framework instead trains the prediction model to minimize downstream decision regret. In this paper we present dboost, the first general-purpose implementation of smart gradient boosting for 'predict, then optimize' problems. The framework supports convex quadratic cone programming, and gradient boosting is performed by implicit differentiation of a custom fixed-point mapping. Experiments comparing dboost with state-of-the-art SPO methods show that it can further reduce out-of-sample decision regret.
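To make the decision-regret objective concrete, the following is a minimal sketch, not the paper's dboost implementation: it evaluates the regret of acting on a predicted cost vector for a small convex quadratic program, with cvxpy as the solver. The problem data (Q, A, b), the prediction c_hat, and the helper names are illustrative assumptions, and the implicit differentiation of the fixed-point mapping described in the abstract is omitted.

# Minimal sketch (assumptions, not the authors' dboost code): decision regret for a
# small convex quadratic program  z*(c) = argmin_z c^T z + 0.5 z^T Q z  s.t. Az = b, z >= 0.
import cvxpy as cp
import numpy as np

def solve_decision(c, Q, A, b):
    """Solve the downstream quadratic program for a given cost vector c."""
    z = cp.Variable(len(c))
    objective = cp.Minimize(c @ z + 0.5 * cp.quad_form(z, Q))
    constraints = [A @ z == b, z >= 0]
    cp.Problem(objective, constraints).solve()
    return z.value

def decision_regret(c_hat, c_true, Q, A, b):
    """Regret of acting on the predicted costs c_hat instead of the true costs c_true."""
    z_hat = solve_decision(c_hat, Q, A, b)    # decision induced by the prediction
    z_star = solve_decision(c_true, Q, A, b)  # oracle decision under the true costs
    cost = lambda z: c_true @ z + 0.5 * z @ Q @ z
    return cost(z_hat) - cost(z_star)         # nonnegative by optimality of z_star

# Toy example (hypothetical data): a 3-asset allocation that must sum to 1.
rng = np.random.default_rng(0)
Q = 0.1 * np.eye(3)
A = np.ones((1, 3))
b = np.array([1.0])
c_true = rng.normal(size=3)
c_hat = c_true + rng.normal(scale=0.5, size=3)  # imperfect prediction
print("decision regret:", decision_regret(c_hat, c_true, Q, A, b))

In the boosting setting described in the abstract, the gradient of this regret with respect to the predicted costs would be obtained by implicit differentiation of a custom fixed-point mapping of the cone program, and each boosting stage would be fit against those gradients; the sketch above only evaluates the objective itself.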
