Paper Title
The Proximal Method of Multipliers for a Class of Nonsmooth Convex Optimization
Paper Authors
Paper Abstract
This paper develops the proximal method of multipliers for a class of nonsmooth convex optimization problems. The method generates a sequence of minimization problems (subproblems). We show that the sequence of approximate solutions to the subproblems converges to a saddle point of the Lagrangian even when the original optimization problem possesses multiple solutions. The augmented Lagrangian due to Fortin appears in the subproblems. A remarkable property of this augmented Lagrangian, compared with the standard Lagrangian, is that it is always differentiable and often semismoothly differentiable. This fact allows us to employ a nonsmooth Newton method for computing an approximate solution to each subproblem. The proximal term serves as a regularization of the objective function and guarantees the solvability of the Newton system without assuming strong convexity of the objective function. We exploit the theory of the nonsmooth Newton method to provide a rigorous proof of the global convergence of the proposed algorithm.
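To make the ingredients mentioned in the abstract concrete, the following LaTeX sketch records one standard way a proximal method of multipliers can be built on a Fortin-type augmented Lagrangian. It is only an illustration: the composite problem form min_x f(x) + φ(Bx), the penalty parameter r, the shared proximal parameter, and the symbols x^k, y^k, u^k, B are assumptions made for the sketch and need not match the paper's exact formulation.

% Illustrative sketch only: one outer iteration for the assumed problem
%   min_x  f(x) + phi(Bx),  f smooth convex, phi convex but possibly nonsmooth.
% The symbols f, phi, B, r, x^k, y^k, u^k are assumptions, not the paper's notation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  % Fortin-type augmented Lagrangian: the inner minimum over u is a Moreau
  % envelope of phi, which is continuously differentiable even though phi is not.
  \mathcal{L}_r(x,y) &= f(x) + \min_{u}\Bigl\{ \varphi(u)
      + \langle y,\, Bx - u\rangle + \tfrac{r}{2}\lVert Bx - u\rVert^{2} \Bigr\}, \\
  % Regularized subproblem: the proximal term (1/2r)||x - x^k||^2 keeps the
  % Newton system solvable without strong convexity of the objective.
  x^{k+1} &\approx \operatorname*{arg\,min}_{x}\;
      \Bigl\{ \mathcal{L}_r(x, y^{k}) + \tfrac{1}{2r}\lVert x - x^{k}\rVert^{2} \Bigr\}, \\
  % Multiplier update driven by the residual Bx - u at the new point.
  u^{k+1} &= \operatorname{prox}_{\varphi/r}\!\bigl(Bx^{k+1} + y^{k}/r\bigr), \\
  y^{k+1} &= y^{k} + r\bigl(Bx^{k+1} - u^{k+1}\bigr).
\end{align*}
\end{document}

In this sketch the gradient of the augmented Lagrangian is expressed through the proximal mapping of φ, which for many convex functions is semismooth; this is the kind of structure that lets a nonsmooth (semismooth) Newton method be applied to the regularized subproblem, in the spirit of the differentiability and solvability remarks in the abstract.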