Paper Title


Superconvergence of Online Optimization for Model Predictive Control

Authors

Sen Na, Mihai Anitescu

Abstract


We develop a one-Newton-step-per-horizon, online, lag-$L$, model predictive control (MPC) algorithm for solving discrete-time, equality-constrained, nonlinear dynamic programs. Based on recent sensitivity analysis results for the target problem class, we prove that the approach exhibits a behavior that we call superconvergence; that is, the tracking error with respect to the full-horizon solution is not only stable for successive horizon shifts, but also decreases with increasing shift order to a minimum value that decays exponentially in the length of the receding horizon. The key analytical step is the decomposition of the one-step error recursion of our algorithm into an algorithmic error and a perturbation error. We show that the perturbation error decays exponentially with the lag between two consecutive receding horizons, while the algorithmic error, determined by Newton's method, achieves quadratic convergence instead. Overall, this approach induces our local exponential convergence result in terms of the receding horizon length for suitable values of $L$. Numerical experiments validate our theoretical findings.
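The error decomposition above can be illustrated with a toy numerical sketch. This is not the paper's algorithm; the constants `c_newton`, `c_pert`, and `rho` below are hypothetical choices meant only to show the qualitative behavior: the one-step error combines a quadratically contracting Newton term with a perturbation term that decays exponentially in the lag $L$, so the error settles at a floor that shrinks as $L$ grows.

```python
def track_error(e0, L, c_newton=0.5, c_pert=1.0, rho=0.8, steps=50):
    """Iterate a toy error recursion of the form

        e_{k+1} = c_newton * e_k**2 + c_pert * rho**L

    The quadratic term models the algorithmic (Newton) error; the
    rho**L term models the perturbation error, which decays
    exponentially with the lag L. All constants are illustrative
    assumptions, not values from the paper.
    """
    e = e0
    for _ in range(steps):
        e = c_newton * e**2 + c_pert * rho**L
    return e

# A larger lag L lowers the error floor (roughly c_pert * rho**L):
floor_small_lag = track_error(0.1, L=5)
floor_large_lag = track_error(0.1, L=20)
assert floor_large_lag < floor_small_lag
```

The recursion converges to a fixed point slightly above `c_pert * rho**L`, mirroring the abstract's claim that the minimum tracking error decays exponentially in the horizon length for suitable $L$.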
