Paper Title

Inexact Derivative-Free Optimization for Bilevel Learning

Paper Authors

Ehrhardt, Matthias J., Roberts, Lindon

Paper Abstract

Variational regularization techniques are dominant in the field of mathematical imaging. A drawback of these techniques is that they depend on a number of parameters which have to be set by the user. A now-common strategy to resolve this issue is to learn these parameters from data. While mathematically appealing, this strategy leads to a nested optimization problem (known as bilevel optimization) which is computationally very difficult to handle. It is common when solving the upper-level problem to assume access to exact solutions of the lower-level problem, which is practically infeasible. In this work we propose to solve these problems using inexact derivative-free optimization algorithms which never require exact lower-level problem solutions, but instead assume access to approximate solutions with controllable accuracy, which is achievable in practice. We prove global convergence and a worst-case complexity bound for our approach. We test our proposed framework on ROF denoising and learning MRI sampling patterns. Dynamically adjusting the lower-level accuracy yields learned parameters with similar reconstruction quality as high-accuracy evaluations but with dramatic reductions in computational work (up to 100 times faster in some cases).
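To illustrate the bilevel structure the abstract describes, here is a minimal Python sketch, not the authors' actual algorithm: the lower level solves a denoising problem only to a controllable tolerance, and a simple derivative-free coordinate search learns the regularization weight, tightening the lower-level tolerance as the search step shrinks. For simplicity the ROF total-variation term is replaced by a smooth quadratic penalty on differences; all function names and the specific search scheme are illustrative assumptions.

```python
import numpy as np

def lower_level(y, lam, tol, max_iter=10000):
    """Inexactly solve min_x 0.5||x - y||^2 + 0.5*lam*||Dx||^2 (D = finite
    differences; a smooth stand-in for the ROF TV term) by gradient descent,
    stopping once the gradient norm drops below the requested tolerance."""
    x = y.copy()
    for _ in range(max_iter):
        Dx = np.diff(x)
        DtDx = np.zeros_like(x)       # D^T D x assembled by hand
        DtDx[:-1] -= Dx
        DtDx[1:] += Dx
        grad = (x - y) + lam * DtDx
        if np.linalg.norm(grad) <= tol:
            break
        x -= grad / (1.0 + 4.0 * lam)  # step = 1 / Lipschitz bound of grad
    return x

def upper_level_loss(lam, y, x_true, tol):
    """Reconstruction error of the (inexact) lower-level solution."""
    return np.linalg.norm(lower_level(y, lam, tol) - x_true) ** 2

def learn_lambda(y, x_true, lam0=1.0, iters=30):
    """Derivative-free coordinate search over lam with dynamic accuracy:
    the lower-level tolerance is halved whenever the search step shrinks,
    so early (cheap, coarse) evaluations are refined only as needed."""
    lam, step, tol = lam0, 0.5, 1e-1
    f = upper_level_loss(lam, y, x_true, tol)
    for _ in range(iters):
        improved = False
        for cand in (lam + step, max(lam - step, 1e-8)):
            fc = upper_level_loss(cand, y, x_true, tol)
            if fc < f:
                lam, f, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5                      # shrink the search step ...
            tol = max(tol * 0.5, 1e-8)       # ... and tighten the accuracy
            f = upper_level_loss(lam, y, x_true, tol)
    return lam
```

The key point mirrored from the paper is that every lower-level solve takes an explicit tolerance, so most of the search runs with cheap, low-accuracy evaluations and only the final refinements pay for high accuracy.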
