Paper Title
Unfolding the Alternating Optimization for Blind Super Resolution
Paper Authors
Paper Abstract
Previous methods decompose the blind super-resolution (SR) problem into two sequential steps: \textit{i}) estimating the blur kernel from the given low-resolution (LR) image and \textit{ii}) restoring the SR image based on the estimated kernel. This two-step solution involves two independently trained models, which may not be well compatible with each other. A small estimation error in the first step could cause a severe performance drop in the second one. On the other hand, the first step can only utilize the limited information in the LR image, which makes it difficult to predict a highly accurate blur kernel. To address these issues, instead of considering the two steps separately, we adopt an alternating optimization algorithm, which can estimate the blur kernel and restore the SR image in a single model. Specifically, we design two convolutional neural modules, namely \textit{Restorer} and \textit{Estimator}. \textit{Restorer} restores the SR image based on the predicted kernel, and \textit{Estimator} estimates the blur kernel with the help of the restored SR image. We alternate between these two modules repeatedly and unfold this process to form an end-to-end trainable network. In this way, \textit{Estimator} utilizes information from both the LR and SR images, which makes the estimation of the blur kernel easier. More importantly, \textit{Restorer} is trained with the kernel estimated by \textit{Estimator}, instead of the ground-truth kernel, so \textit{Restorer} can be more tolerant to the estimation error of \textit{Estimator}. Extensive experiments on synthetic datasets and real-world images show that our model largely outperforms state-of-the-art methods and produces more visually favorable results at much higher speed. The source code is available at https://github.com/greatlog/DAN.git.
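The alternating scheme described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the real \textit{Estimator} and \textit{Restorer} are convolutional networks, while here they are stand-in functions (`estimator`, `restorer`) with placeholder arithmetic; the fixed iteration count `num_iters` mirrors the unfolding into a trainable network of fixed depth.

```python
import numpy as np

# Hypothetical stand-ins for the two learned modules from the abstract.
# In the paper both are convolutional networks; the arithmetic below is
# placeholder logic that only illustrates the data flow.

def estimator(lr, sr):
    # Predict a blur-kernel summary using BOTH the LR image and the
    # current SR estimate (the key point of the alternating design).
    return float(np.mean(lr) - np.mean(sr))

def restorer(lr, kernel):
    # Restore an SR estimate from the LR image and the predicted kernel.
    return lr + kernel

def unfolded_alternating_sr(lr, num_iters=4):
    """Unfold the Estimator/Restorer alternation into a fixed number of
    steps, as the paper does to build an end-to-end trainable network."""
    sr = lr.copy()      # initialize the SR estimate from the LR input
    kernel = 0.0        # initial kernel guess
    for _ in range(num_iters):
        kernel = estimator(lr, sr)   # kernel uses LR *and* current SR
        sr = restorer(lr, kernel)    # SR uses LR and estimated kernel
    return sr, kernel
```

Because \textit{Restorer} always consumes the kernel produced by \textit{Estimator} inside the loop, training the whole unrolled loop end to end exposes \textit{Restorer} to realistic estimation errors rather than to ground-truth kernels.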