Paper Title

Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution

Paper Authors

Yuqing Liu, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Wen Gao

Paper Abstract

Multi-scale design has been considered in recent image super-resolution (SR) works to explore hierarchical feature information. Existing multi-scale networks aim to build elaborate blocks or progressive architectures for restoration. In general, larger-scale features concentrate more on structural and high-level information, while smaller-scale features contain plentiful details and texture information. From this point of view, the information in larger-scale features can be derived from smaller ones. Based on this observation, in this paper we build a sequential hierarchical learning super-resolution network (SHSR) for effective image SR. Specifically, we consider the inter-scale correlations of features and devise a sequential multi-scale block (SMB) to progressively explore the hierarchical information. SMB is designed in a recursive way based on the linearity of convolution with restricted parameters. Besides the sequential hierarchical learning, we also investigate the correlations among feature maps and devise a distribution transformation block (DTB). Different from attention-based methods, DTB treats the transformation in a normalization manner and jointly considers the spatial and channel-wise correlations with scaling and bias factors. Experimental results show that SHSR achieves quantitative performance and visual quality superior to state-of-the-art methods with nearly 34\% fewer parameters and 50\% fewer MACs at scaling factor $\times4$. To boost performance without further training, the extension model SHSR$^+$ with self-ensemble achieves performance competitive with larger networks while using nearly 92\% fewer parameters and 42\% fewer MACs at scaling factor $\times4$.
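The abstract only names the two building blocks, so the following is a minimal PyTorch sketch of the two ideas as described above, not the authors' released implementation: every class name, layer choice, and the pooling/upsampling and shared-weight recursion are illustrative assumptions. The first class mimics deriving coarser-scale features sequentially from finer ones with a weight-restricted recursion; the second modulates features with jointly learned channel-wise and spatial scaling/bias factors in a normalization-like manner instead of attention weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialMultiScaleSketch(nn.Module):
    """Illustrative stand-in for the sequential multi-scale block (SMB):
    coarser scales are derived step by step from finer ones, and a single
    shared convolution mimics the restricted-parameter recursive design."""

    def __init__(self, channels: int, levels: int = 3):
        super().__init__()
        self.levels = levels
        self.shared_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(levels * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, cur = [x], x
        for _ in range(self.levels - 1):
            # Derive the next (coarser, more structural) scale from the
            # previous one, reusing the same convolution weights each step.
            cur = F.relu(self.shared_conv(F.avg_pool2d(cur, 2)))
            feats.append(F.interpolate(cur, size=x.shape[-2:],
                                       mode="bilinear", align_corners=False))
        # Fuse all scales and keep a residual path to the input.
        return self.fuse(torch.cat(feats, dim=1)) + x

class DistributionTransformSketch(nn.Module):
    """Illustrative stand-in for the distribution transformation block (DTB):
    features are modulated by learned scaling and bias factors covering both
    channel-wise and spatial correlations, applied like a normalization."""

    def __init__(self, channels: int):
        super().__init__()
        # Channel branch: global statistics -> per-channel scale and bias.
        self.channel_affine = nn.Conv2d(channels, 2 * channels, 1)
        # Spatial branch: per-pixel scale and bias from a lightweight conv.
        self.spatial_affine = nn.Conv2d(channels, 2, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = F.adaptive_avg_pool2d(x, 1)                     # (B, C, 1, 1)
        gamma_c, beta_c = self.channel_affine(pooled).chunk(2, dim=1)
        gamma_s, beta_s = self.spatial_affine(x).chunk(2, dim=1)  # (B, 1, H, W) each
        # Normalization-style modulation: x * scale + bias, with channel-wise
        # and spatial factors applied jointly via broadcasting.
        return x * (1 + gamma_c) * (1 + gamma_s) + beta_c + beta_s

if __name__ == "__main__":
    # Smoke test: both sketches preserve the feature-map shape.
    x = torch.randn(1, 64, 48, 48)
    y = DistributionTransformSketch(64)(SequentialMultiScaleSketch(64)(x))
    assert y.shape == x.shape
```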
