Paper Title

Texture Memory-Augmented Deep Patch-Based Image Inpainting

Authors

Rui Xu, Minghao Guo, Jiaqi Wang, Xiaoxiao Li, Bolei Zhou, Chen Change Loy

Abstract

Patch-based methods and deep networks have been employed to tackle the image inpainting problem, each with its own strengths and weaknesses. Patch-based methods are capable of restoring a missing region with high-quality texture by searching for nearest-neighbor patches in the unmasked regions. However, these methods produce problematic content when recovering large missing regions. Deep networks, on the other hand, show promising results in completing large regions. Nonetheless, their results often lack the faithful and sharp details that resemble the surrounding area. By bringing together the best of both paradigms, we propose a new deep inpainting framework where texture generation is guided by a texture memory of patch samples extracted from unmasked regions. The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network. In addition, we introduce a patch distribution loss to encourage high-quality patch synthesis. The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks, i.e., the Places, CelebA-HQ, and Paris Street-View datasets.
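To make the nearest-neighbor retrieval step concrete: a minimal, illustrative sketch of classic patch matching against a "texture memory" of unmasked patches is shown below. This is not the paper's learned, end-to-end retrieval; it is the plain L2 nearest-neighbor search that the abstract attributes to traditional patch-based methods, with hypothetical function and variable names.

```python
import numpy as np

def nearest_patch(query, candidates):
    """Return the candidate patch closest to `query` in L2 distance.

    query:      (h, w) array -- a patch near the masked region's border.
    candidates: (n, h, w) array -- patches sampled from unmasked regions
                (the "texture memory" in the paper's terminology).
    """
    # Flatten each patch and compute squared L2 distances to the query.
    diffs = candidates.reshape(len(candidates), -1) - query.ravel()
    dists = np.einsum("ij,ij->i", diffs, diffs)
    return candidates[np.argmin(dists)]

# Toy example: a memory of three 2x2 patches.
memory = np.array([
    [[0.0, 0.0], [0.0, 0.0]],  # flat dark patch
    [[1.0, 1.0], [1.0, 1.0]],  # flat bright patch
    [[0.0, 1.0], [1.0, 0.0]],  # checkerboard texture
])
query = np.array([[0.1, 0.9], [0.8, 0.2]])  # noisy checkerboard
best = nearest_patch(query, memory)  # -> the checkerboard patch
```

The paper's contribution replaces this fixed distance with a retrieval module trained jointly with the inpainting network, so the notion of "nearest" is learned rather than hand-crafted.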
