Paper Title

DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement

Authors

Satoshi Iizuka, Edgar Simo-Serra

Abstract

The remastering of vintage film comprises a diversity of sub-tasks, including super-resolution, noise removal, and contrast enhancement, which aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white or has low-quality colors, for which colorization becomes necessary. In this work, we propose a single framework to tackle the entire remastering task semi-interactively. Our work is based on temporal convolutional neural networks with attention mechanisms trained on videos with data-driven deterioration simulation. Our proposed source-reference attention allows the model to handle an arbitrary number of reference color images to colorize long videos without the need for segmentation while maintaining temporal consistency. Quantitative analysis shows that our framework outperforms existing approaches and that, in contrast to existing approaches, the performance of our framework increases with longer videos and more reference color images.
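The source-reference attention described in the abstract can be understood as attention in which each position of a target video frame attends over the pooled spatial positions of all reference color images, so any number of references can be concatenated into one key/value bank. The sketch below is a minimal NumPy illustration of that idea; the shapes, function name, and scaled dot-product formulation are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def source_reference_attention(frame_feat, ref_feats):
    """Toy sketch of source-reference attention (hypothetical shapes).

    frame_feat: (d, hw) feature map of one target video frame (queries).
    ref_feats:  list of (d, hw) feature maps, one per reference color
                image; an arbitrary number of references may be given.
    Returns an attention-weighted blend of reference features with the
    same shape as frame_feat.
    """
    d = frame_feat.shape[0]
    # Stack all reference positions into one key/value bank: (d, n*hw).
    bank = np.concatenate(ref_feats, axis=1)
    # Scaled dot-product scores: each frame position attends over every
    # spatial position of every reference image.
    scores = frame_feat.T @ bank / np.sqrt(d)      # (hw, n*hw)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the bank
    return bank @ weights.T                        # (d, hw)

# Two reference images: output shape matches the frame features.
frame = np.random.rand(8, 16)
refs = [np.random.rand(8, 16), np.random.rand(8, 16)]
out = source_reference_attention(frame, refs)
print(out.shape)  # (8, 16)
```

Because the references are pooled into a single bank before the softmax, adding more reference images only widens the key/value dimension, which is consistent with the abstract's claim that performance grows with the number of references.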
