Paper Title
Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation
Paper Authors
Paper Abstract
Real-world image noise removal is a long-standing yet very challenging task in computer vision. The success of deep neural networks in denoising has stimulated research on noise generation, which aims to synthesize more clean-noisy image pairs to facilitate the training of deep denoisers. In this work, we propose a novel unified framework to simultaneously deal with the noise removal and noise generation tasks. Instead of only inferring the posterior distribution of the latent clean image conditioned on the observed noisy image, as in the traditional MAP framework, our proposed method learns the joint distribution of the clean-noisy image pairs. Specifically, we approximate the joint distribution with two different factorized forms, which can be formulated as a denoiser mapping the noisy image to the clean one and a generator mapping the clean image to the noisy one. The learned joint distribution implicitly contains all the information relating the noisy and clean images, avoiding the need to manually design image priors and noise assumptions as in traditional methods. In addition, the performance of our denoiser can be further improved by augmenting the original training dataset with the learned generator. Moreover, we propose two metrics to assess the quality of generated noisy images, which, to the best of our knowledge, are the first such metrics proposed along this research line. Extensive experiments have been conducted to demonstrate the superiority of our method over state-of-the-art approaches in both the real-world noise removal and noise generation tasks. The training and testing code is available at https://github.com/zsyOAOA/DANet.
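As a minimal sketch of the dual factorization described in the abstract (the notation below is ours, not necessarily the paper's: x denotes the clean image, y the noisy one, and p(x, y) their joint distribution), the two approximations can be written as

    p(x, y) \approx p(y) \, q_D(x \mid y)    (denoiser branch: noisy image mapped to clean image)
    p(x, y) \approx p(x) \, q_G(y \mid x)    (generator branch: clean image mapped to noisy image)

where q_D and q_G are the conditional distributions implicitly defined by the denoiser and the generator, respectively. Driving both factorized forms toward the true joint distribution, presumably through adversarial training as the title suggests, is what motivates the name Dual Adversarial Network.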