Paper Title


Deep Exposure Fusion with Deghosting via Homography Estimation and Attention Learning

Authors

Sheng-Yeh Chen and Yung-Yu Chuang

Abstract


Modern cameras have limited dynamic range and often produce images with saturated or dark regions from a single exposure. Although the problem can be addressed by taking multiple images with different exposures, exposure fusion methods need to deal with ghosting artifacts and detail loss caused by camera motion or moving objects. This paper proposes a deep network for exposure fusion. To reduce the potential ghosting problem, our network takes only two images: an underexposed image and an overexposed one. Our network integrates homography estimation for compensating camera motion, an attention mechanism for correcting remaining misalignment and moving pixels, and adversarial learning for alleviating other remaining artifacts. Experiments on real-world photos taken with handheld mobile phones show that the proposed method can generate high-quality images with faithful detail and vivid color rendition in both dark and bright areas.
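To illustrate the core idea of fusing an underexposed and an overexposed image with per-pixel weight maps (in the spirit of the attention maps the abstract describes, though this is not the authors' network), here is a minimal NumPy sketch. It uses the classic well-exposedness measure from Mertens-style exposure fusion as a stand-in for learned attention; the function names and the `sigma` parameter are illustrative assumptions.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favoring mid-tone pixels (classic exposure-fusion cue);
    # a learned attention map would play this role in the paper's network.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_two_exposures(under, over, eps=1e-8):
    # Per-pixel weight maps act as soft attention over the two inputs.
    w_u = well_exposedness(under)
    w_o = well_exposedness(over)
    total = w_u + w_o + eps
    return (w_u * under + w_o * over) / total

# Toy 2x2 "images" with intensities in [0, 1]:
under = np.array([[0.10, 0.40], [0.05, 0.50]])  # dark, underexposed
over  = np.array([[0.60, 0.90], [0.55, 0.95]])  # bright, overexposed
fused = fuse_two_exposures(under, over)
```

Each fused pixel lands between its two inputs, pulled toward whichever exposure is better exposed (closer to mid-tone) at that location. The real method additionally aligns the inputs with an estimated homography and refines the result adversarially, which this sketch omits.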
