Paper Title

Extremely Low-light Image Enhancement with Scene Text Restoration

Paper Authors

Pohao Hsu, Che-Tsung Lin, Chun Chet Ng, Jie-Long Kew, Mei Yih Tan, Shang-Hong Lai, Chee Seng Chan, Christopher Zach

Paper Abstract

Deep learning-based methods have made impressive progress in enhancing extremely low-light images, and the quality of the reconstructed images has generally improved. However, we found that most of these methods could not sufficiently recover image details, for instance, the text in the scene. In this paper, a novel image enhancement framework is proposed to precisely restore scene text while simultaneously improving the overall image quality under extremely low-light conditions. Mainly, we employ a self-regularised attention map, an edge map, and a novel text detection loss. In addition, leveraging synthetic low-light images is beneficial for image enhancement on genuine ones in terms of text detection. Quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets.
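The abstract names three ingredients (a self-regularised attention map, an edge map, and a text detection loss) without implementation detail. The sketch below is only a minimal illustration, in PyTorch, of how such terms could be wired into one training objective; the toy network, the Sobel-based edge map, the loss weights, and the `text_det_loss` placeholder are all assumptions and do not reproduce the authors' architecture.

```python
# Illustrative sketch only (not the authors' code): an attention-gated enhancer
# trained with reconstruction, edge-consistency, attention-regularisation and an
# externally supplied text-detection term. All names/weights are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Approximate an edge map with Sobel filters on a grayscale version of img."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gray = img.mean(dim=1, keepdim=True)          # collapse RGB to one channel
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)   # gradient magnitude


class EnhancerWithAttention(nn.Module):
    """Toy enhancer that predicts an enhanced image plus an attention map
    used to re-weight the residual correction (a stand-in for the paper's
    self-regularised attention idea)."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_residual = nn.Conv2d(ch, 3, 3, padding=1)
        self.to_attention = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, low):
        feats = self.backbone(low)
        attn = torch.sigmoid(self.to_attention(feats))    # attention map in [0, 1]
        enhanced = low + attn * self.to_residual(feats)    # attention-gated residual
        return enhanced.clamp(0, 1), attn


def total_loss(enhanced, attn, target, text_det_loss):
    """Weighted sum of reconstruction, edge, attention-regularisation and
    text-detection terms. The weights are arbitrary placeholders."""
    rec = F.l1_loss(enhanced, target)
    edge = F.l1_loss(sobel_edges(enhanced), sobel_edges(target))
    attn_reg = attn.mean()          # discourage trivially saturated attention
    return rec + 0.5 * edge + 0.1 * attn_reg + 0.1 * text_det_loss


if __name__ == "__main__":
    model = EnhancerWithAttention()
    low = torch.rand(2, 3, 64, 64)      # fake extremely low-light batch
    target = torch.rand(2, 3, 64, 64)   # fake well-lit ground truth
    enhanced, attn = model(low)
    # In practice `text_det_loss` would come from a differentiable text
    # detector run on the enhanced image; a zero tensor keeps this self-contained.
    loss = total_loss(enhanced, attn, target, torch.tensor(0.0))
    loss.backward()
```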
