Paper Title

Casual Indoor HDR Radiance Capture from Omnidirectional Images

Authors

Pulkit Gera, Mohammad Reza Karimi Dastjerdi, Charles Renaud, P. J. Narayanan, Jean-François Lalonde

Abstract

We present PanoHDR-NeRF, a neural representation of the full HDR radiance field of an indoor scene, and a pipeline to capture it casually, without elaborate setups or complex capture protocols. First, a user captures a low dynamic range (LDR) omnidirectional video of the scene by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network uplifts the captured LDR frames to HDR, which are used to train a tailored NeRF++ model. The resulting PanoHDR-NeRF can render full HDR images from any location of the scene. Through experiments on a novel test dataset of real scenes with the ground truth HDR radiance captured at locations not seen during training, we show that PanoHDR-NeRF predicts plausible HDR radiance from any scene point. We also show that the predicted radiance can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are lit correctly. Datasets and code are available at https://lvsn.github.io/PanoHDR-NeRF/.
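
To make the pipeline described in the abstract concrete, below is a minimal sketch of its stages: casual LDR omnidirectional capture, LDR2HDR uplift, pose estimation, and fitting an HDR radiance field. This is not the authors' released implementation; all function names (load_ldr_frames, ldr2hdr, estimate_poses, train_hdr_nerf) are hypothetical placeholders, and the learned components are replaced by trivial stand-ins so the script runs on its own.

```python
# Hypothetical sketch of the PanoHDR-NeRF capture pipeline (not the authors' code).
import numpy as np

def load_ldr_frames(video_path):
    """Placeholder: decode the casually captured LDR omnidirectional video
    into equirectangular frames with values in [0, 1]."""
    # A real implementation would decode video (e.g. via OpenCV); dummy data here.
    return [np.random.rand(512, 1024, 3).astype(np.float32) for _ in range(4)]

def ldr2hdr(frame):
    """Placeholder for the LDR2HDR network: uplift an LDR panorama to
    linear HDR radiance, hallucinating clipped highlight values."""
    # A learned network would inpaint saturated regions; inverse gamma is a stand-in.
    return np.power(np.clip(frame, 1e-4, 1.0), 2.2)

def estimate_poses(frames):
    """Placeholder for structure-from-motion pose estimation of each panorama."""
    return [np.eye(4, dtype=np.float32) for _ in frames]

def train_hdr_nerf(hdr_frames, poses):
    """Placeholder for fitting a NeRF++-style model to the HDR panoramas.
    Returns a callable mapping a camera pose to a rendered HDR panorama."""
    mean_pano = np.mean(np.stack(hdr_frames), axis=0)
    return lambda pose: mean_pano  # stand-in for volumetric rendering

if __name__ == "__main__":
    ldr_frames = load_ldr_frames("walkthrough.mp4")   # 1. casual LDR capture
    hdr_frames = [ldr2hdr(f) for f in ldr_frames]     # 2. LDR -> HDR uplift
    poses = estimate_poses(ldr_frames)                # 3. camera poses
    render = train_hdr_nerf(hdr_frames, poses)        # 4. fit HDR radiance field
    novel_pano = render(np.eye(4, dtype=np.float32))  # render HDR pano at a new location
    print("Rendered HDR panorama:", novel_pano.shape, novel_pano.dtype)
```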
