Paper Title


Acquiring a Dynamic Light Field through a Single-Shot Coded Image

Authors

Ryoya Mizuno, Keita Takahashi, Michitaka Yoshida, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara

Abstract


We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement). We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time. This coding scheme enables us to effectively embed the original information into a single observed image. The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction; the CNN is jointly trained with the camera-side coding patterns. We also developed a hardware prototype to capture a real 3-D scene moving over time. We succeeded in acquiring a dynamic light field with 5×5 viewpoints over 4 temporal sub-frames (100 views in total) from a single observed image. By repeating the capture and reconstruction process over time, we can acquire a dynamic light field at 4× the frame rate of the camera. To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition. Our software is available from our project webpage.
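The forward model described in the abstract can be illustrated with a small simulation. The sketch below is not the authors' code; the dimensions (5×5 viewpoints, 4 sub-frames), the binary codes, and the array names are assumptions used only to show how aperture coding and pixel-wise exposure coding collapse a 5-D dynamic light field into a single 2-D coded image:

```python
import numpy as np

# Hedged sketch of the single-shot coded measurement (assumed model,
# not the authors' implementation). The dynamic light field has
# U x V = 5x5 viewpoints over T = 4 temporal sub-frames, each an HxW image.
U, V, T, H, W = 5, 5, 4, 32, 32
rng = np.random.default_rng(0)
light_field = rng.random((U, V, T, H, W))  # 5-D volume (100 views in total)

# Aperture code: one transmittance weight per viewpoint per sub-frame
# (binary here for simplicity; in practice it is jointly trained).
aperture_code = rng.integers(0, 2, size=(U, V, T)).astype(float)
# Pixel-wise exposure code: per-pixel on/off pattern per sub-frame.
exposure_code = rng.integers(0, 2, size=(T, H, W)).astype(float)

# Single observed image: both codes are applied synchronously, and the
# sensor integrates over all viewpoints and sub-frames in one exposure.
observed = np.einsum('uvthw,uvt,thw->hw',
                     light_field, aperture_code, exposure_code)
print(observed.shape)  # a single 2-D coded measurement, here (32, 32)
```

In the paper's pipeline, a CNN trained jointly with these coding patterns would then reconstruct the full 5-D volume from `observed`.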
