Paper Title

IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes

Authors

Rui Zhu, Zhengqin Li, Janarbek Matai, Fatih Porikli, Manmohan Chandraker

Abstract


Indoor scenes exhibit significant appearance variations due to myriad interactions between arbitrarily diverse object shapes, spatially-changing materials, and complex lighting. Shadows, highlights, and inter-reflections caused by visible and invisible light sources require reasoning about long-range interactions for inverse rendering, which seeks to recover the components of image formation, namely, shape, material, and lighting. In this work, our intuition is that the long-range attention learned by transformer architectures is ideally suited to solve longstanding challenges in single-image inverse rendering. We demonstrate with a specific instantiation of a dense vision transformer, IRISformer, that excels at both single-task and multi-task reasoning required for inverse rendering. Specifically, we propose a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness and lighting from a single image of an indoor scene. Our extensive evaluations on benchmark datasets demonstrate state-of-the-art results on each of the above tasks, enabling applications like object insertion and material editing in a single unconstrained real image, with greater photorealism than prior works. Code and data are publicly released at https://github.com/ViLab-UCSD/IRISformer.
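The abstract's central claim is that global self-attention lets every image patch directly exchange information with every other patch, so a shadowed region can attend to a distant light source. As a minimal, self-contained sketch of that operation (not the IRISformer implementation; token values and dimensions here are illustrative assumptions), scaled dot-product self-attention over a set of patch embeddings looks like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention over patch embeddings.

    tokens: list of patch embeddings (each a list of floats).
    Every patch attends to every other patch, so information from a
    distant light source can influence a shadowed region in one step,
    unlike the limited receptive field of a small convolution.
    """
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Similarity of this query patch to every key patch.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)  # attention weights, sum to 1
        # Output is the attention-weighted mixture of all patch values.
        out.append([sum(wj * v[i] for wj, v in zip(w, tokens))
                    for i in range(d)])
    return out

patches = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
mixed = self_attention(patches)
```

A dense prediction transformer stacks such attention layers (with learned query/key/value projections omitted here) and attaches per-task decoder heads, which is how a single backbone can serve depth, normals, albedo, roughness, and lighting simultaneously.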
