Paper Title


Compressible-composable NeRF via Rank-residual Decomposition

Paper Authors

Jiaxiang Tang, Xiaokang Chen, Jingbo Wang, Gang Zeng

Paper Abstract


Neural Radiance Field (NeRF) has emerged as a compelling method to represent 3D objects and scenes for photo-realistic rendering. However, unlike explicit mesh representations, its implicit representation makes the models difficult to manipulate. Recent advances in NeRF manipulation are typically restricted by a shared renderer network, or suffer from large model size. To circumvent this hurdle, in this paper we present an explicit neural field representation that enables efficient and convenient manipulation of models. To achieve this goal, we learn a hybrid tensor rank decomposition of the scene without neural networks. Motivated by the low-rank approximation property of the SVD algorithm, we propose a rank-residual learning strategy that encourages the preservation of primary information in lower ranks. The model size can then be dynamically adjusted by rank truncation to control the level of detail, achieving near-optimal compression without extra optimization. Furthermore, different models can be arbitrarily transformed and composed into one scene by concatenating along the rank dimension. The growth of storage cost can also be mitigated by compressing unimportant objects in the composed scene. We demonstrate that our method achieves rendering quality comparable to state-of-the-art methods, while enabling the extra capabilities of compression and composition. Code will be made available at https://github.com/ashawkey/CCNeRF.
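The two mechanisms the abstract describes (compression by rank truncation, and composition by concatenating factors along the rank dimension) can be illustrated with a minimal NumPy sketch using SVD on a 2D matrix. This is only a matrix analogy to the paper's tensor decomposition of a radiance field, not the actual CCNeRF implementation; all variable names here are illustrative:

```python
import numpy as np

# Stand-in for a tensor-decomposed scene: a random rank-8 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))

# SVD orders components by singular value, so the primary
# information concentrates in the lowest-rank components.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def truncate(U, s, Vt, r):
    """Keep only the first r rank components (compression by rank truncation)."""
    return (U[:, :r] * s[:r]) @ Vt[:r]

A_coarse = truncate(U, s, Vt, 4)  # coarser level of detail, smaller model
A_full = truncate(U, s, Vt, 8)    # rank 8 recovers A up to numerical error

# "Composition": merge a second, independently factored model by
# concatenating scaled factors along the rank dimension. The composed
# factorization represents the sum of the two fields.
B = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
U_cat = np.concatenate([U[:, :8] * s[:8], Ub[:, :4] * sb[:4]], axis=1)
V_cat = np.concatenate([Vt[:8], Vbt[:4]], axis=0)
composed = U_cat @ V_cat  # equals A + B up to numerical error
```

Note the design point this sketch mirrors: because the factors are stored explicitly (no shared renderer network), truncation and concatenation are pure array operations that need no retraining.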
