Paper Title

SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference

Paper Authors

Krishna Wadhwani, Tamaki Kojima

Paper Abstract

Neural Radiance Fields (NeRF) has emerged as the state-of-the-art method for novel view generation of complex scenes, but it is very slow during inference. Recently, there have been multiple works on speeding up NeRF inference, but the state-of-the-art methods for real-time NeRF inference rely on caching the neural network output, which occupies several gigabytes of disk space and limits their real-world applicability. As caching the output of the original NeRF network is not feasible, Garbin et al. proposed "FastNeRF", which factorizes the problem into two sub-networks: one that depends only on the 3D coordinates of a sample point and one that depends only on the 2D camera viewing direction. Although this factorization enables them to reduce the cache size and perform inference at over 200 frames per second, the memory overhead is still substantial. In this work, we propose SqueezeNeRF, which is more than 60 times more memory-efficient than the sparse cache of FastNeRF and is still able to render at more than 190 frames per second on a high-spec GPU during inference.
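The two-sub-network split described in the abstract lends itself to a short sketch. The class name, layer sizes, and the number of components D below are illustrative assumptions rather than the authors' configuration; only the structure follows the abstract: a network that sees only the 3D sample position produces a density plus D radiance-map components, a network that sees only the 2D viewing direction produces D mixing weights, and the color is their weighted sum.

```python
import torch
import torch.nn as nn

class FactorizedNeRF(nn.Module):
    """Minimal sketch of a FastNeRF-style factorization (hypothetical
    layer sizes): pos_net(x, y, z) and dir_net(theta, phi) are combined
    only by a weighted sum, so each can be cached independently."""

    def __init__(self, d: int = 8, hidden: int = 256):
        super().__init__()
        self.d = d
        # Depends only on the 3D sample position: density sigma plus
        # D "radiance map" components of 3 channels each.
        self.pos_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 * d),
        )
        # Depends only on the 2D viewing direction: D mixing weights.
        self.dir_net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        out = self.pos_net(xyz)                    # (N, 1 + 3D)
        sigma = out[:, :1]                         # (N, 1) density
        u = out[:, 1:].view(-1, self.d, 3)         # (N, D, 3) components
        beta = self.dir_net(view_dir)              # (N, D) view weights
        # Weighted sum over the D components gives the view-dependent color.
        rgb = torch.sigmoid((beta.unsqueeze(-1) * u).sum(dim=1))  # (N, 3)
        return rgb, sigma

model = FactorizedNeRF()
rgb, sigma = model(torch.rand(4096, 3), torch.rand(4096, 2))
```

Because each factor sees only part of the input, each can be precomputed on its own grid: a k^3 grid over positions and a much smaller l^2 grid over directions, instead of an intractable joint k^3 * l^2 cache. Even so, a dense position cache is large (illustratively, k = 1024 with 1 + 3D half-precision values per cell at D = 8 comes to roughly 50 GB), which is why FastNeRF stores a sparse cache and, per the title, presumably the term that SqueezeNeRF's further factorization targets.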
