Paper Title

VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations

Paper Authors

Ying Chen, Hojung Kwon, Hazer Inaltekin, Maria Gorlatova

Paper Abstract

The importance of the dynamics of the viewport pose, i.e., the location and the orientation of users' points of view, for virtual reality (VR) experiences calls for the development of VR viewport pose models. In this paper, informed by our experimental measurements of viewport trajectories across 3 different types of VR interfaces, we first develop a statistical model of viewport poses in VR environments. Based on the developed model, we examine the correlations between pixels in VR frames that correspond to different viewport poses, and obtain an analytical expression for the visibility similarity (ViS) of the pixels across different VR frames. We then propose a lightweight ViS-based ALG-ViS algorithm that adaptively splits VR frames into the background and the foreground, reusing the background across different frames. Our implementation of ALG-ViS in two Oculus Quest 2 rendering systems demonstrates ALG-ViS running in real time, supporting the full VR frame rate, and outperforming baselines on measures of frame quality and bandwidth consumption.
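
The abstract describes ALG-ViS only at a high level: each frame is split into a background layer that stays valid across nearby viewport poses and a foreground layer that is re-rendered every frame. The snippet below is a minimal sketch of that idea, not the paper's implementation; the threshold `vis_threshold`, the stand-in `visibility_similarity` function, and the pose/frame structures are all hypothetical placeholders for the quantities the paper derives analytically.

```python
import numpy as np

def visibility_similarity(pose_a, pose_b):
    """Hypothetical stand-in for the paper's analytical ViS expression.

    Here it simply decays with the translation and rotation difference
    between two viewport poses; the real expression follows from the
    statistical pose model developed in the paper.
    """
    d_pos = np.linalg.norm(pose_a["position"] - pose_b["position"])
    d_ang = np.linalg.norm(pose_a["orientation"] - pose_b["orientation"])
    return float(np.exp(-(d_pos + 0.5 * d_ang)))

def render_frame(pose, cached_background, last_bg_pose,
                 render_background, render_foreground,
                 vis_threshold=0.9):
    """Sketch of a ViS-gated background/foreground split.

    If the current pose is still similar enough (ViS above the
    threshold) to the pose for which the cached background was
    rendered, the background is reused and only the foreground is
    re-rendered; otherwise both layers are regenerated.
    """
    if cached_background is None or \
            visibility_similarity(pose, last_bg_pose) < vis_threshold:
        cached_background = render_background(pose)  # full background pass
        last_bg_pose = pose                          # remember its pose
    foreground = render_foreground(pose)             # always per-frame
    frame = np.where(foreground["mask"],
                     foreground["color"],
                     cached_background)              # composite the layers
    return frame, cached_background, last_bg_pose
```

The point of the sketch is the gating structure: background reuse is decided per frame from pose similarity, which is what lets the background be transmitted or rendered less often than the full frame rate.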
