Paper Title


3D Facial Geometry Recovery from a Depth View with Attention Guided Generative Adversarial Network

Authors

Cai, Xiaoxu, Yu, Hui, Lou, Jianwen, Zhang, Xuguang, Li, Gongfa, Dong, Junyu

Abstract


We propose to recover the complete 3D facial geometry from a single depth view with an Attention Guided Generative Adversarial Network (AGGAN). In contrast to existing work, which normally requires two or more depth views to recover a full 3D facial geometry, the proposed AGGAN is able to generate a dense 3D voxel grid of the face from a single unconstrained depth view. Specifically, AGGAN encodes the 3D facial geometry within a voxel space and utilizes an attention-guided GAN to model the ill-posed 2.5D depth-to-3D mapping. Multiple loss functions, which enforce the consistency of the 3D facial geometry, together with a prior distribution of facial surface points in voxel space, are incorporated to guide the training process. Both qualitative and quantitative comparisons show that AGGAN recovers a more complete and smoother 3D facial shape, and is able to handle a much wider range of view angles and to resist noise in the depth view better than conventional methods.
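To make the voxel-space encoding mentioned in the abstract concrete, below is a minimal sketch of how a 2.5D depth map can be quantized into a binary voxel occupancy grid. This is an illustration only, not the authors' implementation: the function name `depth_to_voxels` and its parameters are hypothetical, and the real AGGAN pipeline learns the full 3D volume rather than just rasterizing the visible surface.

```python
def depth_to_voxels(depth_map, depth_bins, d_min, d_max):
    """Quantize a 2.5D depth map into a binary voxel occupancy grid.

    depth_map: H x W nested list of depth values; None marks background.
    Returns an H x W x depth_bins nested list with 1 where the visible
    facial surface point falls and 0 elsewhere.
    Hypothetical helper for illustration -- not the AGGAN authors' code.
    """
    h, w = len(depth_map), len(depth_map[0])
    grid = [[[0] * depth_bins for _ in range(w)] for _ in range(h)]
    scale = (depth_bins - 1) / (d_max - d_min)
    for y in range(h):
        for x in range(w):
            d = depth_map[y][x]
            if d is None:
                continue  # background pixel: no surface voxel
            # Clamp to the valid depth range, then map to a bin index.
            d = min(max(d, d_min), d_max)
            z = int(round((d - d_min) * scale))
            grid[y][x][z] = 1
    return grid
```

A grid like this gives only the visible (2.5D) surface; the generator's task is then to complete the occluded part of the face within the same voxel space.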
