Paper Title

Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency

Paper Authors

Jiaxiang Shang, Tianwei Shen, Shiwei Li, Lei Zhou, Mingmin Zhen, Tian Fang, Long Quan

Paper Abstract

Recent learning-based approaches, in which models are trained on single-view images, have shown promising results for monocular 3D face reconstruction, but they suffer from the ill-posed face pose and depth ambiguity issue. In contrast to previous works that only enforce 2D feature constraints, we propose a self-supervised training architecture that leverages multi-view geometry consistency, which provides reliable constraints on face pose and depth estimation. We first propose an occlusion-aware view synthesis method to apply multi-view geometry consistency to self-supervised learning. Then we design three novel loss functions for multi-view consistency, including the pixel consistency loss, the depth consistency loss, and the facial landmark-based epipolar loss. Our method is accurate and robust, especially under large variations of expressions, poses, and illumination conditions. Comprehensive experiments on face alignment and 3D face reconstruction benchmarks demonstrate superiority over state-of-the-art methods. Our code and model are released at https://github.com/jiaxiangshang/MGCNet.
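To make the multi-view pixel consistency idea concrete, the following is a minimal NumPy sketch (not the paper's implementation) of a photometric consistency loss via depth-based view warping: pixels of view A are back-projected with A's depth map, transformed by the relative pose (R, t) into view B, reprojected, and compared photometrically. The occlusion handling here is reduced to a simple validity mask (in-bounds reprojection intersected with a given face-region mask); the function name, arguments, and nearest-neighbour sampling are illustrative assumptions, not the authors' design.

```python
import numpy as np

def pixel_consistency_loss(img_a, img_b, depth_a, K, R, t, mask_a):
    """Sketch of a multi-view pixel consistency loss.

    img_a, img_b : (H, W, 3) float images of the same face from two views
    depth_a      : (H, W) per-pixel depth of view A
    K            : (3, 3) camera intrinsics (assumed shared by both views)
    R, t         : relative pose mapping A's camera frame to B's
    mask_a       : (H, W) bool face-region mask for view A (crude stand-in
                   for the paper's occlusion-aware mask)
    """
    h, w, _ = img_a.shape
    # Back-project every pixel of view A to 3D using its depth.
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])        # 3 x N homogeneous
    pts_a = (np.linalg.inv(K) @ pix) * depth_a.ravel()            # 3 x N in A's frame
    # Transform into view B's frame and project onto B's image plane.
    pts_b = R @ pts_a + t.reshape(3, 1)
    proj = K @ pts_b
    ub = proj[0] / np.maximum(proj[2], 1e-8)
    vb = proj[1] / np.maximum(proj[2], 1e-8)
    # Validity: reprojects inside B's image and lies in A's face mask.
    inside = (ub >= 0) & (ub <= w - 1) & (vb >= 0) & (vb <= h - 1)
    valid = inside.reshape(h, w) & mask_a
    if not valid.any():
        return 0.0
    # Nearest-neighbour sampling of view B at the warped coordinates.
    ui = np.clip(np.round(ub).astype(int), 0, w - 1).reshape(h, w)
    vi = np.clip(np.round(vb).astype(int), 0, h - 1).reshape(h, w)
    warped = img_b[vi, ui]
    # Mean absolute photometric error over valid pixels.
    return float(np.abs(img_a - warped)[valid].mean())
```

With identical views, identity pose, and constant depth, the warp is the identity and the loss is zero; in training, this term would be minimized jointly with depth and landmark-based consistency losses over predicted depth and pose.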
