Paper Title
Towards Fast, Accurate and Stable 3D Dense Face Alignment
Paper Authors
Paper Abstract
Existing methods of 3D dense face alignment mainly concentrate on accuracy, which limits the scope of their practical applications. In this paper, we propose a novel regression framework named 3DDFA-V2 that strikes a balance among speed, accuracy and stability. First, on the basis of a lightweight backbone, we propose a meta-joint optimization strategy to dynamically regress a small set of 3DMM parameters, which greatly enhances speed and accuracy simultaneously. To further improve stability on videos, we present a virtual synthesis method that transforms one still image into a short video incorporating both in-plane and out-of-plane face movement. While maintaining high accuracy and stability, 3DDFA-V2 runs at over 50 fps on a single CPU core and simultaneously outperforms other state-of-the-art heavy models. Experiments on several challenging datasets validate the efficiency of our method. Pre-trained models and code are available at https://github.com/cleardusk/3DDFA_V2.
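The abstract mentions regressing "a small set of 3DMM parameters". As a minimal sketch of what such a parameterization encodes, the snippet below rebuilds dense face vertices from a pose matrix plus shape and expression coefficients, following the standard 3DMM formulation (mean shape plus linear bases). The basis matrices and dimensions here are tiny random stand-ins, not the paper's actual model (a real morphable model such as BFM has tens of thousands of vertices).

```python
import numpy as np

# Hypothetical toy dimensions; a real 3DMM is far larger. Random bases
# keep the sketch self-contained and runnable.
N_VERTS, N_SHAPE, N_EXP = 100, 40, 10
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(3 * N_VERTS)          # flattened (x,y,z) per vertex
shape_basis = rng.standard_normal((3 * N_VERTS, N_SHAPE))
exp_basis = rng.standard_normal((3 * N_VERTS, N_EXP))

def reconstruct(pose, alpha_shp, alpha_exp):
    """Rebuild posed dense 3D vertices from a small 3DMM parameter vector.

    pose: 3x4 matrix [R | t] (rotation and translation);
    alpha_shp / alpha_exp: shape and expression coefficients.
    """
    verts = (mean_shape
             + shape_basis @ alpha_shp
             + exp_basis @ alpha_exp).reshape(3, -1, order="F")  # 3 x N
    R, t = pose[:, :3], pose[:, 3:]
    return R @ verts + t  # rigidly transformed vertices, 3 x N

# With identity pose and zero coefficients we recover the mean shape.
identity_pose = np.hstack([np.eye(3), np.zeros((3, 1))])
verts = reconstruct(identity_pose, np.zeros(N_SHAPE), np.zeros(N_EXP))
print(verts.shape)  # (3, 100)
```

The appeal of this parameterization, as the abstract notes, is that the regressor only has to predict a few dozen numbers rather than dense per-vertex positions, which is what makes a lightweight backbone viable.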
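The virtual synthesis idea, turning one still image into a short video with in-plane and out-of-plane movement, can be illustrated for the in-plane half with a plain rotation warp. This is only a toy stand-in for the paper's augmentation: out-of-plane motion requires 3D rendering, which is omitted here, and the nearest-neighbour warp below is chosen for self-containment rather than quality.

```python
import numpy as np

def inplane_rotate(img, deg):
    """Nearest-neighbour rotation of `img` about its centre by `deg` degrees.

    Toy illustration of in-plane face movement synthesis; not the
    actual 3DDFA-V2 pipeline.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    th = np.deg2rad(deg)
    c, s = np.cos(th), np.sin(th)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map every output pixel back into the source image.
    sx = c * (xs - cx) + s * (ys - cy) + cx
    sy = -s * (xs - cx) + c * (ys - cy) + cy
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return img[sy, sx]

# Expand one still image into a short "clip" of in-plane rotations,
# which can then be used to train for temporal stability.
still = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
clip = np.stack([inplane_rotate(still, d) for d in range(-10, 11, 5)])
print(clip.shape)  # (5, 64, 64)
```

Training on such synthesized clips, rather than on isolated stills, is what lets the model learn predictions that stay consistent as the face moves between frames.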