Paper Title

PI-GAN: Learning Pose Independent Representations for Multiple Profile Face Synthesis

Paper Authors

Alqahtani, Hamed

Paper Abstract

Generating a pose-invariant representation capable of synthesizing multiple face pose views from a single pose is still a difficult problem. A solution is demanded in various areas such as multimedia security, computer vision, and robotics. Generative adversarial networks (GANs) with encoder-decoder structures can learn a pose-independent representation and, combined with a discriminator network, can produce realistic face synthesis. We present PI-GAN, a cyclic shared encoder-decoder framework, in an attempt to solve this problem. Compared to a traditional GAN, it adds a secondary encoder-decoder framework that shares weights with the primary structure and reconstructs the face with its original pose. The primary framework focuses on creating a disentangled representation, while the secondary framework aims to restore the original face. We use the CFP high-resolution, realistic dataset to evaluate performance.
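The abstract describes a cyclic design in which a secondary encoder-decoder shares weights with the primary one: the primary path synthesizes a new pose view, and the secondary path maps that view back to the original pose. As a rough illustration only, the PyTorch sketch below shows how a single shared encoder-decoder pair can serve both paths; the layer sizes, the code/pose dimensions, and the helper name cyclic_pass are assumptions for this sketch, not the paper's actual architecture, and the adversarial and reconstruction losses are omitted.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Maps a face image to a pose-independent identity code.
    def __init__(self, code_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, code_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Synthesizes a face from an identity code plus a target pose code.
    def __init__(self, code_dim=256, pose_dim=16):
        super().__init__()
        self.fc = nn.Linear(code_dim + pose_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, code, pose):
        h = self.fc(torch.cat([code, pose], dim=1)).view(-1, 128, 8, 8)
        return self.net(h)

# Both paths reuse the same encoder/decoder instances, i.e. shared weights.
encoder, decoder = Encoder(), Decoder()

def cyclic_pass(face, source_pose, target_pose):
    identity = encoder(face)                           # pose-independent code
    rotated = decoder(identity, target_pose)           # primary path: new pose view
    restored = decoder(encoder(rotated), source_pose)  # secondary path: back to original pose
    return rotated, restored

In a full model along these lines, the rotated output would additionally be scored by a discriminator to encourage realistic synthesis, and the restored output would be compared against the input face to enforce the cyclic reconstruction described in the abstract.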
