Paper Title

Pose Manipulation with Identity Preservation

Authors

Andrei-Timotei Ardelean, Lucian Mircea Sasu

Abstract

This paper describes a new model which generates images in novel poses, e.g. by altering face expression and orientation, from just a few instances of a human subject. Unlike previous approaches, which require large datasets of a specific person for training, our approach may start from a scarce set of images, even from a single image. To this end, we introduce Character Adaptive Identity Normalization GAN (CainGAN), which uses spatial characteristic features extracted by an embedder and combined across source images. The identity information is propagated throughout the network by applying conditional normalization. After extensive adversarial training, CainGAN receives facial images of a certain individual and produces new ones while preserving the person's identity. Experimental results show that the quality of generated images scales with the size of the input set used during inference. Furthermore, quantitative measurements indicate that CainGAN performs better than other methods when training data is limited.
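The abstract states that identity information is propagated through the network by conditional normalization, with scale/shift parameters derived from embedder features. The abstract does not specify the exact formulation, so the following is only a minimal NumPy sketch of adaptive-instance-normalization-style conditioning: feature maps are standardized per channel, then modulated by a scale (`gamma`) and shift (`beta`) predicted from a hypothetical identity embedding. All names, shapes, and the linear projections `W_gamma`/`W_beta` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conditional_norm(x, gamma, beta, eps=1e-5):
    """Standardize each channel of x (C, H, W) over its spatial
    dimensions, then apply the identity-conditioned scale and shift."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    return gamma[:, None, None] * x_norm + beta[:, None, None]

# Toy usage: a hypothetical identity embedding is mapped to
# per-channel (gamma, beta) by learned linear projections.
rng = np.random.default_rng(0)
C, H, W, D = 4, 8, 8, 16          # channels, height, width, embed dim
x = rng.normal(size=(C, H, W))    # stand-in generator feature maps
identity_embedding = rng.normal(size=D)
W_gamma = rng.normal(size=(C, D)) * 0.1  # illustrative projection weights
W_beta = rng.normal(size=(C, D)) * 0.1
gamma = 1.0 + W_gamma @ identity_embedding  # scale near 1 at init
beta = W_beta @ identity_embedding
y = conditional_norm(x, gamma, beta)
```

Because the conditioning parameters come from the identity embedding rather than being fixed per layer, the same normalization layer can steer the generator toward any subject's appearance at inference time, which is what allows few-shot identity transfer without per-person retraining.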
