Paper Title
Real-centric Consistency Learning for Deepfake Detection
Paper Authors
Abstract
Most previous deepfake detection studies have devoted their efforts to describing and discriminating artifacts in human-perceptible ways, which biases the learned networks toward ignoring some critical intra-class invariant features and weakens their robustness to internet interference. Essentially, the goal of the deepfake detection problem is to represent natural faces and fake faces discriminatively in the representation space, which raises the question: can we optimize the feature extraction procedure by constraining intra-class consistency and inter-class inconsistency, so that intra-class representations are pulled together and inter-class representations are pushed apart? Therefore, inspired by contrastive representation learning, we tackle the deepfake detection problem by learning the invariant representations of both classes and propose a novel real-centric consistency learning method. We constrain the representations at both the sample level and the feature level. At the sample level, we take the procedure of deepfake synthesis into consideration and propose a novel forgery-semantics-based pairing strategy to mine latent generation-related features. At the feature level, based on the centers of natural faces in the representation space, we design a hard positive mining and synthesizing method to simulate potential marginal features. Besides, a hard negative fusion method is designed to improve the discrimination of negative marginal features, with the help of a supervised contrastive margin loss we develop. The effectiveness and robustness of the proposed method have been demonstrated through extensive experiments.
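The supervised contrastive margin loss mentioned in the abstract can be sketched roughly as below. This is a minimal illustration only: the abstract does not give the exact formulation, so the placement of the margin (an additive penalty on positive-pair similarities, making positives harder to satisfy) and all function and parameter names here are assumptions, not the paper's actual definition.

```python
import numpy as np

def supcon_margin_loss(features, labels, temperature=0.1, margin=0.2):
    """Sketch of a supervised contrastive loss with a margin on positive pairs.

    Assumed form (not from the paper): standard SupCon-style loss where a
    margin is subtracted from each positive-pair similarity before the
    softmax, so positives must exceed negatives by a gap to reduce the loss.
    features: (N, D) embeddings; labels: (N,) class ids (e.g. 0=real, 1=fake).
    """
    # L2-normalize embeddings so dot products are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature                       # (N, N) scaled similarities
    n = len(labels)

    self_mask = np.eye(n, dtype=bool)
    pos_mask = (labels[None, :] == labels[:, None]) & ~self_mask

    # Additive margin on positive pairs (assumed placement)
    sim = sim - pos_mask * (margin / temperature)

    # Exclude self-similarity from the softmax denominator
    sim[self_mask] = -np.inf

    # Numerically stable row-wise log-softmax
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))

    # Average negative log-probability over each anchor's positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)[valid] / pos_counts[valid]
    return per_anchor.mean()
```

In this assumed form, a larger margin strictly increases the loss for fixed embeddings, which pressures positives (e.g. real faces around the natural-face center) to cluster more tightly than negatives are separated.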