Paper Title

Learnable Privacy-Preserving Anonymization for Pedestrian Images

Paper Authors

Junwu Zhang, Mang Ye, Yao Yang

Paper Abstract

This paper studies a novel privacy-preserving anonymization problem for pedestrian images, which preserves personal identity information (PII) for authorized models and prevents PII from being recognized by third parties. Conventional anonymization methods unavoidably cause semantic information loss, leading to limited data utility. Besides, existing learned anonymization techniques, while retaining various identity-irrelevant utilities, will change the pedestrian identity, and thus are unsuitable for training robust re-identification models. To explore the privacy-utility trade-off for pedestrian images, we propose a joint learning reversible anonymization framework, which can reversibly generate full-body anonymous images with little performance drop on person re-identification tasks. The core idea is that we adopt desensitized images generated by conventional methods as the initial privacy-preserving supervision and jointly train an anonymization encoder with a recovery decoder and an identity-invariant model. We further propose a progressive training strategy to improve the performance, which iteratively upgrades the initial anonymization supervision. Experiments further demonstrate the effectiveness of our anonymized pedestrian images for privacy protection, which boosts the re-identification performance while preserving privacy. Code is available at \url{https://github.com/whuzjw/privacy-reid}.
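To make the joint objective in the abstract concrete, below is a minimal sketch of one training step: an anonymization encoder is supervised by conventionally desensitized images (e.g., blurred), and trained together with a recovery decoder and an identity classifier on the anonymized output. All network architectures, loss weights, and the 751-identity label space are illustrative assumptions, not the authors' implementation (see the released code at https://github.com/whuzjw/privacy-reid for the actual method); the progressive upgrade of the anonymization supervision is not shown.

```python
# Sketch only: hypothetical lightweight networks and loss weights standing in for
# the anonymization encoder, recovery decoder, and identity-invariant re-ID model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Sequential):
    def __init__(self, c_in, c_out):
        super().__init__(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

anon_encoder = nn.Sequential(ConvBlock(3, 32), ConvBlock(32, 3))   # original -> anonymized image
rec_decoder  = nn.Sequential(ConvBlock(3, 32), ConvBlock(32, 3))   # anonymized -> recovered image
reid_model   = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(3, 751))                    # identity logits (751 IDs assumed)

params = list(anon_encoder.parameters()) + list(rec_decoder.parameters()) + list(reid_model.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
l1, ce = nn.L1Loss(), nn.CrossEntropyLoss()

def train_step(x, x_desensitized, pid, w_anon=1.0, w_rec=1.0, w_id=1.0):
    """x: original images, x_desensitized: conventionally anonymized targets (e.g. blur),
    pid: person identity labels. Loss weights are illustrative assumptions."""
    x_anon = anon_encoder(x)                       # learnable anonymization
    loss_anon = l1(x_anon, x_desensitized)         # initial privacy-preserving supervision
    loss_rec  = l1(rec_decoder(x_anon), x)         # reversibility for authorized recovery
    loss_id   = ce(reid_model(x_anon), pid)        # keep re-ID utility on anonymized images
    loss = w_anon * loss_anon + w_rec * loss_rec + w_id * loss_id
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors (shapes only illustrative).
x = torch.rand(4, 3, 256, 128)
x_blur = F.avg_pool2d(x, 9, stride=1, padding=4)   # stand-in for conventional desensitization
pid = torch.randint(0, 751, (4,))
print(train_step(x, x_blur, pid))
```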
