
Computer Engineering & Science, 2025, Vol. 47, Issue (12): 2181-2194.

• Graphics and Images •


A facial manipulation adversarial defense method for image post-processing

XU Kun, QI Shuren, ZHANG Yushu, WEN Wenying, ZHANG Hua

  (1. College of Computer Science and Technology / College of Software, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China;
  2. School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330013, China;
  3. State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China)
  • Received: 2024-03-22; Revised: 2024-06-13; Online: 2025-12-25; Published: 2026-01-06
  • Supported by: Open Research Project of the State Key Laboratory of Information Security (2022-MS-02); Jiangxi Province "Double Thousand Plan" High-Level Talent Project for Scientific and Technological Innovation (JXSQ2023201118); Natural Science Foundation of Jiangxi Province (20232ACB212004)


Abstract: Current facial manipulation technologies can easily modify facial attributes, making it difficult for the human eye to distinguish real images from fake ones. Facial image data is readily accessible and can be exploited to forge faces, posing a constant threat to users' personal privacy and information security. Consequently, using adversarial defenses to prevent facial images from being manipulated has become an active research area. However, most existing methods focus on the defensive effect of adversarial perturbations added to images, and lack in-depth analysis of what happens when those perturbations are subsequently disrupted. To address this gap, this paper proposes an adversarial defense method for facial manipulation that is oriented toward image post-processing: through a comprehensive, in-depth analysis of original images, images with adversarial perturbations, and images whose adversarial perturbations have been disrupted, an image adversarial defense model based on contrastive learning is constructed. The proposed defense was compared and evaluated thoroughly, and the experimental results demonstrate that it provides effective defense against facial manipulation.
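To make the abstract's idea concrete, below is a minimal, hypothetical PyTorch sketch of a contrastive (triplet-style) objective over the three image variants the abstract names: the original image, the image carrying the adversarial perturbation, and the perturbed image after simulated post-processing. The encoder f, the mean-blur stand-in for post-processing, and all names here are illustrative assumptions, not the paper's actual model or loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def simulate_post_processing(x):
        # Crude stand-in for post-processing such as blurring or compression:
        # a 3x3 mean blur that partially destroys high-frequency perturbations.
        return F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)

    def contrastive_defense_loss(f, x_orig, x_adv, margin=1.0):
        # Triplet-style contrastive loss: keep the features of the perturbed
        # image and its post-processed version close (positive pair, so the
        # protection survives post-processing) while pushing them away from
        # the clean original (negative pair).
        x_post = simulate_post_processing(x_adv)
        z_orig = F.normalize(f(x_orig), dim=1)
        z_adv = F.normalize(f(x_adv), dim=1)
        z_post = F.normalize(f(x_post), dim=1)
        d_pos = (z_adv - z_post).pow(2).sum(dim=1)
        d_neg = (z_adv - z_orig).pow(2).sum(dim=1)
        return F.relu(d_pos - d_neg + margin).mean()

    # Toy usage: a linear encoder and random "faces" stand in for a real
    # feature extractor and a real face dataset.
    f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    x = torch.rand(4, 3, 64, 64)                          # original images
    x_adv = (x + 0.03 * torch.randn_like(x)).clamp(0, 1)  # toy perturbation
    print(contrastive_defense_loss(f, x, x_adv).item())

Minimizing such a loss would encourage a perturbation whose effect in feature space remains stable under post-processing, which matches the robustness goal the abstract describes.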


Key words: facial manipulation defense, image post-processing, adversarial defense, contrastive learning