• A journal of the China Computer Federation
  • Chinese science and technology core journal
  • Chinese core journal

Computer Engineering & Science ›› 2025, Vol. 47 ›› Issue (12): 2181-2194.

• Graphics and Images •

A facial manipulation adversarial defense method for image post-processing

XU Kun, QI Shuren, ZHANG Yushu, WEN Wenying, ZHANG Hua

  (1. College of Computer Science and Technology / College of Software,
    Nanjing University of Aeronautics and Astronautics, Nanjing 211106;
    2. School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330013;
    3. State Key Laboratory of Information Security,
    Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China)
  • Received: 2024-03-22  Revised: 2024-06-13  Online: 2025-12-25  Published: 2026-01-06

Abstract: Current facial manipulation technologies can easily modify facial attributes, making it difficult for the human eye to distinguish real images from fake ones. Because facial image data is readily accessible, it can be exploited to forge human faces, posing a constant threat to users' personal privacy and information security. Consequently, using adversarial defense methods to prevent facial images from being manipulated has become an active area of research. However, most existing methods focus only on the defensive effectiveness of adversarial perturbations added to images, without in-depth analysis of scenarios in which those perturbations are subsequently disrupted by image post-processing. To address this gap, this paper proposes a facial manipulation adversarial defense method designed to withstand image post-processing. Through a comprehensive analysis of original images, images carrying adversarial perturbations, and images whose adversarial perturbations have been disrupted, an image adversarial defense model based on contrastive learning is constructed. A thorough comparison and evaluation of the proposed method was conducted, and the experimental results demonstrate that it provides effective defense against facial manipulation.
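The abstract does not specify the training objective, but the contrastive setup it describes (original image, adversarially perturbed image, and post-processed perturbed image) is commonly implemented with an InfoNCE-style loss. The sketch below is a generic, hypothetical illustration of such a loss, not the authors' method: the perturbed embedding (anchor) is pulled toward its post-processed counterpart (positive) and pushed away from clean-image embeddings (negatives), encouraging perturbations that survive post-processing. All function names and the temperature value are assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project embeddings onto the unit sphere (row-wise for 2-D input)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative, not the paper's exact loss).

    anchor:    (d,)   embedding of the adversarially perturbed image
    positive:  (d,)   embedding of the same image after post-processing
    negatives: (K, d) embeddings of clean / unrelated images
    """
    a = l2_normalize(anchor)
    p = l2_normalize(positive)
    n = l2_normalize(negatives)
    pos_logit = np.dot(a, p) / temperature          # similarity to the positive
    neg_logits = n @ a / temperature                # similarities to negatives, shape (K,)
    logits = np.concatenate(([pos_logit], neg_logits))
    # Cross-entropy with the positive at index 0: -log softmax(logits)[0]
    return float(-pos_logit + np.log(np.sum(np.exp(logits))))
```

Under this objective, the loss is small when the perturbed and post-processed embeddings align and large when post-processing drags the embedding toward the clean images, which is one plausible way to make the defensive perturbation robust to post-processing.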


Key words: facial manipulation defense, image post-processing, adversarial defense, contrastive learning