• Journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2023, Vol. 45 ›› Issue (09): 1611-1620.

• Graphics and Images •

Face liveness detection based on multi-adversarial discrimination network

REN Tuo1, YAN Wei2, KUANG Li-qun1, XIE Jian-bin1, CHEN Zhong-yu1, GAO Feng1, GUO Rui1, SHU Wei3, XIE Chang-yi4

  (1. School of Data Science and Technology, North University of China, Taiyuan 030051, China;
   2. College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China;
   3. School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan 114051, China;
   4. Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Melbourne 3010, Australia)
  • Received: 2022-06-13; Revised: 2022-09-15; Accepted: 2023-09-25; Online: 2023-09-25; Published: 2023-09-12

Abstract: Face liveness detection is key to ensuring the security of face recognition systems. Disentangled learning methods, in particular, can effectively address the problem of cross-dataset generalization in face liveness detection. However, existing disentangled learning methods typically take the entire face image as input and separate out forgery trace elements, neglecting the local details of forgery traces. To address this issue, this paper improves on an existing forgery-trace disentanglement network and proposes a multi-adversarial discriminative network model. The discriminator is split into a primary discriminator and a regional discriminator, and a facial mask module is introduced to generate facial skin and feature masks. By integrating local facial information, the generated images more closely match the distribution of face images in the dataset, while an enhanced version of the forgery trace is disentangled at the same time. The proposed multi-adversarial discriminative network effectively amplifies the forgery traces in forged face images and improves the accuracy of face liveness detection. Specifically, the detection error rates of our model on the OULU-NPU dataset under two experimental protocols are only 0.8% and 1.4%, significantly lower than those of STDN. Good detection results are also achieved on the Idiap Replay-Attack dataset. To verify the transferability of our method, cross-domain experiments between the NUAA dataset and the Idiap Replay-Attack dataset also achieve good results.
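The primary/regional discriminator design described in the abstract can be sketched roughly as follows. This is an illustrative PyTorch sketch under our own assumptions: the module names, network sizes, and the binary cross-entropy adversarial loss are hypothetical choices, not the authors' code. The regional discriminator sees only the region selected by a facial mask, which is how local details of the forgery trace receive their own adversarial signal:

```python
import torch
import torch.nn as nn


class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator producing a map of real/fake logits.

    Illustrative stand-in for both the primary and the regional
    discriminator; the paper's actual architectures may differ.
    """

    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=4, stride=2, padding=1),  # H -> H/2
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),     # H/2 -> H/4
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),                # per-patch logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def generator_adv_loss(primary: nn.Module,
                       regional: nn.Module,
                       fake_face: torch.Tensor,
                       face_mask: torch.Tensor) -> torch.Tensor:
    """Generator's adversarial loss against both discriminators.

    The primary discriminator scores the whole generated face; the
    regional discriminator scores only the masked facial region
    (e.g. the skin/feature masks from the facial mask module).
    """
    bce = nn.BCEWithLogitsLoss()
    p_logits = primary(fake_face)                # global judgment
    r_logits = regional(fake_face * face_mask)   # local, masked judgment
    # Generator wants both discriminators to output "real" (label 1).
    return (bce(p_logits, torch.ones_like(p_logits))
            + bce(r_logits, torch.ones_like(r_logits)))
```

In this sketch the two losses are simply summed; in practice the regional term would likely be weighted, and a symmetric loss would train the discriminators on real faces versus reconstructed ones.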


Key words: face recognition, liveness detection, generative adversarial network, disentangled representation learning