
Computer Engineering & Science ›› 2021, Vol. 43 ›› Issue (01): 125-133.


Unsupervised learning for face sketch-photo synthesis using generative adversarial network

CHEN Jin-long, LIU Xiong-fei, ZHAN Shu

  1. (School of Computer and Information, Hefei University of Technology, Hefei 231009, China)
  • Received: 2020-03-07  Revised: 2020-04-28  Accepted: 2021-01-25  Online: 2021-01-25  Published: 2021-01-22

Abstract: Research on face verification has driven the demand of law enforcement agencies and the digital entertainment industry for translating sketches into photo-realistic images. However, despite the rapid progress of neural networks on image-to-image generation tasks, sketch-photo synthesis remains a challenging problem. Existing approaches are still limited by the lack of paired data at the training stage and by the striking differences between sketches and photos. To address this problem, a new framework is proposed that translates face sketches into photo-realistic images in an unsupervised fashion. Compared with current unsupervised image-to-image translation methods, the network leverages an additional semantic consistency loss to preserve the semantic information of the input in the output, and replaces the pixel-wise cycle-consistency loss with a perceptual loss to generate sharper images for face sketch-photo synthesis. The network also adopts the PGGAN generator and trains it with a GAN loss for realistic output and a cycle-consistency loss that constrains the reconstruction of an input to match the input itself. Experiments on two open-source datasets verify the effectiveness of the proposed method in both subjective evaluation and objective metrics.
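To make the composition of the training objective described in the abstract more concrete, the sketch below combines an adversarial loss, a perceptual (feature-space) cycle loss in place of pixel-wise cycle-consistency, and a semantic consistency loss. This is a minimal illustrative sketch, not the authors' implementation: the module names (G_sp, G_ps, D_photo), the VGG-style feature extractor, and the loss weights are all assumptions.

```python
# Hypothetical sketch of the combined generator objective (sketch-to-photo side).
# G_sp / G_ps are assumed sketch->photo and photo->sketch generators, D_photo an
# assumed photo-domain discriminator, and vgg_features an assumed pretrained
# feature extractor; weights lambda_perc / lambda_sem are illustrative.
import torch
import torch.nn.functional as F

def generator_loss(G_sp, G_ps, D_photo, vgg_features, sketch, photo,
                   lambda_perc=10.0, lambda_sem=1.0):
    fake_photo = G_sp(sketch)        # translate sketch -> photo
    recon_sketch = G_ps(fake_photo)  # cycle back photo -> sketch

    # GAN loss: push generated photos toward the discriminator's "real" decision.
    logits_fake = D_photo(fake_photo)
    adv = F.binary_cross_entropy_with_logits(logits_fake,
                                             torch.ones_like(logits_fake))

    # Perceptual cycle loss: compare the reconstruction with the input in
    # feature space rather than pixel space, which the abstract credits
    # for producing sharper images.
    perc = F.l1_loss(vgg_features(recon_sketch), vgg_features(sketch))

    # Semantic consistency loss: keep the high-level content of the output
    # aligned with that of the input sketch.
    sem = F.l1_loss(vgg_features(fake_photo), vgg_features(sketch))

    return adv + lambda_perc * perc + lambda_sem * sem
```

In this reading, the perceptual term replaces the usual pixel-wise cycle-consistency term, while the semantic term is an extra constraint between input and output; the exact feature layers and weighting are design choices not specified in the abstract.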



Key words: face sketch-photo synthesis, unsupervised learning, generative adversarial network