• Journal of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2022, Vol. 44 ›› Issue (01): 118-123.


An image saliency area style transfer method combining visual attention mechanism

WANG Yang1,2, YU Zhen-xin1,2, LU Jia1,2

  (1. College of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China;

    2. Tianjin Key Laboratory of Electronic Materials & Devices, Hebei University of Technology, Tianjin 300401, China)

  • Received: 2020-07-23 Revised: 2020-09-28 Accepted: 2022-01-25 Online: 2022-01-25 Published: 2022-01-13

Abstract: Performing style transfer on only part of an image often causes style overflow, and stylizing small regions produces insignificant effects. To address this problem, a style transfer algorithm for image saliency regions is proposed. Firstly, following the characteristics of the human visual attention mechanism, the salient regions in the training image dataset are labeled, and a fast semantic segmentation model is trained to obtain a binary mask image covering the salient regions. Then, the network layer structure of the fast neural style transfer model is simplified, and instance normalization layers are adopted in the generator network, yielding a more realistic whole-image style transfer result. Finally, the binary mask image obtained by semantic segmentation is combined with the whole-image style transfer result to produce the final output image. Comparative experiments were carried out on the Cityscapes dataset and the Microsoft COCO 2017 dataset. The results show that the local target regions in the image are stylized uniformly and delicately and blend well with the background region. While achieving a more realistic style transfer effect, the method also runs more efficiently.
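The final compositing step described in the abstract (combining the binary saliency mask with the whole-image stylized result) can be sketched as a per-pixel blend. This is a minimal illustration, not the paper's implementation; the function name and array conventions (H×W×3 images, an H×W mask with 1 marking salient pixels) are assumptions.

```python
import numpy as np

def composite_saliency_style(content, stylized, mask):
    """Keep the stylized pixels inside the salient region (mask == 1)
    and the original content pixels elsewhere.

    content, stylized: H x W x 3 uint8 images of the same size.
    mask: H x W binary array, 1 = salient region.
    """
    # Add a channel axis so the mask broadcasts over RGB channels.
    mask3 = mask[..., None].astype(np.float32)
    blended = mask3 * stylized + (1.0 - mask3) * content
    return blended.astype(content.dtype)
```

In practice the segmentation mask often has soft or ragged edges, so a feathered (non-binary) mask can be used with the same formula to blend the stylized region into the background more smoothly.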


Key words: style transfer, saliency area detection, semantic segmentation, convolutional neural network