• Publication of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2022, Vol. 44 ›› Issue (11): 2056-2063.

• Artificial Intelligence and Data Mining •

Research on single-sample generative adversarial networks based on an attention mechanism using linear layers

CHEN Xi1,ZHAO Hong-dong1,2,YANG Dong-xu1,XU Ke-nan1,REN Xing-lin1,FENG Hui-jie1   

  (1. School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401;
   2. Science and Technology on Electro-Optical Information Security Control Laboratory, Tianjin 300308, China)
  • Received: 2021-06-18  Revised: 2021-08-27  Accepted: 2022-11-25  Online: 2022-11-25  Published: 2022-11-25

Abstract: Training generative adversarial networks on a single sample has become a focus of recent research. However, several problems remain to be solved: the model is hard to converge, the structure of generated images collapses, and training is slow. Researchers have proposed adding a self-attention model to the generative adversarial network to capture longer-range dependencies within the sample and improve the quality of the generated images. It is found, however, that the traditional convolutional self-attention model wastes computing resources because of redundant information in the attention map. A novel linear attention model is therefore proposed, in which a double normalization method alleviates the attention model's sensitivity to input features, and a new single-sample generative adversarial network is built on this model. In addition, the model uses residual connections and spectral normalization to stabilize training and reduce the risk of collapse. Extensive experiments show that, compared with existing models, the proposed model trains faster, generates higher-resolution images, and achieves clearly better evaluation metrics.
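
As a rough illustration of the mechanism the abstract describes, the sketch below implements a linear self-attention block with double normalization (softmax applied separately to the queries and the keys), spectral normalization on the projections, and a residual connection. It is a minimal reading of the abstract, not the authors' published code: the module name, the channel-reduction ratio, and the exact normalization axes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm


class LinearAttention(nn.Module):
    """Sketch of linear self-attention with double normalization.

    Queries and keys are normalized independently, so the attention
    can be factored and computed in time linear in the number of
    pixels instead of quadratic, as in conventional self-attention.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        # Spectral normalization on the 1x1 projections helps keep
        # the discriminator/generator Lipschitz and training stable.
        self.to_q = spectral_norm(nn.Conv2d(channels, inner, 1))
        self.to_k = spectral_norm(nn.Conv2d(channels, inner, 1))
        self.to_v = spectral_norm(nn.Conv2d(channels, channels, 1))
        self.gamma = nn.Parameter(torch.zeros(1))  # gate on the residual branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2)  # (b, d, n), n = h * w
        k = self.to_k(x).flatten(2)  # (b, d, n)
        v = self.to_v(x).flatten(2)  # (b, c, n)
        # Double normalization: softmax over the feature axis for the
        # queries and over the spatial axis for the keys, so neither
        # factor is overly sensitive to the scale of the input features.
        q = F.softmax(q, dim=1)
        k = F.softmax(k, dim=2)
        context = torch.bmm(v, k.transpose(1, 2))       # (b, c, d)
        out = torch.bmm(context, q).view(b, c, h, w)    # (b, c, h, w)
        return x + self.gamma * out  # residual connection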

Key words: generative adversarial network, single sample, linear attention model, self-attention mechanism, spectral norm