[1] Kingma D P,Welling M.Auto-encoding variational Bayes[J].arXiv:1312.6114,2013.
[2] van den Oord A,Kalchbrenner N,Kavukcuoglu K.Pixel recurrent neural networks[C]∥Proc of the 33rd International Conference on Machine Learning,2016:1747-1756.
[3] Goodfellow I,Pouget-Abadie J,Mirza M,et al.Generative adversarial nets[C]∥Proc of Advances in Neural Information Processing Systems,2014:2672-2680.
[4] Reed S,Akata Z,Yan X,et al.Generative adversarial text-to-image synthesis[C]∥Proc of International Conference on Machine Learning,2016:1060-1069.
[5] Zhang H,Xu T,Li H,et al.StackGAN:Text to photo-realistic image synthesis with stacked generative adversarial networks[C]∥Proc of IEEE International Conference on Computer Vision,2017:5908-5916.
[6] Zhang H,Xu T,Li H,et al.StackGAN++:Realistic image synthesis with stacked generative adversarial networks[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2019,41(8):1947-1962.
[7] Zhang Z,Xie Y,Lin Y.Photographic text-to-image synthesis with a hierarchically-nested adversarial network[C]∥Proc of IEEE Conference on Computer Vision and Pattern Recognition,2018:6199-6208.
[8] Xu T,Zhang P,Huang Q,et al.AttnGAN:Fine-grained text to image generation with attentional generative adversarial networks[C]∥Proc of IEEE Conference on Computer Vision and Pattern Recognition,2018:1316-1324.
[9] Qiao T,Zhang J,Xu D,et al.MirrorGAN:Learning text-to-image generation by redescription[C]∥Proc of IEEE Conference on Computer Vision and Pattern Recognition,2019:1505-1514.
[10] Tan H,Liu X,Li X,et al.Semantics-enhanced adversarial nets for text-to-image synthesis[C]∥Proc of IEEE International Conference on Computer Vision,2019:10500-10509.
[11] Tan H,Liu X,Liu M,et al.KT-GAN:Knowledge-transfer generative adversarial network for text-to-image synthesis[J].IEEE Transactions on Image Processing,2021,30:1275-1290.
[12] Zhu M,Pan P,Chen W,et al.DM-GAN:Dynamic memory generative adversarial networks for text-to-image synthesis[C]∥Proc of IEEE Conference on Computer Vision and Pattern Recognition,2019:5802-5810.
[13] Wah C,Branson S,Welinder P,et al.The Caltech-UCSD birds-200-2011 dataset[EB/OL].[2020-08-10].http://authors.library.caltech.edu/27452/1/CUB_200_2011.pdf.
[14] Salimans T,Goodfellow I,Zaremba W,et al.Improved techniques for training GANs[C]∥Proc of Advances in Neural Information Processing Systems,2016:2226-2234.
[15] Schuster M,Paliwal K K.Bidirectional recurrent neural networks[J].IEEE Transactions on Signal Processing,1997,45(11):2673-2681.
[16] Xu Tian-yu,Wang Zhi.Text-to-image synthesis optimization based on aesthetic assessment[J].Journal of Beijing University of Aeronautics and Astronautics,2019,45(12):2438-2448.(in Chinese)
[17] Sun Yu,Li Lin-yan,Ye Zi-han,et al.Text-to-image synthesis method based on multi-level structure generative adversarial networks[J].Journal of Computer Applications,2019,39(11):3204-3209.(in Chinese)
[18] Mo Jian-wen,Xu Kai-liang,Lin Le-ping,et al.Text-to-image generation combined with mutual information maximization[J].Journal of Xidian University(Natural Science),2019,46(5):180-188.(in Chinese)
[19] Li B,Qi X,Lukasiewicz T,et al.Controllable text-to-image generation[C]∥Proc of Advances in Neural Information Processing Systems,2019:2063-2073.
[20] Tan H,Liu X,Yin B,et al.Cross-modal semantic matching generative adversarial networks for text-to-image synthesis[J].IEEE Transactions on Multimedia,2022,24:832-845.
[21] Qiao T,Zhang J,Xu D,et al.Learn,imagine and create:Text to image generation from prior knowledge[C]∥Proc of Advances in Neural Information Processing Systems,2019:887-897.
[22] Szegedy C,Vanhoucke V,Ioffe S,et al.Rethinking the inception architecture for computer vision[C]∥Proc of IEEE Conference on Computer Vision and Pattern Recognition,2016:2818-2826.