
Computer Engineering & Science ›› 2023, Vol. 45 ›› Issue (12): 2186-2196.

• Graphics and Images •

Image adversarial cascade generation via coupling word and sentence-level text features

BAI Zhi-yuan1,2, YANG Zhi-xiang1,2, LUAN Hong-kang1,2, SUN Yu-bao1,2

  1. (1. School of Computer Science, Nanjing University of Information Science & Technology, Nanjing 210044, China;
    2. Jiangsu Key Laboratory of Big Data Analysis Technology,
    School of Computer Science, Nanjing University of Information Science & Technology, Nanjing 210044, China)
  • Received: 2023-03-24 Revised: 2023-06-07 Accepted: 2023-12-25 Online: 2023-12-25 Published: 2023-12-14
  • Supported by:
    National Natural Science Foundation of China (U2001211, 62276139)




Abstract: Text-to-image generation aims to synthesize realistic images from natural language descriptions, and is a cross-modal analysis task involving both text and images. Because generative adversarial networks (GANs) produce realistic images efficiently, they have become the mainstream models for text-to-image generation. However, current methods often train on word-level and sentence-level text features separately, under-utilizing the text information, which easily leads to mismatches between the generated image and the text. To address this problem, this paper proposes Union-GAN, an image adversarial cascade generation model that couples word-level and sentence-level text features, and introduces a text-image joint perception module (Union-Block) at each image generation stage. By combining channel affine transformation with cross-modal attention, Union-Block fully exploits both the word-level semantics and the overall sentence semantics of the text, so that the generated images match the textual description while maintaining clear structure. The discriminators are jointly optimized, with spatial attention added to each corresponding discriminator, so that the supervisory signal from the text drives the generator to produce more text-relevant images. Compared with several current representative models such as AttnGAN on the CUB-200-2011 dataset, Union-GAN achieves an FID score of 13.67, an improvement of 42.9% over AttnGAN (FID: lower is better), and an IS score of 4.52, an increase of 0.16.
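Only the abstract is available on this page, so the authors' Union-Block implementation is not shown here. As a rough illustration of the two ingredients the abstract names, a channel affine transform driven by the sentence embedding and cross-modal attention over word embeddings, here is a minimal PyTorch-style sketch; all module names, dimensions, and the fusion order are assumptions, not the paper's code.

# Hypothetical Union-Block-style fusion module (illustration only; the
# actual design is not given on this page). It combines (1) a channel
# affine transform predicted from the sentence embedding and (2) cross-
# modal attention in which each spatial location attends to the words.
import torch
import torch.nn as nn

class UnionBlockSketch(nn.Module):
    def __init__(self, feat_dim, sent_dim, word_dim):
        super().__init__()
        # Sentence-level path: per-channel scale and shift (affine).
        self.to_gamma = nn.Linear(sent_dim, feat_dim)
        self.to_beta = nn.Linear(sent_dim, feat_dim)
        # Word-level path: project words into the image feature space.
        self.word_proj = nn.Linear(word_dim, feat_dim)

    def forward(self, img_feat, sent_emb, word_embs):
        # img_feat:  (B, C, H, W) features at one generator stage
        # sent_emb:  (B, sent_dim) whole-sentence embedding
        # word_embs: (B, T, word_dim) per-word embeddings
        B, C, H, W = img_feat.shape

        # 1) Channel affine modulation from sentence-level semantics.
        gamma = self.to_gamma(sent_emb).view(B, C, 1, 1)
        beta = self.to_beta(sent_emb).view(B, C, 1, 1)
        x = img_feat * (1 + gamma) + beta

        # 2) Cross-modal attention over word-level semantics.
        words = self.word_proj(word_embs)                  # (B, T, C)
        queries = x.view(B, C, H * W).transpose(1, 2)      # (B, HW, C)
        scores = queries @ words.transpose(1, 2) / C ** 0.5
        attn = torch.softmax(scores, dim=-1)               # (B, HW, T)
        context = (attn @ words).transpose(1, 2).reshape(B, C, H, W)

        # Residual fusion of the two text-conditioned signals.
        return x + context

For example, UnionBlockSketch(feat_dim=64, sent_dim=256, word_dim=256) maps a (2, 64, 16, 16) feature map, a (2, 256) sentence embedding, and a (2, 18, 256) word tensor to a (2, 64, 16, 16) output.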

Key words: text-to-image generation, generative adversarial network (GAN), multimodal task
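The abstract also mentions adding spatial attention to each jointly optimized discriminator so that the text supervision focuses on relevant image regions. A hypothetical minimal version of such a reweighting gate (again an assumption, not the authors' design) could look like:

# Hypothetical spatial-attention gate for a discriminator branch
# (an assumption, not the authors' implementation): a learned per-
# location mask reweights the feature map before the real/fake and
# text-matching decisions, localizing the text supervision signal.
import torch.nn as nn

class SpatialAttentionSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # per-location score
            nn.Sigmoid(),                           # gate in (0, 1)
        )

    def forward(self, feat):
        # feat: (B, C, H, W); the (B, 1, H, W) mask broadcasts over C.
        return feat * self.mask(feat)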