• A publication of the China Computer Federation (CCF)
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

计算机工程与科学 ›› 2025, Vol. 47 ›› Issue (8): 1408-1416.

• Computer Networks and Information Security •

• Supported by: China Postdoctoral Science Foundation (2023M744320)

CNN-ViTAMR: A Transformer-based automatic modulation recognition algorithm and its lightweight implementation

LIU Chang, XU Weixia

1. (College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China)

• Received: 2024-10-08  Revised: 2024-11-14  Online: 2025-08-25  Published: 2025-08-27



Abstract: With the rapid development and widespread adoption of technologies such as the Internet of Things (IoT), 5G communications, wireless ad hoc networks, and unmanned swarm systems, automatic modulation recognition (AMR) has found extensive applications in wireless communications, radar signal processing, electronic warfare, and other domains, and is progressively spreading to intelligent edge devices. Consequently, the development of lightweight intelligent modulation recognition algorithms and their implementation has become one of the critical challenges in the communications field. Traditional signal modulation recognition models based on CNNs and RNNs fail to accurately capture the global characteristics of signals and thus exhibit certain limitations in AMR tasks. In recent years, the Transformer architecture, leveraging the global feature extraction capability of its multi-head self-attention mechanism, has overcome the generalization constraints of earlier DNN models and achieved significant breakthroughs in time-series processing. To address these challenges, this paper proposes an AMR model based on the Transformer structure. The model embeds a CNN-based tokenization module into the Transformer, so that it combines the Transformer's global information extraction ability with the local temporal features retained inside each token, thereby ensuring the recognition accuracy of the algorithm. At the same time, the model has a small parameter count, making it suitable for deployment on edge devices. Evaluation results on the Zynq UltraScale+ MPSoC platform demonstrate that, compared to the software implementation running on a higher-frequency CPU, the FPGA-based hardware accelerator achieves a speedup of up to 2.47× while operating at a lower clock frequency.
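The multi-head self-attention mechanism named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's CNN-ViTAMR implementation: the token count, model width, head count, and random weights below are illustrative assumptions standing in for CNN-produced token embeddings and learned projection matrices.

```python
# Minimal sketch of multi-head self-attention over a sequence of "tokens"
# (e.g. CNN-embedded segments of an I/Q signal frame). Shapes and weights
# are illustrative assumptions, not the paper's trained model.
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """x: (seq_len, d_model) token sequence; returns (seq_len, d_model)."""
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    # Random projections stand in for learned Q/K/V/output weight matrices.
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                          for _ in range(4))

    def split_heads(h):  # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return h.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v                                     # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)  # merge heads
    return out @ w_o

# Example: 16 tokens of width 32 (hypothetical CNN tokenizer output), 4 heads.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))
y = multi_head_self_attention(tokens, num_heads=4, rng=rng)
print(y.shape)  # (16, 32)
```

Because every token attends to every other token, the output at each position mixes information from the whole frame, which is the global-context property the abstract contrasts with purely local CNN/RNN feature extraction.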

Key words: automatic modulation recognition, Transformer, multi-head self-attention mechanism, hardware acceleration, edge computing