J4 ›› 2014, Vol. 36 ›› Issue (09): 1637-1643.

Research of digital predistortion in nonlinear power amplifier with memory based on parallel evolutionary computation

LIU Zhao, HU Li

  1. College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
  2. Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan 430065, China
  • Received: 2013-04-11  Revised: 2013-07-08  Online: 2014-09-25  Published: 2014-09-25
  • Supported by: National Natural Science Foundation of China (51174151, 61100133)

Abstract:

Adaptive digital predistortion is the most promising technique for overcoming the nonlinear distortion of high power amplifiers (HPAs). To improve the efficiency and effectiveness of predistortion, evolutionary computation on a parallel computing platform is introduced: a method of pre-training the neural network with the Particle Swarm Optimization (PSO) algorithm is proposed, and the basic flow of its software implementation is given. On this basis, a three-layer feed-forward neural network predistorter with tapped delay lines, two inputs and two outputs is designed for HPAs with memory. The predistorter adapts through the indirect learning architecture combined with the back-propagation (BP) algorithm, so that memory effects and general nonlinearities are compensated simultaneously. Simulation results show that the neural network training algorithm with PSO pre-training performs better than the conventional BP algorithm without PSO pre-training.

Key words: power amplifier; memory nonlinearity; predistortion; PSO; neural network; parallel computing
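
The training scheme described in the abstract (PSO pre-training of a feed-forward predistorter, followed by back-propagation under the indirect learning architecture) can be illustrated with a short sketch. The Python/NumPy code below is only an illustration of that idea, not the authors' implementation: the toy memory-PA model, the network size, the tapped-delay depth and all hyper-parameters are assumptions made for the sketch, and the serial PSO loop stands in for the parallel evolutionary computation, whose per-particle fitness evaluations are the part a parallel platform would distribute.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a power amplifier with memory: a short linear memory (FIR
# taps) followed by a static compressive nonlinearity on the complex baseband
# signal. The taps and compression law are assumptions made only for this sketch.
def pa_with_memory(x):
    h = np.array([1.0, 0.35, 0.1])
    v = np.convolve(x, h)[: len(x)]
    return v / (1.0 + 0.25 * np.abs(v) ** 2)

# Tapped-delay-line features: the current sample and M past samples, split into
# real and imaginary parts (two real inputs per tap).
def tdl_features(x, M=2):
    cols = [np.roll(x, k) for k in range(M + 1)]
    for k in range(1, M + 1):
        cols[k][:k] = 0
    z = np.stack(cols, axis=1)
    return np.hstack([z.real, z.imag])

# Three-layer feed-forward network (input, one tanh hidden layer, linear
# output); its weights are kept in one flat vector so PSO can search over it.
N_HID, N_OUT = 8, 2

def unpack(theta, n_in):
    i = 0
    W1 = theta[i:i + n_in * N_HID].reshape(n_in, N_HID); i += n_in * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = theta[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta, X.shape[1])
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(theta, X, T):
    return np.mean((forward(theta, X) - T) ** 2)

# Indirect learning data: the post-inverse network is trained to map the PA
# output back to the PA input; a copy of it is then used as the predistorter.
x = 0.5 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
y = pa_with_memory(x)
X = tdl_features(y)
T = np.column_stack([x.real, x.imag])
n_dim = X.shape[1] * N_HID + N_HID + N_HID * N_OUT + N_OUT

# Stage 1: PSO pre-training of the whole weight vector. The per-particle
# fitness evaluations are what a parallel platform would run concurrently.
def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.normal(scale=0.5, size=(n_particles, n_dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, T) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, T) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

theta = pso()   # PSO result becomes the starting point for back-propagation

# Stage 2: back-propagation (gradient descent on the same MSE) refines the
# PSO-pre-trained weights.
def bp_step(theta, X, T, lr=0.1):
    W1, b1, W2, b2 = unpack(theta, X.shape[1])
    H = np.tanh(X @ W1 + b1)
    Y = H @ W2 + b2
    G = 2.0 * (Y - T) / T.size                  # dMSE/dY
    dW2, db2 = H.T @ G, G.sum(axis=0)
    dH = (G @ W2.T) * (1.0 - H ** 2)            # back through the tanh layer
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    grad = np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])
    return theta - lr * grad

for _ in range(500):
    theta = bp_step(theta, X, T)

print("post-inverse training MSE:", mse(theta, X, T))

In the indirect learning architecture the trained post-inverse is copied in front of the amplifier as the predistorter, so the sketch trains only the post-inverse model; that is the stage on which both the PSO pre-training and the BP adaptation operate.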