• Journal of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science


Accelerating CNN convolution computation on mobile GPUs

WANG Xiangxin1, SHI Yang2, WEN Mei2

  (1. Information Center of Fire Corps of Hunan Armed Police, Changsha 410205, China;
    2. College of Computer, National University of Defense Technology, Changsha 410073, China)
  • Received: 2016-11-08  Revised: 2017-02-15  Online: 2018-01-25  Published: 2018-01-25
  • Supported by:

    National Natural Science Foundation of China (61272145)

Accelerating CNN on mobile GPU

WANG Xiang-xin1,SHI Yang2,WEN Mei2   

  (1. Information Center of Fire Corps of Hunan Armed Police, Changsha 410205, China;
    2. College of Computer, National University of Defense Technology, Changsha 410073, China)
     
  • Received:2016-11-08 Revised:2017-02-15 Online:2018-01-25 Published:2018-01-25

Abstract:

Convolutional neural networks (CNNs) are playing an increasingly important role in fields such as image classification and speech recognition thanks to their excellent performance, and some researchers have sought to bring this deep learning process to mobile phones. However, because of the enormous computational cost of CNNs, the performance of ported programs has been unsatisfactory. To explore a solution, we implement the forward pass of a CNN on a mobile phone using the deep learning framework MXNet, and focus on another powerful computing device on the phone: the GPU. Using the OpenCL general-purpose programming framework, we implement the most time-consuming operation of the forward pass, convolution, as a matrix multiplication and offload it to the GPU; on top of this we apply several optimizations targeted at the mobile GPU. Experimental results show that we reduce the forward-pass time to half of the original.
 

Key words: CNN, mobile phone, mobile GPU, fast algorithm, OpenCL

Abstract:

Convolutional neural networks (CNNs) are playing an increasingly important role in areas such as image classification and speech recognition because of their excellent performance. Some researchers have sought to port this deep learning process to mobile phones, but the performance of the ported programs is unsatisfactory due to the huge computational cost of CNNs. To explore how to solve this problem, this paper uses a deep learning framework named MXNet to implement the forward pass of a CNN on mobile phones and focuses on the GPU, another powerful computing device on the phone. Based on the OpenCL general-purpose programming framework, we express the most time-consuming operation of the forward pass, convolution, as a matrix multiplication and move it to the GPU. Besides, several optimizations targeting the mobile GPU are made to achieve better performance. Finally, the experimental results show that we succeed in reducing the time of the forward pass to half of the original time.
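The core idea described above, lowering convolution to a single matrix multiplication so it maps onto a GEMM kernel (the so-called im2col approach), can be sketched in NumPy. This is a minimal illustration only: the paper's actual OpenCL kernel and its mobile-GPU tuning are not reproduced here, and the function names below are our own.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a single-channel feature map into a column."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1          # output size, stride 1, no padding
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(x, k):
    """Convolution (as cross-correlation) expressed as one matrix multiplication."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # Flattened kernel (1 x kh*kw) times patch matrix (kh*kw x oh*ow) = output.
    return (k.ravel() @ im2col(x, kh, kw)).reshape(oh, ow)
```

With multiple input and output channels, the same scheme turns the kernel tensor into a matrix of shape (out_channels, in_channels*kh*kw), so the whole layer becomes one large GEMM, which is exactly the shape of work a GPU executes efficiently.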
 

Key words: CNN, mobile phone, mobile GPU, fast algorithm, OpenCL