• Journal of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2025, Vol. 47 ›› Issue (10): 1737-1744.

• High Performance Computing •

  • Supported by: National Key R&D Program of China (2022YFB2803405); National Natural Science Foundation of China (62072464)

A spiking neural network accelerator based on approximate computing

XU Weikang, SUN Yan, ZHANG Jianmin

  1. (College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China)
  • Online: 2025-10-25  Published: 2025-10-28


Abstract: Spiking neural networks (SNNs) simulate biological neurons more faithfully than conventional artificial networks, and their high energy efficiency makes them exceptionally well suited to edge and end-device computing scenarios. However, in applications that are highly sensitive to power consumption, reducing power further remains a crucial objective. Approximate computing simplifies hardware design by tolerating a controlled degree of error, opening new opportunities for energy-efficient hardware in fault-tolerant applications. This paper explores methods for applying approximate computing to SNN accelerators. First, through analysis and experiments tailored to the application characteristics of SNNs, the distribution of the input data fed to the many adders in an SNN accelerator is characterized. Based on these characteristics, an application-sensitive error-evaluation metric for approximate arithmetic units, AARE (application-aware approximation error), is proposed. Using AARE together with the optimal approximate-adder selection strategy introduced in this paper, more suitable approximate arithmetic units can be chosen for a specific application. On this basis, an approximate-computing-based SNN hardware accelerator, AxSpike, is implemented using open-source EDA tools and PDKs, along with a corresponding simulator developed with snnTorch. Experimental results demonstrate that the accelerator achieves a 37.32% reduction in power consumption and a 31.26% reduction in area, with only a 3.47 percentage point decrease in accuracy, significantly enhancing the energy efficiency of SNN hardware accelerators.
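The abstract does not give AARE's exact definition, but the idea it describes — scoring an approximate adder by weighting its error with the application's actual input-data distribution rather than assuming uniform inputs — can be sketched as follows. The lower-part-OR adder (LOA) used here is a classic approximate adder chosen purely for illustration; the `aare`-style weighting and its exact formula are assumptions, not the paper's definition.

```python
# Illustrative sketch (not the paper's implementation): an application-aware
# error metric for an approximate adder, weighted by how often each operand
# pair occurs in a trace captured from the target application.
from collections import Counter


def loa_add(a, b, k, width=8):
    """Lower-part-OR adder (LOA): ORs the k least-significant bits instead
    of adding them, adds the upper bits exactly, and uses the AND of the
    lower parts' MSBs as an approximate carry into the upper part."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)                  # approximate lower sum
    carry = 1 if (a & b & (1 << (k - 1))) else 0   # approximate carry guess
    high = ((a >> k) + (b >> k) + carry) << k      # exact upper sum
    return (high | low) & ((1 << width) - 1)


def app_aware_error(operand_pairs, approx_add, width=8):
    """Mean absolute error of approx_add over an application trace,
    weighted by the observed frequency of each operand pair."""
    freq = Counter(operand_pairs)
    total = sum(freq.values())
    err = 0.0
    for (a, b), n in freq.items():
        exact = (a + b) & ((1 << width) - 1)
        err += abs(approx_add(a, b) - exact) * n
    return err / total
```

Under a metric like this, two adders with identical worst-case error can score very differently for a given workload: an adder whose errors fall on operand values the SNN rarely produces is preferred, which is the selection rationale the abstract attributes to AARE.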


Key words: approximate computing, spiking neural network, hardware accelerator, high energy efficiency, low power consumption