
Computer Engineering & Science, 2024, Vol. 46, Issue (08): 1361-1371.

• High Performance Computing •

Block-grained domain adaptation for neural networks at edge

XIN Gao-feng, LIU Yu-xiao, ZHANG Qing-long, HAN Rui, LIU Chi

  1. (School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China)
  • Received: 2023-11-10   Revised: 2023-12-29   Accepted: 2024-08-25   Online: 2024-08-25   Published: 2024-09-02

Abstract: Running deep neural networks (DNNs) on edge devices faces two challenges: model scaling and domain adaptation. Existing model scaling techniques and unsupervised online domain adaptation techniques suffer from coarse scaling granularity, a limited scaling space, and long online domain adaptation times. To address these two challenges, this paper proposes EdgeScaler, a block-grained model scaling and domain adaptation training method that consists of an offline phase and an online phase. For the model scaling challenge, blocks are detected and extracted from various DNNs in the offline phase and then converted into multiple derived blocks; in the online phase, combining these blocks and the connections between them provides a large-scale scaling space that solves the model scaling problem. For the domain adaptation challenge, a block-specific residual adapter is designed and inserted into the blocks in the offline phase; in the online phase, when a new target domain arrives, all adapters are trained to solve the domain adaptation problem for every option in the block-grained scaling space. Test results on a real edge device, the Jetson TX2, show that EdgeScaler reduces domain adaptation training time by an average of 85.14% and training energy consumption by an average of 84.1%, while providing a large-scale set of scaling options.
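To make the residual-adapter idea concrete, below is a minimal PyTorch sketch of how a block-specific residual adapter might be attached to a frozen backbone block so that only the adapter is trained when a new target domain arrives. The paper's implementation is not published; every name and hyperparameter here (ResidualAdapter, AdaptedBlock, bottleneck_ratio) is an illustrative assumption, not EdgeScaler's actual code.

```python
# Hypothetical sketch of a block-specific residual adapter; names and
# hyperparameters are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class ResidualAdapter(nn.Module):
    """A lightweight bottlenecked branch added to a block's output.

    Only this branch is trained online; the backbone block stays frozen.
    """

    def __init__(self, channels: int, bottleneck_ratio: int = 4):
        super().__init__()
        hidden = max(channels // bottleneck_ratio, 1)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)
        # Zero-init the up-projection so the adapter starts as an identity
        # mapping (an assumed, common initialization choice).
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps an extracted backbone block with its block-specific adapter."""

    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.block = block
        self.adapter = ResidualAdapter(channels)
        for p in self.block.parameters():
            p.requires_grad = False  # freeze the block; adapt only the adapter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


if __name__ == "__main__":
    backbone_block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
    block = AdaptedBlock(backbone_block, channels=64)
    x = torch.randn(1, 64, 32, 32)
    assert block(x).shape == x.shape
    # Online phase (sketch): pass only trainable (adapter) parameters to the
    # optimizer when a new target domain arrives.
    optimizer = torch.optim.SGD(
        (p for p in block.parameters() if p.requires_grad), lr=1e-3)
```

Zero-initializing the adapter's up-projection makes each adapted block start from its source-domain behavior, so training on a new target domain perturbs rather than replaces the frozen backbone; whether EdgeScaler uses this particular initialization is not stated in the abstract.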

Key words: deep neural network, edge device, elastic scaling, block, domain adaptation