• Journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2021, Vol. 43 ›› Issue (02): 228-234.


Research on virtual-physical address translation architectures of multi-GPU system

WEI Jin-hui, LI Chen, LU Jian-zhuang

  (College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China)
  • Received: 2020-06-12  Revised: 2020-08-24  Accepted: 2021-02-25  Online: 2021-02-25  Published: 2021-02-23

Abstract: In recent years, with the development of big data, the dataset sizes of GPU applications have increased significantly, which raises challenges for current GPUs. However, as Moore's Law reaches its limit, it is not easy to improve the performance of a single GPU any further. Instead, multi-GPU systems have been shown to be an effective solution due to their GPU processor-level parallelism. Support for memory virtualization in multi-GPU systems further simplifies programming and improves resource utilization. Memory virtualization requires support for address translation, and the overhead of address translation has an important impact on system performance. This paper studies two common address translation architectures in multi-GPU systems, namely the distributed address translation architecture and the centralized address translation architecture. Through simulation experiments, this paper analyzes and compares the advantages and drawbacks of the two address translation architectures in depth. On this basis, this paper proposes optimization suggestions for address translation in multi-GPU systems.
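To make the trade-off between the two architectures concrete, the sketch below models average translation latency under each design. This is an illustrative cost model, not the paper's simulation methodology; all latency figures (TLB hit cost, walk costs, queueing delay) are hypothetical assumptions chosen only to show the shape of the comparison.

```python
# Hypothetical latency parameters (cycles) -- assumptions for illustration only.
TLB_HIT = 1          # local TLB hit
LOCAL_WALK = 100     # page-table walk served on the local GPU
REMOTE_WALK = 300    # walk that must cross the inter-GPU interconnect
CENTRAL_WALK = 150   # walk served at a shared, centralized translation unit

def distributed_latency(hit_rate, remote_fraction):
    """Distributed architecture: each GPU has its own TLB and page-table
    walker, but misses to pages resident on another GPU pay interconnect cost."""
    miss_rate = 1 - hit_rate
    walk = (1 - remote_fraction) * LOCAL_WALK + remote_fraction * REMOTE_WALK
    return hit_rate * TLB_HIT + miss_rate * walk

def centralized_latency(hit_rate, queueing_delay):
    """Centralized architecture: one shared translation unit serves all GPUs,
    so misses additionally queue behind requests from other GPUs."""
    miss_rate = 1 - hit_rate
    return hit_rate * TLB_HIT + miss_rate * (CENTRAL_WALK + queueing_delay)

# Example: 90% TLB hit rate, 30% of misses touch remote pages,
# 50 cycles of contention at the central walker.
d = distributed_latency(0.9, 0.3)   # 0.9*1 + 0.1*(0.7*100 + 0.3*300) = 16.9
c = centralized_latency(0.9, 50)    # 0.9*1 + 0.1*(150 + 50)          = 20.9
```

Under these (assumed) numbers the distributed design wins, but the model also shows how the answer flips: a higher remote-page fraction penalizes the distributed design, while lower contention favors the centralized one, which is the kind of trade-off the paper's simulation experiments examine.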


Key words: multi-GPU system, memory virtualization, address translation architecture