Computer Engineering & Science ›› 2021, Vol. 43 ›› Issue (02): 228-234.
WEI Jin-hui,LI Chen,LU Jian-zhuang
Abstract: In recent years, with the development of big data, the dataset sizes of GPU applications have increased significantly, posing challenges for current GPUs. However, as Moore's Law reaches its limit, it is difficult to further improve the performance of a single GPU; instead, multi-GPU systems have proven to be an effective solution thanks to their processor-level parallelism. Support for memory virtualization in multi-GPU systems further simplifies programming and improves resource utilization. Memory virtualization requires support for address translation, whose overhead has a significant impact on system performance. This paper studies two common address translation architectures in multi-GPU systems: the distributed address translation architecture and the centralized address translation architecture. Through simulation experiments, it analyzes and compares the advantages and drawbacks of the two architectures in depth, and on this basis proposes optimization suggestions for address translation in multi-GPU systems.
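To make the translation step the abstract refers to concrete, the sketch below shows the basic virtual-to-physical mapping that both the distributed and centralized architectures must perform (each GPU consulting its own TLB/page table in the former, a shared translation unit in the latter). All names, the 4 KiB page size, and the toy page table are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of virtual-to-physical address translation.
# In hardware, a TLB miss would trigger a page-table walk; here we
# model the page table as a simple dictionary per GPU.

PAGE_SIZE = 4096  # assumed 4 KiB pages (illustrative)

# Toy page table: virtual page number -> physical page number
page_table = {0x10: 0x2A, 0x11: 0x2B}

def translate(vaddr, page_table):
    """Split a virtual address into page number and offset,
    look up the physical page, and rebuild the physical address."""
    vpn = vaddr // PAGE_SIZE
    offset = vaddr % PAGE_SIZE
    ppn = page_table.get(vpn)
    if ppn is None:
        # In a real system this would raise a page fault and
        # invoke the OS / driver to install a mapping.
        raise KeyError(f"page fault at vpn {vpn:#x}")
    return ppn * PAGE_SIZE + offset

# vpn 0x10 maps to ppn 0x2A; the page offset 0x123 is preserved.
paddr = translate(0x10123, page_table)  # -> 0x2A123
```

The architectural question the paper studies is where this lookup happens: replicated per GPU (distributed) versus performed by one shared unit (centralized), which trades duplication of translation state against contention at a single point.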
Key words: multi-GPU system, memory virtualization, address translation architecture
WEI Jin-hui, LI Chen, LU Jian-zhuang. Research on virtual-physical address translation architectures of multi-GPU system[J]. Computer Engineering & Science, 2021, 43(02): 228-234.
http://joces.nudt.edu.cn/EN/Y2021/V43/I02/228