• A journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science, 2022, Vol. 44, Issue (09): 1542-1549.

• High Performance Computing •

A Gatherv optimization method for large scale concurrency

SUN Hao-nan1,WANG Fei2,WEI Di2,YIN Wan-wang1,SHI Jun-da1   

  (1. National Research Center of Parallel Computer Engineering & Technology, Beijing 100080, China;
   2. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China)
  • Received: 2022-01-15; Revised: 2022-05-18; Accepted: 2022-09-25; Online: 2022-09-25; Published: 2022-09-25

Abstract: As an irregular MPI (Message Passing Interface) collective communication operation, Gatherv offers great flexibility for describing parallel communication behavior, but its irregularity makes it difficult to implement efficiently. Existing methods suffer from problems such as pronounced communication hotspots, high memory overhead, and low memory access efficiency, and therefore struggle to meet the performance requirements of today's large-scale parallel applications. A Gatherv optimization method for large-scale concurrency is proposed. Addressing key issues such as the level at which the optimization is applied and buffer management, the binomial tree model commonly used to implement regular collective communication is applied to the implementation of Gatherv. In addition, a message chain scheduling strategy is proposed to further reduce overhead and improve the optimization effect. Test results show that the proposed method effectively resolves the performance problems of existing methods and achieves efficient scaling of Gatherv performance under large-scale concurrency.
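
To make the binomial-tree idea concrete, the following is a minimal sketch in C with MPI point-to-point calls (not the authors' implementation) of gathering variable-length integer blocks to rank 0 along a binomial tree. For simplicity it assumes that every rank already knows the full counts[] array and that the root's displacements are the exact prefix sums of the counts; a real Gatherv provides counts and displacements only at the root, and the paper's buffer management and message chain scheduling are not reflected here.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: gather variable-length int blocks to rank 0 along a binomial tree.
   Assumes every rank knows counts[] (a simplification of Gatherv semantics). */
static void tree_gatherv_int(const int *sendbuf, int sendcount,
                             int *recvbuf, const int *counts,
                             const int *displs, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Staging buffer; worst case it holds the whole result (sketch only). */
    int total = displs[size - 1] + counts[size - 1];
    int *stage = malloc((size_t)total * sizeof(int));
    int held = sendcount;
    memcpy(stage, sendbuf, (size_t)sendcount * sizeof(int));

    for (int mask = 1; mask < size; mask <<= 1) {
        if (rank & mask) {
            /* Pass everything accumulated so far to the parent and stop. */
            MPI_Send(stage, held, MPI_INT, rank - mask, 0, comm);
            break;
        }
        int src = rank + mask;
        if (src < size) {
            /* Child src owns the contiguous ranks src .. src+mask-1 (clipped),
               so the size of its contribution is known from counts[]. */
            int last = (src + mask < size) ? src + mask : size;
            int incoming = 0;
            for (int r = src; r < last; r++) incoming += counts[r];
            MPI_Recv(stage + held, incoming, MPI_INT, src, 0, comm,
                     MPI_STATUS_IGNORE);
            held += incoming;
        }
    }

    /* Blocks arrive in rank order because each subtree covers contiguous
       ranks; with prefix-sum displacements a single copy finishes the job. */
    if (rank == 0)
        memcpy(recvbuf, stage, (size_t)held * sizeof(int));
    free(stage);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Toy example: rank r contributes r+1 integers, all equal to r. */
    int *counts = malloc((size_t)size * sizeof(int));
    int *displs = malloc((size_t)size * sizeof(int));
    for (int r = 0, off = 0; r < size; r++) {
        counts[r] = r + 1;
        displs[r] = off;
        off += counts[r];
    }
    int *sendbuf = malloc((size_t)counts[rank] * sizeof(int));
    for (int i = 0; i < counts[rank]; i++) sendbuf[i] = rank;

    int total = displs[size - 1] + counts[size - 1];
    int *recvbuf = (rank == 0) ? malloc((size_t)total * sizeof(int)) : NULL;

    tree_gatherv_int(sendbuf, counts[rank], recvbuf, counts, displs,
                     MPI_COMM_WORLD);
    if (rank == 0)
        printf("gathered %d elements on root\n", total);

    free(sendbuf); free(counts); free(displs); free(recvbuf);
    MPI_Finalize();
    return 0;
}

Because every subtree in the binomial gather covers a contiguous range of ranks, each parent can size its receive from the counts of that range; this is what lets the regular binomial-tree structure be reused for the irregular Gatherv pattern and avoids concentrating all incoming messages on the root.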


Key words: message passing interface (MPI), irregular collectives, Gatherv, Binomial-Tree, message chain scheduling