
计算机工程与科学 (Computer Engineering & Science) ›› 2022, Vol. 44 ›› Issue (07): 1191-1198.

• High Performance Computing •


A lightweight collective communication library for distributed deep learning

WANG Xiao-yu, DONG De-zun

  1. (College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China)
  • Received: 2021-12-20  Revised: 2022-03-03  Accepted: 2022-07-25  Online: 2022-07-25  Published: 2022-07-25
  • Supported by:
    Hunan Provincial Natural Science Foundation for Distinguished Young Scholars (2021JJ10050)


Abstract: Collective communication operations are widely used in distributed training; in particular, the AllReduce operation is used to synchronize model parameters across nodes. To obtain higher accuracy, datasets and neural network models keep growing in scale, so the communication overhead between nodes accounts for a large share of training time and has become a bottleneck for accelerating training. There have been many optimizations for collective operations in this setting, such as communication scheduling and gradient quantization, but they typically focus on how the operations are used rather than on the operations themselves. In fact, there are mismatches between collective operations and distributed training applications: for example, the latter does not require all nodes to synchronize gradients simultaneously, while the former does. This makes research on collective communication in distributed training necessary. However, we found that the communication frameworks currently used in distributed training are ill-suited to such work because of their complex architectures and large code bases. To overcome this difficulty, a lightweight collective communication library is designed and implemented to make it convenient to analyze and improve collective operations in distributed training. It supports mainstream frameworks and networks and has a clean architecture, which allows researchers to implement custom communication operations efficiently and to apply them in mainstream experimental environments for wider impact. The library is evaluated with both pure collective operations and distributed deep learning applications under various network conditions. The experiments show that it achieves performance comparable to MPI and can serve as a collective communication library for analyzing and studying gradient synchronization in distributed training.
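
The abstract does not show the library's interface, so as background the short Python sketch below illustrates what AllReduce-based gradient synchronization in data-parallel training looks like. It uses mpi4py purely as a stand-in for the MPI baseline mentioned above; the buffer size, variable names, and the averaging step are illustrative assumptions, not the paper's API.

    # Minimal sketch of AllReduce-based gradient synchronization (assumed
    # setup; mpi4py stands in for the MPI baseline, not the paper's library).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    world_size = comm.Get_size()

    # Each worker holds the gradients of its local mini-batch
    # (random values here; a real trainer would use backpropagation output).
    local_grads = np.random.rand(1024).astype(np.float32)

    # AllReduce sums the per-worker gradients; afterwards every rank holds
    # the same buffer, which is averaged and applied identically on each
    # model replica, i.e. the synchronization step the abstract describes.
    synced_grads = np.empty_like(local_grads)
    comm.Allreduce(local_grads, synced_grads, op=MPI.SUM)
    synced_grads /= world_size

    print(f"rank {comm.Get_rank()}: first averaged gradient = {synced_grads[0]:.4f}")

Launched with, for example, mpirun -np 4 python allreduce_sketch.py (a hypothetical file name), every rank prints the same averaged value; this per-step synchronization is exactly the cost the abstract identifies as the training bottleneck.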

Key words: distributed deep learning, neural network, collective communication, Gloo, Unified Communication X (UCX)