
Computer Engineering & Science, 2024, Vol. 46, Issue (06): 959-967.

• High Performance Computing •


Exploration of a many-core dataflow hardware architecture based on the Actor model

ZHANG Jia-hao, DENG Jin-yi, YIN Shou-yi, WEI Shao-jun, HU Yang

  1. (School of Integrated Circuits, Tsinghua University, Beijing 100084, China)
  • Received: 2023-10-06  Revised: 2023-11-23  Accepted: 2024-06-25  Online: 2024-06-25  Published: 2024-06-17


Abstract: The distributed training of ultra-large-scale AI models poses challenges to the communication capability and scalability of chip architectures. Wafer-scale chips integrate a large number of computing cores and an interconnection network on a single wafer, achieving ultra-high computing density and communication performance, which makes them an ideal choice for training ultra-large-scale AI models. AMCoDA is a many-core dataflow hardware architecture based on the Actor model; it aims to exploit the high parallelism, asynchronous message passing, and high scalability of the Actor parallel programming model to realize the distributed training of AI models on wafer-scale chips. The design of AMCoDA spans three levels: the computational model, the execution model, and the hardware architecture. Experiments show that AMCoDA broadly supports the parallel patterns and collective-communication patterns used in distributed training, and can flexibly and efficiently deploy and execute complex distributed training strategies.
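The abstract's three key ingredients, compute cores with private state, asynchronous message passing between them, and collective-communication patterns, can be made concrete with a small software analogy. The sketch below is not AMCoDA itself (which is a hardware design), and every name in it is invented for illustration: each core is modeled as a Python actor with a FIFO mailbox, and the cores cooperate in a ring all-reduce, a collective pattern common in data-parallel distributed training, using nothing but asynchronous messages.

```python
import threading
import queue

class CoreActor(threading.Thread):
    """One compute core modeled as an actor: private state plus a FIFO mailbox.

    Cores never read each other's state; they interact only through
    asynchronous message passing, as in the Actor programming model.
    """

    def __init__(self, rank: int, world: int, grads: list[float]):
        super().__init__()
        self.rank = rank
        self.world = world
        self.grads = list(grads)            # one gradient chunk per rank
        self.mailbox: queue.Queue = queue.Queue()
        self.peers: list['CoreActor'] = []  # wired up before start()

    def send(self, dst: int, msg) -> None:
        """Asynchronous send: enqueue a message into the destination's mailbox."""
        self.peers[dst].mailbox.put(msg)

    def run(self) -> None:
        n = self.world
        nxt = (self.rank + 1) % n           # ring topology: talk to one neighbor
        # Kick off reduce-scatter by passing our "own" chunk around the ring.
        self.send(nxt, ('reduce', 0, self.rank, self.grads[self.rank]))
        gathered = 0
        while gathered < n - 1:             # done once n-1 final chunks arrive
            kind, step, c, v = self.mailbox.get()
            if kind == 'reduce':
                self.grads[c] += v          # accumulate the partial sum for chunk c
                if step < n - 2:
                    self.send(nxt, ('reduce', step + 1, c, self.grads[c]))
                else:
                    # Chunk c is now fully reduced here; circulate the result.
                    self.send(nxt, ('gather', 0, c, self.grads[c]))
            else:                           # 'gather': adopt and forward final value
                self.grads[c] = v
                gathered += 1
                if step < n - 2:
                    self.send(nxt, ('gather', step + 1, c, v))

if __name__ == '__main__':
    n = 4
    # Each "core" starts with its own gradient shard: chunk value = rank + chunk index.
    cores = [CoreActor(r, n, [float(r + c) for c in range(n)]) for r in range(n)]
    for a in cores:
        a.peers = cores
    for a in cores:
        a.start()
    for a in cores:
        a.join()
    expected = [float(sum(r + c for r in range(n))) for c in range(n)]
    assert all(a.grads == expected for a in cores)
    print('every core now holds the all-reduced gradients:', cores[0].grads)
```

Because every actor receives messages from exactly one ring neighbor and mailboxes are FIFO, the stepwise protocol stays consistent with no global barrier and no shared state; that combination of high parallelism with asynchronous, point-to-point communication is the property of the Actor model that the abstract credits for AMCoDA's scalability on wafer-scale chips.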


Key words: wafer-scale chip, distributed training, Actor model, many-core dataflow architecture