Emerging Trends in Multi-Accelerator and Distributed System for ML: Devices, Architectures, Tools and Applications

Muhammad Shafique

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As the complexity and diversity of machine/deep learning models increase at a rapid pace, multi-accelerator and distributed systems are becoming a critical component of the machine learning (ML) stack. Besides efficient compute engines and communication mechanisms, these systems also require intelligent strategies for mapping workloads to accelerators and for memory management, in order to achieve high performance and energy efficiency while meeting the demands of high-performance ML/AI systems. This article presents an overview of emerging trends in multi-accelerator and distributed systems designed to handle complex AI-powered application workloads.
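To make the workload-mapping problem concrete, the sketch below shows one simple strategy of the kind the abstract alludes to: a greedy longest-processing-time heuristic that assigns model partitions to accelerators so as to balance projected finish times. All names, units, and the heuristic itself are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch (assumption, not the article's method): greedily map
# ML workload partitions to accelerators, always placing the next-largest
# partition on the accelerator that would finish it earliest (LPT heuristic).

from dataclasses import dataclass, field

@dataclass
class Accelerator:
    name: str
    throughput: float          # operations per second (hypothetical units)
    busy_until: float = 0.0    # projected time when this accelerator's queue drains
    assigned: list = field(default_factory=list)

def map_workloads(partitions, accelerators):
    """Map (name, ops) partitions to accelerators, minimizing the makespan greedily."""
    # Schedule the largest partitions first so small ones fill the gaps.
    for name, ops in sorted(partitions, key=lambda p: -p[1]):
        # Pick the accelerator with the lowest projected finish time for this partition.
        best = min(accelerators, key=lambda a: a.busy_until + ops / a.throughput)
        best.busy_until += ops / best.throughput
        best.assigned.append(name)
    return {a.name: a.assigned for a in accelerators}
```

For example, with a fast and a slow accelerator, `map_workloads([("conv1", 4.0), ("fc1", 1.0), ("conv2", 2.0)], [Accelerator("gpu0", 2.0), Accelerator("npu0", 1.0)])` places the large `conv1` on the faster device and offloads `conv2` to the slower one. Real systems must also model communication cost and memory capacity, which this sketch deliberately omits.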

Original language: English (US)
Title of host publication: 2023 60th ACM/IEEE Design Automation Conference, DAC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350323481
State: Published - 2023
Event: 60th ACM/IEEE Design Automation Conference, DAC 2023 - San Francisco, United States
Duration: Jul 9, 2023 – Jul 13, 2023

Publication series

Name: Proceedings - Design Automation Conference
Volume: 2023-July
ISSN (Print): 0738-100X

Conference

Conference: 60th ACM/IEEE Design Automation Conference, DAC 2023
Country/Territory: United States
City: San Francisco
Period: 7/9/23 – 7/13/23

Keywords

  • AI
  • Architecture
  • Deep Learning
  • Distributed System
  • DNN
  • Efficiency
  • Energy
  • Machine Learning
  • Memory
  • Multi-Accelerator

ASJC Scopus subject areas

  • Computer Science Applications
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Modeling and Simulation
