CAMEL: Co-Designing AI Models and eDRAMs for Efficient On-Device Learning

Sai Qian Zhang, Thierry Tambe, Nestor Cuevas, Gu Yeon Wei, David Brooks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

On-device learning allows AI models to adapt to user data, thereby enhancing service quality on edge platforms. However, training AI models on resource-limited devices poses significant challenges due to the demanding computing workload and the substantial memory consumption and data access required by deep neural networks (DNNs). To address these issues, we propose utilizing embedded dynamic random-access memory (eDRAM) as the primary storage medium for transient training data. Compared with static random-access memory (SRAM), eDRAM provides higher storage density and lower leakage power, which together reduce access cost and leakage. Nevertheless, eDRAM requires periodic, power-hungry refresh operations to preserve the integrity of stored data, and these refreshes can degrade system performance. To minimize expensive eDRAM refreshes, it is beneficial to shorten the lifetime of data stored during training. To achieve this, we adopt algorithm-hardware co-design, introducing a family of reversible DNN architectures that effectively reduce data lifetime and storage cost throughout training. Additionally, we present a highly efficient on-device training engine named CAMEL, which uses eDRAM as its primary on-chip memory. This engine enables efficient on-device training with significantly reduced memory usage and off-chip DRAM traffic while maintaining superior training accuracy. We evaluate the CAMEL system on multiple DNNs and datasets, demonstrating a 2.5× training speedup and 2.8× training energy savings over baseline hardware platforms.
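The abstract does not detail the reversible architectures CAMEL uses, but the underlying principle is standard: if a layer's inputs can be recomputed exactly from its outputs, intermediate activations need not be kept alive in eDRAM across the forward and backward passes, which shortens data lifetime and reduces refresh pressure. Below is a minimal Python sketch assuming a RevNet-style additive-coupling block; the branch functions F and G are hypothetical placeholders, not the layers from the paper.

import numpy as np

# Hypothetical branch functions standing in for the residual branches of a
# reversible block; they need not be invertible themselves, since only the
# additive coupling has to be undone.
def F(x):
    return np.tanh(x)

def G(x):
    return 0.5 * np.sin(x)

def reversible_forward(x1, x2):
    # Additive coupling (RevNet-style): the outputs determine the inputs exactly.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    # Reconstruct the inputs from the outputs, so the inputs never need to
    # stay resident in eDRAM after the block's forward pass.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 8))
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)

Because the inverse is exact, the backward pass can regenerate activations on the fly rather than retaining them, trading a small amount of recomputation for much shorter on-chip data lifetimes.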

Original language: English (US)
Title of host publication: Proceedings - 2024 IEEE International Symposium on High-Performance Computer Architecture, HPCA 2024
Publisher: IEEE Computer Society
Pages: 861-875
Number of pages: 15
ISBN (Electronic): 9798350393132
DOIs
State: Published - 2024
Event: 30th IEEE International Symposium on High-Performance Computer Architecture, HPCA 2024 - Edinburgh, United Kingdom
Duration: Mar 2, 2024 - Mar 6, 2024

Publication series

Name: Proceedings - International Symposium on High-Performance Computer Architecture
ISSN (Print): 1530-0897

Conference

Conference: 30th IEEE International Symposium on High-Performance Computer Architecture, HPCA 2024
Country/Territory: United Kingdom
City: Edinburgh
Period: 3/2/24 - 3/6/24

ASJC Scopus subject areas

  • Hardware and Architecture
