Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap

Muhammad Shafique, Rehan Hafiz, Muhammad Usama Javed, Sarmad Abbas, Lukas Sekanina, Zdenek Vasicek, Vojtech Mrazek

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT) / Internet of Everything (IoE), and Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while such systems continuously interact with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, these systems must not only deliver high performance within a tight power/energy envelope, but also be intelligent/cognitive, self-learning, and robust. As a result, a surge of artificial intelligence research (e.g., deep learning and other machine learning techniques) has emerged in numerous communities. This paper discusses the challenges and opportunities in building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired and emerging computing paradigms, such as approximate computing, which can further reduce the energy requirements of the system. First, we walk through an approximate-computing-based methodology for developing energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that an in-depth analysis of the datapaths of a DNN enables a better selection of approximate computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. Finally, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.
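To illustrate the kind of trade-off analysis the abstract refers to, the following is a minimal, hypothetical Python sketch: it evaluates a few candidate approximate multipliers on stand-in DNN datapath operands and keeps the Pareto-optimal (error vs. energy) configurations, in the spirit of a multi-objective selection. The multiplier models, energy costs, and operand statistics are illustrative placeholders, not values or components from the paper.

```python
# Hypothetical sketch: rank candidate approximate multipliers for a DNN datapath
# by error vs. relative energy, and keep the Pareto-optimal ones.
# All numbers below are made-up placeholders for illustration only.
import numpy as np

def truncated_mul(a, b, drop_bits):
    """8-bit multiplier that zeroes the lowest `drop_bits` of each operand."""
    mask = ~((1 << drop_bits) - 1) & 0xFF
    return (a & mask) * (b & mask)

# Candidate multipliers: (name, function, assumed relative energy cost).
candidates = [
    ("exact",      lambda a, b: a * b,                  1.00),
    ("trunc-2bit", lambda a, b: truncated_mul(a, b, 2), 0.70),
    ("trunc-4bit", lambda a, b: truncated_mul(a, b, 4), 0.45),
]

rng = np.random.default_rng(0)
acts = rng.integers(0, 256, size=10_000)   # stand-in activation operands
wgts = rng.integers(0, 256, size=10_000)   # stand-in weight operands
exact = acts * wgts

results = []
for name, mul, energy in candidates:
    approx = np.array([mul(a, w) for a, w in zip(acts, wgts)])
    mre = np.mean(np.abs(approx - exact) / np.maximum(exact, 1))  # mean relative error
    results.append((name, mre, energy))

# Keep Pareto-optimal points: no other candidate is better or equal in both
# objectives and strictly better in at least one.
pareto = [r for r in results
          if not any((o[1] <= r[1] and o[2] < r[2]) or (o[1] < r[1] and o[2] <= r[2])
                     for o in results)]
for name, mre, energy in sorted(pareto, key=lambda r: r[2]):
    print(f"{name:10s}  mean relative error = {mre:.4f}  energy = {energy:.2f}x")
```

In a full flow such as the one the paper outlines, the error metric would be measured at the application level (e.g., classification accuracy per DNN layer) and the search over configurations would be driven by a multi-objective evolutionary algorithm rather than the exhaustive comparison shown here.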

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2017
Editors: Ricardo Reis, Mircea Stan, Michael Huebner, Nikolaos Voros
Publisher: IEEE Computer Society Press
Pages: 627-632
Number of pages: 6
ISBN (Electronic): 9781509067626
DOIs
State: Published - Jul 20 2017
Event: 2017 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2017 - Bochum, North Rhine-Westphalia, Germany
Duration: Jul 3 2017 - Jul 5 2017

Publication series

Name: Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
Volume: 2017-July
ISSN (Print): 2159-3469
ISSN (Electronic): 2159-3477

Conference

Conference: 2017 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2017
Country/Territory: Germany
City: Bochum, North Rhine-Westphalia
Period: 7/3/17 - 7/5/17

Keywords

  • accelerators
  • adaptive
  • approximate computing
  • architecture
  • CGRA
  • deep learning
  • energy efficiency
  • FPGA
  • low power
  • machine learning
  • memory
  • neural networks
  • performance
  • roadmap

ASJC Scopus subject areas

  • Hardware and Architecture
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
