The exponentially growing rate of data production in the current era of the internet of things (IoT), cyber-physical systems (CPS), and big data poses ever-increasing demands on massive data processing, storage, and transmission. Such systems are required to be robust, intelligent, and self-learning while offering high performance and power/energy efficiency. As a result, a surge of interest in artificial intelligence and machine learning research has emerged across numerous communities (e.g., deep learning and hardware architecture). This chapter first provides a brief overview of machine learning and neural networks, followed by a few of the most prominent techniques used so far for designing energy-efficient accelerators for machine learning algorithms, particularly deep neural networks. Inspired by the scalable-effort principles of the human brain (i.e., scaling computing effort to the required precision of a task, or for the recurrent execution of the same or similar tasks), we focus on the (re-)emerging area of approximate computing (aka InExact Computing), which relaxes the bounds of precise/exact computing to open new opportunities for improving the area, power/energy, and performance efficiency of systems by orders of magnitude at the cost of reduced output quality. We also walk through a holistic methodology that covers the complete design flow, i.e., from algorithms to architectures. Finally, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.
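As a generic illustration of the quality-vs-efficiency trade-off that approximate computing exploits (this sketch is not taken from the chapter; the function name and bit-width parameter `k` are illustrative assumptions), a classic lower-part-OR approximate adder replaces the carry-propagating addition of the low `k` bits with a cheap bitwise OR, bounding the error while shortening the critical path in hardware:

```python
def approx_add(a: int, b: int, k: int) -> int:
    """Approximate unsigned addition (lower-part-OR adder sketch):
    the low k bits are OR-ed instead of added, so no carry chain is
    needed there; only the upper bits are added exactly. The absolute
    error is bounded by roughly 2**k."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)       # cheap OR in place of a carry-propagating add
    high = ((a >> k) + (b >> k)) << k   # exact addition on the upper bits
    return high | low

# Exact vs. approximate result for a sample operand pair
exact = 1234 + 5678          # 6912
approx = approx_add(1234, 5678, 4)     # small, bounded error
```

Widening `k` trades more output quality for a shorter (cheaper, lower-energy) adder, which is precisely the kind of scalable-effort knob the chapter discusses at the accelerator level.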
- Original language: English (US)
- Title of host publication: Machine Learning in VLSI Computer-Aided Design
- Publisher: Springer International Publishing
- Number of pages: 32
- State: Published - Jan 1 2019
ASJC Scopus subject areas
- Computer Science (all)