HW/SW co-design and co-optimizations for deep learning

Alberto Marchisio, Muhammad Abdullah Hanif, Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep Learning algorithms have been proven to provide state-of-the-art results in many applications, but at the cost of high computational complexity. Therefore, accelerating such algorithms in hardware is highly desirable. However, since the computational requirements grow exponentially along with the accuracy, their demand for hardware resources is significant. To tackle this issue, we propose a methodology, involving both software and hardware, to optimize Deep Neural Networks (DNNs). We discuss and analyze pruning, approximations through quantization, and specialized accelerators for DNN inference. For each phase of the methodology, we provide quantitative comparisons with existing techniques and hardware platforms.
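To make the two software-side optimizations named in the abstract concrete, the sketch below shows a generic magnitude-based weight pruning step followed by symmetric uniform 8-bit quantization. This is a hypothetical illustration of these standard techniques, not the paper's specific method; the function names, the sparsity level, and the bit-width are assumptions for the example.

```python
# Hypothetical sketch (not the paper's exact method): magnitude-based
# pruning and symmetric uniform quantization of a weight matrix.
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` is reached."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_uniform(weights, bits=8):
    """Map floats to signed `bits`-bit integers with a single scale factor."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8 bits
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int8), scale                 # dequantize as q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)  # toy weight matrix
w_pruned = magnitude_prune(w, sparsity=0.5)         # half the weights zeroed
q, scale = quantize_uniform(w_pruned, bits=8)       # int8 weights + scale
```

Pruning shrinks the number of multiply-accumulate operations an accelerator must perform, while quantization reduces memory footprint and lets the datapath use cheap integer arithmetic; both directly lower the hardware resource demand the abstract targets.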

Original language: English (US)
Title of host publication: Workshop Proceedings - 2018 INTelligent Embedded Systems Architectures and Applications, INTESA 2018
Publisher: Association for Computing Machinery
Pages: 13-18
Number of pages: 6
ISBN (Electronic): 9781450365987
DOIs
State: Published - Oct 4 2018
Event: 2018 Workshop on INTelligent Embedded Systems Architectures and Applications, INTESA 2018 - Torino, Italy
Duration: Oct 4 2018 → …

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 2018 Workshop on INTelligent Embedded Systems Architectures and Applications, INTESA 2018
Country: Italy
City: Torino
Period: 10/4/18 → …

Keywords

  • Deep Learning
  • Pruning
  • Quantization
  • Systolic Array

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Computer Networks and Communications
