Abstract
Spiking Neural Networks (SNNs) have the potential to achieve high accuracy in unsupervised learning settings with ultra-low energy consumption due to their bio-plausible sparse computations. Their unsupervised learning capabilities enable SNNs to efficiently learn from unlabeled data, which is desirable for real-world applications, since gathering unlabeled data is cheaper than gathering labeled data. These advantages make SNNs suitable for solving Machine Learning (ML) tasks on resource- and energy-constrained embedded platforms. However, state-of-the-art SNN models require large memory and high energy consumption to achieve high accuracy, which makes it challenging to deploy SNNs on embedded platforms. In this chapter, we discuss our design methodology for improving the energy efficiency of SNNs to enable their embedded implementations, while maintaining accuracy under unsupervised learning settings and meeting the memory and energy constraints. The key ideas of our design methodology are reducing neuron operations, improving the learning quality, quantizing the network parameters, and employing approximate DRAM while considering the memory and energy budgets.
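One of the key ideas listed above, quantizing the network parameters, can be illustrated with a minimal sketch. The uniform rounding-based scheme below, along with the function name, bit-width, and weight-matrix size, are illustrative assumptions only and do not reflect the chapter's actual quantization settings.

```python
import numpy as np

def quantize_weights(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniformly quantize synaptic weights to 2**num_bits discrete levels.

    A generic post-training quantization sketch; the rounding scheme and
    bit-width are placeholders, not the chapter's actual method.
    """
    w_min, w_max = weights.min(), weights.max()
    levels = 2 ** num_bits - 1
    step = (w_max - w_min) / levels           # quantization step size
    if step == 0:                             # constant weight matrix: nothing to quantize
        return weights.copy()
    q = np.round((weights - w_min) / step)    # map each weight to an integer level
    return w_min + q * step                   # dequantize back to the float range

# Example: quantize a hypothetical 784x400 excitatory weight matrix to 4 bits
weights = np.random.rand(784, 400).astype(np.float32)
weights_q = quantize_weights(weights, num_bits=4)
print(np.unique(weights_q).size)              # at most 2**4 = 16 distinct values
```

Lowering the bit-width reduces the memory footprint of the synaptic weights, which is the motivation for including quantization among the methodology's key ideas.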
| Original language | English (US) |
|---|---|
| Title of host publication | Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing |
| Subtitle of host publication | Software Optimizations and Hardware/Software Codesign |
| Publisher | Springer Nature |
| Pages | 15-35 |
| Number of pages | 21 |
| ISBN (Electronic) | 9783031399329 |
| ISBN (Print) | 9783031399312 |
| DOIs | |
| State | Published - Jan 1 2023 |
Keywords
- Approximate DRAM
- Embedded systems
- Energy efficiency
- Learning enhancements
- Memory optimization
- Spiking neural networks
ASJC Scopus subject areas
- General Computer Science
- General Engineering
- General Social Sciences