TY - GEN
T1 - TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents
T2 - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
AU - Putra, Rachmad Vidya Wicaksana
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Autonomous mobile agents (e.g., mobile ground robots and UAVs) typically require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks (e.g., object recognition) while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs), as they offer low-power/energy processing due to their sparse computations and efficient online learning through bio-inspired learning mechanisms that adapt to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (i.e., the timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for the inference phase only, thereby preventing SNN systems from achieving further energy efficiency gains in both the training and inference phases. These techniques also restrict SNN systems from performing efficient online learning at run time. Toward this, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both the training and inference phases, while keeping the accuracy close to that of SNNs without timestep reduction. The key ideas of TopSpark include: (1) analyzing the impact of different timestep settings on accuracy; (2) identifying the neuron parameters that significantly affect accuracy under different timesteps; (3) employing parameter enhancements that enable SNNs to learn and infer effectively with the reduced spiking activity caused by shorter timesteps; and (4) developing a strategy to trade off accuracy, latency, and energy to meet the design requirements. The experimental results show that TopSpark reduces SNN latency by 3.9x and energy consumption by 3.5x for training and 3.3x for inference on average, across different network sizes, learning rules, and workloads, while maintaining accuracy within 2% of that of SNNs without timestep reduction. In this manner, TopSpark enables low-power/energy-efficient SNN processing for autonomous mobile agents.
AB - Autonomous mobile agents (e.g., mobile ground robots and UAVs) typically require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks (e.g., object recognition) while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs), as they offer low-power/energy processing due to their sparse computations and efficient online learning through bio-inspired learning mechanisms that adapt to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (i.e., the timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for the inference phase only, thereby preventing SNN systems from achieving further energy efficiency gains in both the training and inference phases. These techniques also restrict SNN systems from performing efficient online learning at run time. Toward this, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both the training and inference phases, while keeping the accuracy close to that of SNNs without timestep reduction. The key ideas of TopSpark include: (1) analyzing the impact of different timestep settings on accuracy; (2) identifying the neuron parameters that significantly affect accuracy under different timesteps; (3) employing parameter enhancements that enable SNNs to learn and infer effectively with the reduced spiking activity caused by shorter timesteps; and (4) developing a strategy to trade off accuracy, latency, and energy to meet the design requirements. The experimental results show that TopSpark reduces SNN latency by 3.9x and energy consumption by 3.5x for training and 3.3x for inference on average, across different network sizes, learning rules, and workloads, while maintaining accuracy within 2% of that of SNNs without timestep reduction. In this manner, TopSpark enables low-power/energy-efficient SNN processing for autonomous mobile agents.
UR - http://www.scopus.com/inward/record.url?scp=85182523392&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85182523392&partnerID=8YFLogxK
U2 - 10.1109/IROS55552.2023.10342499
DO - 10.1109/IROS55552.2023.10342499
M3 - Conference contribution
AN - SCOPUS:85182523392
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 3561
EP - 3567
BT - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 October 2023 through 5 October 2023
ER -