TY - GEN
T1 - ReSpawn
T2 - 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021
AU - Putra, Rachmad Vidya Wicaksana
AU - Hanif, Muhammad Abdullah
AU - Shafique, Muhammad
N1 - Funding Information:
This work was partly supported by Intel Corporation through gift funding for the project "Cost-Effective Dependability for Deep Neural Networks and Spiking Neural Networks", and by the Indonesia Endowment Fund for Education (LPDP) through its Graduate Scholarship Program.
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Spiking neural networks (SNNs) have shown potential for low-energy operation with unsupervised learning capabilities due to their biologically inspired computation. However, they may suffer from accuracy degradation when their processing is performed in the presence of hardware-induced faults in memories, which can arise from manufacturing defects or voltage-induced approximation errors. Since recent works still focus on fault modeling and random fault injection in SNNs, the impact of memory faults in SNN hardware architectures on accuracy and the respective fault-mitigation techniques are not thoroughly explored. Toward this, we propose ReSpawn, a novel framework for mitigating the negative impacts of faults in both the off-chip and on-chip memories for resilient and energy-efficient SNNs. The key mechanisms of ReSpawn are: (1) analyzing the fault tolerance of SNNs; and (2) improving the SNN fault tolerance through (a) fault-aware mapping (FAM) in memories, and (b) fault-aware training-and-mapping (FATM). If the training dataset is not fully available, FAM is employed through efficient bit-shuffling techniques that place the significant bits on the non-faulty memory cells and the insignificant bits on the faulty ones, while minimizing the memory access energy. Meanwhile, if the training dataset is fully available, FATM is employed by considering the faulty memory cells in the data mapping and training processes. The experimental results show that, compared to the baseline SNN without fault-mitigation techniques, ReSpawn with a fault-aware mapping scheme improves the accuracy by up to 70% for a network with 900 neurons without retraining.
AB - Spiking neural networks (SNNs) have shown potential for low-energy operation with unsupervised learning capabilities due to their biologically inspired computation. However, they may suffer from accuracy degradation when their processing is performed in the presence of hardware-induced faults in memories, which can arise from manufacturing defects or voltage-induced approximation errors. Since recent works still focus on fault modeling and random fault injection in SNNs, the impact of memory faults in SNN hardware architectures on accuracy and the respective fault-mitigation techniques are not thoroughly explored. Toward this, we propose ReSpawn, a novel framework for mitigating the negative impacts of faults in both the off-chip and on-chip memories for resilient and energy-efficient SNNs. The key mechanisms of ReSpawn are: (1) analyzing the fault tolerance of SNNs; and (2) improving the SNN fault tolerance through (a) fault-aware mapping (FAM) in memories, and (b) fault-aware training-and-mapping (FATM). If the training dataset is not fully available, FAM is employed through efficient bit-shuffling techniques that place the significant bits on the non-faulty memory cells and the insignificant bits on the faulty ones, while minimizing the memory access energy. Meanwhile, if the training dataset is fully available, FATM is employed by considering the faulty memory cells in the data mapping and training processes. The experimental results show that, compared to the baseline SNN without fault-mitigation techniques, ReSpawn with a fault-aware mapping scheme improves the accuracy by up to 70% for a network with 900 neurons without retraining.
KW - Approximation errors
KW - Energy efficiency
KW - Fault tolerance
KW - Fault-aware mapping
KW - Fault-aware training
KW - Manufacturing defects
KW - Memory faults
KW - Spiking neural networks
UR - http://www.scopus.com/inward/record.url?scp=85115835334&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115835334&partnerID=8YFLogxK
U2 - 10.1109/ICCAD51958.2021.9643524
DO - 10.1109/ICCAD51958.2021.9643524
M3 - Conference contribution
AN - SCOPUS:85115835334
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2021 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 November 2021 through 4 November 2021
ER -