TY - JOUR
T1 - A Quality-Aware Voltage Overscaling Framework to Improve the Energy Efficiency and Lifetime of TPUs Based on Statistical Error Modeling
AU - Senobari, Alireza
AU - Vafaei, Jafar
AU - Akbari, Omid
AU - Hochberger, Christian
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - Deep neural networks (DNNs) are a class of artificial intelligence models inspired by the structure and function of the human brain. They are designed to process and learn from large amounts of data, which makes them particularly well suited for tasks such as image and speech recognition. Applications of DNNs are experiencing rapid growth, driven by the deployment of specialized accelerators such as Google's Tensor Processing Units (TPUs). In large-scale deployments, the energy efficiency of such accelerators can become a critical concern. In the voltage overscaling (VOS) technique, the operating voltage of a system is scaled down below the nominal operating voltage, which increases the energy efficiency and lifetime of digital circuits. VOS is usually applied without changing the operating frequency, which results in timing errors. However, some applications, such as multimedia processing and DNNs, have an intrinsic resilience to errors and noise. In this paper, we exploit the inherent resilience of DNNs to propose a quality-aware voltage overscaling framework for TPUs, named X-TPU, which offers higher energy efficiency and a longer lifetime than conventional TPUs. The X-TPU framework consists of two main parts: a modified TPU architecture that supports runtime voltage overscaling, and an algorithm based on statistical error modeling that determines the voltage of each neuron such that the output quality remains above a given user-defined quality threshold. We synthesized a single-neuron architecture using a 15-nm FinFET technology under various operating voltage levels. We then extracted statistical error models for a neuron corresponding to those voltage levels. Using these models and the proposed algorithm, we determined the appropriate voltage of each neuron, i.e., the voltage level of each column of the X-TPU. Results show that running a DNN on the X-TPU can achieve a 32% energy saving with only a 0.6% accuracy loss.
AB - Deep neural networks (DNNs) are a class of artificial intelligence models inspired by the structure and function of the human brain. They are designed to process and learn from large amounts of data, which makes them particularly well suited for tasks such as image and speech recognition. Applications of DNNs are experiencing rapid growth, driven by the deployment of specialized accelerators such as Google's Tensor Processing Units (TPUs). In large-scale deployments, the energy efficiency of such accelerators can become a critical concern. In the voltage overscaling (VOS) technique, the operating voltage of a system is scaled down below the nominal operating voltage, which increases the energy efficiency and lifetime of digital circuits. VOS is usually applied without changing the operating frequency, which results in timing errors. However, some applications, such as multimedia processing and DNNs, have an intrinsic resilience to errors and noise. In this paper, we exploit the inherent resilience of DNNs to propose a quality-aware voltage overscaling framework for TPUs, named X-TPU, which offers higher energy efficiency and a longer lifetime than conventional TPUs. The X-TPU framework consists of two main parts: a modified TPU architecture that supports runtime voltage overscaling, and an algorithm based on statistical error modeling that determines the voltage of each neuron such that the output quality remains above a given user-defined quality threshold. We synthesized a single-neuron architecture using a 15-nm FinFET technology under various operating voltage levels. We then extracted statistical error models for a neuron corresponding to those voltage levels. Using these models and the proposed algorithm, we determined the appropriate voltage of each neuron, i.e., the voltage level of each column of the X-TPU. Results show that running a DNN on the X-TPU can achieve a 32% energy saving with only a 0.6% accuracy loss.
KW - accuracy
KW - approximate computing
KW - deep neural networks
KW - energy efficiency
KW - statistical error analysis
KW - TPU
KW - voltage overscaling
UR - http://www.scopus.com/inward/record.url?scp=85197553970&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85197553970&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3422012
DO - 10.1109/ACCESS.2024.3422012
M3 - Article
AN - SCOPUS:85197553970
SN - 2169-3536
VL - 12
SP - 92181
EP - 92197
JO - IEEE Access
JF - IEEE Access
ER -