TY - JOUR
T1 - CAxCNN: Towards the Use of Canonic Sign Digit Based Approximation for Hardware-Friendly Convolutional Neural Networks
T2 - IEEE Access
AU - Riaz, Mohsin
AU - Hafiz, Rehan
AU - Khaliq, Salman Abdul
AU - Faisal, Muhammad
AU - Iqbal, Hafiz Talha
AU - Ali, Mohsen
AU - Shafique, Muhammad
N1 - Funding Information:
This work was supported in part by the HEC NRPU Project AxVision (Application-Specific and Data-Aware Approximate-Computing for Energy Efficient Image and Vision Processing Applications) under Grant 10150.
Publisher Copyright:
© 2013 IEEE.
PY - 2020
Y1 - 2020
AB - The design of hardware-friendly architectures with low computational overhead is desirable for low-latency realization of CNNs on resource-constrained embedded platforms. In this work, we propose CAxCNN, a Canonic Sign Digit (CSD) based approximation methodology for representing the filter weights of pre-trained CNNs. The proposed CSD representation allows the use of multipliers with reduced computational complexity. The technique can be applied on top of state-of-the-art CNN quantization schemes in a complementary manner. Our experimental results on a variety of CNNs, trained on the MNIST, CIFAR-10, and ImageNet datasets, demonstrate that our methodology provides CNN designs with multiple levels of classification accuracy, without requiring any retraining, while incurring low area and computational overhead. Furthermore, when applied in conjunction with a state-of-the-art quantization scheme, CAxCNN allows the use of multipliers that offer a 77% reduction in logic area compared to their accurate counterparts, while incurring a Top-1 accuracy drop of just 5.63% for a VGG-16 network trained on ImageNet.
KW - approximate computing
KW - canonic sign digits
KW - convolutional neural networks
KW - dedicated accelerators
UR - http://www.scopus.com/inward/record.url?scp=85089232188&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089232188&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2020.3008256
DO - 10.1109/ACCESS.2020.3008256
M3 - Article
AN - SCOPUS:85089232188
SN - 2169-3536
VL - 8
SP - 127014
EP - 127021
JO - IEEE Access
JF - IEEE Access
M1 - 9137167
ER -