TY - GEN
T1 - SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers
T2 - 2023 International Joint Conference on Neural Networks, IJCNN 2023
AU - Marchisio, Alberto
AU - Dura, Davide
AU - Capra, Maurizio
AU - Martina, Maurizio
AU - Masera, Guido
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Transformers' compute-intensive operations pose enormous challenges for their deployment in resource-constrained Edge-AI / tinyML devices. As an established neural network compression technique, quantization reduces the hardware computational and memory resources. In particular, fixed-point quantization is desirable to ease the computations using lightweight blocks, like adders and multipliers, of the underlying hardware. However, deploying fully-quantized Transformers on existing general-purpose hardware, generic AI accelerators, or specialized architectures for Transformers with floating-point units might be infeasible and/or inefficient. Towards this, we propose SwiftTron, an efficient specialized hardware accelerator designed for Quantized Transformers. SwiftTron supports the execution of different types of Transformers' operations (like Attention, Softmax, GELU, and Layer Normalization) and accounts for diverse scaling factors to perform correct computations. We synthesize the complete SwiftTron architecture in a 65 nm CMOS technology with the ASIC design flow. Our accelerator executes the RoBERTa-base model in 1.83 ns, while consuming 33.64 mW power and occupying an area of 273 mm². To ease reproducibility, the RTL of our SwiftTron architecture is released at https://github.com/albertomarchisio/SwiftTron.
AB - Transformers' compute-intensive operations pose enormous challenges for their deployment in resource-constrained Edge-AI / tinyML devices. As an established neural network compression technique, quantization reduces the hardware computational and memory resources. In particular, fixed-point quantization is desirable to ease the computations using lightweight blocks, like adders and multipliers, of the underlying hardware. However, deploying fully-quantized Transformers on existing general-purpose hardware, generic AI accelerators, or specialized architectures for Transformers with floating-point units might be infeasible and/or inefficient. Towards this, we propose SwiftTron, an efficient specialized hardware accelerator designed for Quantized Transformers. SwiftTron supports the execution of different types of Transformers' operations (like Attention, Softmax, GELU, and Layer Normalization) and accounts for diverse scaling factors to perform correct computations. We synthesize the complete SwiftTron architecture in a 65 nm CMOS technology with the ASIC design flow. Our accelerator executes the RoBERTa-base model in 1.83 ns, while consuming 33.64 mW power and occupying an area of 273 mm². To ease reproducibility, the RTL of our SwiftTron architecture is released at https://github.com/albertomarchisio/SwiftTron.
KW - ASIC
KW - Attention
KW - GELU
KW - Hardware Architecture
KW - Layer Normalization
KW - Machine Learning
KW - Quantization
KW - Softmax
KW - Transformers
UR - http://www.scopus.com/inward/record.url?scp=85169586445&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85169586445&partnerID=8YFLogxK
U2 - 10.1109/IJCNN54540.2023.10191521
DO - 10.1109/IJCNN54540.2023.10191521
M3 - Conference contribution
AN - SCOPUS:85169586445
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 June 2023 through 23 June 2023
ER -