TY - GEN
T1 - Embedded-ViT
T2 - 19th International Symposium on Visual Computing, ISVC 2024
AU - Ostrowski, Erik
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Transformer architectures have dramatically influenced natural language processing and are becoming increasingly popular in computer vision as well. However, the Transformer’s core self-attention mechanism has computational complexity that is quadratic in the number of tokens, so these models usually require large GPUs for deployment. This runs counter to the Internet of Things trend toward mobile deployment of AI applications, which demands efficient, lightweight neural networks that meet the strict hardware limitations of the target platforms. Cost and ease of deployment are even more critical in the medical field, where not every clinic can afford powerful GPUs to assist its physicians. Prior research has proposed methods for more efficient Transformer networks, but to our knowledge, very little work has targeted the level of complexity reduction required for embedded deployment of Transformers in the medical field. In this paper, we propose Embedded-ViT, a framework that drastically reduces the complexity of standard vision transformer (ViT) networks. We achieve this through several compression techniques: efficient model architecture changes, reduced input resolution, pruning, and quantization. Our optimizations significantly compress the model while maintaining a desired level of prediction quality. We demonstrate the capabilities of our framework by applying it to a state-of-the-art ViT and its variations, and we evaluate Embedded-ViT on the publicly available Synapse dataset for multi-organ segmentation. Our framework cuts the computational load in half while maintaining slightly higher prediction quality. Moreover, we thoroughly analyze the hardware requirements and throughput achieved on different platforms, including Nvidia’s embedded Jetson Nano. The framework is open source and accessible online at https://github.com/ErikOstrowski/Embedded-ViT.
AB - Transformer architectures have dramatically influenced natural language processing and are becoming increasingly popular in computer vision as well. However, the Transformer’s core self-attention mechanism has computational complexity that is quadratic in the number of tokens, so these models usually require large GPUs for deployment. This runs counter to the Internet of Things trend toward mobile deployment of AI applications, which demands efficient, lightweight neural networks that meet the strict hardware limitations of the target platforms. Cost and ease of deployment are even more critical in the medical field, where not every clinic can afford powerful GPUs to assist its physicians. Prior research has proposed methods for more efficient Transformer networks, but to our knowledge, very little work has targeted the level of complexity reduction required for embedded deployment of Transformers in the medical field. In this paper, we propose Embedded-ViT, a framework that drastically reduces the complexity of standard vision transformer (ViT) networks. We achieve this through several compression techniques: efficient model architecture changes, reduced input resolution, pruning, and quantization. Our optimizations significantly compress the model while maintaining a desired level of prediction quality. We demonstrate the capabilities of our framework by applying it to a state-of-the-art ViT and its variations, and we evaluate Embedded-ViT on the publicly available Synapse dataset for multi-organ segmentation. Our framework cuts the computational load in half while maintaining slightly higher prediction quality. Moreover, we thoroughly analyze the hardware requirements and throughput achieved on different platforms, including Nvidia’s embedded Jetson Nano. The framework is open source and accessible online at https://github.com/ErikOstrowski/Embedded-ViT.
KW - CAD
KW - Computer Vision
KW - Embedded Deployment
KW - Lightweight
KW - Semantic Segmentation
KW - Vision Transformer
UR - http://www.scopus.com/inward/record.url?scp=85218470820&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85218470820&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-77389-1_29
DO - 10.1007/978-3-031-77389-1_29
M3 - Conference contribution
AN - SCOPUS:85218470820
SN - 9783031773884
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 371
EP - 382
BT - Advances in Visual Computing - 19th International Symposium, ISVC 2024, Proceedings
A2 - Bebis, George
A2 - Patel, Vishal
A2 - Gu, Jinwei
A2 - Panetta, Julian
A2 - Gingold, Yotam
A2 - Johnsen, Kyle
A2 - Arefin, Mohammed Safayet
A2 - Dutta, Soumya
A2 - Biswas, Ayan
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 21 October 2024 through 23 October 2024
ER -