Embedded-ViT: A Framework for Embedded Deployment of Vision-Transformer in Medical Applications

Erik Ostrowski, Muhammad Shafique

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Transformer architectures have dramatically influenced the field of natural language processing and are becoming increasingly popular in computer vision as well. However, the Transformer's core self-attention mechanism has computational complexity that is quadratic in the number of tokens, so these models typically require powerful GPUs for deployment. This runs counter to the Internet of Things trend toward mobile deployment of AI applications, which calls for efficient, lightweight neural networks that meet the strict hardware limitations of the target platforms. Cost and ease of deployment are even more critical in the medical field, where not every clinic can afford powerful GPUs to aid its physicians. Prior research has proposed methods for building more efficient Transformer networks, but to our knowledge, very little work has targeted a level of complexity reduction that permits embedded deployment of Transformers in medical applications. In this paper, we propose our Embedded-ViT framework, which drastically reduces the complexity of standard vision transformer (ViT) networks. We accomplish this with several compression techniques: efficient model architecture changes, reduced input resolution, pruning, and quantization. Our optimizations compress the model significantly while maintaining a desired level of prediction quality. We demonstrate the capabilities of our framework by applying it to a state-of-the-art ViT and its variations, and we evaluate the results on the publicly available Synapse dataset for multi-organ segmentation. Our framework cuts the computational load in half while slightly improving prediction quality over the baseline. Moreover, we thoroughly analyze the hardware requirements and throughput achieved on different platforms, including Nvidia's embedded Jetson Nano. The framework is open-source and accessible online at https://github.com/ErikOstrowski/Embedded-ViT.
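
The abstract names four compression levers: architecture changes, reduced input resolution, pruning, and quantization. The sketch below illustrates how the last three can be combined on a stock vision transformer using standard PyTorch utilities. It is a minimal illustration only, assuming a torchvision backbone (vit_b_16), an arbitrary 30% pruning ratio, and dynamic int8 quantization; the actual Embedded-ViT pipeline is defined in the linked repository and may differ in every detail.

    # Minimal sketch of ViT compression (NOT the authors' pipeline):
    # reduced input resolution + magnitude pruning + dynamic quantization.
    import torch
    import torch.nn.utils.prune as prune
    from torchvision.models import vit_b_16  # stand-in backbone, an assumption

    # 1. Reduced input resolution: fewer image patches means fewer tokens,
    #    which directly attacks the quadratic cost of self-attention.
    model = vit_b_16(weights=None, image_size=160)  # 160x160 instead of 224x224
    model.eval()

    # 2. Magnitude pruning: zero out the 30% smallest weights of every
    #    linear layer (the ratio is an arbitrary placeholder).
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # bake the sparsity into the tensor

    # 3. Dynamic int8 quantization of the linear layers, a common choice
    #    for CPU/edge inference on Jetson-class hardware.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    # Smoke test: one forward pass at the reduced resolution.
    x = torch.randn(1, 3, 160, 160)
    with torch.no_grad():
        print(quantized(x).shape)

A model compressed along these lines can then be benchmarked for latency and throughput on the target device, which is how platform comparisons such as those reported for the Jetson Nano would be obtained.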

Original language: English (US)
Title of host publication: Advances in Visual Computing - 19th International Symposium, ISVC 2024, Proceedings
Editors: George Bebis, Vishal Patel, Jinwei Gu, Julian Panetta, Yotam Gingold, Kyle Johnsen, Mohammed Safayet Arefin, Soumya Dutta, Ayan Biswas
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 371-382
Number of pages: 12
ISBN (Print): 9783031773884
DOIs
State: Published - 2025
Event: 19th International Symposium on Visual Computing, ISVC 2024 - Lake Tahoe, United States
Duration: Oct 21 2024 - Oct 23 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 15047 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th International Symposium on Visual Computing, ISVC 2024
Country/Territory: United States
City: Lake Tahoe
Period: 10/21/24 - 10/23/24

Keywords

  • CAD
  • Computer Vision
  • Embedded Deployment
  • Lightweight
  • Semantic Segmentation
  • Vision Transformer

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
