ViT-ReT: Vision and Recurrent Transformer Neural Networks for Human Activity Recognition in Videos

James Wensel, Hayat Ullah, Arslan Munir

Research output: Contribution to journal › Article › peer-review

Abstract

Human activity recognition is an emerging and important area in computer vision that seeks to determine the activity an individual or group of individuals is performing. The applications of this field range from generating highlight videos in sports to intelligent surveillance and gesture recognition. Most activity recognition systems rely on a combination of convolutional neural networks (CNNs) to perform feature extraction from the data and recurrent neural networks (RNNs) to model the time-dependent nature of the data. This paper proposes and designs two transformer neural networks for human activity recognition: a recurrent transformer (ReT), a specialized neural network used to make predictions on sequences of data, and a vision transformer (ViT), a transformer optimized for extracting salient features from images, to improve the speed and scalability of activity recognition. We provide an extensive comparison of the proposed transformer neural networks with contemporary CNN- and RNN-based human activity recognition models in terms of speed and accuracy on four publicly available human action datasets. Experimental results reveal that the proposed ViT-ReT framework attains a speedup of 2× over the baseline ResNet50-LSTM approach while attaining nearly the same level of accuracy. Furthermore, the results show that the proposed ViT-ReT framework attains significant improvements over state-of-the-art human action recognition methods in terms of both model accuracy and runtime for each of the datasets used in our experiments, thus verifying the suitability of the proposed ViT-ReT framework for human activity recognition in resource-constrained and real-time environments.
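The abstract outlines a two-stage design: a vision transformer extracts a feature vector from each video frame, and a second transformer models the resulting sequence of frame features to predict the activity. A minimal sketch of this kind of pipeline follows in PyTorch, using torchvision's vit_b_16 as a stand-in frame encoder and a standard nn.TransformerEncoder as a stand-in for the paper's recurrent transformer (ReT); all component choices and hyperparameters here are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a ViT + temporal-transformer activity recognizer.
# NOTE: vit_b_16 and nn.TransformerEncoder are stand-ins for the paper's
# ViT and ReT components; hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class ViTReTSketch(nn.Module):
    def __init__(self, num_classes, d_model=768, n_heads=8, n_layers=2):
        super().__init__()
        self.frame_encoder = vit_b_16(weights=None)       # per-frame feature extractor
        self.frame_encoder.heads = nn.Identity()          # keep the 768-d [CLS] embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, clip):                               # clip: (B, T, 3, 224, 224)
        b, t = clip.shape[:2]
        feats = self.frame_encoder(clip.flatten(0, 1))     # (B*T, 768) frame embeddings
        feats = feats.view(b, t, -1)                       # (B, T, 768) frame sequence
        temporal = self.temporal_encoder(feats)            # attend across frames
        return self.classifier(temporal.mean(dim=1))       # average-pool over time

model = ViTReTSketch(num_classes=101)                      # e.g., a UCF101-sized label set
logits = model(torch.randn(2, 16, 3, 224, 224))            # 2 clips of 16 frames each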

Original language: English (US)
Pages (from-to): 72227-72249
Number of pages: 23
Journal: IEEE Access
Volume: 11
DOIs
State: Published - 2023

Keywords

  • Transformer neural networks
  • convolutional neural networks
  • human activity recognition
  • recurrent neural networks
  • video analytics

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
