Progressive Learning With Recurrent Neural Network for Sequence Classification

Rupesh Raj Karn, Johann Knechtel, Ozgur Sinanoglu

Research output: Contribution to journal › Article › peer-review

Abstract

Progressive learning is a deep learning framework in which tasks are learned sequentially, leveraging knowledge from previously acquired tasks to aid the learning and execution of new ones. It is critical for dealing with dynamic data: the model is first trained on a few labels and is then reused to categorize additional labels as fresh data becomes available over time. The model size is scaled up for each new task, while the parameters learned on previous tasks are preserved. The relevance of progressive learning is well established for convolutional and fully connected vanilla neural networks; however, it remains largely under-documented for recurrent neural networks in sequence classification problems. Furthermore, the literature offers no defined strategy for tuning the recurrent layers to learn an increasing number of tasks. We present a method for progressive learning with a recurrent neural network. We use Ray Tune, a well-known hyper-parameter optimization toolbox, to standardize concurrent hyper-parameter tuning for a model that adapts with each progressive task. We demonstrate our method on two image datasets, MNIST and Devanagari.
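
The sketch below is a minimal illustration of the idea described in the abstract, not the authors' implementation: a PyTorch RNN classifier (images fed row by row as a sequence) that is first trained on a few classes and later extended to new classes, growing the output layer while freezing the previously learned recurrent parameters. All names and values here (ProgressiveRNNClassifier, expand_classes, hidden_size=128, the 5+5 class split) are hypothetical choices for this example; in the paper, Ray Tune would additionally tune the hyper-parameters for each progressive task.

    # Illustrative sketch only; module and variable names are hypothetical.
    import torch
    import torch.nn as nn

    class ProgressiveRNNClassifier(nn.Module):
        def __init__(self, input_size, hidden_size, num_initial_classes):
            super().__init__()
            self.rnn = nn.LSTM(input_size, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_initial_classes)

        def forward(self, x):
            # x: (batch, seq_len, input_size), e.g. 28x28 images as 28 rows
            out, _ = self.rnn(x)
            return self.head(out[:, -1, :])  # classify from the last time step

        def expand_classes(self, num_new_classes):
            # Grow the output layer for a new task; keep the old weights.
            old = self.head
            new = nn.Linear(old.in_features, old.out_features + num_new_classes)
            with torch.no_grad():
                new.weight[:old.out_features] = old.weight
                new.bias[:old.out_features] = old.bias
            self.head = new
            # Preserve parameters learned on earlier tasks by freezing them.
            for p in self.rnn.parameters():
                p.requires_grad = False

    # Example usage: MNIST rows as a sequence, 5 classes first, 5 added later.
    model = ProgressiveRNNClassifier(input_size=28, hidden_size=128,
                                     num_initial_classes=5)
    # ... train on the first 5 digit classes ...
    model.expand_classes(num_new_classes=5)
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3)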

Original language: English (US)
Pages (from-to): 1591-1595
Number of pages: 5
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs
Volume: 71
Issue number: 3
DOIs
State: Published - Mar 1 2024

Keywords

  • Devanagari
  • MNIST
  • Ray Tune
  • Recurrent neural network
  • hyper-parameters
  • progressive learning

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
