Guest Editorial Special Issue on Reinforcement Learning-Based Control: Data-Efficient and Resilient Methods

Weinan Gao, Na Li, Kyriakos G. Vamvoudakis, Fei Richard Yu, Zhong-Ping Jiang

Research output: Contribution to journal › Review article › peer-review

Abstract

As an important branch of machine learning, reinforcement learning (RL) has proven its effectiveness in many emerging applications in science and engineering. A remarkable advantage of RL is that it enables agents to maximize their cumulative rewards through online exploration and interaction with unknown (or partially unknown) and uncertain environments; in this sense, it can be regarded as a family of data-driven adaptive optimal control methods. However, owing to its data-driven nature, the successful implementation of RL-based control systems usually relies on a large quantity of online data. It is therefore imperative to develop data-efficient RL methods for control systems that reduce the required number of interactions with the external environment. Moreover, network-induced issues, such as cyberattacks, packet dropouts, communication latency, and actuator and sensor faults, are challenging problems that threaten the safety, security, stability, and reliability of networked control systems. Consequently, it is important to develop safe and resilient RL mechanisms.
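
The abstract's description of RL, an agent maximizing cumulative reward purely through online interaction with an unknown environment, can be illustrated with a minimal tabular Q-learning sketch. This is not a method from the special issue; the toy dynamics, state/action sizes, and hyperparameters below are illustrative assumptions only.

```python
# Minimal sketch: an agent learns from interaction data alone, with no model
# of the environment. The 2-state MDP and all constants are illustrative.
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Toy dynamics, unknown to the agent: action 1 in state 0 is rewarded."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = (state + action) % N_STATES
    return next_state, reward

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
state = 0
for _ in range(5000):  # online interactions with the environment
    # Epsilon-greedy exploration balances trying new actions vs. exploiting Q.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Temporal-difference update toward observed reward plus bootstrapped value.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print("Learned Q-values:", Q)
```

The loop's length (5000 interactions here) is exactly the quantity that data-efficient RL methods aim to reduce, since each iteration corresponds to a real interaction with the plant or network.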

Original language: English (US)
Pages (from-to): 3103-3106
Number of pages: 4
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 3
DOIs
State: Published - Mar 1, 2024

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

