TY - JOUR
T1 - Guest Editorial Special Issue on Reinforcement Learning-Based Control
T2 - Data-Efficient and Resilient Methods
AU - Gao, Weinan
AU - Li, Na
AU - Vamvoudakis, Kyriakos G.
AU - Yu, Fei Richard
AU - Jiang, Zhong Ping
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2024/3/1
Y1 - 2024/3/1
N2 - As an important branch of machine learning, reinforcement learning (RL) has proven its effectiveness in many emerging applications in science and engineering. A remarkable advantage of RL is that it enables agents to maximize their cumulative rewards through online exploration and interaction with unknown (or partially unknown) and uncertain environments; in this sense, RL can be regarded as a variant of data-driven adaptive optimal control. However, owing to its data-driven nature, the successful implementation of RL-based control systems usually relies on a large quantity of online data. Therefore, it is imperative to develop data-efficient RL methods for control systems that reduce the required number of interactions with the external environment. Moreover, network-induced issues, such as cyberattacks, packet dropouts, communication latency, and actuator and sensor faults, pose challenges that threaten the safety, security, stability, and reliability of networked control systems. Consequently, it is important to develop safe and resilient RL mechanisms.
AB - As an important branch of machine learning, reinforcement learning (RL) has proven its effectiveness in many emerging applications in science and engineering. A remarkable advantage of RL is that it enables agents to maximize their cumulative rewards through online exploration and interaction with unknown (or partially unknown) and uncertain environments; in this sense, RL can be regarded as a variant of data-driven adaptive optimal control. However, owing to its data-driven nature, the successful implementation of RL-based control systems usually relies on a large quantity of online data. Therefore, it is imperative to develop data-efficient RL methods for control systems that reduce the required number of interactions with the external environment. Moreover, network-induced issues, such as cyberattacks, packet dropouts, communication latency, and actuator and sensor faults, pose challenges that threaten the safety, security, stability, and reliability of networked control systems. Consequently, it is important to develop safe and resilient RL mechanisms.
UR - http://www.scopus.com/inward/record.url?scp=85187137451&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85187137451&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2024.3362092
DO - 10.1109/TNNLS.2024.3362092
M3 - Review article
AN - SCOPUS:85187137451
SN - 2162-237X
VL - 35
SP - 3103
EP - 3106
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 3
ER -