TY - JOUR
T1 - Adaptive backstepping for distributed optimization
AU - Qin, Zhengyan
AU - Liu, Tengfei
AU - Jiang, Zhong-Ping
Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant U1911401, and in part by the U.S. National Science Foundation under Grant EPCN-1903781. The material in this paper was partially presented at the 21st IFAC World Congress (IFAC 2020), July 12–17, 2020, Berlin, Germany. This paper was recommended for publication in revised form by Associate Editor Changyun Wen under the direction of Editor Miroslav Krstic.
Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/7
Y1 - 2022/7
N2 - This paper presents an adaptive backstepping approach to distributed optimization for a class of nonlinear multi-agent systems in which each agent is represented in the parametric strict-feedback form. In particular, the paper does not assume that the gradient functions of the local objective functions are known, and instead uses gradient values measured at the agents’ real-time outputs. A stepwise method is presented to derive novel distributed adaptive optimization algorithms that steer the outputs of all the agents to the optimal solution of the total objective function. First, a novel distributed adaptive optimization algorithm is developed for first-order nonlinear uncertain multi-agent systems, supported by stability analysis and convergence proofs based on Lyapunov arguments. Second, by means of Lyapunov arguments in the spirit of backstepping, a distributed adaptive optimization algorithm is presented for high-order strict-feedback systems with parametric uncertainty. Extensions of the main result to practically important classes of systems with unknown virtual control coefficients, output feedback, and relative-measurement feedback are also discussed.
KW - Adaptive backstepping
KW - Distributed optimization
KW - Feedback optimization
KW - Nonlinear systems
KW - Parametric uncertainties
UR - http://www.scopus.com/inward/record.url?scp=85129303171&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85129303171&partnerID=8YFLogxK
DO - 10.1016/j.automatica.2022.110304
M3 - Article
AN - SCOPUS:85129303171
SN - 0005-1098
VL - 141
JO - Automatica
JF - Automatica
M1 - 110304
ER -