Adaptive backstepping for distributed optimization

Zhengyan Qin, Tengfei Liu, Zhong-Ping Jiang

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents an adaptive backstepping approach to distributed optimization for a class of nonlinear multi-agent systems in which each agent is modeled in parametric strict-feedback form. In particular, the paper does not assume that the gradient functions of the local objective functions are known; instead, it uses gradient values measured at the agents' real-time outputs. A stepwise method is presented to derive novel distributed adaptive optimization algorithms that steer the outputs of all agents to the optimal solution of the total objective function. First, a distributed adaptive optimization algorithm is developed for first-order nonlinear uncertain multi-agent systems, supported by stability analysis and convergence proofs based on Lyapunov arguments. Second, by means of Lyapunov arguments in the spirit of backstepping, a distributed adaptive optimization algorithm is presented for high-order strict-feedback systems with parametric uncertainty. Extensions of the main result to practically important classes of systems with unknown virtual control coefficients, output feedback, and relative-measurement feedback are also discussed.
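For reference, the parametric strict-feedback form mentioned in the abstract is, up to notation, the standard model from adaptive backstepping: agent i has state (x_{i,1}, ..., x_{i,n_i}), control input u_i, output y_i, and an unknown constant parameter vector \theta_i, with

  \dot{x}_{i,k} = x_{i,k+1} + \varphi_{i,k}(x_{i,1}, \dots, x_{i,k})^\top \theta_i, \quad k = 1, \dots, n_i - 1,
  \dot{x}_{i,n_i} = u_i + \varphi_{i,n_i}(x_{i,1}, \dots, x_{i,n_i})^\top \theta_i,
  y_i = x_{i,1},

where the regressor functions \varphi_{i,k} are known but \theta_i is not.

To make the first-order setting concrete, the following Python sketch simulates a standard proportional-integral consensus-plus-gradient flow for single-integrator agents that uses only gradient values measured at each agent's real-time output. It is not the paper's algorithm: the quadratic local objectives, ring graph, and step size are illustrative assumptions, and the adaptive-backstepping machinery that handles nonlinear uncertain dynamics is omitted.

  # Minimal sketch (assumed setup, not the paper's design): distributed
  # optimization over an undirected graph using measured gradients only.
  import numpy as np

  # Each agent i holds f_i(x) = 0.5 * (x - c_i)^2, so the minimizer of
  # sum_i f_i is the average of the c_i.
  c = np.array([1.0, 3.0, -2.0, 4.0])   # assumed local-objective data
  n = len(c)

  # Undirected ring graph and its Laplacian L.
  A = np.zeros((n, n))
  for i in range(n):
      A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
  L = np.diag(A.sum(axis=1)) - A

  def measured_gradient(x):
      # Gradient of each local objective evaluated at the agent's
      # real-time output -- the only gradient information used.
      return x - c

  # Euler discretization of the PI consensus + gradient flow:
  #   xdot = -grad f(x) - L x - L v,   vdot = L x
  x = np.zeros(n)   # agent outputs
  v = np.zeros(n)   # integral (disagreement-correcting) states
  dt = 0.01
  for _ in range(20000):
      g = measured_gradient(x)
      x, v = x + dt * (-g - L @ x - L @ v), v + dt * (L @ x)

  print("agent outputs :", np.round(x, 4))   # all approach 1.5
  print("optimal point :", c.mean())

With these choices every output settles at the team-optimal point mean(c_i) = 1.5; the paper establishes the analogous guarantee, via Lyapunov arguments, for the far larger class of uncertain strict-feedback systems.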

Original language: English (US)
Article number: 110304
Journal: Automatica
Volume: 141
State: Published - Jul 2022

Keywords

  • Adaptive backstepping
  • Distributed optimization
  • Feedback optimization
  • Nonlinear systems
  • Parametric uncertainties

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
