Distributed strategic learning with application to network security

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we consider a class of two-player nonzero-sum stochastic games with incomplete information. We develop fully distributed reinforcement learning algorithms that require each player to have only minimal information about the other player. At each time step, each player is either in an active mode or in a sleep mode. An active player updates her strategy and her estimates of unknown quantities using a specific pure or hybrid learning pattern. Using stochastic approximation techniques, we show that, under appropriate conditions, the pure or hybrid learning schemes with random updates can be studied through their deterministic ordinary differential equation (ODE) counterparts. Convergence to state-independent equilibria is analyzed for specific classes of payoff functions. The results are applied to a class of security games in which the attacker and the defender adopt different learning schemes and update their strategies at random times.
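The kind of learning dynamics the abstract describes can be sketched in a few lines. The following is an illustrative toy, not the authors' exact scheme: a 2x2 nonzero-sum "security game" with hypothetical payoff matrices, where each player maintains payoff estimates (a Q-learning-style running average combined with a Boltzmann strategy, i.e. a "hybrid" pattern), observes only her own realized payoff, and revises her estimates only when she randomly wakes up into the active mode. The payoff numbers, activity probability `p_active`, and temperature are assumptions made for the sketch.

```python
import math
import random

# Hypothetical payoffs for a 2x2 nonzero-sum security game:
# rows = defender action (0 = monitor, 1 = idle),
# cols = attacker action (0 = attack, 1 = wait).
DEF_PAYOFF = [[3.0, -1.0],
              [-2.0, 1.0]]
ATK_PAYOFF = [[-2.0, 1.0],
              [2.0, -1.0]]

def softmax(q, temp):
    """Boltzmann strategy from payoff estimates q."""
    m = max(q)
    w = [math.exp((x - m) / temp) for x in q]
    s = sum(w)
    return [x / s for x in w]

def sample(p):
    """Draw action 0 or 1 from the mixed strategy p."""
    return 0 if random.random() < p[0] else 1

def run(T=20000, p_active=0.5, temp=0.2, seed=0):
    random.seed(seed)
    q_def, q_atk = [0.0, 0.0], [0.0, 0.0]   # per-action payoff estimates
    n_def, n_atk = [0, 0], [0, 0]           # per-action visit counts
    for _ in range(T):
        a_def = sample(softmax(q_def, temp))
        a_atk = sample(softmax(q_atk, temp))
        r_def = DEF_PAYOFF[a_def][a_atk]
        r_atk = ATK_PAYOFF[a_def][a_atk]
        # Each player wakes up (active mode) independently at random;
        # only an active player updates, using only her own realized payoff.
        if random.random() < p_active:
            n_def[a_def] += 1
            q_def[a_def] += (r_def - q_def[a_def]) / n_def[a_def]
        if random.random() < p_active:
            n_atk[a_atk] += 1
            q_atk[a_atk] += (r_atk - q_atk[a_atk]) / n_atk[a_atk]
    return softmax(q_def, temp), softmax(q_atk, temp)
```

With decreasing step sizes of the form 1/n, the update is a stochastic approximation whose mean behavior the paper's ODE analysis would track; the random wake-ups correspond to the asynchronous updates handled by the ODE counterparts mentioned in the abstract.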

Original language: English (US)
Title of host publication: Proceedings of the 2011 American Control Conference, ACC 2011
Pages: 4057-4062
Number of pages: 6
State: Published - Sep 29 2011
Event: 2011 American Control Conference, ACC 2011 - San Francisco, CA, United States
Duration: Jun 29 2011 - Jul 1 2011

Publication series

Name: Proceedings of the American Control Conference
ISSN (Print): 0743-1619



ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Zhu, Q., Tembine, H., & Başar, T. (2011). Distributed strategic learning with application to network security. In Proceedings of the 2011 American Control Conference, ACC 2011 (pp. 4057-4062). [5991373] (Proceedings of the American Control Conference).