TY - JOUR
T1 - Reinforcement Learning in Repeated Interaction Games
AU - Bendor, Jonathan
AU - Mookherjee, Dilip
AU - Ray, Debraj
N1 - Publisher Copyright:
© 2011 Walter de Gruyter GmbH & Co. KG, Berlin/Boston.
PY - 2001
Y1 - 2001
AB - We study the long-run implications of reinforcement learning when two players repeatedly interact with one another over multiple rounds to play a finite action game. Within each round, the players play the game many successive times with a fixed set of aspirations used to evaluate payoff experiences as successes or failures. The probability weight on successful actions is increased, while failures result in players trying alternative actions in subsequent rounds. The learning rule is supplemented by small amounts of inertia and random perturbations to the states of players. Aspirations are adjusted across successive rounds on the basis of the discrepancy between the average payoff and aspirations in the most recently concluded round. We define and characterize pure steady states of this model and establish convergence to them under appropriate conditions. Pure steady states are shown to be individually rational, and to be either Pareto-efficient or protected Nash equilibria of the stage game. Conversely, any Pareto-efficient and strictly individually rational action pair, or any strict protected Nash equilibrium, constitutes a pure steady state, to which the process converges from non-negligible sets of initial aspirations. Applications to games of coordination, cooperation, oligopoly, and electoral competition are discussed.
KW - aspirations
KW - bounded rationality
KW - cooperation
KW - coordination
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=0038660317&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0038660317&partnerID=8YFLogxK
DO - 10.2202/1534-5963.1008
M3 - Article
AN - SCOPUS:0038660317
SN - 1935-1704
VL - 1
JO - B.E. Journal of Theoretical Economics
JF - B.E. Journal of Theoretical Economics
IS - 1
M1 - 3
ER -