AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents
FOS: Computer and information sciences; 01 natural sciences; 0102 computer and information sciences; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering
Subjects: Machine Learning (cs.LG); Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA)
ACM class: I.2.11
DOI: 10.1007/s10994-006-0143-1
Publication Date: 2006-09-18T10:05:58Z
AUTHORS (2): Vincent Conitzer, Tuomas Sandholm
ABSTRACT
A satisfactory multiagent learning algorithm should, at a minimum, learn to play optimally against stationary opponents and converge to a Nash equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action repeated games, assuming that the opponent's (mixed) strategy is observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have these two properties in all repeated (finite) games. It requires only that the other players' actual actions (not their strategies) can be observed at each step. It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may also help in analyzing other multiagent learning algorithms.
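To make the adapt-or-retreat idea concrete, below is a minimal, hedged sketch in Python for a repeated two-player matrix game. It is an illustration under simplifying assumptions, not the published algorithm: the fixed epoch length, the stationarity threshold, and the helper names (best_response, awesome_like_play) are invented for this sketch, whereas AWESOME itself uses carefully scheduled epoch lengths and shrinking thresholds to obtain its guarantees.

import random
from collections import Counter

def best_response(payoffs, opponent_dist):
    """Row index maximizing expected payoff against a fixed opponent mix."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    expected = [sum(payoffs[r][c] * opponent_dist.get(c, 0.0) for c in range(n_cols))
                for r in range(n_rows)]
    return max(range(n_rows), key=lambda r: expected[r])

def awesome_like_play(payoffs, equilibrium_mix, opponent_actions,
                      epoch_len=50, stationarity_eps=0.1):
    """Play in epochs: best-respond when the opponent's empirical play looks
    stationary, otherwise retreat to the precomputed equilibrium mix."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    prev_freq, mode, my_actions = None, "equilibrium", []
    for start in range(0, len(opponent_actions), epoch_len):
        epoch = opponent_actions[start:start + epoch_len]
        counts = Counter(epoch)
        freq = {a: counts[a] / len(epoch) for a in range(n_cols)}  # empirical mix
        if prev_freq is not None:
            # Crude stationarity test: small drift between epochs -> adapt.
            drift = max(abs(freq[a] - prev_freq[a]) for a in range(n_cols))
            mode = "adapt" if drift <= stationarity_eps else "equilibrium"
        prev_freq = freq
        if mode == "adapt":
            my_actions += [best_response(payoffs, freq)] * len(epoch)
        else:
            my_actions += random.choices(range(n_rows), weights=equilibrium_mix, k=len(epoch))
    return my_actions

# Hypothetical usage: a coordination game against an opponent that is
# stationary from the start (always plays column 0).
if __name__ == "__main__":
    payoffs = [[1, 0], [0, 1]]       # row player's payoff matrix
    equilibrium_mix = [0.5, 0.5]     # assumed precomputed mixed equilibrium
    opponent = [0] * 500
    play = awesome_like_play(payoffs, equilibrium_mix, opponent)
    print("late play:", Counter(play[-50:]))   # concentrates on the best response, row 0

Intuitively, the retreat rule is what makes self-play work out: when every player falls back at the same time, the joint play is the precomputed equilibrium itself, while against an opponent that stays stationary the adapt branch keeps best-responding to its observed action frequencies.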