Fast Convergence of Regularized Learning in Games
Paper summary

The authors present a theoretical analysis of faster convergence in multi-player normal-form games, generalizing techniques developed for two-player zero-sum games. They also provide an empirical evaluation on a 4-bidder simultaneous auction game.

The paper is concerned with two problems:
1. How does the social welfare of players using regret minimization algorithms compare to the optimal welfare?
2. Can one obtain better regret bounds when all players use a regret minimization algorithm?

The paper deals with bounds on regret minimization algorithms in games. The usual regret bounds for these algorithms are $O(\sqrt{T})$, but they assume that the learner faces a completely adversarial opponent. In a game, it is natural to assume instead that every player runs a regret minimization algorithm, and the question is whether better rates are achievable in this scenario. The authors show that regret in $O(T^{1/4})$ is achievable for general games.
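To make the baseline concrete, here is a minimal sketch (not the paper's regularized/optimistic algorithm) of a standard $O(\sqrt{T})$ regret minimizer, Hedge (multiplicative weights), run in self-play on a small two-player zero-sum game. The function name `hedge_self_play`, the payoff matrix, and the asymmetric initialization are illustrative choices, not from the paper; the point is simply that the time-averaged regret shrinks as $T$ grows.

```python
import numpy as np

def hedge_self_play(A, T=2000, eta=0.05):
    """Both players run Hedge (multiplicative weights) against each
    other on payoff matrix A (row player's payoffs; the column player
    receives -A). Returns the row player's average external regret
    after T rounds."""
    n, m = A.shape
    w_row = np.ones(n)
    # Slightly asymmetric start so the dynamics actually move
    w_col = np.linspace(1.0, 2.0, m)
    cum_payoff = 0.0         # row player's realized expected payoff
    cum_best = np.zeros(n)   # cumulative payoff of each fixed row action
    for _ in range(T):
        x = w_row / w_row.sum()
        y = w_col / w_col.sum()
        cum_payoff += x @ A @ y
        cum_best += A @ y
        # Hedge update: exponential weights on the observed utilities
        w_row = w_row * np.exp(eta * (A @ y))
        w_col = w_col * np.exp(-eta * (x @ A))
    # External regret: best fixed action in hindsight vs. realized payoff
    return (cum_best.max() - cum_payoff) / T

# Matching pennies: payoffs in [-1, 1], unique equilibrium is uniform play
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
avg_regret = hedge_self_play(A)
```

Against an adversary this only guarantees average regret on the order of $1/\sqrt{T}$; the paper's contribution is that with regularized (e.g. recency-biased) updates in self-play, the predictability of the opponents' play improves this to $O(T^{1/4})$ total regret.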

Summary by NIPS Conference Reviews
