== Abstract ==
 
We give mechanisms in which each of $n$ players in a game is given their component of an (approximate) equilibrium in a way that guarantees differential privacy: that is, the revelation of the equilibrium components does not reveal too much information about the utilities of other players. More precisely, we show how to compute an approximate correlated equilibrium (CE) under the constraint of differential privacy (DP), provided $n$ is large and any player's action affects any other's payoff by at most a small amount. Our results draw interesting connections between noisy generalizations of classical convergence results for no-regret learning, and the noisy mechanisms developed for differential privacy. Our results imply the ability to truthfully implement good social-welfare solutions in many games, such as games with small Price of Anarchy, even if the mechanism does not have the ability to enforce outcomes. We give two different mechanisms for DP computation of approximate CE. The first is computationally efficient, but has a suboptimal dependence on the number of actions in the game; the second is computationally inefficient, but allows for games with exponentially many actions. We also give a matching lower bound, showing that our results are tight up to logarithmic factors.
Joint work with Michael Kearns, Mallesh Pai, and Jonathan Ullman.
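
To give a feel for the connection between no-regret learning and differentially private noise addition mentioned above, the following is a minimal illustrative sketch, not the mechanism from the talk: a single multiplicative-weights learner that only ever observes Laplace-perturbed losses. The function name, learning rate, and noise scale are assumptions chosen for illustration.

<syntaxhighlight lang="python">
# Illustrative sketch only (not the paper's mechanism): multiplicative
# weights driven by Laplace-noised losses. Laplace noise is the basic
# building block of many differentially private mechanisms, and in the
# no-regret framework the empirical play of such learners converges to
# an approximate (coarse) correlated equilibrium.
import numpy as np

def noisy_multiplicative_weights(losses, eta=0.1, noise_scale=1.0, seed=0):
    """losses: T x k array of per-round losses in [0, 1] for k actions.
    Returns the T x k sequence of mixed strategies played."""
    rng = np.random.default_rng(seed)
    T, k = losses.shape
    weights = np.ones(k)
    plays = np.empty((T, k))
    for t in range(T):
        p = weights / weights.sum()
        plays[t] = p
        # The learner never sees the true losses, only a noisy version.
        noisy = losses[t] + rng.laplace(scale=noise_scale, size=k)
        weights *= np.exp(-eta * noisy)
    return plays

# Example: random losses for 5 actions over 1,000 rounds.
plays = noisy_multiplicative_weights(np.random.rand(1000, 5))
print(plays.mean(axis=0))  # empirical distribution of play
</syntaxhighlight>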