Abstract:
The model of congestion games is widely used to analyze games related to traffic and communication. A central property of these games is that they are potential games and hence possess a pure Nash equilibrium. In reality it is often the case that some players cooperatively decide on their joint action in order to maximize the coalition's total utility. This is modeled by Coalitional Congestion Games. Typical settings include truck drivers who work for the same shipping company, or routers that belong to the same ISP. The formation of coalitions will typically imply that the resulting coalitional congestion game no longer possesses a pure Nash equilibrium. In this paper we provide conditions under which such games are potential games and possess a pure Nash equilibrium.
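As an illustrative sketch (not taken from the paper; the resources and cost values are invented), the potential argument for ordinary congestion games can be made concrete in code: best-response moves strictly decrease Rosenthal's potential Phi(a) = sum over resources e of sum_{k=1}^{n_e(a)} c_e(k), so best-response dynamics must terminate at a pure Nash equilibrium.

```python
# Best-response dynamics in a small singleton congestion game.
# costs[e][k-1] = c_e(k), the cost per user when k users share resource e.
costs = {"road": [1, 3, 6], "rail": [2, 2, 2]}

def load(profile, e):
    return sum(1 for choice in profile if choice == e)

def player_cost(profile, i):
    e = profile[i]
    return costs[e][load(profile, e) - 1]

def potential(profile):
    # Rosenthal's potential: cumulative costs up to each resource's load.
    return sum(sum(costs[e][:load(profile, e)]) for e in costs)

def best_response_dynamics(profile):
    improved = True
    while improved:
        improved = False
        for i in range(len(profile)):
            current = player_cost(profile, i)
            for e in costs:
                trial = profile[:i] + [e] + profile[i + 1:]
                if player_cost(trial, i) < current:
                    profile, improved = trial, True
                    break
    return profile  # no player can improve: a pure Nash equilibrium

eq = best_response_dynamics(["road", "road", "road"])
```

Starting from all three players on "road", the dynamics settle at two players on "rail" and one on "road", and the potential drops from 10 to 5 along the way.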

Abstract:
Rosenthal (1973) introduced the class of congestion games and proved that they always possess a Nash equilibrium in pure strategies. Fotakis et al. (2005) introduced the notion of a greedy strategy tuple, in which players sequentially and irrevocably choose a strategy that is a best response to the strategies chosen by earlier players. Whereas the former solution concept rests on strong assumptions about the rationality of the players and common knowledge thereof, the latter assumes very little about the players' behavior. From Fotakis (2010) it follows that in tree-representable congestion games greedy behavior leads to a Nash equilibrium. In this paper we obtain necessary and sufficient conditions for the equivalence of these two solution concepts. Such equivalence enhances the viability of both concepts as realistic outcomes of the environment. The condition for this equivalence to emerge in monotone symmetric games is that the strategy set has a tree form, or equivalently is an `extension-parallel graph'.
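A minimal sketch of greedy play (the links and cost values here are invented, and parallel links are only the simplest special case of an extension-parallel structure): each player in turn irrevocably picks a link that is a best response to the choices made so far, and for this symmetric singleton setting the greedy outcome is a pure Nash equilibrium.

```python
# Greedy play in a symmetric singleton congestion game on parallel links.
# costs[e][k-1] = c_e(k), the cost per user when k users share link e.
costs = {"a": [1, 4, 9], "b": [2, 3, 5], "c": [3, 3, 3]}

def greedy_play(n_players):
    loads = {e: 0 for e in costs}
    profile = []
    for _ in range(n_players):
        # Best response to the players seen so far:
        # the link whose cost, after joining it, is lowest.
        e = min(costs, key=lambda link: costs[link][loads[link]])
        profile.append(e)
        loads[e] += 1
    return profile
```

With three players, greedy play yields ["a", "b", "b"], and one can check directly that no player gains by unilaterally switching links, i.e. the greedy tuple is already a Nash equilibrium.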

Abstract:
Consider an abstract social choice setting with incomplete information, where the number of alternatives is large. Although natural, VCG mechanisms may be infeasible to implement due to prohibitive communication constraints. However, if players restrict attention to a subset of the alternatives, feasibility may be recovered. This paper characterizes the class of subsets that induce an ex-post equilibrium in the original game. It turns out that a crucial condition for such subsets to exist is the existence, for each player, of a type-independent optimal social alternative. We further analyze the welfare implications of these restrictions. This work follows work by Holzman, Kfir-Dahav, Monderer and Tennenholtz (2004) and Holzman and Monderer (2004), where a similar analysis is done for combinatorial auctions.

Abstract:
We introduce the study of sequential information elicitation in strategic multi-agent systems. In an information elicitation setup a center attempts to compute the value of a function based on private information (a.k.a. secrets) accessible to a set of agents. We consider the classical multi-party computation setup where each agent is interested in knowing the result of the function. However, in our setting each agent is strategic, and since acquiring information is costly, an agent may be tempted not to spend the effort of obtaining the information, free-riding on other agents' computations. A mechanism which elicits agents' secrets and performs the desired computation defines a game. A mechanism is 'appropriate' if there exists an equilibrium in which it is able to elicit (sufficiently many) agents' secrets and perform the computation, for all possible secret vectors. We characterize a general efficient procedure for determining an appropriate mechanism, if such a mechanism exists. Moreover, we also address the existence problem, providing a polynomial algorithm for verifying the existence of an appropriate mechanism.

Abstract:
We analyze a general framework for modeling agents whose utility is derived from both their actions and the perceptions of others about their type. We show that such perception games always have equilibria, and discuss two natural refinements. We demonstrate the applicability of our framework in a variety of contexts, with a particular emphasis on privacy-related issues.

Abstract:
In traditional mechanism design, agents only care about the utility they derive from the outcome of the mechanism. We look at a richer model where agents also assign non-negative disutility to the information about their private types leaked by the outcome of the mechanism. We present a new model for privacy-aware mechanism design, where we only assume an upper bound on the agents' loss due to leakage, as opposed to previous work where a full characterization of the loss was required. In this model, under a mild assumption on the distribution of how agents value their privacy, we show a generic construction of privacy-aware mechanisms and demonstrate its applicability to electronic polling and pricing of a digital good.

Abstract:
We consider an environment where sellers compete over buyers. All sellers are a priori identical and strategically signal buyers about the product they sell. In a setting motivated by online advertising in display ad exchanges, where firms use second-price auctions, a firm's strategy is a decision about its signaling scheme for a stream of goods (e.g. user impressions), and a buyer's strategy is a selection among the firms. In this setting, a single seller will typically provide partial information, and consequently a product may be allocated inefficiently. Intuitively, competition among sellers may induce sellers to provide more information in order to attract buyers and thus increase efficiency. Surprisingly, we show that such competition among firms may yield a significant loss in consumers' social welfare with respect to the monopolistic setting. Although we also show that in some cases the competitive setting yields a gain in social welfare, we provide a tight bound on that gain, which is shown to be small relative to the possible loss above. Our model is tightly connected with the literature on bundling in auctions.

Abstract:
In the online explore-and-exploit literature, central to machine learning, a planner is faced with a set of alternatives, each yielding some unknown reward. The planner's goal is to learn the optimal alternative as soon as possible, via experimentation. A typical assumption in this model is that the planner has full control over the experiment design and implementation. When experiments are implemented by a society of self-motivated agents, the planner can only recommend experimentation but has no power to enforce it. Kremer et al. (JPE, 2014) introduced the first study of explore-and-exploit schemes that account for agents' incentives. In their model it is implicitly assumed that agents neither see nor communicate with each other. Their main result is a characterization of an optimal explore-and-exploit scheme. In this work we extend Kremer et al. (JPE, 2014) by adding a layer of a social network according to which agents can observe each other. It turns out that when observability is factored in, the scheme proposed by Kremer et al. (JPE, 2014) is no longer incentive compatible. In our main result we provide a tight bound on how many other agents each agent can observe while an incentive-compatible algorithm with an asymptotically optimal outcome still exists. More technically, for a setting with N agents where the number of nodes with degree greater than N^alpha is bounded by N^beta and 2*alpha + beta < 1, we construct an incentive-compatible, asymptotically optimal mechanism. The bound 2*alpha + beta < 1 is shown to be tight.

Abstract:
In this paper we study the implementation challenge in an abstract interdependent values model with an arbitrary objective function. We design a mechanism that allows for approximately optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are private, then the same mechanism is strategy-proof. We apply our results to two specific models: pricing and facility location. The mechanism we design is optimal up to an additive factor of the order of magnitude of one over the square root of the number of agents and involves no utility transfers. Underlying our mechanism is a lottery between two auxiliary mechanisms: with high probability we actuate a mechanism that reduces players' influence on the choice of the social alternative, while choosing the optimal outcome with high probability. This is where the recent notion of differential privacy is employed. With the complementary probability we actuate a mechanism that is typically far from optimal but is incentive compatible. The joint mechanism inherits the desired properties from both.