Learning in General Games with Nature’s Moves

DOI: 10.1155/2014/453168


Abstract:

This paper investigates simultaneous learning about both nature and other players' actions in repeated games and identifies a set of sufficient conditions under which Harsanyi's doctrine holds. Each player has a utility function over infinite histories that is continuous in the sup-norm topology. Nature's draw after any history may depend on past actions. Provided that (1) every player maximizes her expected payoff against her own beliefs, (2) every player updates her beliefs in a Bayesian manner, (3) prior beliefs about both nature and the other players' strategies have a grain of truth, and (4) beliefs about nature are independent of the actions chosen during the game, we construct a Nash equilibrium that is realization-equivalent to the actual plays and in which Harsanyi's doctrine holds. These assumptions are shown to be tight.

1. Introduction

Consider a finite number of agents interacting simultaneously. Each agent may play infinitely many times, and her payoff depends on the joint choice of actions as well as on events beyond the agents' control (called choices of nature). To analyze such interactions, it is standard to assume that the players share a common prior about the probability distribution governing nature. This approach is known as Harsanyi's doctrine, introduced in Harsanyi [1]. We provide a learning foundation for this doctrine. We consider a class of games in which nature's choices may (or may not) depend on past actions by the players, and payoff functions are continuous in the sup-norm topology on the set of infinite histories. Provided that Bayesian players' beliefs contain a grain of truth, we show that the resulting outcomes converge, in the sup-norm topology, to a Nash equilibrium that we construct and in which Harsanyi's doctrine holds.

Kalai and Lehrer [2, 3] consider a similar type of learning model, without nature's choices and with a weaker notion of convergence that significantly restricts their class of games. More precisely, those references rely on a closeness structure that is not a topology, in sharp contrast with the general sup-norm topology. The notion in Kalai and Lehrer states that, for any two probability measures $\mu$ and $\tilde{\mu}$ and for some small real $\varepsilon > 0$, the measure $\mu$ is $\varepsilon$-close to $\tilde{\mu}$ if two conditions hold. There must exist a measurable set $Q$ that is assigned a measure of at least $1 - \varepsilon$ both by $\mu$ and by $\tilde{\mu}$, and for any measurable set $A \subseteq Q$ it must be true that $(1 - \varepsilon)\,\tilde{\mu}(A) \le \mu(A) \le (1 + \varepsilon)\,\tilde{\mu}(A)$. A given strategy profile $f$ with associated probability measure $\mu_f$ is then said to play $\varepsilon$-like another strategy profile $g$ with associated probability measure $\mu_g$ if $\mu_f$ is $\varepsilon$-close to $\mu_g$. The main problem with this concept is its lack of symmetry; that is, if $\mu$ is $\varepsilon$-close to $\tilde{\mu}$, it does not follow that $\tilde{\mu}$ is $\varepsilon$-close to $\mu$.
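To make this closeness notion concrete, the sketch below implements the $\varepsilon$-closeness test, as restated above from Kalai and Lehrer [2], on a finite outcome space. The function name eps_close and the numerical example are illustrative choices, not taken from the paper. On a finite space it is enough to check the inequality pointwise on a candidate set $Q$, since the bound for any $A \subseteq Q$ then follows by summation; a deliberately large $\varepsilon$ is used so that the asymmetry is easy to see.

```python
from fractions import Fraction
from itertools import chain, combinations

def subsets(outcomes):
    """All subsets of a finite outcome set (candidate sets Q)."""
    return chain.from_iterable(combinations(outcomes, r) for r in range(len(outcomes) + 1))

def eps_close(mu, nu, eps):
    """Return True if mu is eps-close to nu in the sense restated above:
    there exists a set Q with mu(Q) >= 1 - eps and nu(Q) >= 1 - eps such that
    (1 - eps) * nu(A) <= mu(A) <= (1 + eps) * nu(A) for every A contained in Q.
    mu and nu are dicts mapping outcomes to probabilities (exact Fractions here).
    On a finite space the set inequality follows from the pointwise one on Q."""
    outcomes = set(mu) | set(nu)
    for Q in subsets(outcomes):
        mass_mu = sum(mu.get(w, Fraction(0)) for w in Q)
        mass_nu = sum(nu.get(w, Fraction(0)) for w in Q)
        if mass_mu < 1 - eps or mass_nu < 1 - eps:
            continue
        if all((1 - eps) * nu.get(w, Fraction(0))
               <= mu.get(w, Fraction(0))
               <= (1 + eps) * nu.get(w, Fraction(0)) for w in Q):
            return True
    return False

# Illustrative (hypothetical) example: the relation is not symmetric.
mu = {"a": Fraction(1, 2), "b": Fraction(1, 2)}
nu = {"a": Fraction(9, 10), "b": Fraction(1, 10)}
eps = Fraction(1, 2)   # large eps chosen only to make the asymmetry visible

print(eps_close(mu, nu, eps))  # True  (take Q = {"a"})
print(eps_close(nu, mu, eps))  # False (nu("a") exceeds (1 + eps) * mu("a") on every admissible Q)
```

Because the multiplicative bounds run in one direction only, the relation fails to be symmetric, which is exactly the drawback noted above and one motivation for working with the sup-norm topology instead.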

References

[1]  J. C. Harsanyi, “Games with incomplete information played by “Bayesian” players. I. The basic model,” Management Science, vol. 14, pp. 159–182, 1967.
[2]  E. Kalai and E. Lehrer, “Rational learning leads to Nash equilibrium,” Econometrica, vol. 61, no. 5, pp. 1019–1045, 1993.
[3]  E. Kalai and E. Lehrer, “Subjective equilibrium in repeated games,” Econometrica, vol. 61, no. 5, pp. 1231–1240, 1993.
[4]  P. Battigalli, M. Gilli, and M. C. Molinari, “Learning and convergence to equilibrium in repeated strategic interactions: an introductory survey,” Ricerche Economiche, vol. 46, pp. 335–377, 1992.
[5]  D. Fudenberg and D. K. Levine, “Self-confirming equilibrium,” Econometrica, vol. 61, no. 3, pp. 523–545, 1993.
[6]  D. Blackwell and L. Dubins, “Merging of opinions with increasing information,” Annals of Mathematical Statistics, vol. 33, pp. 882–886, 1962.
[7]  A. Sandroni, “Does rational learning lead to Nash equilibrium in finitely repeated games?” Journal of Economic Theory, vol. 78, no. 1, pp. 195–218, 1998.
[8]  Y. Noguchi, Bayesian Learning with Bounded Rationality: Convergence to ε-Nash Equilibrium, Kanto Gakuin University, 2007.
[9]  D. P. Foster and H. P. Young, “On the impossibility of predicting the behavior of rational agents,” Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 22, pp. 12848–12853, 2001.
[10]  J. H. Nachbar, “Beliefs in repeated games,” Econometrica, vol. 73, no. 2, pp. 459–480, 2005.
[11]  D. Fudenberg and D. K. Levine, “Steady state learning and Nash equilibrium,” Econometrica, vol. 61, no. 3, pp. 547–573, 1993.
[12]  J. S. Jordan, “Bayesian learning in normal form games,” Games and Economic Behavior, vol. 3, no. 1, pp. 60–81, 1991.
[13]  R. J. Aumann, “Mixed and behavior strategies in infinite extensive games,” in Advances in Game Theory, M. Dresher, L. Shapley, and A. Tucker, Eds., vol. 52 of Annals of Mathematics Studies, pp. 627–650, Princeton University Press, Princeton, NJ, USA, 1964.
[14]  H. W. Kuhn, “Extensive games and the problem of information,” in Contributions to the Theory of Games, H. Kuhn and A. Tucker, Eds., vol. 28 of Annals of Mathematics Studies, pp. 193–216, Princeton University Press, Princeton, NJ, USA, 1953.
