A player may be tempted to act selfishly, increasing their own reward rather than playing the socially optimal strategy. However, if it is known that the other player is following a trigger strategy, then the player expects to receive reduced payoffs in the future if they deviate at this stage. An effective trigger strategy ensures that cooperating has more utility to the player than acting selfishly now and facing the other player's punishment in the future.
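This incentive comparison can be made concrete. The sketch below (Python) uses one standard trigger strategy, grim trigger, in an infinitely repeated prisoner's dilemma; the payoff letters follow the usual T > R > P convention, and the numeric values in the usage note are illustrative assumptions, not taken from this article.

```python
def cooperation_sustainable(T, R, P, delta):
    """Grim trigger in an infinitely repeated prisoner's dilemma.

    Cooperating forever is worth R / (1 - delta). Deviating once
    yields the temptation payoff T today, then the punishment payoff
    P in every later round, worth T + delta * P / (1 - delta).
    """
    cooperate_forever = R / (1 - delta)
    deviate_once = T + delta * P / (1 - delta)
    return cooperate_forever >= deviate_once

def critical_delta(T, R, P):
    """Smallest discount factor at which cooperation is sustainable:
    delta* = (T - R) / (T - P)."""
    return (T - R) / (T - P)
```

With illustrative payoffs T = 5, R = 3, P = 1, cooperation is sustainable exactly when the discount factor is at least 0.5: patient players cooperate, impatient ones defect.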
Because these equilibria differ markedly in terms of payoffs for Player 2, Player 1 can propose a strategy over multiple stages of the game that incorporates the possibility of punishment or reward for Player 2. For example, Player 1 might propose that they play (A, X) in the first round. If Player 2 complies in round one, Player 1 will reward them by playing the equilibrium (A, Z) in round two, yielding a total payoff over two rounds of (7, 9).
with their own personal interests, and do not care about the benefits or costs their actions impose on competitors. On the other hand, gas stations do make a profit even when another gas station is adjacent. One crucial reason is that their interaction is not one-off. This situation is modelled by a repeated game, in which the two gas stations play the pricing competition (the stage game) over an indefinite time horizon t = 0, 1, 2, ....
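The stage game just described is Bertrand-style price competition. A minimal sketch (Python; the cost value c = 4 and the demand size are illustrative assumptions) shows why every price above c invites a profitable undercut:

```python
def profit(p_own, p_rival, c, demand=100):
    """Stage-game profit under price competition with constant
    marginal cost c: the cheaper station captures all demand,
    equal prices split it, the pricier station sells nothing."""
    if p_own < p_rival:
        return (p_own - c) * demand
    if p_own == p_rival:
        return (p_own - c) * demand / 2
    return 0.0

c = 4  # illustrative marginal cost
shared = profit(10, 10, c)        # split the collusive profit
undercut = profit(9.99, 10, c)    # shade the price, take the market
assert undercut > shared          # undercutting nearly doubles profit
assert profit(c, c, c) == 0       # at p = c there is nothing to gain
```

In the one-shot game this undercutting logic drives both prices down to p = c; only the repeated interaction described above can hold prices higher.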
The unique stage-game Nash equilibrium must be played in the last round regardless of what happened in earlier rounds. Knowing this, players have no incentive to deviate from it in the second-to-last round, and the same logic applies backward to the first round of the game. This "unravelling" of a game from its endpoint can be observed in the
It is inefficient (for the gas stations) that both charge p = c. This is the rule rather than the exception: in a stage game, the Nash equilibrium is the only outcome that an agent can consistently secure in an interaction, and it is usually inefficient for the players. This is because the agents are concerned only
In this way, the threat of punishment in a future round incentivizes a collaborative, non-equilibrium strategy in the first round. Because the final round of any finitely repeated game, by its very nature, removes the threat of future punishment, the optimal strategy in the last round will always be
Repeated games allow for the study of the interaction between immediate gains and long-term incentives. A finitely repeated game is a game in which the same one-shot stage game is played repeatedly over a number of discrete time periods, or rounds. Each time period is indexed by 0 < t ≤ T, where T
games, it is found that the preferred strategy is not to play a Nash strategy of the stage game, but to cooperate and play a socially optimal strategy. An essential part of strategies in infinitely repeated games is punishing players who deviate from this cooperative strategy. The punishment may be
Infinite games are those in which the game is played an infinite number of times. A game with an infinite number of rounds is also equivalent (in terms of optimal strategies) to a game in which the players do not know for how many rounds the game will be played. Infinite games
price of gasoline). Assume that when both charge p = 10, their joint profit is maximized, resulting in a high profit for each. Despite this being the best outcome for them, each is motivated to deviate. By modestly lowering its price, either one can steal all of its competitor's
shows a two-stage repeated game with a unique Nash equilibrium. Because there is only one equilibrium here, there is no mechanism for either player to threaten punishment or promise reward in the game's second round. As such, the only strategy that can be supported as a subgame perfect Nash
While it is easier to treat a situation in which one player is informed and the other is not, and in which the information each player receives is independent, it is also possible to deal with zero-sum games with incomplete information on both sides and with signals that are not independent.
While a Nash equilibrium must be played in the last round, the presence of multiple equilibria introduces the possibility of reward and punishment strategies that can be used to support deviation from stage-game Nash equilibria in earlier rounds.
If Player 2 deviates to (A, Z) in round one instead of playing the agreed-upon (A, X), Player 1 can threaten to punish them by playing the (B, Y) equilibrium in round two. This latter situation yields payoff (5, 7), leaving both players worse off.
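Using the two-round totals stated in this example, a quick check (Python; the deterrence condition is generic, while the payoff pairs are the ones given above) confirms that Player 2 prefers compliance:

```python
# Two-round total payoffs from Example 1 as (player 1, player 2):
comply_total = (7, 9)    # play (A, X), then be rewarded with (A, Z)
deviate_total = (5, 7)   # deviate in round one, be punished with (B, Y)

def deviation_deterred(comply, deviate, player):
    """The proposed scheme deters a deviation by `player`
    (0-indexed) when complying pays at least as much as deviating."""
    return comply[player] >= deviate[player]

assert deviation_deterred(comply_total, deviate_total, player=1)  # 9 >= 7
```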
To interpret: this result means that the very presence of a known, finite time horizon sabotages cooperation in every single round of the game. When the stage game has a unique Nash equilibrium, cooperation in iterated games is possible only if the number of rounds is infinite or unknown.
From this it may be deduced how equilibrium payoffs in infinitely repeated games can be characterized. By alternating between two payoff profiles, say a and f, the players can achieve any average payoff profile that is a weighted average of a and f.
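A minimal sketch of the alternation argument (Python; scalar payoffs stand in for the payoff profiles a and f):

```python
def alternating_average(a, f, k_a, k_f):
    """Long-run average payoff from a cycle of k_a rounds at payoff a
    followed by k_f rounds at payoff f: a weighted average of a and f
    with weight k_a / (k_a + k_f) on a."""
    weight = k_a / (k_a + k_f)
    return weight * a + (1 - weight) * f

assert alternating_average(6, 2, 1, 1) == 4.0  # strict alternation
assert alternating_average(6, 2, 3, 1) == 5.0  # weight 3/4 on a
```

By varying the cycle lengths, any rational weight between 0 and 1 can be approached, which is why the achievable average payoffs fill out the weighted averages of a and f.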
Finite games are those in which both players know that the game is being played a specific (and finite) number of rounds, and that the game ends for certain after that many rounds have been played. In general, finite games can be solved by
An important feature of a repeated game is the way in which a player's preferences may be modelled. There are many ways in which a preference relation may be modelled in an infinitely repeated game, but two key ones are:
one of the game's equilibria. It is the payoff differential between equilibria in the game represented in
Example 1 that makes a punishment/reward strategy viable (for more on the influence of punishment and reward on game strategy, see '
Even if the game being played in each round is identical, repeating that game a finite or an infinite number of times can, in general, lead to very different outcomes (equilibria), as well as very different optimal strategies.
equilibrium is that of playing the game's unique Nash equilibrium strategy (D, N) every round. In this case, that means playing (D, N) each stage for two stages (n=2), but it would be true for any finite number of stages
Finitely repeated games with an unknown or indeterminate number of time periods, on the other hand, are treated as if they were infinitely repeated games. It is not possible to apply backward induction to these games.
Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation.
customers, nearly doubling their revenues. The price p = c, at which profit is zero, is the only price with no such profitable deviation. In other words, in the pricing competition game, the only
(or games that are repeated an unknown number of times) cannot be solved by backwards induction, as there is no "last round" from which to start the backwards induction.
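In a finitely repeated game with a unique stage equilibrium, by contrast, the induction does go through. The sketch below (Python; the prisoner's-dilemma payoff numbers are illustrative assumptions, not the article's tables) finds the unique stage-game equilibrium by enumeration, which backward induction then prescribes in every round:

```python
import itertools

# Illustrative prisoner's-dilemma stage game: payoffs[(a1, a2)] = (u1, u2).
actions = ["C", "D"]
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a 2x2 stage game: action
    pairs where neither player gains by a unilateral switch."""
    equilibria = []
    for a1, a2 in itertools.product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        if (all(u1 >= payoffs[(b, a2)][0] for b in actions)
                and all(u2 >= payoffs[(a1, b)][1] for b in actions)):
            equilibria.append((a1, a2))
    return equilibria

stage_eq = pure_nash(payoffs)     # unique: mutual defection
T = 5                             # any known, finite horizon
equilibrium_path = stage_eq * T   # (D, D) in every one of the T rounds
```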
There are many theorems that deal with how to achieve and maintain a socially optimal equilibrium in repeated games. These results are collectively called
Repeated games may be broadly divided into two classes, finite and infinite, depending on how long the game is being played for.
that are adjacent to one another. They compete by publicly posting prices, and have the same constant marginal cost c (the
Repeated games can include some incomplete information. Repeated games with incomplete information were pioneered by
For those repeated games with a fixed and known number of time periods, if the stage game has a unique
is the total number of periods. A player's final payoff is the sum of their payoffs from each round.
playing a strategy which leads to reduced payoff to both players for the rest of the game (called a
The most widely studied repeated games are games that are repeated an infinite number of times. In
strategy profile of playing the stage game equilibrium in each round. This can be deduced through
Complex repeated games can be solved using various techniques, most of which rely heavily on
If the stage game has more than one Nash equilibrium, the repeated game may have multiple
U_{i}=\liminf_{T\to \infty }{\frac {1}{T}}\sum _{t=0}^{T}u_{i}(x_{t})
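The limit-of-means criterion can be approximated over a long finite horizon; a minimal sketch in Python:

```python
def mean_payoff(stage_payoffs):
    """Average payoff (1/T) * sum over t of u_i(x_t) on a finite
    path; for long horizons this approximates the limit-of-means
    criterion above."""
    return sum(stage_payoffs) / len(stage_payoffs)

# Under the limit of means, finitely many bad rounds wash out:
path = [0] + [2] * 999   # one bad round, then a steady payoff of 2
```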
), it can be proved that every strategy that has a payoff greater than the
Proceedings of the International Congress of Mathematicians, Berkeley 1986
In general, repeated games are easily solved using strategies provided by
- If player i's valuation of the game diminishes with time depending on a
For sufficiently patient players (e.g. those with high enough values of
that consists of a number of repetitions of some base game (called a
Benoit, J.P. & Krishna, V. (1985). "Finitely Repeated Games".
. Providence: American Mathematical Society. pp. 1528–1577.
Example 1: Two-Stage Repeated Game with Multiple Nash Equilibria
Example 2: Two-Stage Repeated Game with Unique Nash Equilibrium
shows a two-stage repeated game with multiple pure strategy
U_{i}=\sum _{t\geq 0}\delta ^{t}u_{i}(x_{t})
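A direct transcription of the discounting criterion (Python; the truncation to a finite prefix of the outcome path is the only liberty taken):

```python
def discounted_utility(stage_payoffs, delta):
    """U_i = sum over t of delta**t * u_i(x_t), evaluated on a
    finite prefix of the outcome path, with 0 < delta < 1."""
    return sum(delta ** t * u for t, u in enumerate(stage_payoffs))

# A constant stage payoff u converges toward u / (1 - delta):
# u = 1 and delta = 0.5 give a discounted value approaching 2.
```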
For a real-life example of a repeated game, consider two
Game-Theoretic Solution to Poker Using Fictitious Play
Repeated games and reputations: long-run relationships
50:). The stage game is usually one of the well-studied
Examples of cooperation in finitely repeated games
Public Goods Game with Punishment and Reward
Osborne, Martin J.; Rubinstein, Ariel (1994).
on Repeated Games and the Chainstore Paradox
- If the game results in a path of outcomes
Repeated Games with Incomplete Information
A First Course on Zero-Sum Repeated Games
Mailath, G. & Samuelson, L. (2006).
"Repeated Games I: Perfect Monitoring"
, then the repeated game has a unique
. New York: Oxford University Press.
Finitely vs infinitely repeated games
Aumann, R. J.; Maschler, M. (1995).
has the basic-game utility function
Game Theory notes on Repeated games
- a very large set of strategies.
are names for non-repeated games.
subgame perfect Nash equilibrium
. Cambridge: MIT Press.
subgame perfect Nash equilibria
and the concepts expressed in
Game that repeats a base game
Levin, Jonathan (May 2006).
\delta <1
iterated prisoner's dilemma
(1987). "Repeated Games".
"Finitely Repeated Games"
Infinitely repeated games
. Cambridge: MIT Press.
. Cambridge: MIT Press.
Sorin, Sylvain (2002).
A Course in Game Theory
Finitely repeated games
\delta
Incomplete information
Solving repeated games
u_{i}
x_{t}
. Berlin: Springer.
94:backwards induction
44:extensive form game
493:Chainstore paradox
489:backward induction
56:Single stage game
807:www.stanford.edu
481:Nash equilibrium
465:Nash equilibrium
463:payoff can be a
120:trigger strategy
76:Nash equilibrium
60:single shot game
763:10.2307/1912660
720:Knight, Vince.
685:fictitious play
582:Nash equilibria
319:discount factor
128:"Folk Theorems"
External links
(4): 905–922.
681:linear algebra
346:, then player
136:Limit of means
52:2-person games
953:3-540-43028-8
934:0-262-15041-7
915:0-19-530079-3
896:0-262-06141-4
868:0-8218-0110-4
840:9780262011471
677:folk theorems
751:Econometrica
67:gas stations
887:Game Theory
315:Discounting
165:and player
References
48:stage game
71:wholesale
885:(1991).
701:Maschler
697:Aumann
461:minmax