
Superintelligence: Paths, Dangers, Strategies


Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Author: Nick Bostrom
Language: English
Subject: Artificial intelligence
Genre: Philosophy, popular science
Publisher: Oxford University Press
Publication date: July 3, 2014 (UK); September 1, 2014 (US)
Publication place: United Kingdom
Media type: Print, e-book, audiobook
Pages: 352 pp.
ISBN: 978-0199678112
Preceded by: Global Catastrophic Risks

Synopsis

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would most likely follow surprisingly quickly. Such a superintelligence would be very difficult to control.

While the ultimate goals of superintelligences could vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical material optimized for computation) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn it off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it is necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The owl on the book cover alludes to an analogy which Bostrom calls the "Unfinished Fable of the Sparrows". A group of sparrows decide to find an owl chick and raise it as their servant. They eagerly imagine "how easy life would be" if they had an owl to help build their nests, to defend the sparrows and to free them for a life of leisure. The sparrows start the difficult search for an owl egg; only "Scronkfinkle", a "one-eyed sparrow with a fretful temperament", suggests thinking about the complicated question of how to tame the owl before bringing it "into our midst". The other sparrows demur; the search for an owl egg will already be hard enough on its own: "Why not get the owl first and work out the fine details later?" Bostrom states that "It is not known how the story ends", but he dedicates his book to Scronkfinkle.

Reception

The book ranked #17 on The New York Times list of best selling science books for August 2014. In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons. Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century. In a March 2015 interview by Baidu's CEO, Robin Li, Gates said that he would "highly recommend" Superintelligence. According to the New Yorker, philosophers Peter Singer and Derek Parfit have "received it as a work of importance". Sam Altman wrote in 2015 that the book is the best thing he has ever read on AI risks.

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values. A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but the review finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology. The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote." Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age". According to Tom Chivers of The Daily Telegraph, the book is difficult to read but nonetheless rewarding. A reviewer in the Journal of Experimental & Theoretical Artificial Intelligence broke with others by stating that the book's "writing style is clear" and praised it for avoiding "overly technical jargon". A reviewer in Philosophy judged Superintelligence to be "more realistic" than Ray Kurzweil's The Singularity Is Near.

See also

AI alignment
AI safety
Future of Humanity Institute
Human Compatible
Life 3.0
Philosophy of artificial intelligence
The Precipice: Existential Risk and the Future of Humanity

