AI Safety Institute

An AI Safety Institute (AISI), in general, is a state-backed institute aiming to evaluate and ensure the safety of the most advanced artificial intelligence (AI) models, also called frontier AI models.

AI safety gained prominence in 2023, notably with public declarations about potential existential risks from AI. During the AI Safety Summit in November 2023, the United Kingdom (UK) and the United States (US) both created their own AISI. During the AI Seoul Summit in May 2024, international leaders agreed to form a network of AI Safety Institutes, comprising institutes from the UK, the US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada and the European Union.

Timeline

In 2023, Rishi Sunak, the Prime Minister of the United Kingdom, expressed his intention to "make the U.K. not just the intellectual home but the geographical home of global AI safety regulation" and unveiled plans for an AI Safety Summit. He emphasized the need for independent safety evaluations, stating that AI companies cannot "mark their own homework". During the summit in November 2023, the UK AISI was officially established as an evolution of the Frontier AI Taskforce, and the US AISI as part of NIST. Japan followed by launching an AI safety institute in February 2024.

Politico reported in April 2024 that many AI companies had not shared pre-deployment access to their most advanced AI models for evaluation. Meta's president of global affairs, Nick Clegg, said that many AI companies were waiting for the UK and US AI Safety Institutes to work out common evaluation rules and procedures. An agreement was indeed concluded between the UK and the US in April 2024 to collaborate on at least one joint safety test. Initially established in London, the UK AI Safety Institute announced in May 2024 that it would open an office in San Francisco, where many AI companies are located. This is part of a plan to "set new, international standards on AI safety", according to the UK's technology minister, Michelle Donelan. At the AI Seoul Summit in May 2024, the European Union and other countries agreed to create their own AI safety institutes, forming an international network.

United Kingdom

The United Kingdom founded a safety organisation called the Frontier AI Taskforce in April 2023, with an initial budget of £100 million. In November 2023, it evolved into the UK AISI and continued to be led by Ian Hogarth. The AISI is part of the United Kingdom's Department for Science, Innovation and Technology.

In May 2024, the institute open-sourced an AI safety tool called "Inspect", which evaluates AI model capabilities such as reasoning and their degree of autonomy.

The United Kingdom's AI strategy aims to balance safety and innovation. Unlike the European Union, which adopted the AI Act, the UK is reluctant to legislate early, considering that doing so may slow the sector's growth and that laws might be rendered obsolete by technological progress.

United States

The US AISI was founded in November 2023 as part of NIST. This happened the day after the signature of the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In February 2024, Joe Biden's former economic policy adviser Elizabeth Kelly was appointed to lead it.

In February 2024, the US government created the US AI Safety Institute Consortium (AISIC), regrouping more than 200 organizations such as Google, Anthropic and Microsoft.

In March 2024, a budget of $10 million was allocated. Observers noted that this investment is relatively small, especially considering the presence of many big AI companies in the US. NIST itself, which hosts the AISI, is also known for its chronic lack of funding. The Biden administration's request for additional funding was met with further budget cuts from congressional appropriators.

See also

Alignment Research Center
Existential risk from artificial general intelligence
Foundation model
Regulation of artificial intelligence

References

1. Browne, Ryan (2023-06-12). "British Prime Minister Rishi Sunak pitches UK as home of A.I. safety regulation as London bids to be next Silicon Valley". CNBC.
2. "Rishi Sunak: AI firms cannot 'mark their own homework'". BBC. 2023-11-01.
3. "Introducing the AI Safety Institute". GOV.UK. November 2023.
4. "Initial £100 million for expert taskforce to help UK build and adopt next generation of safe AI". GOV.UK.
5. Henshall, Will (2023-11-01). "Why Biden's AI Executive Order Only Goes So Far". TIME.
6. Henshall, Will (2024-02-07). "Biden Economic Adviser Elizabeth Kelly Picked to Lead AI Safety Testing Body". TIME.
7. Shepardson, David (2024-02-08). "US says leading AI companies join safety consortium to address risks". Reuters.
8. Zakrzewski, Cat (2024-03-08). "This agency is tasked with keeping AI safe. Its offices are crumbling". The Washington Post.
9. "Majority Leader Schumer Announces First-Of-Its-Kind Funding To Establish A U.S. Artificial Intelligence Safety Institute; Funding Is A Down Payment On Balancing Safety With AI Innovation And Will Aid Development Standards, Tools, And Tests To Ensure AI Systems Operate Safely". www.democrats.senate.gov. 2024-03-07.
10. Henshall, Will (2024-04-01). "U.S., U.K. Announce Partnership to Safety Test AI Models". TIME.
11. David, Emilia (2024-04-02). "US and UK will work together to test AI models for safety threats". The Verge.
12. "Rishi Sunak promised to make AI safe. Big Tech's not playing ball". Politico. 2024-04-26.
13. Wodecki, Ben (2024-05-15). "AI Safety Institute Launches AI Model Safety Testing Tool Platform". AI Business.
14. Coulter, Martin (2024-05-20). "Britain's AI safety institute to open US office". Reuters.
15. Browne, Ryan (2024-05-20). "Britain expands AI Safety Institute to San Francisco amid scrutiny over regulatory shortcomings". CNBC.
16. "Safety institutes to form 'international network' to boost AI research and tests". The Independent. 2024-05-21.
17. Desmarais, Anna (2024-05-22). "World leaders agree to launch network of AI safety institutes". Euronews.
18. "NIST would 'have to consider' workforce reductions if appropriations cut goes through". FedScoop. 2024-05-24.

External links

European AI Office
Japan AI Safety Institute
UK AI Safety Institute
US AI Safety Institute

