
Computationally bounded adversary


In information theory, the computationally bounded adversary problem is a different way of looking at the problem of sending data over a noisy channel. In previous models, the best that could be done was to ensure correct decoding for up to d/2 errors, where d is the Hamming distance of the code. The problem with this approach is that it does not take into consideration the actual amount of computing power available to the adversary; it concerns itself only with how many bits of a given code word can change and still have the message decode properly. In the computationally bounded adversary model, the channel – the adversary – is restricted to performing only a reasonable amount of computation to decide which bits of the code word need to change. In other words, this model does not need to consider how many errors can possibly be handled, but only how many errors could possibly be introduced given a reasonable amount of computing power on the part of the adversary. Once the channel has been given this restriction, it becomes possible to construct codes that are both faster to encode and decode than previous methods and that can also handle a large number of errors.
Comparison to other models

Worst-case model

At first glance, the worst-case model seems intuitively ideal. The guarantee that an algorithm will succeed no matter what is, of course, highly alluring. However, it demands too much: a real-life adversary cannot spend an indefinite amount of time examining a message in order to find the one error pattern with which an algorithm would struggle.
As a comparison, consider the Quicksort algorithm. In the worst-case scenario, Quicksort makes O(n²) comparisons; however, such an occurrence is rare. Quicksort almost invariably makes O(n log n) comparisons instead, and even outperforms other algorithms which can guarantee O(n log n) behavior. Suppose an adversary wishes to force Quicksort to make O(n²) comparisons. He would have to search all n! permutations of the input string and test the algorithm on each until he found the one for which the algorithm runs significantly slower. Since this would take O(n!) time, it is clearly infeasible for the adversary to do so. Similarly, it is unreasonable to assume that an adversary attacking an encoding and decoding system could test every single error pattern in order to find the most effective one.
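The rarity of the worst case is easy to observe empirically. The following minimal sketch (our illustration, not part of the original article; the first-element pivot rule and the comparison counter are assumptions chosen for clarity) counts the comparisons made by a naive deterministic Quicksort on a random input and on an adversarially ordered one:

    import random
    import sys

    def quicksort_comparisons(items):
        # Deterministic Quicksort with the first element as pivot;
        # returns the total number of comparisons performed.
        count = 0
        def sort(xs):
            nonlocal count
            if len(xs) <= 1:
                return xs
            pivot, rest = xs[0], xs[1:]
            count += len(rest)  # one comparison per element against the pivot
            left = [x for x in rest if x < pivot]
            right = [x for x in rest if x >= pivot]
            return sort(left) + [pivot] + sort(right)
        sort(list(items))
        return count

    sys.setrecursionlimit(10000)  # the adversarial run recurses about n deep
    n = 500
    print("random input:     ", quicksort_comparisons(random.sample(range(n), n)))
    print("adversarial input:", quicksort_comparisons(list(range(n))))

On the random input the count stays near n log n; the already-sorted input drives this pivot rule to roughly n²/2 comparisons, which is exactly the rare worst case an adversary would have to hunt for.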
Stochastic noise model

The stochastic noise model can be described as a kind of "dumb" noise model. That is to say, it does not have the adaptability to deal with "intelligent" threats. Even if the attacker is bounded, they might still be able to overcome the stochastic model with a bit of cleverness. The stochastic model has no real way to fight against this sort of attack, and as such it is unsuited to dealing with the kind of "intelligent" threats one would prefer to have defenses against.

Therefore, a computationally bounded adversarial model has been proposed as a compromise between the two. Such a model forces one to consider that messages may be perverted in conscious, even malicious, ways, but without forcing the algorithm designer to worry about rare cases which will likely never occur.
Applications

Comparison to stochastic noise channel

Since any computationally bounded adversary could in O(n) time flip a coin for each bit, it is intuitively clear that any encoding and decoding system which works against this adversary must also work in the stochastic noise model. The converse is less simple; however, it can be shown that any system which works in the stochastic noise model can also efficiently encode and decode against a computationally bounded adversary, at an additional cost which is only polynomial in n. The following method to accomplish this was designed by Dick Lipton, and is taken from "Worst Case Complexity" (see References).

Let E() be an encoder for the stochastic noise model and D() be a simple decoder for the same, each of which runs in polynomial time. Furthermore, let both the sender and receiver share some random permutation function π and a random pattern R.

For encoding:
1. Let X = E(M).
2. Let Y = π(X) ⊕ R.
3. Transmit Y.

Then for decoding:
1. Receive Y′. Compute Z = π⁻¹(Y′ ⊕ R).
2. Calculate M = D(Z).
[Figure: An illustration of the method. The first row gives the initial encoded message; the second, after random permutation and random R; the third, after the adversary adds N; the fourth, after unpermuting; the fifth, the encoded message with the adversary's error removed.]

Similarly to the Quicksort comparison above, if the channel wants to do something smart, it must first test all the permutations. However, this is infeasible for a computationally bounded adversary, so the most it can do is make a random error pattern N. In that case,
    Z = π⁻¹(Y′ ⊕ R) = π⁻¹(Y ⊕ N ⊕ R), since Y′ = Y ⊕ N by definition,
      = π⁻¹(Y ⊕ R) ⊕ N′, where N′ = π⁻¹(N), since any permutation is linear with respect to XOR,
      = X ⊕ N′, as per the definition of Y above.
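The middle step relies on bit permutations being linear with respect to XOR; a quick, illustrative spot-check (not from the original article):

    import random

    n = 16
    pi = list(range(n))
    random.shuffle(pi)
    perm = lambda bits: [bits[pi[i]] for i in range(n)]  # reorder bits according to pi
    xor = lambda u, v: [x ^ y for x, y in zip(u, v)]

    a = [random.randint(0, 1) for _ in range(n)]
    b = [random.randint(0, 1) for _ in range(n)]
    assert perm(xor(a, b)) == xor(perm(a), perm(b))  # pi(a xor b) == pi(a) xor pi(b)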
Since π is random, N′ is just random noise, and we can use the simple decoder to decode the received message and get back M.
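A minimal end-to-end sketch of the method (our own illustration, not code from the article): a 3-fold repetition code with majority voting stands in for the stochastic-model pair E()/D(), and all parameter choices are arbitrary assumptions.

    import random

    REPEAT = 3  # repetition code as a stand-in stochastic-model code

    def E(message_bits):
        # Encoder for the stochastic noise model (illustrative stand-in).
        return [b for b in message_bits for _ in range(REPEAT)]

    def D(code_bits):
        # Matching decoder: majority vote over each block of REPEAT bits.
        return [int(sum(code_bits[i:i + REPEAT]) > REPEAT // 2)
                for i in range(0, len(code_bits), REPEAT)]

    def xor(u, v):
        return [x ^ y for x, y in zip(u, v)]

    # Shared secrets: random permutation pi and random pad R.
    M = [1, 0, 1, 1, 0, 0, 1, 0]
    n = len(M) * REPEAT
    pi = list(range(n))
    random.shuffle(pi)
    R = [random.randint(0, 1) for _ in range(n)]

    def permute(bits):      # computes pi(bits)
        out = [0] * n
        for i, p in enumerate(pi):
            out[p] = bits[i]
        return out

    def unpermute(bits):    # computes pi^-1(bits)
        return [bits[pi[i]] for i in range(n)]

    def encode(message):
        return xor(permute(E(message)), R)     # Y = pi(X) xor R

    def decode(received):
        Z = unpermute(xor(received, R))        # Z = pi^-1(Y' xor R)
        return D(Z)

    Y = encode(M)
    N = [0] * n                 # the bounded adversary's effectively random noise
    N[random.randrange(n)] = 1  # one flipped bit, within D's tolerance
    assert decode(xor(Y, N)) == M
    print("recovered:", decode(xor(Y, N)))

With a single flipped bit, the unpermuted error N′ still has weight one, so the repetition decoder always recovers M; the point of the permutation and pad is that a bounded adversary cannot aim its errors any better than this.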
Specific applications

By assuming a computationally bounded adversary, it is possible to design a locally decodable code which is both efficient and near-optimal, with a negligible error probability. These codes are used in complexity theory for things like self-correcting computations, probabilistically checkable proof systems, and worst-case to average-case hardness reductions in the construction of pseudo-random generators. They are useful in cryptography as a result of their connection with private information retrieval protocols. They also appear in a number of database applications, such as fault-tolerant data storage.
Furthermore, it is possible to construct codes which surpass known bounds for worst-case codes; specifically, unique decoding with a 1 − R error rate. This can be done by concatenating timestamped digital signatures onto messages. A computationally bounded channel cannot forge a signature; and while it may hold valid past signatures, the receiver can use list decoding and select a message only if its signature has the correct timestamp.
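A minimal sketch of that selection step (our illustration only: an HMAC stands in for a true digital signature, and the shared key and freshness window are assumptions):

    import hmac
    import hashlib
    import time

    KEY = b"demo-signing-key"  # stand-in for a real signing key

    def sign(message, timestamp):
        payload = timestamp.to_bytes(8, "big") + message
        return hmac.new(KEY, payload, hashlib.sha256).digest()

    def select(candidates, now, tolerance=5):
        # Among list-decoding candidates (message, timestamp, tag), accept the
        # one whose tag verifies and whose timestamp is current. A bounded
        # channel cannot forge tags, and replayed old tags fail the time check.
        for message, ts, tag in candidates:
            if abs(now - ts) <= tolerance and hmac.compare_digest(tag, sign(message, ts)):
                return message
        return None

    now = int(time.time())
    genuine = (b"attack at dawn", now, sign(b"attack at dawn", now))
    replayed = (b"old orders", now - 3600, sign(b"old orders", now - 3600))
    forged = (b"forged orders", now, b"\x00" * 32)
    print(select([forged, replayed, genuine], now))  # b'attack at dawn'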
See also

Concrete security

References

Lipton (6 May 2009). "Worst Case Complexity". Gödel's Lost Letter and P=NP. Retrieved 2013-04-01.
Ostrovsky, Pandey, Sahai. "Private Locally Decodable Codes" (PDF). Retrieved 2013-04-01.
Micali, Peikert, Sudan, A. Wilson. "Optimal Error Correction for Computationally Bounded Noise" (PDF). Retrieved 2013-04-01.
