RAMnets

RAMnets is one of the oldest practical neurally inspired classification algorithms. It is also known as the "n-tuple recognition method" or as a "weightless neural network".

Algorithm

Suppose that a number of sets of n distinct bit locations, let us say N of them, are selected at random. These are the n-tuples. The restriction of a pattern to an n-tuple can be regarded as an n-bit number which, together with the identity of the n-tuple, constitutes a 'feature' of the pattern. The standard n-tuple recognizer operates simply as follows:

A pattern is classified as belonging to the class for which it has the most features in common with at least one training pattern of that class.

This is the θ = 1 case of a more general rule whereby the class assigned to an unclassified pattern u is

    \underset{c}{\operatorname{argmax}} \sum_{i=1}^{N} \Theta\left( \sum_{v \in D_c} \delta(\alpha_i(u), \alpha_i(v)) \right)

where D_c is the set of training patterns in class c, Θ(x) = x for 0 ≤ x ≤ θ and Θ(x) = θ for x ≥ θ, δ_{i,j} is the Kronecker delta (δ_{i,j} = 1 if i = j and 0 otherwise), and α_i(u) is the i-th feature of the pattern u:

    \alpha_i(u) = \sum_{j=0}^{n-1} u_{\eta_i(j)} \, 2^{j}

Here u_k is the k-th bit of u and η_i(j) is the j-th bit location of the i-th n-tuple.
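The rule above can be made concrete in a few lines of code. The following is a minimal sketch in Python, not taken from the source: patterns are assumed to be plain sequences of 0/1 bits, and the helper names (make_n_tuples, feature, classify) are illustrative.

# Illustrative sketch (assumptions as stated above, not from the source).
import random

def make_n_tuples(pattern_bits, N, n, seed=0):
    # eta[i][j] is the j-th bit location of the i-th randomly chosen n-tuple
    rng = random.Random(seed)
    return [rng.sample(range(pattern_bits), n) for _ in range(N)]

def feature(u, eta_i):
    # alpha_i(u) = sum_j u[eta_i(j)] * 2**j, the n-bit number read off tuple i
    return sum(u[loc] << j for j, loc in enumerate(eta_i))

def classify(u, training_sets, eta, theta=1):
    # argmax_c sum_i Theta(sum_{v in D_c} delta(alpha_i(u), alpha_i(v))),
    # with the clipping Theta(x) = min(x, theta); theta = 1 is the standard rule
    best_class, best_score = None, -1
    for c, D_c in training_sets.items():
        score = sum(min(sum(feature(u, t) == feature(v, t) for v in D_c), theta)
                    for t in eta)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

Comparing every test pattern against every training pattern, as classify does here, is only meant to mirror the formula; the RAM-based implementation described next avoids it.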
With C classes to distinguish, the system can be implemented as a network of N·C nodes, each of which is a random access memory (RAM); hence the term RAMnet. The memory content m_{ciα} at address α of the i-th node allocated to class c is set to

    m_{ci\alpha} = \Theta\left( \sum_{v \in D_c} \delta(\alpha, \alpha_i(v)) \right)

In the usual θ = 1 case, the one-bit content of m_{ciα} is set if any pattern of D_c has feature α and unset otherwise. Recognition is accomplished by summing the contents of the nodes of each class at the addresses given by the features of the unclassified pattern. That is, pattern u is assigned to class

    \underset{c}{\operatorname{argmax}} \sum_{i=1}^{N} m_{c\,i\,\alpha_i(u)}
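Under the same assumptions, this memory-based form can be sketched as follows (θ = 1, one-bit contents; train_ramnet and recognize are illustrative names, not from the source).

def feature(u, eta_i):
    # alpha_i(u): the n bits of u at the tuple's locations, read as an address
    return sum(u[loc] << j for j, loc in enumerate(eta_i))

def train_ramnet(training_sets, eta, n):
    # m[c][i][alpha] = 1 if any training pattern of class c has feature alpha
    m = {c: [[0] * (1 << n) for _ in eta] for c in training_sets}
    for c, D_c in training_sets.items():
        for v in D_c:
            for i, eta_i in enumerate(eta):
                m[c][i][feature(v, eta_i)] = 1
    return m

def recognize(u, m, eta):
    # assign u to argmax_c sum_i m[c][i][alpha_i(u)]
    return max(m, key=lambda c: sum(table[feature(u, eta_i)]
                                    for table, eta_i in zip(m[c], eta)))

Training touches each pattern once, and classification reads a single memory location per node, which is what makes the method practical.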
RAM-discriminators and WiSARD

The RAMnets formed the basis of a commercial product known as WiSARD (Wilkie, Stonham and Aleksander Recognition Device), which was the first artificial neural network machine to be patented.

A RAM-discriminator consists of a set of X one-bit word RAMs with n inputs and a summing device (Σ). Any such RAM-discriminator can receive a binary pattern of X·n bits as input. The RAM input lines are connected to the input pattern by means of a biunivocal pseudo-random mapping. The summing device enables this network of RAMs to exhibit – just like other ANN models based on synaptic weights – generalization and noise tolerance.

In order to train the discriminator, one has to set all RAM memory locations to 0 and choose a training set formed by binary patterns of X·n bits. For each training pattern, a 1 is stored in the memory location of each RAM addressed by this input pattern. Once the training of patterns is completed, the RAM memory contents will have been set to a certain number of 0's and 1's.

The information stored by the RAMs during the training phase is used to deal with previously unseen patterns. When one of these is given as input, the RAM memory contents addressed by the input pattern are read and summed by Σ. The number r thus obtained, which is called the discriminator response, is equal to the number of RAMs that output 1. The response r reaches its maximum, X, if the input pattern belongs to the training set, and it equals 0 if no n-bit component of the input pattern appears in the training set (not a single RAM outputs 1). Intermediate values of r express a kind of "similarity measure" of the input pattern with respect to the patterns in the training set.
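A single discriminator can be sketched in the same illustrative style (not from the source); the biunivocal pseudo-random mapping is realized here as a fixed random permutation of the X·n bit positions, split into X groups of n.

import random

class RAMDiscriminator:
    # X one-bit-word RAMs with n inputs each, fed through a fixed pseudo-random
    # mapping of an X*n-bit input pattern (illustrative realization).
    def __init__(self, X, n, seed=0):
        self.X, self.n = X, n
        order = list(range(X * n))
        random.Random(seed).shuffle(order)
        self.groups = [order[k * n:(k + 1) * n] for k in range(X)]
        self.rams = [[0] * (1 << n) for _ in range(X)]  # all locations start at 0

    def _address(self, pattern, group):
        # the n-bit address this RAM sees for the given input pattern
        return sum(pattern[loc] << j for j, loc in enumerate(group))

    def train(self, pattern):
        # store a 1 in the location of each RAM addressed by this training pattern
        for ram, group in zip(self.rams, self.groups):
            ram[self._address(pattern, group)] = 1

    def response(self, pattern):
        # r = number of RAMs that output 1: X for a training pattern, 0 if no
        # n-bit component of the pattern was ever seen during training
        return sum(ram[self._address(pattern, group)]
                   for ram, group in zip(self.rams, self.groups))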
A system formed by various RAM-discriminators is called WiSARD. Each RAM-discriminator is trained on a particular class of patterns, and classification by the multi-discriminator system is performed in the following way. When a pattern is given as input, each RAM-discriminator gives a response to that input. The various responses are evaluated by an algorithm which compares them and computes the relative confidence c of the highest response (e.g., the difference d between the highest response and the second-highest response, divided by the highest response). A schematic representation of a RAM-discriminator and a 10 RAM-discriminator WiSARD is shown in Figure 1.
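A multi-discriminator classifier with the relative confidence described above can then be sketched as follows, building on the RAMDiscriminator class from the previous snippet (WiSARD is used here only as an illustrative class name, and at least two classes are assumed).

class WiSARD:
    def __init__(self, classes, X, n, seed=0):
        # one discriminator per class
        self.discriminators = {c: RAMDiscriminator(X, n, seed) for c in classes}

    def train(self, pattern, label):
        self.discriminators[label].train(pattern)

    def classify(self, pattern):
        responses = {c: d.response(pattern) for c, d in self.discriminators.items()}
        ranked = sorted(responses.items(), key=lambda kv: kv[1], reverse=True)
        (best_class, r1), (_, r2) = ranked[0], ranked[1]
        # relative confidence: the difference d divided by the highest response
        confidence = (r1 - r2) / r1 if r1 > 0 else 0.0
        return best_class, confidence

# Tiny illustrative run: two 4-bit classes, X = 2 RAMs of n = 2 inputs each.
w = WiSARD(classes=["A", "B"], X=2, n=2)
w.train([1, 1, 0, 0], "A")
w.train([0, 0, 1, 1], "B")
print(w.classify([1, 1, 0, 0]))  # expected ('A', 1.0): B's discriminator responds 0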
See also

Artificial Neural Network
Kronecker delta
Pattern Recognition
Unsupervised learning
Erlang distribution
Machine learning
Erlang (unit)
References

Michal Morciniec and Richard Rohwer (1995). "The n-tuple Classifier: Too Good to Ignore".
Y. Guan, J. G. Taylor, D. Gorse, .G. Clarkson (1993). "Generalization in probabilistic RAM nets". IEEE Transactions on Neural Networks. 4 (2): 360–363. doi:10.1109/72.207603. PMID 18267737.
Hastie, Trevor; Tibshirani, Robert (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer. pp. 485–586. doi:10.1007/978-0-387-84858-7_14. ISBN 978-0-387-84857-0.
"A brief introduction to Weightless Neural Systems" (2009). In Verleysen, Michel (ed.). Advances in Computational Intelligence and Learning: 17th European Symposium on Artificial Neural Networks (ESANN 2009), Bruges, Belgium, April 22-24, 2009: Proceedings. Evere: d-side. ISBN 978-2930307091. OCLC 553956424.

Further reading

Allinson, N. M.; Kolcz, A. R. (1997). "N-Tuple Neural Networks". Springer Science+Business Media New York: Springer, Boston, MA. ISBN 978-1-4615-6099-9.
Fukunaga, Keinosuke (1990). Introduction to Statistical Pattern Recognition (2nd ed.). Boston: Academic Press. ISBN 0-12-269851-7.
Hornegger, Joachim; Paulus, Dietrich W. R. (1999). Applied Pattern Recognition: A Practical Introduction to Image and Speech Processing in C++ (2nd ed.). San Francisco: Morgan Kaufmann Publishers. ISBN 3-528-15558-2.
Hinton, Geoffrey; Sejnowski, Terrence J., eds. (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 0-262-58168-X. (This book focuses on unsupervised learning in neural networks.)

External links

An introductory tutorial to classifiers (introducing the basic terms, with numeric example)

Text is available under the Creative Commons Attribution-ShareAlike License. Additional terms may apply.
