Generalized Hebbian algorithm

The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning with applications primarily in principal components analysis. First defined in 1989, it is similar to Oja's rule in its formulation and stability, except that it can be applied to networks with multiple outputs. The name originates from the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons.

Theory

The GHA combines Oja's rule with the Gram-Schmidt process to produce a learning rule of the form

    \Delta w_{ij} = \eta \left( y_{i} x_{j} - y_{i} \sum_{k=1}^{i} w_{kj} y_{k} \right),

where w_{ij} defines the synaptic weight or connection strength between the j-th input and the i-th output neuron, x and y are the input and output vectors, respectively, and \eta is the learning rate parameter.

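As an illustration, the rule can be implemented directly as written. The following is a minimal sketch in Python with NumPy; the function name gha_update, the learning rate, and the random training data are illustrative assumptions, not part of the original formulation.

    import numpy as np

    def gha_update(w, x, eta):
        # One generalized Hebbian (Sanger's rule) step for a single input sample.
        # w: weight matrix of shape (m, n), m output neurons by n inputs
        # x: input vector of shape (n,)
        # eta: learning rate
        y = w @ x  # linear neuron outputs, y_i = sum_j w_ij x_j
        dw = np.zeros_like(w)
        for i in range(w.shape[0]):
            for j in range(w.shape[1]):
                # delta w_ij = eta * (y_i x_j - y_i * sum_{k <= i} w_kj y_k)
                dw[i, j] = eta * (y[i] * x[j] - y[i] * np.dot(w[:i + 1, j], y[:i + 1]))
        return w + dw

    # Illustrative usage: 2 output neurons learning from 4-dimensional random inputs.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(2, 4))
    for _ in range(1000):
        w = gha_update(w, rng.normal(size=4), eta=0.01)

The explicit double loop mirrors the subscripts in the formula; in practice the matrix form given in the derivation below is used instead.
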
Derivation

In matrix form, Oja's rule can be written

    \frac{\mathrm{d}w(t)}{\mathrm{d}t} = w(t)\,Q - \mathrm{diag}\!\left[ w(t)\,Q\,w(t)^{\mathrm{T}} \right] w(t),

and the Gram-Schmidt algorithm is

    \Delta w(t) = -\,\mathrm{lower}\!\left[ w(t)\,w(t)^{\mathrm{T}} \right] w(t),

where w(t) is any matrix, in this case representing synaptic weights, Q = \eta\,\mathbf{x}\mathbf{x}^{\mathrm{T}} is the autocorrelation matrix, simply the outer product of inputs, \mathrm{diag} is the function that diagonalizes a matrix (that is, retains only its diagonal elements), and \mathrm{lower} is the function that sets all matrix elements on or above the diagonal equal to 0. We can combine these equations to get our original rule in matrix form,

    \Delta w(t) = \eta(t)\left( \mathbf{y}(t)\,\mathbf{x}(t)^{\mathrm{T}} - \mathrm{LT}\!\left[ \mathbf{y}(t)\,\mathbf{y}(t)^{\mathrm{T}} \right] w(t) \right),

where the function \mathrm{LT} sets all matrix elements above the diagonal equal to 0, and note that the output \mathbf{y}(t) = w(t)\,\mathbf{x}(t) is a linear neuron.

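The matrix form translates almost literally into code: np.tril keeps the diagonal and everything below it, which plays the role of the LT operator above. A sketch under the same illustrative assumptions as before:

    import numpy as np

    def gha_update_matrix(w, x, eta):
        # Matrix form of the rule: delta w = eta * (y x^T - LT[y y^T] w),
        # where LT zeroes every element above the diagonal (np.tril).
        y = w @ x  # y(t) = w(t) x(t), the linear neuron output
        return w + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ w)

Expanding the (i, j) entry of np.tril(np.outer(y, y)) @ w recovers y_i * sum_{k<=i} w_kj y_k, so this update is identical to the componentwise rule in the Theory section.
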
Stability and PCA

The importance of the GHA comes from the fact that learning is a single-layer process; that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer, thus avoiding the multi-layer dependence associated with the backpropagation algorithm. It also has a simple and predictable trade-off between learning speed and accuracy of convergence, as set by the learning rate parameter \eta.

Applications

The GHA is used in applications where a self-organizing map is necessary, or where a feature or principal components analysis can be used. Examples of such cases include artificial intelligence and speech and image processing.

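As a sketch of the PCA use case, the snippet below trains the matrix-form update on synthetic zero-mean data and checks that the learned weight rows align with the leading eigenvectors of the sample covariance. The data distribution, learning rate, and number of passes are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Zero-mean data with a clear variance hierarchy along the coordinate axes.
    n_samples, n_inputs, n_components = 5000, 5, 2
    scales = np.array([3.0, 2.0, 1.0, 0.5, 0.1])
    data = rng.normal(size=(n_samples, n_inputs)) * scales

    # Train the generalized Hebbian rule in matrix form.
    w = rng.normal(scale=0.1, size=(n_components, n_inputs))
    eta = 0.001
    for _ in range(20):                      # passes over the data
        for x in data:
            y = w @ x
            w += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ w)

    # Compare each learned row with the corresponding principal direction
    # obtained from an eigendecomposition of the sample covariance matrix.
    cov = np.cov(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]].T
    for i in range(n_components):
        cos = abs(w[i] @ top[i]) / (np.linalg.norm(w[i]) * np.linalg.norm(top[i]))
        print(f"output neuron {i}: |cos angle to principal direction {i}| = {cos:.3f}")
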
See also

Hebbian learning
Factor analysis
Contrastive Hebbian learning
Oja's rule

References

Sanger, Terence D. (1989). "Optimal unsupervised learning in a single-layer linear feedforward neural network". Neural Networks. 2 (6): 459–473. CiteSeerX 10.1.1.128.6893. doi:10.1016/0893-6080(89)90044-0.
Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley & Sons. ISBN 9781135631918.
Hertz, John; Krogh, Anders; Palmer, Richard G. (1991). Introduction to the Theory of Neural Computation. Redwood City, CA: Addison-Wesley Publishing Company. ISBN 978-0201515602.
Gorrell, Genevieve (2006). "Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing". EACL. CiteSeerX 10.1.1.102.2084.
Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall. ISBN 978-0-13-273350-2.
Oja, Erkki (November 1982). "Simplified neuron model as a principal component analyzer". Journal of Mathematical Biology. 15 (3): 267–273. doi:10.1007/BF00275687. PMID 7153672. S2CID 16577977.
