Recursive neural network

Not to be confused with recurrent neural network.
A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly phrase and sentence continuous representations based on word embeddings. RvNNs were first introduced to learn distributed representations of structure, such as logical terms, and models and general frameworks have been developed in further works since the 1990s.
Architectures

Basic

In the simplest architecture, nodes are combined into parents using a weight matrix that is shared across the whole network and a non-linearity such as tanh. If c1 and c2 are n-dimensional vector representations of nodes, their parent will also be an n-dimensional vector, calculated as

    p_{1,2} = tanh(W [c1; c2])

where W is a learned n × 2n weight matrix.

This architecture, with a few improvements, has been used for successfully parsing natural scenes, for syntactic parsing of natural language sentences, and for recursive autoencoding and generative modeling of 3D shape structures in the form of cuboid abstractions.
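This shared-matrix composition can be sketched in plain Python. The helper names (`compose`, `encode`), the tree encoding as nested tuples, and the specific numbers are all illustrative choices for this sketch, not part of any standard implementation:

```python
import math

def compose(W, left, right):
    """Parent vector p = tanh(W [left; right]) with a shared n x 2n matrix W."""
    x = left + right  # concatenation [c1; c2]
    return [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W]

def encode(W, tree):
    """Recursively encode a binary tree in topological (bottom-up) order.

    Leaves are n-dimensional vectors (lists); internal nodes are
    (left_subtree, right_subtree) tuples.
    """
    if isinstance(tree, tuple):
        return compose(W, encode(W, tree[0]), encode(W, tree[1]))
    return tree

# n = 2, so the shared weight matrix W is n x 2n = 2 x 4
W = [[0.5, -0.3, 0.2, 0.1],
     [0.0, 0.4, -0.1, 0.3]]

# ((leaf1, leaf2), leaf3): two leaves combined first, then with a third
vec = encode(W, (([1.0, 0.0], [0.0, 1.0]), [0.5, 0.5]))
```

Because every component passes through tanh, the resulting parent vector has all entries strictly between -1 and 1, whatever the input vectors are.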
Recursive cascade correlation (RecCC)

RecCC is a constructive neural network approach for dealing with tree domains, with pioneering applications to chemistry and an extension to directed acyclic graphs.

Unsupervised RNN

A framework for unsupervised RNN was introduced in 2004.

Tensor

Recursive neural tensor networks use a single, tensor-based composition function for all nodes in the tree.
Training

Stochastic gradient descent

Typically, stochastic gradient descent (SGD) is used to train the network. The gradient is computed using backpropagation through structure (BPTS), a variant of backpropagation through time used for recurrent neural networks.
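As a hedged illustration of BPTS, the sketch below backpropagates through a tiny two-node tree: p1 = tanh(W[c1; c2]) and root p2 = tanh(W[p1; c3]), with the loss taken as the sum of the root's entries. Gradient contributions from both nodes accumulate into the shared matrix W, and the root's error also flows down through p1 — exactly the "through structure" part. All names and numbers are made up for this example; a finite-difference check is included as a sanity test:

```python
import math

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def compose(W, left, right):
    # shared composition p = tanh(W [left; right])
    return [math.tanh(a) for a in matvec(W, left + right)]

def loss(W, c1, c2, c3):
    p1 = compose(W, c1, c2)   # internal node over the two left leaves
    p2 = compose(W, p1, c3)   # root combines p1 with the third leaf
    return sum(p2), p1, p2

def bpts_grad(W, c1, c2, c3):
    """Gradient of the sum-of-root loss w.r.t. the shared W, via BPTS."""
    n = len(c1)
    _, p1, p2 = loss(W, c1, c2, c3)
    dW = [[0.0] * (2 * n) for _ in range(n)]
    # Root: dL/dp2 = 1 for the sum loss; through tanh gives (1 - p2^2).
    d_a2 = [1.0 - p2[i] ** 2 for i in range(n)]
    x2 = p1 + c3
    for i in range(n):
        for j in range(2 * n):
            dW[i][j] += d_a2[i] * x2[j]
    # Propagate to the left child p1 (the first n inputs of the root),
    # then through that node's tanh, and accumulate into the SAME W.
    d_p1 = [sum(W[i][j] * d_a2[i] for i in range(n)) for j in range(n)]
    d_a1 = [d_p1[i] * (1.0 - p1[i] ** 2) for i in range(n)]
    x1 = c1 + c2
    for i in range(n):
        for j in range(2 * n):
            dW[i][j] += d_a1[i] * x1[j]
    return dW

W = [[0.1, -0.2, 0.05, 0.3],
     [0.2, 0.1, -0.1, 0.0]]
c1, c2, c3 = [1.0, 0.5], [-0.3, 0.8], [0.2, -0.7]
dW = bpts_grad(W, c1, c2, c3)

# Sanity check: one entry of dW against a central finite difference.
eps = 1e-6
W[0][1] += eps
lp = loss(W, c1, c2, c3)[0]
W[0][1] -= 2 * eps
lm = loss(W, c1, c2, c3)[0]
W[0][1] += eps  # restore
numeric = (lp - lm) / (2 * eps)
```

The key point is that a single dW buffer receives contributions from every node in the tree, because the weights are shared; an SGD step would then update W against dW.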
Properties

The universal approximation capability of RvNNs over trees has been proved in the literature.
Related models

Recurrent neural networks

Recurrent neural networks are recursive artificial neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

Tree Echo State Networks

An efficient approach to implementing recursive neural networks is given by the Tree Echo State Network, within the reservoir computing paradigm.

Extension to graphs

Extensions to graphs include the graph neural network (GNN), Neural Network for Graphs (NN4G), and more recently, convolutional neural networks for graphs.
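To make the chain/tree contrast concrete: applying the same kind of shared composition left to right along a sequence reproduces a plain recurrent update, h_t = tanh(W [h_{t-1}; x_t]). The names and values below are illustrative only:

```python
import math

def step(W, h_prev, x_t):
    # h_t = tanh(W [h_prev; x_t]): one recurrent update, the same form
    # as composing two child vectors into a parent in a recursive net
    v = h_prev + x_t
    return [math.tanh(sum(w * u for w, u in zip(row, v))) for row in W]

W = [[0.3, 0.1, -0.2, 0.4],
     [-0.1, 0.2, 0.3, 0.0]]
h = [0.0, 0.0]  # initial hidden state
for x in ([1.0, 0.0], [0.0, 1.0], [0.5, -0.5]):
    h = step(W, h, x)  # the linear chain is a degenerate, right-deep "tree"
```

Here the "previous time step" plays the role of one child and the current input the other, which is why recurrent networks can be seen as the linear-chain special case of recursive ones.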
References

Bianucci, Anna Maria; Micheli, Alessio; Sperduti, Alessandro; Starita, Antonina (2000). "Application of Cascade Correlation Networks for Structures to Chemistry". Applied Intelligence. 12 (1–2): 117–147.
Frasconi, P.; Gori, M.; Sperduti, A. (1998). "A general framework for adaptive processing of data structures". IEEE Transactions on Neural Networks. 9 (5): 768–786.
Gallicchio, Claudio; Micheli, Alessio (2013). "Tree Echo State Networks". Neurocomputing. 101: 319–337.
Goller, C.; Küchler, A. (1996). "Learning task-dependent distributed representations by backpropagation through structure". Proceedings of International Conference on Neural Networks (ICNN'96). Vol. 1. pp. 347–352. ISBN 978-0-7803-3210-2.
Hammer, Barbara (2007). Learning with Recurrent Neural Networks. Springer. ISBN 9781846285677.
Hammer, Barbara; Micheli, Alessio; Sperduti, Alessandro (2005). "Universal Approximation Capability of Cascade Correlation for Structures". Neural Computation. 17 (5): 1109–1159.
Hammer, Barbara; Micheli, Alessio; Sperduti, Alessandro; Strickert, Marc (2004). "A general framework for unsupervised processing of structured data". Neurocomputing. 57: 3–35.
Hammer, Barbara; Micheli, Alessio; Sperduti, Alessandro; Strickert, Marc (2004). "Recursive self-organizing network models". Neural Networks. 17 (8–9): 1061–1085.
Li, Jun; Xu, Kai; Chaudhuri, Siddhartha; Yumer, Ersin; Zhang, Hao; Guibas, Leonidas (2017). "GRASS: Generative Recursive Autoencoders for Shape Structures". ACM Transactions on Graphics. 36 (4): 52.
Micheli, A. (2009). "Neural Network for Graphs: A Contextual Constructive Approach". IEEE Transactions on Neural Networks. 20 (3): 498–511.
Micheli, A.; Sona, D.; Sperduti, A. (2004). "Contextual processing of structured data by recursive cascade correlation". IEEE Transactions on Neural Networks. 15 (6): 1396–1410.
Scarselli, F.; Gori, M.; Tsoi, A. C.; Hagenbuchner, M.; Monfardini, G. (2009). "The Graph Neural Network Model". IEEE Transactions on Neural Networks. 20 (1): 61–80.
Socher, Richard; Lin, Cliff; Ng, Andrew Y.; Manning, Christopher D. (2011). "Parsing Natural Scenes and Natural Language with Recursive Neural Networks". The 28th International Conference on Machine Learning (ICML 2011).
Socher, Richard; Perelygin, Alex; Wu, Jean Y.; Chuang, Jason; Manning, Christopher D.; Ng, Andrew Y.; Potts, Christopher (2013). "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank". EMNLP 2013.
Sperduti, A.; Starita, A. (1997). "Supervised neural networks for the classification of structures". IEEE Transactions on Neural Networks. 8 (3): 714–735.
