Multiclass classification

Problem in machine learning and statistical classification

In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). For example, deciding on whether an image is showing a banana, an orange, or an apple is a multiclass classification problem, with three possible classes (banana, orange, apple), while deciding on whether an image contains an apple or not is a binary classification problem (with the two possible classes being: apple, no apple).

While many classification algorithms (notably multinomial logistic regression) naturally permit the use of more than two classes, some are by nature binary algorithms; these can, however, be turned into multinomial classifiers by a variety of strategies.

Multiclass classification should not be confused with multi-label classification, where multiple labels are to be predicted for each instance (e.g., predicting that an image contains both an apple and an orange, in the previous example).
General strategies

The existing multi-class classification techniques can be categorised into (i) transformation to binary, (ii) extension from binary, and (iii) hierarchical classification.

Transformation to binary

This section discusses strategies for reducing the problem of multiclass classification to multiple binary classification problems. It can be categorized into one-vs.-rest and one-vs.-one. Techniques developed by reducing the multi-class problem into multiple binary problems can also be called problem transformation techniques.
One-vs.-rest

The one-vs.-rest (OvR or one-vs.-all, OvA or one-against-all, OAA) strategy involves training a single classifier per class, with the samples of that class as positive samples and all other samples as negatives. This strategy requires the base classifiers to produce a real-valued confidence score for their decisions (see also scoring rule), rather than just a class label; discrete class labels alone can lead to ambiguities, where multiple classes are predicted for a single sample.

In pseudocode, the training algorithm for an OvR learner constructed from a binary classification learner L is as follows:

Inputs:
- L, a learner (training algorithm for binary classifiers)
- samples X
- labels y where y_i ∈ {1, …, K} is the label for the sample X_i

Output:
- a list of classifiers f_k for k ∈ {1, …, K}

Procedure:
- For each k in {1, …, K}:
  - Construct a new label vector z where z_i = y_i if y_i = k and z_i = 0 otherwise
  - Apply L to X, z to obtain f_k

Making decisions means applying all classifiers to an unseen sample x and predicting the label k for which the corresponding classifier reports the highest confidence score:

    ŷ = argmax_{k ∈ {1, …, K}} f_k(x)

Although this strategy is popular, it is a heuristic that suffers from several problems. First, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions, because typically the set of negatives they see is much larger than the set of positives.
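The following is a minimal Python sketch of the OvR training loop and decision rule above, assuming scikit-learn is available; logistic regression stands in for the generic binary learner L, and its decision_function output plays the role of the confidence score f_k:

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ovr(X, y, num_classes):
    # One binary classifier per class k: samples of class k as
    # positives, every other sample as a negative.
    classifiers = []
    for k in range(num_classes):
        z = (y == k).astype(int)               # relabelled target vector
        classifiers.append(LogisticRegression().fit(X, z))
    return classifiers

def predict_ovr(classifiers, X):
    # y_hat = argmax_k f_k(x), using each model's real-valued
    # decision_function output as the confidence score f_k.
    scores = np.column_stack([f.decision_function(X) for f in classifiers])
    return np.argmax(scores, axis=1)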
One-vs.-one

In the one-vs.-one (OvO) reduction, one trains K(K − 1)/2 binary classifiers for a K-way multiclass problem; each receives the samples of a pair of classes from the original training set, and must learn to distinguish these two classes. At prediction time, a voting scheme is applied: all K(K − 1)/2 classifiers are applied to an unseen sample, and the class that gets the highest number of "+1" predictions is predicted by the combined classifier. Like OvR, OvO suffers from ambiguities, in that some regions of its input space may receive the same number of votes.
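A minimal sketch of the OvO reduction under the same assumptions (scikit-learn available; logistic regression as the illustrative base learner); the tie-breaking rule shown is one arbitrary choice:

from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ovo(X, y, num_classes):
    # One classifier per unordered pair (i, j), trained only on the
    # samples belonging to classes i and j.
    models = {}
    for i, j in combinations(range(num_classes), 2):
        mask = (y == i) | (y == j)
        models[(i, j)] = LogisticRegression().fit(X[mask], (y[mask] == j).astype(int))
    return models

def predict_ovo(models, X, num_classes):
    # Each pairwise classifier casts one vote per sample; the class
    # with the most votes wins (ties fall to the lowest class index).
    votes = np.zeros((len(X), num_classes), dtype=int)
    for (i, j), model in models.items():
        pred = model.predict(X)               # 0 votes for i, 1 votes for j
        votes[np.arange(len(X)), np.where(pred == 1, j, i)] += 1
    return votes.argmax(axis=1)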
Extension from binary

This section discusses strategies for extending existing binary classifiers to solve multi-class classification problems. Several algorithms have been developed based on neural networks, decision trees, k-nearest neighbors, naive Bayes, support vector machines and extreme learning machines to address multi-class classification problems. These types of techniques can also be called algorithm adaptation techniques.
Neural networks

Multiclass perceptrons provide a natural extension to the multi-class problem. Instead of having a single neuron in the output layer with a binary output, one can have N binary neurons, leading to multi-class classification. In practice, the last layer of a neural network is usually a softmax function layer, which is the algebraic simplification of N logistic classifiers, normalized per class by the sum of the N − 1 other logistic classifiers. Neural network-based classification has produced significant improvements on multi-class problems.
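As a small illustration, the softmax layer itself can be sketched in a few lines of NumPy (the example logits are made up):

import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; the result is a
    # probability distribution over the N classes.
    shifted = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return shifted / shifted.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])   # raw class scores from the last layer
print(softmax(logits))                # ~[0.66, 0.24, 0.10]
print(softmax(logits).argmax())       # predicted class: 0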
Extreme learning machines

Extreme learning machines (ELM) are a special case of single-hidden-layer feed-forward neural networks (SLFNs), wherein the input weights and the hidden node biases can be chosen at random. Many variants and developments have been made to the ELM for multiclass classification.
k-nearest neighbours

k-nearest neighbours (kNN) is considered among the oldest non-parametric classification algorithms. To classify an unknown example, the distance from that example to every training example is measured. The k smallest distances are identified, and the class most represented among these k nearest neighbours is taken as the output class label.
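A minimal NumPy sketch of this procedure (Euclidean distance is assumed; any metric could be substituted):

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Distance from the unknown example to every training example.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k smallest distances, then a majority vote.
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[counts.argmax()]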
Naive Bayes

Naive Bayes is a successful classifier based upon the principle of maximum a posteriori (MAP) estimation. This approach is naturally extensible to the case of having more than two classes, and was shown to perform well in spite of its underlying simplifying assumption of conditional independence.

Decision trees

Decision tree learning is a powerful classification technique. The tree tries to infer a split of the training data based on the values of the available features to produce a good generalization. The algorithm can naturally handle binary or multiclass classification problems. The leaf nodes can refer to any of the K classes concerned.
Support vector machines

Support vector machines are based upon the idea of maximizing the margin, i.e. maximizing the minimum distance from the separating hyperplane to the nearest example. The basic SVM supports only binary classification, but extensions have been proposed to handle the multiclass case as well. In these extensions, additional parameters and constraints are added to the optimization problem to handle the separation of the different classes.

Multi expression programming

Multi expression programming (MEP) is an evolutionary algorithm for generating computer programs (which can be used for classification tasks too). MEP has a unique feature: it encodes multiple programs into a single chromosome. Each of these programs can be used to generate the output for one class, making MEP naturally suitable for solving multi-class classification problems.
Hierarchical classification

Hierarchical classification tackles the multi-class classification problem by dividing the output space into a tree. Each parent node is divided into multiple child nodes, and the process is continued until each child node represents only one class. Several methods have been proposed based on hierarchical classification.
Learning paradigms

Based on learning paradigms, the existing multi-class classification techniques can be classified into batch learning and online learning. Batch learning algorithms require all the data samples to be available beforehand: the model is trained on the entire training data, and the relationship found is then used to predict test samples. Online learning algorithms, on the other hand, incrementally build their models in sequential iterations. In iteration t, an online algorithm receives a sample x_t and predicts its label ŷ_t using the current model; the algorithm then receives y_t, the true label of x_t, and updates its model based on the sample-label pair (x_t, y_t). Recently, a new learning paradigm called the progressive learning technique has been developed. The progressive learning technique is capable of not only learning from new samples but also of learning new classes of data, while retaining the knowledge learnt thus far.
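The online protocol can be sketched as follows; the multiclass perceptron update used here is only an illustrative stand-in for the model, since the point is the predict-then-update loop:

import numpy as np

def online_learn(stream, num_classes, num_features):
    # stream yields (x_t, y_t) pairs one at a time.
    W = np.zeros((num_classes, num_features))
    for x_t, y_t in stream:              # iteration t: receive sample x_t
        y_hat = int(np.argmax(W @ x_t))  # predict with the current model
        if y_hat != y_t:                 # then receive y_t and update on (x_t, y_t)
            W[y_t] += x_t                # pull the true class score up
            W[y_hat] -= x_t              # push the mistaken class score down
    return W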
Evaluation

The performance of a multi-class classification system is often assessed by comparing the predictions of the system against reference labels with an evaluation metric. Common evaluation metrics are accuracy and macro F1.
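A plain-NumPy sketch of both metrics; the macro-averaging convention shown here treats an undefined per-class F1 as 0:

import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def macro_f1(y_true, y_pred, num_classes):
    # Macro F1: the unweighted mean of the per-class F1 scores,
    # so every class counts equally regardless of its frequency.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for k in range(num_classes):
        tp = np.sum((y_pred == k) & (y_true == k))
        fp = np.sum((y_pred == k) & (y_true != k))
        fn = np.sum((y_pred != k) & (y_true == k))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))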
See also

- Binary classification
- One-class classification
- Multi-label classification
- Multiclass perceptron
- Multi-task learning

Notes

In multi-label classification, OvR is known as binary relevance, and the prediction of multiple classes is considered a feature, not a problem.

References

- Aly, Mohamed (2005). "Survey on multiclass classification methods". Technical Report, Caltech.
- Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer.
- Venkatesan, Rajasekar; Meng Joo, Er (2016). "A novel progressive learning technique for multi-class classification". Neurocomputing. 207: 310–321. arXiv:1609.00085. doi:10.1016/j.neucom.2016.05.006.
- Venkatesan, Rajasekar. "Progressive Learning Technique".
- Kabir, H M Dipu (2023). "Reduction of class activation uncertainty with background information". arXiv:2305.03238.
- Cubuk, Ekin (2019). "Autoaugment: Learning augmentation strategies from data". Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
- Opitz, Juri (2024). "A Closer Look at Classification Evaluation Metrics and a Critical Reflection of Common Evaluation Practice". Transactions of the Association for Computational Linguistics. 12: 820–836. arXiv:2404.16958. doi:10.1162/tacl_a_00675.
