
Feature scaling


Feature scaling is a method used to normalize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.

Motivation

Since the range of values of raw data varies widely, in some machine learning algorithms objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance.

Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it. It is also important to apply feature scaling if regularization is used as part of the loss function, so that coefficients are penalized appropriately. Empirically, feature scaling can improve the convergence speed of stochastic gradient descent, and in support vector machines it can reduce the time needed to find support vectors. Feature scaling is also often used in applications involving distances and similarities between data points, such as clustering and similarity search; the k-means clustering algorithm, for example, is sensitive to feature scales.
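To make the distance argument concrete, here is a minimal NumPy sketch; the two features, their values, and their ranges are invented for illustration:

import numpy as np

# Feature 0 is an annual income (broad range); feature 1 is an age (narrow range).
a = np.array([50_000.0, 30.0])
b = np.array([60_000.0, 60.0])

# Unscaled, the income difference dominates the Euclidean distance;
# the 30-year age gap is practically invisible.
print(np.linalg.norm(a - b))  # ~10000.0

# After min-max scaling (assuming an income range of 100,000 and an age
# range of 80), both features contribute comparably to the distance.
a_scaled = np.array([50_000.0 / 100_000.0, 30.0 / 80.0])
b_scaled = np.array([60_000.0 / 100_000.0, 60.0 / 80.0])
print(np.linalg.norm(a_scaled - b_scaled))  # ~0.39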
Methods

Rescaling (min-max normalization)

Also known as min-max scaling or min-max normalization, rescaling is the simplest method: it consists of rescaling the range of features so that all values lie in [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula for rescaling to [0, 1] is given as:
x' = \frac{x - \min(x)}{\max(x) - \min(x)}
where x is an original value and x' is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds]. To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights).
To rescale the range between an arbitrary set of values [a, b], the formula becomes:

x' = a + \frac{(x - \min(x))(b - a)}{\max(x) - \min(x)}

where a and b are the min-max values of the target range.
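As an illustration, here is a small NumPy sketch of both forms of rescaling; the function name rescale is ours, not from any particular library:

import numpy as np

def rescale(x, a=0.0, b=1.0):
    """Min-max rescale the values in x to the target range [a, b]."""
    x = np.asarray(x, dtype=float)
    return a + (x - x.min()) * (b - a) / (x.max() - x.min())

weights = np.array([160.0, 170.0, 180.0, 200.0])  # students' weights in pounds
print(rescale(weights))                 # [0.   0.25 0.5  1.  ]
print(rescale(weights, a=-1.0, b=1.0))  # [-1.  -0.5  0.   1. ]

Note that the formula is undefined for a constant feature (max(x) = min(x)), so practical implementations have to special-case that situation.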
Mean normalization

x' = \frac{x - \bar{x}}{\max(x) - \min(x)}

where \bar{x} = \text{average}(x) is the mean of that feature vector. There is another form of mean normalization that divides by the standard deviation, which is also called standardization.
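A minimal sketch of mean normalization in the same style (mean_normalize is our name for it):

import numpy as np

def mean_normalize(x):
    """Center on the mean, scale by the range; the result has zero mean."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.max() - x.min())

print(mean_normalize([160.0, 170.0, 180.0, 200.0]))
# [-0.4375 -0.1875  0.0625  0.5625]  (sums to zero)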
Standardization (Z-score normalization)

In machine learning, we can handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean (by subtracting the mean in the numerator) and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and artificial neural networks):

x' = \frac{x - \bar{x}}{\sigma}

where x is the original feature vector, \bar{x} = \text{average}(x) is its mean, and \sigma is its standard deviation. The general method of calculation is to determine the distribution mean and standard deviation for each feature, then subtract the mean from each feature, and finally divide the values (mean is already subtracted) of each feature by its standard deviation.

[Figure: the effect of z-score normalization on k-means clustering. Four Gaussian clusters of points are generated, then squashed along the y-axis, and a k = 4 clustering is computed. Without normalization, the clusters are arranged along the x-axis, since that is the axis with most of the variation; after normalization, the clusters are recovered as expected.]
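A minimal per-feature (column-wise) sketch; this matches what, for example, scikit-learn's StandardScaler computes with default settings, though the code below is only an illustration, not that library's implementation:

import numpy as np

def standardize(X):
    """Z-score each column of X: zero mean and unit variance per feature."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Two features on very different scales.
X = np.array([[160.0, 1.2],
              [170.0, 3.4],
              [180.0, 2.6],
              [200.0, 0.8]])
Z = standardize(X)
print(Z.mean(axis=0))  # ~[0. 0.]
print(Z.std(axis=0))   # [1. 1.]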
Robust scaling

Robust scaling, also known as standardization using the median and interquartile range (IQR), is designed to be robust to outliers. It scales features using the median and IQR as reference points instead of the mean and standard deviation:

x' = \frac{x - Q_2(x)}{Q_3(x) - Q_1(x)}

where Q_1(x), Q_2(x), Q_3(x) are the three quartiles (25th, 50th, and 75th percentile) of the feature.
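A sketch of robust scaling on a feature with one extreme outlier (robust_scale is our name for the helper):

import numpy as np

def robust_scale(x):
    """Scale by the median and IQR so outliers have little influence."""
    x = np.asarray(x, dtype=float)
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (x - q2) / (q3 - q1)

# The outlier 999 would squash everything else under min-max scaling,
# but it barely affects the median and the IQR.
print(robust_scale([1.0, 2.0, 3.0, 4.0, 999.0]))
# [-1.  -0.5  0.   0.5 498. ]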
Unit vector normalization
Unit vector normalization regards each individual data point as a vector and divides each one by its vector norm to obtain x' = x / \|x\|. Any vector norm can be used, but the most common ones are the L1 norm and the L2 norm. For example, if x = (v_1, v_2, v_3), then its Lp-normalized version is:

\left( \frac{v_1}{(|v_1|^p + |v_2|^p + |v_3|^p)^{1/p}}, \frac{v_2}{(|v_1|^p + |v_2|^p + |v_3|^p)^{1/p}}, \frac{v_3}{(|v_1|^p + |v_2|^p + |v_3|^p)^{1/p}} \right)
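A sketch of Lp normalization applied to each data point (row); lp_normalize is our name for the helper:

import numpy as np

def lp_normalize(X, p=2):
    """Scale each row of X to unit Lp norm."""
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, ord=p, axis=1, keepdims=True)
    return X / norms

X = np.array([[3.0, 4.0],
              [1.0, 1.0]])
print(lp_normalize(X))       # L2: [[0.6 0.8], [0.7071 0.7071]]
print(lp_normalize(X, p=1))  # L1: [[0.4286 0.5714], [0.5 0.5]]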
See also

Normalization (machine learning)
Normalization (statistics)
Standard score
fMLLR, Feature space Maximum Likelihood Linear Regression

References

Grus, Joel (2015). Data Science from Scratch. Sebastopol, CA: O'Reilly. pp. 99, 100. ISBN 978-1-491-90142-7.
Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer. ISBN 978-0-387-84884-6.
Ioffe, Sergey; Szegedy, Christian (2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". arXiv:1502.03167.
Juszczak, P.; Tax, D. M. J.; Duin, R. P. W. (2002). "Feature scaling in support vector data descriptions". Proc. 8th Annu. Conf. Adv. School Comput. Imaging: 25–30. CiteSeerX 10.1.1.100.2524.
"Min Max normalization". ml-concepts.com. Archived from the original on 2023-04-05. Retrieved 2022-12-14.

Further reading

Han, Jiawei; Kamber, Micheline; Pei, Jian (2011). "Data Transformation and Data Discretization". Data Mining: Concepts and Techniques. Elsevier. pp. 111–118. ISBN 9780123814807.

External links

Lecture by Andrew Ng on feature scaling
