
Bayesian optimization


Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional form. It is usually employed to optimize expensive-to-evaluate functions. With the rise of artificial intelligence innovation in the 21st century, Bayesian optimization has found prominent use in machine learning problems for optimizing hyperparameter values.

History

The term is generally attributed to Jonas Mockus, who coined it in a series of publications on global optimization in the 1970s and 1980s.

Strategy

[Figure: Bayesian optimization of a function (black) with Gaussian processes (purple). Three acquisition functions (blue) are shown at the bottom.]

Bayesian optimization is typically used on problems of the form $\max_{x \in A} f(x)$, where $A$ is a set of points in at most 20 dimensions ($\mathbb{R}^d$, $d \leq 20$) whose membership can easily be evaluated. Bayesian optimization is particularly advantageous for problems where $f(x)$ is difficult to evaluate due to its computational cost. The objective function $f$ is continuous and takes the form of some unknown structure, referred to as a "black box". Upon its evaluation, only $f(x)$ is observed; its derivatives are not evaluated.

Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.

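A minimal sketch of this loop, assuming a Gaussian-process surrogate from scikit-learn and an upper-confidence-bound acquisition maximized over a fixed grid; the objective f, the domain, and all constants below are illustrative stand-ins rather than part of any cited method:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                   # stand-in for an expensive black box
    return np.sin(3 * x) - x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(3, 1))          # initial evaluations, treated as data
y = f(X).ravel()
grid = np.linspace(0, 2, 500).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp.fit(X, y)                            # update the posterior over the objective
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sigma                  # acquisition: upper confidence bound
    x_next = grid[[np.argmax(ucb)]]         # next query point
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best observed:", X[np.argmax(y), 0], y.max())
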
1018: 1002:
Safe Exploration in Reinforcement Learning: Theory and Applications in Robotics
822:. ACM Transactions on Graphics, Volume 39, Issue 4, pp.88:1–88:12 (2020). DOI: 806:. ACM Transactions on Graphics, Volume 36, Issue 4, pp.48:1–48:11 (2017). DOI: 286: 1167: 1010: 650: 642:
Optimization Techniques IFIP Technical Conference Novosibirsk, July 1–7, 1974
1098: 920: 790: 163: 31: 1175: 1132:"Data-Efficient Design Exploration through Surrogate-Assisted Illumination" 845:. International Joint Conference on Artificial Intelligence: 944–949 (2007) 791:
A Bayesian Interactive Optimization Approach to Procedural Animation Design
412:
The approach has been applied to solve a wide range of problems, including
823: 807: 1341: 1158: 1131: 804:
Sequential Line Search for Efficient Visual Design Optimization by Crowds
421: 1661: 935:
Sequential model-based optimization for general algorithm configuration
893:
Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias W. Seeger:
717:
Frazier, Peter I. (2018-07-08). "A Tutorial on Bayesian Optimization".
462: 309: 1130:
Standard Bayesian optimization relies upon each $x \in A$ being easy to evaluate, and problems that deviate from this assumption are known as exotic Bayesian optimization problems. Optimization problems can become exotic if it is known that there is noise, if the evaluations are being done in parallel, if the quality of evaluations relies upon a tradeoff between difficulty and accuracy, if random environmental conditions are present, or if the evaluation involves derivatives.

Acquisition functions

Examples of acquisition functions include probability of improvement, expected improvement, Bayesian expected losses, upper confidence bounds (UCB) or lower confidence bounds, Thompson sampling, and hybrids of these. They all trade off exploration and exploitation so as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are expensive to evaluate.

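Under a Gaussian posterior, expected improvement has a closed form; a sketch, assuming mu and sigma are the surrogate's posterior mean and standard deviation at the candidate points and best is the incumbent (best observed) value when maximizing f:

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    # xi >= 0 tilts the exploration/exploitation trade-off toward exploration
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)
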
Solution methods

The maximum of the acquisition function is typically found by resorting to discretization or by means of an auxiliary optimizer. Acquisition functions are maximized using a numerical optimization technique, such as Newton's method or quasi-Newton methods like the Broyden–Fletcher–Goldfarb–Shanno algorithm.

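A sketch of the auxiliary-optimizer route, restarting SciPy's L-BFGS-B implementation (a bounded quasi-Newton method in the BFGS family) from several random points; the unit-box domain and restart count are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

def maximize_acquisition(acquisition, dim, n_restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    bounds = [(0.0, 1.0)] * dim  # assumed domain A = [0, 1]^d
    best_x, best_val = None, -np.inf
    for x0 in rng.uniform(0.0, 1.0, size=(n_restarts, dim)):
        # minimize the negated acquisition from each random start
        res = minimize(lambda x: -acquisition(x), x0,
                       method="L-BFGS-B", bounds=bounds)
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x
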
Applications

The approach has been applied to solve a wide range of problems, including learning to rank, computer graphics and visual design, robotics, sensor networks, automatic algorithm configuration, automatic machine learning toolboxes, reinforcement learning, planning, visual attention, architecture configuration in deep learning, static program analysis, experimental particle physics, quality-diversity optimization, chemistry, material design, and drug development.

Bayesian optimization has been applied in the field of facial recognition. The performance of the Histogram of Oriented Gradients (HOG) algorithm, a popular feature extraction method, heavily relies on its parameter settings. Optimizing these parameters can be challenging but crucial for achieving high accuracy. An approach to optimize the HOG algorithm parameters and image size for facial recognition using a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization technique has been proposed. This optimized approach has the potential to be adapted for other computer vision applications and contributes to the ongoing development of hand-crafted parameter-based feature extraction algorithms in computer vision.

See also

Multi-armed bandit
Kriging
Thompson sampling
Global optimization
Bayesian experimental design
Probabilistic numerics
Pareto optimum
Active learning (machine learning)
Multi-objective optimization

References

Močkus, Jonas (1975). "On bayesian methods for seeking the extremum". Optimization Techniques IFIP Technical Conference Novosibirsk, July 1–7, 1974. Lecture Notes in Computer Science. Vol. 27. pp. 400–404. doi:10.1007/3-540-07165-2_55. ISBN 978-3-540-07165-5.
Močkus, Jonas (1977). "On Bayesian Methods for Seeking the Extremum and their Application". IFIP Congress: 195–200.
Močkus, J. (1989). Bayesian Approach to Global Optimization. Dordrecht: Kluwer Academic. ISBN 0-7923-0115-3.
Garnett, Roman (2023). Bayesian Optimization. Cambridge University Press. ISBN 978-1-108-42578-0.
Hennig, P.; Osborne, M. A.; Kersting, H. P. (2022). Probabilistic Numerics. Cambridge University Press. pp. 243–278. ISBN 978-1107163447.
Frazier, Peter I. (2018-07-08). "A Tutorial on Bayesian Optimization". arXiv:1807.02811.
Snoek, Jasper; Larochelle, Hugo; Adams, Ryan P. (2012). "Practical Bayesian Optimization of Machine Learning Algorithms". Advances in Neural Information Processing Systems 25 (NIPS 2012): 2951–2959.
Klein, Aaron; et al. (2017). "Fast bayesian optimization of machine learning hyperparameters on large datasets". Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR: 528–536.
Brochu, Eric; Cora, Vlad M.; de Freitas, Nando (2010). A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. CoRR abs/1012.2599.
Hoffman, Matthew W.; Brochu, Eric; de Freitas, Nando (2011). Portfolio Allocation for Bayesian Optimization. Uncertainty in Artificial Intelligence: 327–336.
Bergstra, J. S.; Bardenet, R.; Bengio, Y.; Kégl, B. (2011). Algorithms for Hyper-Parameter Optimization. Advances in Neural Information Processing Systems: 2546–2554.
Wilson, Samuel (2019-11-22). ParBayesianOptimization R package. Retrieved 2019-12-12.
Bergstra, J.; Yamins, D.; Cox, D. D. (2013). Hyperopt: A Python Library for Optimizing the Hyperparameters of Machine Learning Algorithms. Proc. SciPy 2013.
Thornton, Chris; Hutter, Frank; Hoos, Holger H.; Leyton-Brown, Kevin (2013). Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. KDD 2013: 847–855.
Hutter, Frank; Hoos, Holger; Leyton-Brown, Kevin (2011). Sequential model-based optimization for general algorithm configuration. Learning and Intelligent Optimization.
Srinivas, Niranjan; Krause, Andreas; Kakade, Sham M.; Seeger, Matthias W. (2012). Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting. IEEE Transactions on Information Theory 58(5): 3250–3265.
Garnett, Roman; Osborne, Michael A.; Roberts, Stephen J. (2010). "Bayesian optimization for sensor set selection". In Abdelzaher, Tarek F.; Voigt, Thiemo; Wolisz, Adam (eds.). Proceedings of the 9th International Conference on Information Processing in Sensor Networks, IPSN 2010, April 12–16, 2010, Stockholm, Sweden. ACM. pp. 209–219. doi:10.1145/1791212.1791238.
Martinez-Cantin, Ruben; de Freitas, Nando; Brochu, Eric; Castellanos, Jose; Doucet, Arnaud (2009). A Bayesian exploration-exploitation approach for optimal online sensing and planning with a visually guided mobile robot. Autonomous Robots. Volume 27, Issue 2, pp. 93–103.
Kuindersma, Scott; Grupen, Roderic; Barto, Andrew (2013). Variable Risk Control via Stochastic Optimization. International Journal of Robotics Research, volume 32, number 7, pp. 806–825.
Calandra, Roberto; Seyfarth, André; Peters, Jan; Deisenroth, Marc P. (2016). Bayesian optimization for learning gaits under uncertainty. Ann. Math. Artif. Intell. Volume 76, Issue 1, pp. 5–23. doi:10.1007/s10472-015-9463-9.
Lizotte, Daniel J.; Wang, Tao; Bowling, Michael H.; Schuurmans, Dale (2007). Automatic Gait Optimization with Gaussian Process Regression. International Joint Conference on Artificial Intelligence: 944–949.
Berkenkamp, Felix (2019). Safe Exploration in Reinforcement Learning: Theory and Applications in Robotics (Doctoral thesis). ETH Zurich. doi:10.3929/ethz-b-000370833. hdl:20.500.11850/370833.
Brochu, Eric; de Freitas, Nando; Ghosh, Abhijeet (2007). Active Preference Learning with Discrete Choice Data. Advances in Neural Information Processing Systems: 409–416.
Brochu, Eric; Brochu, Tyson; de Freitas, Nando (2010). A Bayesian Interactive Optimization Approach to Procedural Animation Design. Symposium on Computer Animation 2010: 103–112.
Koyama, Yuki; Sato, Issei; Sakamoto, Daisuke; Igarashi, Takeo (2017). Sequential Line Search for Efficient Visual Design Optimization by Crowds. ACM Transactions on Graphics, Volume 36, Issue 4, pp. 48:1–48:11. doi:10.1145/3072959.3073598.
Koyama, Yuki; Sato, Issei; Goto, Masataka (2020). Sequential Gallery for Interactive Visual Design Optimization. ACM Transactions on Graphics, Volume 39, Issue 4, pp. 88:1–88:12. doi:10.1145/3386569.3392444.
Ilten, Philip; Williams, Mike; Yang, Yunjie (2017). Event generator tuning using Bayesian optimization. 2017 JINST 12 P04028. doi:10.1088/1748-0221/12/04/P04028.
Cisbani, Evaristo; et al. (2020). AI-optimized detector design for the future Electron-Ion Collider: the dual-radiator RICH case. 2020 JINST 15 P05009. doi:10.1088/1748-0221/15/05/P05009.
Kent, Paul; Gaier, Adam; Mouret, Jean-Baptiste; Branke, Juergen (2023-07-19). "BOP-Elites, a Bayesian Optimisation Approach to Quality Diversity Search with Black-Box descriptor functions". arXiv preprint.
Kent, Paul; Branke, Juergen (2023-07-12). "Bayesian Quality Diversity Search with Interactive Illumination". Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '23). ACM. pp. 1019–1026. doi:10.1145/3583131.3590486. ISBN 979-8-4007-0119-1.
Gaier, Adam; Asteroth, Alexander; Mouret, Jean-Baptiste (2018-09-01). "Data-Efficient Design Exploration through Surrogate-Assisted Illumination". Evolutionary Computation 26(3): 381–410. arXiv:1806.05865. doi:10.1162/evco_a_00231. ISSN 1063-6560. PMID 29883202.
Gomez-Bombarelli et al. (2018). Automatic Chemical Design using a Data-Driven Continuous Representation of Molecules. ACS Central Science, Volume 4, Issue 2, pp. 268–276.
Griffiths et al. (2020). Constrained Bayesian Optimization for Automatic Chemical Design using Variational Autoencoders. Chemical Science 11: 577–586.
Bouchene, Mohammed Mehdi (2023). Bayesian Optimization of Histogram of Oriented Gradients (HOG) Parameters for Facial Recognition. SSRN.
