
Divide-and-conquer algorithm


In this case, whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. For example, in a tree, rather than recursing to a child node and then checking whether it is null, one checks for null before recursing; this avoids half the function calls in some algorithms on binary trees. Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack.
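A minimal sketch of this short-circuiting on a binary tree (the `Node` class and its field names are illustrative assumptions, not from the article): each child is tested for null before the recursive call, so the function is never invoked on an empty subtree.

```python
# Arm's-length recursion: test for None *before* recursing, instead of
# recursing first and testing inside the call.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def tree_sum(node):
    """Sum the values in the tree rooted at `node` (assumed non-None)."""
    total = node.value
    if node.left is not None:    # short-circuit the base case here...
        total += tree_sum(node.left)
    if node.right is not None:   # ...so no call is made on an empty subtree
        total += tree_sum(node.right)
    return total
```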
Stack overflow may be difficult to avoid when using recursive procedures, since many compilers assume that the recursion stack is a contiguous area of memory, and some allocate a fixed amount of space for it. Compilers may also save more information in the recursion stack than is strictly necessary, such as the return address, unchanging parameters, and the internal variables of the procedure.
Increasing the base cases to lists of size 2 or less will eliminate most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent in function-call overhead or stack manipulation.
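A sketch of such an enlarged base case, assuming an in-place quicksort and an arbitrary, untuned cutoff of 8 elements below which a sublist is finished with insertion sort:

```python
# Hybrid quicksort: below a small cutoff the recursion stops and the
# sublist is sorted by a simple loop, avoiding do-nothing recursive calls.
# CUTOFF = 8 is an illustrative choice, not a tuned constant.

CUTOFF = 8

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place by insertion sort."""
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:          # enlarged base case
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]          # simple middle-element pivot
    i, j = lo, hi
    while i <= j:                      # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)
```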
Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining the solutions of the sub-problems into a solution of the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem, such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to moving a tower of height n − 1.
Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class.
An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the geometric series); this is known as prune and search.
The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly.
This strategy avoids the overhead of recursive calls that do little or no work and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general procedure for a simple hybrid recursive algorithm is short-circuiting the base case, also known as arm's-length recursion.
Pairwise summation breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While this second method performs the same number of additions as the first and pays the overhead of the recursive calls, it is usually more accurate.
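A sketch of the two methods (naive loop vs. pairwise recursion); with exact arithmetic they agree, and with floats the pairwise version typically accumulates less rounding error:

```python
# Naive left-to-right summation vs. pairwise (divide-and-conquer) summation.

def naive_sum(xs):
    total = 0.0
    for x in xs:               # one long chain of rounded additions
        total += x
    return total

def pairwise_sum(xs):
    n = len(xs)
    if n == 0:
        return 0.0
    if n == 1:                 # base case: a single sample
        return xs[0]
    mid = n // 2               # divide: split the data set in half
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])  # combine
```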
The risk of stack overflow can thus be reduced by minimizing the parameters and internal variables of the recursive procedure or by using an explicit stack structure.
As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations.
For example, a fast Fourier transform algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples, there is only one base case to consider, and it requires no processing.
For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique which is commonly known as memoization.
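A standard illustration of this (Fibonacci numbers, a classic textbook case rather than an example from the article): the branched recursion revisits the same sub-problems exponentially often unless their solutions are saved.

```python
# Memoization of overlapping subproblems via functools.lru_cache:
# each fib(k) is computed once and then served from the cache.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:                        # base cases
        return n
    return fib(n - 1) + fib(n - 2)   # overlapping sub-problems
```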
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
Binary search, a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by John Mauchly, the idea of using a sorted list of items to facilitate searching dates back at least as far as Babylonia in 200 BC.
Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for a predetermined set of fixed sizes where the algorithm can be completely unrolled into code that has no recursion, loops, or conditionals (related to the technique of partial evaluation).
Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve.
For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes.
Cache-oblivious algorithms use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is blocking, as in loop nest optimization, where the problem is explicitly divided into chunks of the appropriate size; this can also use the cache optimally, but only when the algorithm is tuned for the specific cache sizes of a particular machine.
The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case.
D&C algorithms that are time-efficient often have relatively small recursion depth. For example, the quicksort algorithm can be implemented so that it never requires more than log₂ n nested recursive calls to sort n items.
In recursive implementations of D&C algorithms, one must make sure that there is sufficient memory allocated for the recursion stack, otherwise the execution may fail because of stack overflow.
This holds for multiple levels of cache as well: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels.
Divide-and-conquer algorithms can also be implemented by a non-recursive program that stores the partial sub-problems in some explicit data structure, such as a stack, queue, or priority queue.
Note that, if the empty list were the only base case, sorting a list with n entries would entail maximally n quicksort calls that would do nothing but return immediately.
On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases, and these are solved non-recursively, resulting in a hybrid algorithm.
This approach is also the standard solution in programming languages that do not provide support for recursive procedures.
This approach allows more freedom in the choice of the sub-problem that is to be solved next, a feature that is important in some applications — e.g. in breadth-first recursion and the branch-and-bound method for function optimization.
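A sketch of such a non-recursive implementation, summing a list divide-and-conquer style with an explicit stack of index ranges (swapping the stack for a queue would change the order in which sub-problems are visited, not the result):

```python
# Non-recursive divide-and-conquer sum: an explicit stack of half-open
# index ranges replaces the procedure call stack.

def iterative_sum(xs):
    total = 0
    stack = [(0, len(xs))]          # each entry is one pending sub-problem
    while stack:
        lo, hi = stack.pop()
        if hi - lo <= 1:            # base case: zero or one element
            total += sum(xs[lo:hi])
        else:
            mid = (lo + hi) // 2    # divide; push both sub-problems
            stack.append((lo, mid))
            stack.append((mid, hi))
    return total
```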
In computations with rounded arithmetic, e.g. with floating-point numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method.
For example, if (a) the base cases have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size n, and (b) there is a bounded number p of sub-problems of size ~n/p at each stage, then the cost of the divide-and-conquer algorithm will be O(n log_p n).
These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use tail recursion, they can be converted into simple loops. Moreover, D&C algorithms can be designed for important algorithms (e.g., sorting, FFTs, and matrix multiplication) to be cache-oblivious algorithms.
The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the binary search algorithm for finding a record in a sorted list (or its analogue in numerical computing, the bisection algorithm for root finding).
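A sketch of binary search as a decrease-and-conquer algorithm: each step discards half of the remaining range, leaving a single smaller sub-problem.

```python
# Binary search over a sorted list.

def binary_search(xs, target):
    """Return an index of `target` in sorted list `xs`, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1     # discard the lower half
        else:
            hi = mid - 1     # discard the upper half
    return -1
```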
Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into single subproblems, and indeed can be solved iteratively.
The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory.
Another ancient decrease-and-conquer algorithm is the Euclidean algorithm to compute the greatest common divisor of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC.
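The Euclidean algorithm can be sketched as a loop, since each step reduces the problem to a single smaller equivalent subproblem:

```python
# Decrease-and-conquer: gcd(a, b) is repeatedly replaced by the
# equivalent, smaller problem gcd(b, a mod b).

def gcd(a, b):
    """Greatest common divisor of non-negative integers a and b."""
    while b != 0:
        a, b = b, a % b
    return a
```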
Source-code generation methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently.
The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the Strassen algorithm for matrix multiplication, and fast Fourier transforms.
An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in 1945.
Another notable example is the algorithm invented by Anatolii A. Karatsuba in 1960 that could multiply two n-digit numbers in O(n^log₂3) operations (in Big O notation). This algorithm disproved Andrey Kolmogorov's 1956 conjecture that Ω(n²) operations would be required for that task.
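An illustrative sketch of Karatsuba multiplication for non-negative integers: three recursive half-size multiplications replace the four of the schoolbook method, which is what yields the O(n^log₂3) bound.

```python
# Karatsuba multiplication (non-negative integers assumed).

def karatsuba(x, y):
    if x < 10 or y < 10:                  # base case: single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    x1, x0 = divmod(x, p)                 # split x = x1*p + x0
    y1, y0 = divmod(y, p)                 # split y = y1*p + y0
    z2 = karatsuba(x1, y1)                # three recursive products...
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0   # ...instead of four
    return z2 * p * p + z1 * p + z0
```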
In shared-memory systems, the communication of data between processors does not need to be planned in advance, because distinct sub-problems can be executed on different processors.
For example, one can add N numbers either by a simple loop that adds each datum to a single variable, or by a D&C algorithm called pairwise summation.
An early example of a divide-and-conquer algorithm with multiple subproblems is Gauss's 1805 description of what is now called the Cooley–Tukey fast Fourier transform (FFT) algorithm, although he did not analyze its operation count quantitatively, and FFTs did not become widespread until they were rediscovered over a century later.
The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT).
Thus, for example, many library implementations of quicksort will switch to a simple loop-based insertion sort (or similar) algorithm once the number of items to be sorted is sufficiently small.
As another example of a divide-and-conquer algorithm that did not originally involve computers, Donald Knuth gives the method a post office typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered. This is related to a radix sort, described for punch-card sorting machines as early as 1929.
[Figure: Divide-and-conquer approach to sort the list (38, 27, 43, 3, 9, 82, 10) in increasing order.]
The same advantage exists with regard to other hierarchical storage systems, such as NUMA or virtual memory.
In any recursive algorithm, there is considerable freedom in the choice of the base cases, the small subproblems that are solved directly in order to terminate the recursion.
Designing efficient divide-and-conquer algorithms can be difficult.
In all these examples, the D&C approach led to an improvement in the asymptotic cost of the solution.
Divide-and-conquer algorithms naturally tend to make efficient use of memory caches.
For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list. This approach is known as the merge sort algorithm.
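The merge-sort scheme just described can be sketched as:

```python
# Merge sort as the canonical divide-and-conquer algorithm: split the
# list in half, sort each half recursively, then merge the sorted halves.

def merge_sort(xs):
    if len(xs) <= 1:               # base case: trivially sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # divide + conquer
    right = merge_sort(xs[mid:])
    return merge(left, right)      # combine

def merge(left, right):
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

For the list from the figure, `merge_sort([38, 27, 43, 3, 9, 82, 10])` returns `[3, 9, 10, 27, 38, 43, 82]`.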
Divide-and-conquer algorithms are naturally adapted for execution in multi-processor machines.
An algorithm designed to exploit the cache in this way is called cache-oblivious, because it does not contain the cache size as an explicit parameter.
Divide-and-conquer algorithms are naturally implemented as recursive procedures. In that case, the partial sub-problems leading to the one currently being solved are automatically stored in the procedure call stack. A recursive function is a function that calls itself within its definition.