
Method of conditional probabilities

In this example the random experiment consists of flipping three fair coins. The experiment is illustrated by the rooted tree in the adjacent diagram. There are eight outcomes, each corresponding to a leaf in the tree. A trial of the random experiment corresponds to taking a random walk from the root
The invariant holds initially (at the root), because the original proof showed that the (unconditioned) probability of failure is less than 1. The conditional probability at any interior node is the average of the conditional probabilities of its children. The latter property is important because it
For the method of conditional probabilities to work, it suffices if the algorithm keeps the pessimistic estimator from decreasing (or increasing, as appropriate). The algorithm does not necessarily have to maximize (or minimize) the pessimistic estimator. This gives some flexibility in deriving the
In the ideal case, given a partial state (a node in the tree), the conditional probability of failure (the label on the node) can be efficiently and exactly computed. (The example above is like this.) If this is so, then the algorithm can select the next node to go to by computing the conditional
as the experiment proceeds step by step. In the diagram, each node is labeled with this conditional probability. (For example, if only the first coin has been flipped, and it comes up tails, that corresponds to the second child of the root. Conditioned on that partial state, the probability of failure is 0.25.)
and proceeds accordingly: at each interior node, there is some child whose conditional expectation is at most (at least) the node's conditional expectation; the algorithm moves from the current node to such a child, thus keeping the conditional expectation below (above) the threshold.
(the top node in the tree, where no coins have been flipped) to a leaf. The successful outcomes are those in which at least two coins came up tails. The interior nodes in the tree correspond to partially determined outcomes, where only 0, 1, or 2 of the coins have been flipped so far.
Often, the probabilistic method is used to prove the existence of mathematical objects with some desired combinatorial properties. The proofs in that method work by showing that a random object, chosen from some probability distribution, has the desired properties with positive probability. Consequently, they are nonconstructive: they don't explicitly describe an efficient method for computing the desired objects.
If the three coins are flipped randomly, the expected number of tails is 1.5. Thus, there must be some outcome (way of flipping the coins) so that the number of tails is at least 1.5. Since the number of tails is an integer, in such an outcome there are at least 2 tails.
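For concreteness, the linearity-of-expectation step can be written out (a worked calculation added here for illustration, not part of the original proof text):

E[number of tails] = \sum_{i=1}^{3} \Pr[\text{coin } i \text{ is tails}] = 3 \cdot \tfrac{1}{2} = 1.5,

so at least one of the eight outcomes must achieve \lceil 1.5 \rceil = 2 or more tails.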
Thus, from any interior node, one can always choose some child to walk to so as to maintain the invariant. Since the invariant holds at the end, when the walk arrives at a leaf and all choices have been determined, the outcome reached in this way must be a successful one.
given the current state, and it should be non-increasing (or non-decreasing) in expectation with each random step of the experiment. Typically, a good pessimistic estimator can be computed by precisely deconstructing the logic of the original proof.
In this case, the conditional probability of failure is not easy to calculate. Indeed, the original proof did not calculate the probability of failure directly; instead, the proof worked by showing that the expected number of cut edges was at least |E|/2.
To apply the method of conditional probabilities, first model the random experiment as a sequence of small random steps. In this case it is natural to consider each step to be the choice of color for a particular vertex (so there are |V| steps).
is the number of "bad" events (not necessarily disjoint) that occur in a given outcome, where each bad event corresponds to one way the experiment can fail, and the expected number of bad events that occur is less than 1.)
The method of conditional probabilities replaces the random root-to-leaf walk in the random experiment by a deterministic root-to-leaf walk, where each step is chosen to inductively maintain the following invariant: the conditional probability of failure, given the current state, is less than 1.
In a typical application of the method, the goal is to be able to implement the resulting deterministic process by a reasonably efficient algorithm (the word "efficient" usually means an algorithm that runs in polynomial time).
the proof. The basic idea is to replace each random choice in a random experiment by a deterministic choice, so as to keep the conditional probability of failure, given the choices so far, below 1.
|E|/2 or above, and so is guaranteed to keep the conditional probability of failure below 1, which in turn guarantees a successful outcome. By calculation, the algorithm simplifies to the following: 1. For each vertex u in V (in any order): 2. Consider the already-colored neighboring vertices of u. 3. Among these vertices, if more are black than white, then color u white. 4. Otherwise, color u black.
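Here is a minimal Python sketch of that simplified algorithm, assuming the graph is given as an adjacency list (a dict mapping each vertex to a set of neighbors); the names greedy_cut and colors are illustrative, not from the article:

def greedy_cut(adj):
    # Color vertices one by one, always disagreeing with the majority
    # of the already-colored neighbors; this keeps the conditional
    # expectation of the number of cut edges at or above |E|/2.
    colors = {}
    for u in adj:  # any vertex order works
        black = sum(1 for v in adj[u] if colors.get(v) == "black")
        white = sum(1 for v in adj[u] if colors.get(v) == "white")
        colors[u] = "white" if black > white else "black"
    return colors

# Example: on a 4-cycle this coloring cuts all 4 edges (at least |E|/2 = 2).
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(greedy_cut(adj))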
Given that some of the vertices are colored already, what is this conditional expectation? Following the logic of the original proof, the conditional expectation of the number of cut edges is (the number of edges whose endpoints are colored differently so far) + (1/2)*(the number of edges with at least one endpoint not yet colored).
probabilities at each of the children of the current node, then moving to any child whose conditional probability is less than 1. As discussed above, there is guaranteed to be such a node.
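In the three-coin example this ideal case can be carried out directly. The following is a minimal Python sketch, assuming "failure" means fewer than two tails; the helper names (pr_failure, choose_flips) are illustrative, not from the article:

from itertools import product

def pr_failure(partial, n=3, need=2):
    # Exact conditional probability of failure (fewer than `need` tails),
    # given the flips fixed so far in `partial` (True = tails).
    rest_len = n - len(partial)
    rests = list(product([True, False], repeat=rest_len))
    bad = sum(1 for rest in rests if sum(partial) + sum(rest) < need)
    return bad / len(rests)

def choose_flips(n=3, need=2):
    # Deterministic root-to-leaf walk: always move to a child whose
    # conditional probability of failure is less than 1.
    partial = []
    for _ in range(n):
        for flip in (True, False):
            if pr_failure(partial + [flip], n, need) < 1:
                partial.append(flip)
                break
    return partial

print(choose_flips())  # a sequence of flips with at least two tails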
problem is to color each vertex of the graph with one of two colors (say black or white) so as to maximize the number of edges whose endpoints have different colors. (Say such an edge is cut.)
To apply the method to a probabilistic proof, the randomly chosen object in the proof must be choosable by a random experiment that consists of a sequence of "small" random choices.
Unfortunately, in most applications, the conditional probability of failure is not easy to compute efficiently. There are two standard and related techniques for dealing with this: using a conditional expectation, and using a pessimistic estimator.
Next, replace the random choice at each step by a deterministic choice, so as to keep the conditional probability of failure, given the vertices colored so far, below 1. (Here failure means that, in the end, fewer than |E|/2 edges are cut.)
to maximize the resulting pessimistic estimator. By the previous considerations, this keeps the pessimistic estimator from decreasing and guarantees a successful outcome.
The algorithm colors each vertex to maximize the resulting value of the above conditional expectation. This is guaranteed to keep the conditional expectation at |E|/2 or above.
below (or above) the threshold. To do this, instead of computing the conditional probability of failure, the algorithm computes the conditional expectation of Q.
Each algorithm is analyzed with the same pessimistic estimator as before. With each step of either algorithm, the net increase in the pessimistic estimator is 1 - \sum_{w \in N^{(t)}(u) \cup \{u\}} \frac{1}{d(w)+1}, where N^{(t)}(u) denotes the neighbors of u in the remaining graph.
This suffices, because there must be some child whose conditional expectation is at least the current state's conditional expectation (and thus at least |E|/2).

Such an algorithm can be efficient even though typically the number of possible outcomes is huge (exponentially large); for example, consider the task with coin flipping, but extended to n flips for large n.

Since the pessimistic estimator is a lower bound on the conditional expectation, this will ensure that the conditional expectation stays above |V|/(D+1). The pessimistic estimator is a function of the current state. It should be an upper (or lower) bound for the conditional expectation of Q
Because of its derivation, this deterministic algorithm is guaranteed to cut at least half the edges of the given graph. This makes it a 0.5-approximation algorithm for Max-cut.
... show that the probabilistic existence proof can be converted, in a very precise sense, into a deterministic approximation algorithm.
be the number of edges cut. To keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of Q at or above the threshold |E|/2.
any interior node whose conditional probability is less than 1 has at least one child whose conditional probability is less than 1.
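This is just averaging (a one-line check added here for completeness): if a node's conditional probability p is the average of its children's probabilities p_1, ..., p_k, then

\min_i p_i \le \frac{1}{k} \sum_{i=1}^{k} p_i = p < 1.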
refers to a quantity used in place of the true conditional probability (or conditional expectation) underlying the proof.
In this case, to keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of Q below (or above) the threshold.
is at most (at least) the threshold, and this and (ii) imply that there is a successful outcome. (In the example above,
By calculation, if u is chosen randomly from the remaining vertices, the expected increase in the pessimistic estimator is non-negative.
is at most (at least) this threshold, the outcome is a success. Then (i) implies that there exists an outcome where
|E|/2, so the conditional probability of reaching such an outcome is positive. The algorithm will make each choice to keep the pessimistic estimator from decreasing, that is, so that Q^{(t+1)} \ge Q^{(t)} for each t.
The method of conditional probabilities converts such a proof, in a "very precise sense", into an efficient deterministic algorithm, one that is guaranteed to compute an object with the desired properties. That is, the method derandomizes the proof.
We will replace each random step by a deterministic step that keeps the conditional expectation of Q at or above |V|/(D+1).
This example demonstrates the method of conditional probabilities using a conditional expectation.
In this way, it is guaranteed to arrive at a leaf with label 0, that is, a successful outcome.
Color each vertex black or white by flipping a fair coin. By calculation, for any edge e in E, the probability that it is cut is 1/2.
is the number of tails, which should be at least the threshold 1.5. In many applications,
2188: 2121: 1918: 1087:+1), which in turn will ensure that the conditional probability of failure stays below 1. 772: 186: 56: 779:, so the left-hand side is minimized, subject to the sum of the degrees being fixed at 2| 48:— they don't explicitly describe an efficient method for computing the desired objects. 2178: 2144: 888:
Let R^{(t)} denote those vertices that have not yet been considered, and that have no neighbors in S^{(t)}.
652: 343: 865:+1). This will ensure a successful outcome, that is, one in which the independent set 2221: 2064: 1717:
For the second algorithm, the net increase is non-negative because, by the choice of u, \sum_{w \in N^{(t)}(u) \cup \{u\}} \frac{1}{d(w)+1} \le (d'(u)+1) \cdot \frac{1}{d'(u)+1} = 1, where d'(u) is the degree of u in the remaining graph.
1537:
For the first algorithm, the net increase is non-negative because, by the choice of u, \sum_{w \in N^{(t)}(u) \cup \{u\}} \frac{1}{d(w)+1} \le (d(u)+1) \cdot \frac{1}{d(u)+1} = 1, where d(u) is the degree of u in the original graph.
1382:
1. Initialize S to be the empty set. 2. While the remaining graph is not empty: 3. Add a vertex u to S, where u has minimum degree in the remaining graph. 4. Delete u and all of its neighbors from the graph. 5. Return S.
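A minimal Python sketch of this second listing, assuming an adjacency-list input with set values (turan_greedy_min_degree is an illustrative name, not from the article):

def turan_greedy_min_degree(adj):
    remaining = {u: set(vs) for u, vs in adj.items()}
    S = set()
    while remaining:
        # Step 3: a vertex of minimum degree in the remaining graph.
        u = min(remaining, key=lambda v: len(remaining[v]))
        S.add(u)
        # Step 4: delete u and all of its neighbors.
        doomed = remaining[u] | {u}
        for v in doomed:
            remaining.pop(v, None)
        for vs in remaining.values():
            vs -= doomed
    return S

# Example: a triangle {1, 2, 3} with a pendant vertex 4 attached to 1.
# The result is an independent set of size 2, meeting the bound
# |V|/(D+1) = 4/3 after rounding up.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(turan_greedy_min_degree(adj))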
212:
Many probabilistic proofs work as follows: they implicitly define a random variable Q, show that (i) the expectation of Q
2101: 2013: 2017: 501: 257:
In some cases, as a proxy for the exact conditional expectation of the quantity Q, one uses an appropriately tight bound called a pessimistic estimator.
160:
the conditional probability of failure, given the current state, is less than 1.
17: 90:
We first show the existence of a provably good approximate solution using the probabilistic method...
489:. 3. Among these vertices, if more are black than white, then color 485:(in any order): 2. Consider the already-colored neighboring vertices of 220:
is at most (or at least) some threshold value, and (ii) in any outcome where
125:
Lemma: It is possible to flip three coins so that the number of tails is at least 2.
2097: 757:{\displaystyle \sum _{u\in V}{\frac {1}{d(u)+1}}~\geq ~{\frac {|V|}{D+1}}.} 573:
Consider the following random process for constructing an independent set S:
When applying the method of conditional probabilities, the technical term pessimistic estimator
1508:{\displaystyle 1-\sum _{w\in N^{(t)}(u)\cup \{u\}}{\frac {1}{d(w)+1}},} 1205:
1. Initialize S to be the empty set. 2. While there exists a not-yet-considered vertex u with no neighbor in S: 3. Add such a vertex u to S, where u minimizes \sum_{w \in N^{(t)}(u) \cup \{u\}} \frac{1}{d(w)+1}. 4. Return S.
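A minimal Python sketch of this listing, assuming an adjacency-list input with set values and using the original degrees d(w); the names are illustrative, not from the article:

def turan_greedy_max_estimator(adj):
    deg = {u: len(adj[u]) for u in adj}  # degrees in the original graph
    S, R = set(), set(adj)               # R: eligible (no neighbor in S)
    while R:
        # Step 3: minimize the estimator's loss, the sum of 1/(d(w)+1)
        # over w in the closed neighborhood of u restricted to R.
        u = min(R, key=lambda x: sum(1 / (deg[w] + 1)
                                     for w in (adj[x] & R) | {x}))
        S.add(u)
        R -= adj[u] | {u}  # u's neighbors are no longer eligible
    return S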
The proof showed that the pessimistic estimator is initially at least |V|/(D+1).
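The estimator itself is simple to compute. A minimal Python sketch, assuming S is the set of vertices added so far, R is the set of not-yet-considered vertices with no neighbor in S, and deg holds the original degrees (pessimistic_estimator is an illustrative name):

def pessimistic_estimator(S, R, deg):
    # Lower bound on the conditional expectation of the final
    # independent-set size, given the current state:
    # |S| + sum of 1/(d(w)+1) over w in R.
    return len(S) + sum(1 / (deg[w] + 1) for w in R)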
The method of conditional probabilities is explained in several textbooks:
The method of conditional probabilities with conditional expectations
To apply the method of conditional probabilities, one focuses on the conditional probability of failure, given the choices so far,
\sum_{w \in N^{(t)}(u) \cup \{u\}} \frac{1}{d(w)+1}
The method of conditional probabilities using pessimistic estimators
The probabilistic method — method of conditional probabilities, blog entry by Neal E. Young, accessed 19/04/2012 and 14/09/2023.
if none of its neighbors have yet been added. Let random variable Q be the number of vertices added to S.
the number of edges whose endpoints are colored differently so far
The next example demonstrates the use of pessimistic estimators.
that is considered before all of its neighbors will be added to S.
|E|/2. This is because, as long as the conditional expectation of Q is at least |E|/2, there must be some still-reachable outcome where Q is at least |E|/2.
the number of edges with at least one endpoint not yet colored
In this case, the random process has |V| steps. Each step considers some not-yet-considered vertex u and adds u to S
Clearly the process computes an independent set. Any vertex u
conditional probability of failure, given the choices so far
but it works with the probabilistic method in general.
Algorithms that don't maximize the pessimistic estimator
Let u be the vertex considered by the algorithm in the next ((t+1)-st) step.
|E|/2. Thus, there exists a coloring that cuts at least |E|/2 edges.
Here is a trivial example to illustrate the principle.
Thus, there must exist some choice of u that keeps the pessimistic estimator from decreasing.
|E|/2, there must be some still-reachable outcome where Q is at least |E|/2.
The method is particularly relevant in the context of randomized rounding (which uses the probabilistic method to design approximation algorithms).
1. Initialize S to be the empty set. 2. While there exists a vertex u in the graph with no neighbor in S: 3. Add such a vertex u to S, where u minimizes d(u) (the initial degree of u). 4. Return S.
algorithm. The next two algorithms illustrate this.
Raghavan is discussing the method in the context of randomized rounding,
The probabilistic method — method of conditional probabilities, blog entry by Neal E. Young, accessed 19/04/2012.
Alon, Noga; Spencer, Joel (2008). The Probabilistic Method. Wiley-Interscience Series in Discrete Mathematics and Optimization (Third ed.). Hoboken, NJ: John Wiley and Sons. pp. 250 et seq. ISBN 978-0-470-17020-5.
Motwani, Rajeev; Raghavan, Prabhakar (25 August 1995). Randomized Algorithms. Cambridge University Press. pp. 120 et seq. ISBN 978-0-521-47465-8.
Raghavan, Prabhakar (1988). "Probabilistic construction of deterministic algorithms: approximating packing integer programs". Journal of Computer and System Sciences. 37 (2): 130–143. doi:10.1016/0022-0000(88)90003-7.
Spencer, Joel H. (1987). Ten Lectures on the Probabilistic Method. SIAM. ISBN 978-0-89871-325-1.
Vazirani, Vijay (5 December 2002). Approximation Algorithms. Springer Verlag. pp. 130 et seq. ISBN 978-3-540-65367-7.
The proof shows that E[Q] \ge |V|/(D+1). (Letting d(u) denote the degree of u, the probability that u is added to S is at least 1/(d(u)+1), so by linearity of expectation the expected size of S is at least \sum_{u \in V} \frac{1}{d(u)+1}.)
(That is, Q^{(0)} \ge |V|/(D+1).)
Return 1370: 1366: 1362: 1358: 1354: 1350: 1346: 1342: 1338: 1333: 1322: 1303: 1300: 1294: 1288: 1284: 1274: 1268: 1262: 1251: 1244: 1240: 1237: 1233: 1224: 1220: 1216: 1212: 1208: 1204: 1199: 1197: 1193: 1189: 1185: 1181: 1177: 1173: 1168: 1166: 1158: 1156: 1154: 1149: 1147: 1142: 1140: 1136: 1132: 1128: 1124: 1120: 1116: 1112: 1108: 1104: 1099: 1097: 1093: 1088: 1086: 1082: 1078: 1074: 1070: 1066: 1062: 1058: 1054: 1050: 1045: 1043: 1039: 1020: 1014: 1011: 1005: 999: 995: 983: 976: 972: 969: 965: 958: 942: 935: 922: 921: 920: 919: 915: 911: 907: 903: 899: 895: 891: 887: 883: 878: 876: 872: 868: 864: 860: 857:at or above | 856: 851: 849: 845: 841: 837: 833: 829: 825: 821: 817: 809: 807: 806: 802: 798: 794: 790: 786: 783:|, when each 782: 778: 774: 770: 751: 745: 742: 739: 729: 715: 706: 703: 697: 691: 687: 680: 677: 674: 670: 662: 661: 660: 658: 654: 650: 646: 642: 638: 634: 630: 626: 622: 618: 611: 607: 603: 599: 595: 591: 587: 583: 578: 576: 563: 559: 555: 551: 547: 543: 539: 535: 531: 527: 526: 525: 523: 513: 507: 505: 503: 496: 492: 488: 484: 480: 475: 473: 465: 458: 454: 452: 449: 448: 447: 446: 445: 442: 440: 436: 432: 428: 424: 420: 417:is at least | 416: 412: 409:is at least | 408: 404: 400: 396: 391: 389: 383: 381: 377: 372: 370: 361: 358: 357: 353: 349: 345: 341: 337: 332: 330: 327:), at least | 326: 322: 318: 315:In any graph 314: 310: 308: 304: 300: 296: 292: 289: 282:Max-Cut Lemma 281: 279: 273: 271: 268: 264: 260: 252: 250: 247: 243: 238: 235: 231: 227: 223: 219: 215: 207: 205: 202: 198: 196: 192: 188: 179: 177: 174: 171:implies that 168: 161: 158: 157: 156: 155: 154: 150: 147: 142: 136: 131: 128: 126: 123: 120: 119: 118: 115: 108: 104: 102: 95: 93: 88: 87: 86: 80: 78: 76: 71: 69: 65: 60: 58: 54: 49: 47: 42: 37: 35: 31: 27: 23: 19: 2183: 2153: 2106: 2091: 2073: 2055: 2049: 2018: 2008: 1983: 1974: 1963:Please help 1955: 1901: 1897: 1893: 1891: 1718: 1716: 1711: 1707: 1703: 1701: 1538: 1536: 1531: 1527: 1523: 1519: 1517: 1408: 1403: 1399: 1395: 1391: 1387: 1383: 1379: 1372: 1368: 1364: 1360: 1356: 1352: 1348: 1344: 1340: 1336: 1330: 1320: 1319:. 4. Return 1222: 1218: 1214: 1210: 1206: 1202: 1195: 1191: 1187: 1183: 1179: 1175: 1171: 1169: 1164: 1162: 1152: 1150: 1145: 1143: 1138: 1137:is added to 1134: 1130: 1126: 1122: 1118: 1114: 1110: 1106: 1102: 1100: 1095: 1091: 1089: 1084: 1080: 1076: 1072: 1068: 1064: 1060: 1056: 1052: 1048: 1046: 1041: 1037: 1035: 917: 913: 909: 905: 901: 897: 893: 889: 885: 881: 879: 874: 870: 866: 862: 858: 854: 852: 847: 843: 839: 835: 831: 827: 823: 819: 815: 813: 804: 800: 796: 792: 788: 784: 780: 776: 768: 766: 659:is at least 656: 648: 644: 640: 639:is added to 636: 632: 628: 624: 620: 616: 614: 609: 605: 601: 597: 593: 589: 585: 581: 574: 572: 561: 557: 553: 549: 545: 537: 533: 529: 519: 511: 499: 494: 490: 486: 482: 478: 471: 469: 456: 450: 443: 438: 434: 430: 426: 422: 418: 414: 410: 406: 402: 398: 394: 392: 387: 384: 379: 375: 373: 368: 365: 355: 351: 347: 339: 335: 334: 328: 324: 320: 316: 312: 311: 306: 298: 294: 290: 285: 277: 266: 262: 258: 256: 245: 241: 239: 233: 229: 225: 221: 217: 213: 211: 203: 199: 194: 190: 183: 172: 169: 166: 159: 151: 145: 143: 139: 134: 129: 124: 121: 116: 113: 98: 89: 84: 74: 72: 61: 57:derandomizes 50: 38: 25: 15: 1969:introducing 552:+1), where 354:|/2 edges. 39:Often, the 18:mathematics 2222:Categories 2098:Alon, Noga 2000:References 1359:minimizes 1225:minimizes 608:4. Return 528:Any graph 371:| steps). 
180:Efficiency 1977:June 2012 1809:≤ 1768:∪ 1740:∈ 1733:∑ 1629:≤ 1588:∪ 1560:∈ 1553:∑ 1462:∪ 1434:∈ 1427:∑ 1423:− 1396:remaining 1269:∪ 1241:∈ 1234:∑ 1075:for each 973:∈ 966:∑ 822:and adds 716:≥ 678:∈ 671:∑ 651:)+1). By 466:Algorithm 455:+ (1/2)*( 2104:(2008). 2022:, SIAM, 2016:(1987), 1908:See also 1851:′ 1819:′ 1390:, where 1355:, where 1133:, then 918:at least 431:maximize 81:Overview 2126:2437651 1965:improve 1170:Below, 1109:, then 771:+1) is 596:are in 497:black. 376:failure 303:Max cut 301:), the 2195:  2165:  2134:  2124:  2114:  2026:  1892:where 1702:where 1518:where 1221:where 962:  956:  773:convex 719:  713:  600:, add 441:|/2). 122:Lemma: 24:, the 1950:, or 1125:does 850:+1). 390:|/2. 288:graph 2193:ISBN 2163:ISBN 2132:ISBN 2112:ISBN 2024:ISBN 1090:Let 1036:Let 803:|.) 795:= 2| 791:) = 556:= 2| 425:at | 20:and 2060:doi 1534:). 1386:to 1351:to 1217:to 1198:). 1182:in 1127:not 1101:If 1083:|/( 1063:|/( 1059:≥ | 1051:|/( 916:is 896:in 873:|/( 861:|/( 846:|/( 842:≥ | 826:to 805:QED 799:|/| 775:in 604:to 588:in 560:|/| 548:|/( 532:= ( 481:in 356:QED 319:= ( 309:.) 307:cut 293:= ( 135:QED 70:). 16:In 2224:: 2187:, 2157:. 2147:; 2122:MR 2120:. 2100:; 2056:37 2054:, 2037:^ 1954:, 1946:, 1894:d′ 1721:, 1541:, 1406:. 1375:. 1323:. 1141:. 1071:≥ 612:. 577:: 536:, 504:. 459:). 323:, 297:, 197:. 2171:. 2138:) 2128:. 2062:: 1990:) 1984:( 1979:) 1975:( 1961:. 1902:u 1898:u 1896:( 1888:, 1876:1 1873:= 1867:1 1864:+ 1861:) 1858:u 1855:( 1848:d 1843:1 1838:) 1835:1 1832:+ 1829:) 1826:u 1823:( 1816:d 1812:( 1803:1 1800:+ 1797:) 1794:w 1791:( 1788:d 1784:1 1777:} 1774:u 1771:{ 1765:) 1762:u 1759:( 1754:) 1751:t 1748:( 1744:N 1737:w 1719:u 1712:u 1708:u 1706:( 1704:d 1698:, 1686:1 1683:= 1677:1 1674:+ 1671:) 1668:u 1665:( 1662:d 1658:1 1653:) 1650:1 1647:+ 1644:) 1641:u 1638:( 1635:d 1632:( 1623:1 1620:+ 1617:) 1614:w 1611:( 1608:d 1604:1 1597:} 1594:u 1591:{ 1585:) 1582:u 1579:( 1574:) 1571:t 1568:( 1564:N 1557:w 1539:u 1532:R 1528:u 1524:u 1522:( 1520:N 1503:, 1497:1 1494:+ 1491:) 1488:w 1485:( 1482:d 1478:1 1471:} 1468:u 1465:{ 1459:) 1456:u 1453:( 1448:) 1445:t 1442:( 1438:N 1431:w 1420:1 1404:S 1400:u 1392:u 1388:S 1384:u 1380:S 1373:S 1369:u 1365:u 1363:( 1361:d 1357:u 1353:S 1349:u 1345:S 1341:u 1337:S 1321:S 1304:1 1301:+ 1298:) 1295:w 1292:( 1289:d 1285:1 1278:} 1275:u 1272:{ 1266:) 1263:u 1260:( 1255:) 1252:t 1249:( 1245:N 1238:w 1223:u 1219:S 1215:u 1211:S 1207:u 1203:S 1196:S 1192:S 1188:u 1184:R 1180:u 1176:u 1174:( 1172:N 1165:u 1153:u 1146:u 1139:S 1135:u 1131:S 1123:u 1119:Q 1115:S 1111:u 1107:S 1103:u 1096:t 1092:u 1085:D 1081:V 1077:t 1073:Q 1069:Q 1065:D 1061:V 1057:Q 1053:D 1049:V 1038:Q 1021:. 1015:1 1012:+ 1009:) 1006:w 1003:( 1000:d 996:1 987:) 984:t 981:( 977:R 970:w 959:+ 952:| 946:) 943:t 940:( 936:S 931:| 914:Q 910:S 906:w 904:( 902:d 898:R 894:w 890:S 886:R 882:S 875:D 871:V 867:S 863:D 859:V 855:Q 848:D 844:V 840:E 836:S 832:Q 828:S 824:u 820:u 816:V 801:V 797:E 793:D 789:u 787:( 785:d 781:E 777:x 769:x 752:. 746:1 743:+ 740:D 734:| 730:V 726:| 707:1 704:+ 701:) 698:u 695:( 692:d 688:1 681:V 675:u 657:S 649:u 647:( 645:d 641:S 637:u 633:u 629:u 627:( 625:d 621:S 617:u 610:S 606:S 602:u 598:S 594:u 590:V 586:u 582:S 575:S 562:V 558:E 554:D 550:D 546:V 538:E 534:V 530:G 495:u 491:u 487:u 483:V 479:u 472:E 439:E 435:Q 427:E 423:Q 419:E 415:Q 411:E 407:Q 403:E 399:Q 395:Q 388:E 386:| 380:E 369:V 367:| 352:E 348:E 340:E 329:E 325:E 321:V 317:G 299:E 295:V 291:G 267:Q 259:Q 246:Q 242:Q 234:Q 230:Q 226:Q 222:Q 218:Q 214:Q 195:n 191:n
