
Hierarchical Dirichlet process


In statistics and machine learning, the hierarchical Dirichlet process (HDP) is a nonparametric Bayesian approach to clustering grouped data. It uses a Dirichlet process for each group of data, with the Dirichlet processes for all groups sharing a base distribution which is itself drawn from a Dirichlet process. This method allows groups to share statistical strength via the sharing of clusters across groups. The base distribution being drawn from a Dirichlet process is important, because draws from a Dirichlet process are atomic probability measures, and the atoms will appear in all group-level Dirichlet processes. Since each atom corresponds to a cluster, clusters are shared across all groups. The HDP was developed by Yee Whye Teh, Michael I. Jordan, Matthew J. Beal and David Blei and published in the Journal of the American Statistical Association in 2006, as a formalization and generalization of the infinite hidden Markov model published in 2002.
Model

This model description follows Teh et al. (2006). The HDP is a model for grouped data: the data items come in multiple distinct groups. For example, in a topic model, words are organized into documents, with each document formed by a bag (group) of words (data items). Indexing the groups by j = 1, ..., J, suppose each group consists of data items x_{j1}, ..., x_{jn}.
The HDP is parameterized by a base distribution H that governs the a priori distribution over data items, and by a number of concentration parameters that govern the a priori number of clusters and the amount of sharing across groups. The j-th group is associated with a random probability measure G_j which has distribution given by a Dirichlet process:

    G_j \mid G_0 \sim \operatorname{DP}(\alpha_j, G_0)

where \alpha_j is the concentration parameter associated with the group, and G_0 is the base distribution shared across all groups. In turn, the common base distribution is itself Dirichlet process distributed:

    G_0 \sim \operatorname{DP}(\alpha_0, H)

with concentration parameter \alpha_0 and base distribution H. Finally, to relate the Dirichlet processes back to the observed data, each data item x_{ji} is associated with a latent parameter \theta_{ji}:

    \theta_{ji} \mid G_j \sim G_j
    x_{ji} \mid \theta_{ji} \sim F(\theta_{ji})

The first line states that each parameter has a prior distribution given by G_j, while the second line states that each data item has a distribution F(\theta_{ji}) parameterized by its associated parameter. The resulting model above is called an HDP mixture model, with the HDP referring to the hierarchically linked set of Dirichlet processes, and the mixture model referring to the way the Dirichlet processes are related to the data items.
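The generative process can be simulated with a truncated stick-breaking approximation. The sketch below is illustrative only and not part of the original presentation: it truncates the atom set to a finite size, takes H to be a normal distribution and F a unit-variance normal likelihood (arbitrary choices), and the function names are hypothetical. It relies on the finite-partition property of the Dirichlet process, by which the group-level weights over a fixed finite set of atoms follow a Dirichlet distribution with parameters proportional to the shared weights.

    import numpy as np

    rng = np.random.default_rng(0)

    def stick_breaking(alpha, trunc):
        """Truncated stick-breaking weights of a Dirichlet process draw."""
        beta = rng.beta(1.0, alpha, size=trunc)
        weights = beta * np.concatenate(([1.0], np.cumprod(1.0 - beta[:-1])))
        return weights / weights.sum()  # renormalize the truncated weights

    def sample_hdp(num_groups=3, items_per_group=5,
                   alpha0=1.0, alpha_j=1.0, trunc=100):
        theta = rng.normal(0.0, 5.0, size=trunc)  # atoms theta*_k drawn from H = N(0, 25)
        pi0 = stick_breaking(alpha0, trunc)       # G_0 ~ DP(alpha_0, H)
        groups = []
        for _ in range(num_groups):
            # G_j ~ DP(alpha_j, G_0): restricted to the fixed finite atom set,
            # the group weights follow Dirichlet(alpha_j * pi_0); the small
            # jitter keeps every Dirichlet parameter strictly positive.
            pi_j = rng.dirichlet(alpha_j * pi0 + 1e-6)
            k = rng.choice(trunc, size=items_per_group, p=pi_j)  # theta_ji ~ G_j
            x = rng.normal(theta[k], 1.0)                        # x_ji ~ F(theta_ji)
            groups.append((pi_j, k, x))
        return theta, pi0, groups

    theta, pi0, groups = sample_hdp()
    for j, (pi_j, k, x) in enumerate(groups):
        print(f"group {j}: cluster indices {k}, data {np.round(x, 2)}")

Because every group draws its cluster indices from the same finite atom set, the same cluster can (and typically does) appear in several groups, which is exactly the sharing mechanism described next.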
To understand how the HDP implements a clustering model, and how clusters become shared across groups, recall that draws from a Dirichlet process are atomic probability measures with probability one. This means that the common base distribution G_0 has a form which can be written as:

    G_0 = \sum_{k=1}^{\infty} \pi_{0k} \delta_{\theta_k^*}

where there are an infinite number of atoms \theta_k^*, k = 1, 2, ..., assuming that the overall base distribution H has infinite support. Each atom is associated with a mass \pi_{0k}, and the masses have to sum to one since G_0 is a probability measure. Since G_0 is itself the base distribution for the group-specific Dirichlet processes, each G_j will have atoms given by the atoms of G_0, and can itself be written in the form:

    G_j = \sum_{k=1}^{\infty} \pi_{jk} \delta_{\theta_k^*}

Thus the set of atoms is shared across all groups, with each group having its own group-specific atom masses. Relating this representation back to the observed data, we see that each data item is described by a mixture model:

    x_{ji} \mid G_j \sim \sum_{k=1}^{\infty} \pi_{jk} F(\theta_k^*)

where the atoms \theta_k^* play the role of the mixture component parameters, while the masses \pi_{jk} play the role of the mixing proportions. In conclusion, each group of data is modeled using a mixture model, with mixture components shared across all groups but mixing proportions being group-specific. In clustering terms, we can interpret each mixture component as modeling a cluster of data items, with clusters shared across all groups, and each group, having its own mixing proportions, composed of different combinations of clusters.
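Concretely, in the truncated sketch above each group mixes the same component densities F(\theta_k^*) with its own weights \pi_{jk}. The following short continuation (again purely illustrative, reusing the hypothetical names and variables from the previous sketch, with the infinite sum replaced by the truncation) evaluates two group-level mixture densities built from the same shared atoms:

    # Two groups share the component densities F(theta*_k) = N(theta*_k, 1)
    # but mix them with their own group-specific weights pi_jk.
    def group_density(xs, pi_j, theta):
        comps = np.exp(-0.5 * (xs[:, None] - theta[None, :]) ** 2) / np.sqrt(2.0 * np.pi)
        return comps @ pi_j  # p_j(x) = sum_k pi_jk * N(x | theta*_k, 1)

    xs = np.linspace(-10.0, 10.0, 7)
    pi_1 = rng.dirichlet(1.0 * pi0 + 1e-6)  # weights of group 1 over the shared atoms
    pi_2 = rng.dirichlet(1.0 * pi0 + 1e-6)  # weights of group 2 over the same atoms
    print(np.round(group_density(xs, pi_1, theta), 4))
    print(np.round(group_density(xs, pi_2, theta), 4))

The two densities place their bumps at the same locations theta*_k but with different heights, which is the stick-breaking picture of shared clusters with group-specific mixing proportions.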
Applications

The HDP mixture model is a natural nonparametric generalization of latent Dirichlet allocation, where the number of topics can be unbounded and learnt from data. Here each group is a document consisting of a bag of words, each cluster is a topic, and each document is a mixture of topics. The HDP is also a core component of the infinite hidden Markov model, a nonparametric generalization of the hidden Markov model that allows the number of states to be unbounded and learnt from data.
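In practice, off-the-shelf implementations of the HDP topic model exist; for example, the gensim library provides HdpModel, an online variational inference implementation. The snippet below is a minimal usage sketch on a made-up toy corpus (the documents and variable names are illustrative, not part of the original article):

    from gensim.corpora import Dictionary
    from gensim.models import HdpModel

    # Toy corpus: each document is a bag of words, i.e. one group of data items.
    docs = [["dirichlet", "process", "cluster", "bayes"],
            ["hidden", "markov", "state", "sequence"],
            ["dirichlet", "process", "topic", "document"]]

    dictionary = Dictionary(docs)                       # word <-> id mapping
    corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors

    # Fit the HDP topic model; the number of topics is inferred from the data
    # rather than fixed a priori as in standard LDA.
    hdp = HdpModel(corpus, id2word=dictionary)
    for topic in hdp.print_topics(num_topics=5, num_words=4):
        print(topic)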
Generalizations

The HDP can be generalized in a number of directions. The Dirichlet processes can be replaced by Pitman-Yor processes and gamma processes, resulting in the hierarchical Pitman-Yor process and the hierarchical gamma process. The hierarchy can also be deeper, with multiple levels of groups arranged in a hierarchy; such an arrangement has been exploited in the sequence memoizer, a Bayesian nonparametric model for sequences which has a multi-level hierarchy of Pitman-Yor processes. In addition, the Bayesian multi-domain learning (BMDL) model derives domain-dependent latent representations of overdispersed count data based on hierarchical negative binomial factorization, enabling accurate cancer subtyping even if the number of samples for a specific cancer type is small.
See also

Chinese restaurant process
References

Teh, Y. W.; Jordan, M. I.; Beal, M. J.; Blei, D. M. (2006). "Hierarchical Dirichlet Processes". Journal of the American Statistical Association. 101 (476): 1566–1581. doi:10.1198/016214506000000302.
Teh, Y. W.; Jordan, M. I. (2010). Hierarchical Bayesian Nonparametric Models with Applications. Cambridge University Press. pp. 158–207. doi:10.1017/CBO9780511802478.006. ISBN 9780511802478.
Beal, M. J.; Ghahramani, Z.; Rasmussen, C. E. (2002). "The infinite hidden Markov model" (PDF). Advances in Neural Information Processing Systems 14: 577–585. Cambridge, MA: MIT Press.
Fox, Emily B.; et al. (2011). "A sticky HDP-HMM with application to speaker diarization". The Annals of Applied Statistics: 1020–1056.
Hajiramezanali, E.; Dadaneh, S. Z.; Karbalayghareh, A.; Zhou, Z.; Qian, X. (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data" (PDF). 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.
