Latent space

Embedding of data within a manifold based on a similarity function

A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items resembling each other are positioned closer to one another. Position within the latent space can be viewed as being defined by a set of latent variables that emerge from the resemblances among the objects.

In most cases, the dimensionality of the latent space is chosen to be lower than the dimensionality of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction, which can also be viewed as a form of data compression. Latent spaces are usually fit via machine learning, and they can then be used as feature spaces in machine learning models, including classifiers and other supervised predictors.
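
As an illustration, the following minimal sketch (assuming the NumPy and scikit-learn libraries and arbitrary toy dimensions) fits a 5-dimensional latent space to 50-dimensional data with principal component analysis, one simple form of dimensionality reduction, and then reuses the latent coordinates as a feature space for a classifier.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))           # 200 items in a 50-dimensional feature space
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels, for illustration only

    pca = PCA(n_components=5)                # latent space with 5 dimensions (5 < 50)
    Z = pca.fit_transform(X)                 # latent coordinates of each item

    clf = LogisticRegression().fit(Z, y)     # latent coordinates reused as features

Nonlinear models such as autoencoders play the same role when a linear projection is too restrictive.
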
The interpretation of the latent spaces of machine learning models is an active field of study, but latent space interpretation is difficult to achieve. Due to the black-box nature of machine learning models, the latent space may be completely unintuitive. Additionally, the latent space may be high-dimensional, complex, and nonlinear, which may add to the difficulty of interpretation. Some visualization techniques have been developed to connect the latent space to the visual world, but there is often not a direct connection between the latent space interpretation and the model itself. Such techniques include t-distributed stochastic neighbor embedding (t-SNE), where the latent space is mapped to two dimensions for visualization. Latent space distances lack physical units, so the interpretation of these distances may depend on the application.
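
As a minimal sketch of such a visualization (assuming the NumPy and scikit-learn libraries and hypothetical 64-dimensional embeddings), t-SNE can project latent vectors to two dimensions for plotting:

    import numpy as np
    from sklearn.manifold import TSNE

    embeddings = np.random.rand(100, 64)   # 100 items embedded in a 64-dimensional latent space
    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
    # coords has shape (100, 2); a scatter plot of these points gives a 2-D view of the latent space

As noted above, the axes and distances in such a plot have no physical units, so the projection supports qualitative inspection rather than precise measurement.
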
Embedding models

Several embedding models have been developed to perform this transformation, creating latent space embeddings given a set of data items and a similarity function. These models learn the embeddings by leveraging statistical techniques and machine learning algorithms. Some commonly used embedding models are described below.

Word2Vec: Word2Vec is a popular embedding model used in natural language processing (NLP). It learns word embeddings by training a neural network on a large corpus of text. Word2Vec captures semantic and syntactic relationships between words, allowing for meaningful computations like word analogies.
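
A minimal training-and-query sketch, assuming the gensim library (version 4 parameter names) and a toy corpus of three tokenized sentences; a realistic corpus would contain many thousands of sentences:

    from gensim.models import Word2Vec

    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["a", "man", "and", "a", "woman", "walk"],
    ]

    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=50)
    vector = model.wv["king"]    # the latent vector learned for one word
    # analogy-style query: king - man + woman ~ queen (meaningful only with a large corpus)
    print(model.wv.most_similar(positive=["king", "woman"], negative=["man"]))
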
GloVe: GloVe (Global Vectors for Word Representation) is another widely used embedding model for NLP. It combines global statistical information from a corpus with local context information to learn word embeddings. GloVe embeddings are known for capturing both semantic and relational similarities between words.

Siamese Networks: Siamese networks are a type of neural network architecture commonly used for similarity-based embedding. They consist of two identical subnetworks that process two input samples and produce their respective embeddings. Siamese networks are often used for tasks like image similarity, recommendation systems, and face recognition.
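
A minimal sketch of the twin-subnetwork idea, assuming the PyTorch library and arbitrary input and embedding dimensions; the loss that would pull matching pairs together is only indicated in a comment:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseEncoder(nn.Module):
        """One subnetwork; the same weights are applied to both inputs."""
        def __init__(self, in_dim=784, embed_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(),
                nn.Linear(128, embed_dim),
            )

        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)      # unit-length embeddings

    encoder = SiameseEncoder()
    x1, x2 = torch.randn(8, 784), torch.randn(8, 784)    # a batch of input pairs
    z1, z2 = encoder(x1), encoder(x2)                    # both inputs pass through the same encoder
    distance = F.pairwise_distance(z1, z2)               # small distance suggests a similar pair
    # training would shrink this distance for matching pairs and enlarge it for
    # mismatched pairs, e.g. with a contrastive or triplet loss
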
Variational Autoencoders (VAEs): VAEs are generative models that simultaneously learn to encode and decode data. The latent space in VAEs acts as an embedding space. By training VAEs on high-dimensional data, such as images or audio, the model learns to encode the data into a compact latent representation. VAEs are known for their ability to generate new data samples from the learned latent space.
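
A minimal sketch of the encoder half of a VAE, assuming the PyTorch library and arbitrary dimensions; the decoder and the full training objective (reconstruction plus KL-divergence terms) are omitted:

    import torch
    import torch.nn as nn

    class VAEEncoder(nn.Module):
        """Maps an input to the mean and log-variance of a Gaussian over the latent space."""
        def __init__(self, in_dim=784, latent_dim=16):
            super().__init__()
            self.hidden = nn.Linear(in_dim, 256)
            self.mu = nn.Linear(256, latent_dim)
            self.logvar = nn.Linear(256, latent_dim)

        def forward(self, x):
            h = torch.relu(self.hidden(x))
            mu, logvar = self.mu(h), self.logvar(h)
            # reparameterisation trick: sample a latent point while keeping gradients
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return z, mu, logvar

    encoder = VAEEncoder()
    z, mu, logvar = encoder(torch.randn(8, 784))   # 8 inputs -> 8 points in a 16-dimensional latent space
    # a decoder network maps z back to the input space; decoding points drawn from a
    # standard normal distribution generates new data samples
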
Multimodality

Multimodality refers to the integration and analysis of multiple modes or types of data within a single model or framework. Embedding multimodal data involves capturing relationships and interactions between different data types, such as images, text, audio, and structured data.

Multimodal embedding models aim to learn joint representations that fuse information from multiple modalities, allowing for cross-modal analysis and tasks. These models enable applications like image captioning, visual question answering, and multimodal sentiment analysis.

To embed multimodal data, specialized architectures such as deep multimodal networks or multimodal transformers are employed. These architectures combine different types of neural network modules to process and integrate information from various modalities. The resulting embeddings capture the complex relationships between different data types, facilitating multimodal analysis and understanding.
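
A minimal sketch of one common approach, assuming the PyTorch library and hypothetical feature sizes: two modality-specific projections map precomputed image and text features into a shared latent space, and a contrastive objective aligns matching image-text pairs.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointEmbedder(nn.Module):
        """Projects image features and text features into one shared latent space."""
        def __init__(self, image_dim=512, text_dim=300, joint_dim=128):
            super().__init__()
            self.image_proj = nn.Linear(image_dim, joint_dim)
            self.text_proj = nn.Linear(text_dim, joint_dim)

        def forward(self, image_feats, text_feats):
            img = F.normalize(self.image_proj(image_feats), dim=-1)
            txt = F.normalize(self.text_proj(text_feats), dim=-1)
            return img, txt

    model = JointEmbedder()
    image_feats = torch.randn(8, 512)   # e.g. outputs of an image encoder
    text_feats = torch.randn(8, 300)    # e.g. pooled embeddings of the paired captions
    img, txt = model(image_feats, text_feats)

    # contrastive objective: each image should be most similar to its own caption
    logits = img @ txt.t() / 0.07       # cosine similarities scaled by a temperature
    labels = torch.arange(len(img))
    loss = F.cross_entropy(logits, labels)
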
533:"Investigating internal migration with network analysis and latent space representations: an application to Turkey" 715: 148:

See also

Induced topology
Clustering algorithm
Intrinsic dimension
Latent semantic analysis
Latent variable model
Ordination (statistics)
Manifold hypothesis
Nonlinear dimensionality reduction
Self-organizing map
"Latent Space Cartography: Visual Analysis of Vector Space Embeddings"
doi
10.1111/cgf.13672
ISSN
0167-7055
S2CID
