computational power can be used at production time to further improve the data. At playback, level-of-detail techniques can be used to manage the computational load on the playback device by increasing or decreasing the number of polygons. Interactive lighting changes are harder to realize because the bulk of the data is pre-baked: the lighting information stored with the points is accurate and high-fidelity, but it cannot easily change with the situation. Another benefit of point capture is that computer graphics can be rendered at very high quality and also stored as points, opening the door for a seamless blend of real and imagined elements.
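As an illustration of the level-of-detail idea above, here is a minimal sketch of distance-based LOD selection at playback time; the function name, distance thresholds, and point budgets are hypothetical example values, not part of any real volumetric player.

```python
# Illustrative only: pick a point budget for a captured asset
# based on viewer distance, keeping the playback load bounded.

def select_lod(distance_m, budgets=(2_000_000, 500_000, 100_000)):
    """Return how many points to render at the given viewer distance.
    Thresholds and budgets are made-up example values."""
    if distance_m < 2.0:
        return budgets[0]    # close up: full detail
    if distance_m < 10.0:
        return budgets[1]    # mid range: reduced detail
    return budgets[2]        # far away: coarse detail

print(select_lod(1.5))   # full-detail budget
print(select_lod(25.0))  # coarse budget
```

A real player would typically blend between levels to avoid visible popping; the hard cut here is only to keep the sketch short.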
present at the physical set itself, and a proper visualization can help an actor or performer block out a scene or action with the comfort that their practice isn't at the expense of the rest of production. Old sets can be captured digitally before being torn down, allowing them to persist as places to revisit and explore for entertainment and inspiration, and multiple sets can be kit-bashed in ways that tighten the iteration loops of set design, sound design, coloring, and many other aspects of production.
assets to complement the existing geometry data, or using the existing data as a base on which to build, similar to how a digital painter might paint over a basic 3D render. The onus will be on artisans to keep up with the tools and workflows that best suit their skillsets, but the prudent will find that the production pipeline of the future offers many opportunities to streamline labor-intensive work and to invest in bigger creative challenges.
, resulting in a near-perfect replica of the set. Full performance capture, however, uses an array of video cameras to capture real-time information. Those synchronized cameras are then used frame by frame to generate a set of points or geometry that can be played back at speed, resulting in a full volumetric performance capture that can be composited into any environment. In 2008, 4DViews installed a first volumetric video capture system at the DigiCast studio in Tokyo, Japan. Later in 2015,
Intense cleanup is required to create the final set of triangles. To extend beyond the physical world, CG techniques can be deployed to further enhance the captured data, with artists building onto and into the static mesh as necessary. Playback is usually handled by a real-time engine and resembles a traditional game pipeline in implementation, allowing interactive lighting changes and creative, archivable ways of compositing static and animated meshes together.
used to visualize micro-scale scenarios at a cellular level as much as epic events that changed the course of human history. The main advantage of virtual field trips is the democratisation of high-end educational scenarios: being able to visit a museum without having to physically be there reaches a broader audience and also enables institutions to show their entire inventory rather than the subset currently on display.
, this was suddenly possible. Stereoscopic viewing and the ability to rotate and move the head, as well as to move within a small space, allow immersion into environments well beyond what was possible in the past. The photographic nature of the captures, combined with this immersion and the resulting interactivity, is one giant step closer to the holy grail of true virtual reality. With the rise of 360° video content, the demand for
EF EVE™ supports an unlimited number of Azure Kinect sensors on one PC, giving full volumetric capture with an easy setup; it also offers automatic sensor calibration and VFX functionality. Depthkit is a software suite that allows the capture of geometry data with one structured-light sensor, including the Azure Kinect, as well as high-quality color detail from an attached witness camera.
, and many other science-fiction productions over the years. Through growing advances in the fields of computer graphics, optics, and data processing, this fiction has slowly evolved into reality. Volumetric video is the logical next step after stereoscopic movies and 360° videos, in that it combines the visual quality of photography with the
community. By projecting a known pattern onto the space and capturing its distortion by objects in the scene, the resulting capture can be computed into different outputs. Artists and hobbyists started to build tools and projects around the affordable device, sparking a growing interest in volumetric capture as a creative medium.
than anything completely CG. By combining these real-life set captures with volumetric captures of additional CG elements, we will be able to blend real life and our imagination in a way previously possible only on a flat screen, creating new fields in areas like compositing and VFX.
Once a capture is created and saved, it can be re-used, and even re-purposed, for circumstances beyond the initially envisioned scope. Creating a virtual set enables volumetric videographers and cinematographers to create stories and plan shots without needing a crew, or even needing to be
As volumetric video evolves into global capture and display hardware evolves to match, we will enter an era of true immersion, where the nuances of a captured environment combined with those of captured performances will convey emotionality in a whole new medium, blurring the boundaries between
Real estate and tourism could preview destinations accurately, and the retail industry could become much more customized for the individual. Product capture has already been done for shoes, and magic mirrors can be used in stores to visualize the results. Shopping malls have started to embrace this to repopulate them
Documenting spaces for historical events, captured live or recreated, will benefit the educational sector greatly. Virtual lectures depicting big events in history with an immersive component will help future generations imagine spaces and learn collaboratively about events. This can be abstracted and
In order to store and play back the captured data, enormous datasets need to be streamed to the consumer. Currently the most effective delivery method is to build bespoke apps. There is no standard yet for generating volumetric video and making it experienceable at home. Compression of this data is
Most importantly, skills currently rendered semi-obsolete by advances in computer graphics and offline rendering will once again become relevant, as the fidelity of real, hand-built sets and quality tailored costumes rendered as high-volume captures will almost always be far more immersive
Volumetric capture excels at capturing static data or pre-rendered animated footage. It cannot, however, create an imaginary environment or natively allow for any level of interactivity. This is where skilled artists and developers will be in highest demand, creating seamless interactive events and
Current video and film production pipelines are not immediately ready to transition to volumetric production. Every step in the filmmaking process needs to be rethought and reinvented. On-set capture, directing of talent, editing, photography, storytelling, and much more are all fields
are still in the research and development stage, one day in the not-so-distant future we will travel convincingly to new locales, both real and imagined. Industries such as tourism and journalism will find new life in the ability to transport a viewer or visitor safely to a location, while others such as
volumetric capture. The resulting data is represented as points or particles in 3D space, carrying attributes such as color and point size with them. This allows for greater information density and higher-resolution content. The required data rates are high, and current graphics hardware is not optimized
camera family to capture 360° video footage that is stitched with the help of distance maps. Extracting this raw data is possible and allows a high-resolution capture of any stage. Again, the data rates combined with the fidelity of the depth maps are huge bottlenecks, but ones likely to be overcome
is creating consumer-facing cameras to allow the capture of light fields. Fields can be captured inside-out in camera, or outside-in from renderings of 3D geometry, representing a huge amount of information ready to be manipulated. Data rates are still a large issue, but the technique has a
As volumetric video developed into a commercially applicable approach to environment and performance capture, the ability to move about the results with six degrees of freedom and true stereoscopy necessitated a new type of display device. With the rise of consumer-facing VR in 2016 through devices such as the Oculus Rift and HTC Vive
Photogrammetry describes the process of measuring data based on photographic reference. While as old as photography itself, only through years of advances in volumetric capture research has it become possible to capture ever more geometry and texture detail from a large number of input images.
Besides applications in entertainment, several other industries have a vested interest in capturing scenes in the detail described above. Sports events would benefit greatly from a detailed replay of the state of a game. This is already happening in American football and baseball, as well as
As every medium creates its own visual language, rules, and creative approaches, volumetric video is still in its infancy in this regard, comparable to the addition of sound to moving pictures, when new design philosophies had to be created and tested. Currently, the language of film and the art of directing are
While one goal of the point-based approach to volumetric capture is to stream point data from the cloud to the user at home, allowing the creation and dissemination of realistic virtual worlds on demand, a second, more recently considered goal is a real-time data stream of live events. This
VR experience. There are currently three studios in operation: Redmond, WA; San Francisco, CA; and London, England. While this remains a very interesting setup for the high-end market, the affordable price of a single Kinect device led more experimental artists and independent directors to become
battle-hardened over 100 years. In a fully six-degrees-of-freedom, interactive, non-linear world, many of the traditional approaches cannot function. The more experiences are created and analyzed, the quicker the community can converge on this language of experiences.
One area of concern in the growing field of volumetric capture is shrinking demand for traditional skillsets like modeling, lighting, and animation. However, while the stack of production-oriented volumetric capture technologies will keep growing, so too will the demand for
The ultimate goal is to imitate reality in minute detail while giving creatives the power to build worlds atop this foundation to match their vision. Traditionally, artists create these worlds using modeling and rendering techniques developed over the decades since the birth of computer graphics.
similar to the geometry used for computer games and visual effects. The data volume is usually smaller, but the quantization of real-world data limits the resolution and visual fidelity. Trade-offs are generally made between mesh density and final experience performance.
In 2010 Microsoft brought the Kinect to the market, a consumer product that used structured light in the infrared spectrum to generate a 3D mesh from its camera. While the intent was to facilitate innovation in user input and gameplay, it was very quickly adapted as a generic capture device for 3D data in the volumetric capture
and produces enormous amounts of data. In 2007 the band Radiohead used it extensively to create a music video for "House of Cards", capturing point-cloud performances of the singer's face and of select environments in one of the first uses of this technology for volumetric capture. Director
The main advantage of points is the potential for higher spatial resolution. Points can either be scattered on triangle meshes with pre-computed lighting, or used directly from a LIDAR scanner. Talent performance is captured the same way as in the mesh-based approach, but more time and
The result is usually split into two composited sources: static geometry and full performance capture. For static geometry, sets captured with a large number of overlapping digital images are aligned to each other using similar features in the images and used as a base for triangulation
, scanning devices, and the computational backend to handle the data received from these new, intensive methods. Generally, these advances have come as a by-product of creating more advanced visuals for entertainment and media, rather than being the goal of the field itself.
After capturing and generating the data, editing and compositing are done within a real-time engine, connecting recorded actions to tell the intended story. The final product can then be viewed either as a flat rendering of the captured data, or interactively in a VR headset.
Researchers at Microsoft then constructed an entire capture stage using multiple cameras, Kinect devices, and algorithms that generated a full volumetric capture from the combined optical and depth information. This is now the
With a general understanding of the technology in mind, this chapter describes the advances on the horizon for entertainment and other industries, as well as the potential this technology has to change the media landscape.
in search of a reasonable way to stream the data. This would make truly interactive immersive projects more efficient to distribute and work on, and it needs to be solved before the medium becomes mainstream.
Point clouds, being distinct samples of three-dimensional space with position and color, create a high-fidelity representation of the real world with a huge amount of data. However, viewing this data in real time was not yet possible.
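To make that data volume concrete, here is a back-of-envelope estimate for an uncompressed point stream; the per-point layout (three float32 coordinates plus 8-bit RGB) and the one-million-point frame are illustrative assumptions, not figures from the text.

```python
# Rough, assumption-based estimate of raw point-cloud bandwidth.

def raw_point_cloud_rate(points_per_frame, fps=30):
    bytes_per_point = 3 * 4 + 3                        # 12 B position + 3 B color
    return points_per_frame * bytes_per_point * fps    # bytes per second

rate = raw_point_cloud_rate(1_000_000)        # one million points per frame
print(f"{rate / 1e6:.0f} MB/s uncompressed")  # 450 MB/s uncompressed
```

Even under these modest assumptions the raw stream is hundreds of megabytes per second, which is why compression and streaming research figure so heavily in this field.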
to capture the 3D point clouds used for this music clip, and while the final output of this work was still a rendered flat representation of the data, the capture process and the mindset of its authors were already ahead of their time.
Different workflows for generating volumetric video are currently available. These are not mutually exclusive and are effectively used in combination. The following are some examples:
real and virtual worlds. This breakthrough in sensory trickery will spark an evolution in the way we consume media, and while technologies for the other senses, like smell and proprioception,
The capture and creation process for volumetric data is full of challenges and unsolved problems. It is the next step in cinematography and comes with issues that will be resolved over time.
immersion and interactivity of spatialized content, and could prove to be the most important development in the recording of human performance since the creation of contemporary cinema.
architectural visualization and civil engineering will find ways to build entire structures and cities and explore them without the need for a single swing of a hammer.
6-DOF capture is rising, and VR in particular drives the applications for this technology, slowly fusing cinema, games, and art with the field of volumetric capture research.
that need to spend time adapting to volumetric workflows. Currently each production uses a variety of technologies and is still working out the rules of engagement.
8i contributed in the field, and recently Intel, Microsoft, and Samsung have joined in by creating their own capture stages for performance capture and photogrammetry.
Photogrammetry is usually used as a base for static meshes, and is then augmented with performance capture of talent via the same underlying technology of videogrammetry.
LIDAR scanning describes a survey method that uses densely packed laser-sampled points to scan static objects into a point cloud. This requires physical scanners
Awake: Episode One, Start VR & Animal Logic, interactive cinematic VR experience (filmed at Microsoft Mixed Reality Capture Studio, Redmond, WA)
William Patrick Corgan: Aeronaut, VR experience and music video (filmed at Microsoft Mixed Reality Capture Studio, Redmond, WA)
British soccer. Those 360° replays will enable viewers in the future to analyze a match from multiple perspectives.
, used today as part of both their research division and in select commercial experiences such as the
Blade Runner 2049: Memory Lab, VR experience (filmed at Microsoft Mixed Reality Capture Studio, Redmond, WA)
Volumetric capture or volumetric video is a technique that captures a three-dimensional space, such as a location or performance. This type of volumography
with more advanced depth estimation techniques, compression, and parametric light fields.
large potential for the future as it samples light and displays the result in a variety of ways.
Light fields describe, at a given sample point, the incoming light from all directions. This is then used in post-processing
requires very high bandwidth, as pixel information includes depth data (i.e., pixels become voxels).
, and other computation-based methods. The viewer generally experiences the result in a real-time
by attracting customers with VR arcades as well as by presenting merchandise virtually.
of the scene, meaning each pixel carries information about its distance from the camera.
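The per-pixel depth described here can be turned into 3D points with the standard pinhole back-projection X = (u - cx)·z/fx, Y = (v - cy)·z/fy, Z = z; the toy depth map and camera intrinsics below are hypothetical values chosen only to keep the sketch readable.

```python
# Minimal sketch: back-project a depth map into a 3D point cloud.

def backproject(depth, fx, fy, cx, cy):
    """depth: rows of metric depths; returns a list of (X, Y, Z) points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:            # zero marks "no measurement"
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 toy depth map, focal length 1 px, principal point at the origin
print(backproject([[1.0, 2.0], [0.0, 4.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0))
# [(0.0, 0.0, 1.0), (2.0, 0.0, 2.0), (4.0, 4.0, 4.0)]
```

Real systems use per-sensor calibrated intrinsics and fuse the clouds from several such depth sensors into one capture.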
, and other ways of measuring the world has always been an important topic in computer graphics.
Recording talent without the limitation of a flat screen has been depicted in science fiction for a long time.
active in the volumetric capture field. Two results of this activity are Depthkit and EF EVE™.
as well as allowing the user to move their head slightly. Since 2006, Lytro
acquires data that can be viewed on flat screens as well as with 3D displays and VR goggles.
to render this, being optimised for a mesh-based render pipeline.
engine and has direct input in exploring the generated volume.
Another by-product of this technique is a reasonably accurate depth map
and depth estimation. This information is interpreted as 3D geometry
Lytro Illum camera, a second-generation light field camera.
Visual effects in movies and video games paved the way for advances in photogrammetry
Consumer-facing formats are numerous, and the required motion capture
Carne y Arena, Alejandro G. Iñárritu, LACMA art exhibit
Holograms and 3D real-world visuals have featured prominently in Star Wars, Blade Runner
This approach generates a more traditional 3D triangle mesh
Recently the spotlight has shifted towards point-based
House of Cards, Radiohead, music video
Microsoft Mixed Reality Capture Studio
starting to become available with the
List of experiences contributing
James Frost collaborated with media artist Aaron Koblin
Creating 3D models from video, photography,
Moving Picture Experts Group
to generate effects such as depth of field,
Facebook is using this idea in its Surround360
Computer graphics and VFX
traditional skillsets.
Full capture and re-use
Virtual reality headset
Traditional skillsets
Leica HDS-3000 LIDAR
Future applications
Pipeline disruption
techniques lean on computer graphics, photogrammetry
A multi-camera setup recording a "bullet time" effect
Volumetric capture
Blade Runner 2049
Xbox One's Kinect
Structured light
Visual language
Virtual reality
True immersion
Photogrammetry
Light fields
Point-based
Data rates
Challenges
Mesh-based
Workflows
Promises
History
LIDAR