Range imaging

Range imaging is the name for a collection of techniques that are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.

The resulting range image has pixel values that correspond to the distance. If the sensor that is used to produce the range image is properly calibrated, the pixel values can be given directly in physical units, such as meters.
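
As a minimal sketch of what a calibrated range camera's output looks like in practice, the snippet below assumes a hypothetical sensor that reports depth as 16-bit integers in millimeters, with zero marking pixels that returned no valid measurement; the encoding and scale factor are illustrative assumptions, not the specification of any particular camera.

    import numpy as np

    # Hypothetical raw range image: one 16-bit value per pixel.
    # Assumption for this sketch: the sensor encodes depth in millimeters
    # and uses 0 for pixels with no valid measurement.
    raw = np.array([[1500, 1498,    0],
                    [1503, 1501, 1499]], dtype=np.uint16)

    MM_PER_UNIT = 1.0                     # assumed calibration: 1 raw unit = 1 mm
    depth_m = raw.astype(np.float64) * MM_PER_UNIT / 1000.0
    depth_m[raw == 0] = np.nan            # flag invalid pixels instead of calling them 0 m

    print(depth_m)                        # pixel values now directly in meters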

Types of range cameras

The sensor device that is used for producing the range image is sometimes referred to as a range camera or depth camera. Range cameras can operate according to a number of different techniques, some of which are presented here.

Stereo triangulation

Stereo triangulation is an application of stereophotogrammetry where the depth data of the pixels are determined from data acquired using a stereo or multiple-camera setup system. This way it is possible to determine the depth to points in the scene, for example, from the center point of the line between their focal points. In order to solve the depth measurement problem using a stereo camera system it is necessary to first find corresponding points in the different images. Solving the correspondence problem is one of the main problems when using this type of technique. For instance, it is difficult to solve the correspondence problem for image points that lie inside regions of homogeneous intensity or color. As a consequence, range imaging based on stereo triangulation can usually produce reliable depth estimates only for a subset of all points visible in the multiple cameras.

The advantage of this technique is that the measurement is more or less passive; it does not require special conditions in terms of scene illumination. The other techniques mentioned here do not have to solve the correspondence problem, but are instead dependent on particular scene illumination conditions.
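
Once corresponding points have been found, the depth itself follows from triangulation. Below is a minimal sketch for the common special case of a rectified two-camera setup, where depth equals focal length times baseline divided by disparity; the focal length, baseline and disparity values are made-up numbers for illustration.

    import numpy as np

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Depth of points from their disparity between two rectified views.

        For a rectified stereo pair, similar triangles give
            depth = focal_length * baseline / disparity,
        so large disparities correspond to nearby points.
        """
        disparity_px = np.asarray(disparity_px, dtype=np.float64)
        with np.errstate(divide="ignore"):
            depth = focal_px * baseline_m / disparity_px
        depth[disparity_px <= 0] = np.nan   # no correspondence found for this pixel
        return depth

    # Assumed example numbers: 700 px focal length, 12 cm baseline.
    disparities = np.array([[35.0, 20.0, 0.0]])
    print(depth_from_disparity(disparities, focal_px=700.0, baseline_m=0.12))
    # prints [[2.4 4.2 nan]] -- depths in meters, NaN where no match was found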

Sheet of light triangulation

If the scene is illuminated with a sheet of light, this creates a reflected line as seen from the light source. From any point out of the plane of the sheet, the line will typically appear as a curve, the exact shape of which depends both on the distance between the observer and the light source, and on the distance between the light source and the reflected points. By observing the reflected sheet of light using a camera (often a high resolution camera) and knowing the positions and orientations of both camera and light source, it is possible to determine the distances between the reflected points and the light source or camera.

By moving either the light source (and normally also the camera) or the scene in front of the camera, a sequence of depth profiles of the scene can be generated. These can be represented as a 2D range image.
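
The geometry reduces to intersecting each camera viewing ray with the known plane of the light sheet. A minimal sketch, assuming a pinhole camera at the origin and a light-sheet plane known from calibration; the function name and all calibration numbers are invented for illustration.

    import numpy as np

    def triangulate_sheet_of_light(pixel_xy, focal_px, principal_point, plane_n, plane_d):
        """Intersect the viewing ray of a detected laser-line pixel with the plane
        of the light sheet (pinhole camera at the origin, plane given as n . X = d)."""
        u, v = pixel_xy
        cx, cy = principal_point
        ray = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])  # viewing-ray direction
        t = plane_d / np.dot(plane_n, ray)    # ray parameter where the ray meets the sheet
        return t * ray                        # 3D point on the surface; its z component is the depth

    # Assumed example: a light sheet 0.5 m from the camera, tilted by 30 degrees.
    n = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
    point = triangulate_sheet_of_light((400, 260), focal_px=800.0,
                                       principal_point=(320, 240), plane_n=n, plane_d=0.5)
    print(point)                              # the z value is roughly 0.55 m for this geometry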

Structured light

By illuminating the scene with a specially designed light pattern, structured light, depth can be determined using only a single image of the reflected light. The structured light can be in the form of horizontal and vertical lines, points or checker board patterns. A light stage is basically a generic structured light range imaging device originally created for the job of reflectance capture.
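
One way to picture the geometry is that each stripe or line of the pattern defines its own known plane in space, so once a pixel's stripe has been identified, depth follows from the same ray-plane intersection as in the sheet-of-light case. The sketch below assumes the stripe index has already been decoded from the pattern (that step depends on the particular pattern design) and that the plane of each stripe is known from calibration; all names and numbers are illustrative.

    import numpy as np

    def depth_from_stripe(pixel_xy, stripe_index, stripe_planes, focal_px, principal_point):
        """Depth at one pixel of a structured-light image.

        stripe_planes[i] = (n, d) is the plane swept by stripe i of the projected
        pattern, written as n . X = d in camera coordinates. The stripe index is
        assumed to have been decoded from the pattern beforehand."""
        n, d = stripe_planes[stripe_index]
        u, v = pixel_xy
        cx, cy = principal_point
        ray = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])
        return (d / np.dot(n, ray) * ray)[2]      # z component = depth of the surface point

    # Assumed toy calibration: three stripes, each sweeping a slightly more tilted plane.
    planes = [(np.array([np.sin(a), 0.0, np.cos(a)]), 0.6) for a in np.radians([20, 25, 30])]
    print(depth_from_stripe((350, 240), stripe_index=1, stripe_planes=planes,
                            focal_px=800.0, principal_point=(320, 240)))
    # roughly 0.65 m for this made-up geometry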

Time-of-flight

The depth can also be measured using the standard time-of-flight (ToF) technique, more or less like a radar, in that a range image similar to a radar image is produced, except that a light pulse is used instead of an RF pulse. It is also not unlike a LIDAR, except that ToF is scannerless, i.e., the entire scene is captured with a single light pulse, as opposed to point-by-point with a rotating laser beam. Time-of-flight cameras are relatively new devices that capture a whole scene in three dimensions with a dedicated image sensor, and therefore have no need for moving parts. A time-of-flight laser radar with a fast gating intensified CCD camera achieves sub-millimeter depth resolution. With this technique a short laser pulse illuminates a scene, and the intensified CCD camera opens its high speed shutter only for a few hundred picoseconds. The 3D information is calculated from a 2D image series that was gathered with increasing delay between the laser pulse and the shutter opening.
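
The idea behind the gated image series is that a pixel returns significant light only when the shutter delay matches the round-trip time to the corresponding surface point, and that round-trip time converts to distance through the speed of light. A heavily idealized sketch, which ignores the finite pulse and gate widths a real system has to model; the image stack and delays are made-up toy data.

    import numpy as np

    C = 299_792_458.0                               # speed of light, m/s

    def depth_from_gated_stack(images, delays_s):
        """Depth per pixel from a series of range-gated exposures.

        images   : array of shape (num_delays, height, width), one exposure per delay
        delays_s : delay between laser pulse and shutter opening for each exposure
        Idealization: each pixel is brightest in the exposure whose delay matches
        the round trip of the light, so depth = c * delay / 2 at that exposure."""
        images = np.asarray(images, dtype=np.float64)
        best = np.argmax(images, axis=0)            # index of the brightest exposure per pixel
        return C * np.asarray(delays_s)[best] / 2.0

    delays = np.array([10e-9, 20e-9, 30e-9])        # 10, 20, 30 ns shutter delays
    stack = np.zeros((3, 2, 2))                     # toy 2x2 image, each pixel peaking at one delay
    stack[0, 0, 0] = stack[1, 0, 1] = stack[1, 1, 0] = stack[2, 1, 1] = 1.0
    print(depth_from_gated_stack(stack, delays))    # about 1.5 m, 3.0 m, 3.0 m and 4.5 m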

Interferometry

By illuminating points with coherent light and measuring the phase shift of the reflected light relative to the light source, it is possible to determine depth. Under the assumption that the true range image is a more or less continuous function of the image coordinates, the correct depth can be obtained using a technique called phase unwrapping. See terrestrial SAR interferometry.
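
A minimal numerical sketch of the phase measurement and the unwrapping step, assuming a single laser wavelength and a smoothly varying surface; the wavelength and the simulated profile are made-up values, and real interferometric processing involves considerably more than this.

    import numpy as np

    WAVELENGTH_M = 1550e-9                   # assumed laser wavelength

    # Assumed measurement: the phase of the reflected light along one image row,
    # wrapped into (-pi, pi] by the sensor, for a smoothly varying surface.
    true_depth = np.linspace(0.0, 5e-6, 50)                  # 0 to 5 micrometers of relief
    wrapped = np.angle(np.exp(1j * 4 * np.pi * true_depth / WAVELENGTH_M))

    # Unwrapping relies on the assumption that the true range image is a more or
    # less continuous function of the image coordinates.
    unwrapped = np.unwrap(wrapped)
    depth = unwrapped * WAVELENGTH_M / (4 * np.pi)           # round trip: 4*pi of phase per wavelength of depth

    print(np.allclose(depth, true_depth))                    # True: the relief is recovered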

Coded aperture

Depth information may be partially or wholly inferred alongside intensity through reverse convolution of an image captured with a specially designed coded aperture pattern: a specific, complex arrangement of holes through which the incoming light is either allowed through or blocked. The complex shape of the aperture creates a non-uniform blurring of the image for those parts of the scene not at the focal plane of the lens. The extent of blurring across the scene, which is related to the displacement from the focal plane, may be used to infer the depth.

In order to identify the size of the blur (needed to decode depth information) in the captured image, two approaches can be used: 1) deblurring the captured image with different blurs, or 2) learning some linear filters that identify the type of blur.

The first approach uses correct mathematical deconvolution that takes account of the known aperture design pattern; this deconvolution can identify where and by what degree the scene has become convoluted by out-of-focus light selectively falling on the capture surface, and reverse the process. Thus the blur-free scene may be retrieved together with the size of the blur.
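
A toy illustration of the building block behind the first approach: when the aperture pattern, and hence the blur kernel at a given depth, is known, the blur can be reversed by deconvolution. The sketch below simulates a coded blur and undoes it with a simple Wiener filter; choosing among several candidate blur sizes would additionally need a selection criterion (for example, a natural-image prior on the deconvolved result), which is omitted here, and both the pattern and the image are synthetic.

    import numpy as np

    def blur(image, kernel):
        """Circular convolution via FFT: the forward model of out-of-focus light
        being spread according to the aperture pattern."""
        return np.fft.irfft2(np.fft.rfft2(image) * np.fft.rfft2(kernel, image.shape), image.shape)

    def wiener_deconvolve(image, kernel, noise=1e-4):
        """Reverse a known blur; `noise` regularizes frequencies the kernel barely passes."""
        K = np.fft.rfft2(kernel, image.shape)
        return np.fft.irfft2(np.conj(K) * np.fft.rfft2(image) / (np.abs(K) ** 2 + noise), image.shape)

    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))                  # stand-in for the in-focus scene

    pattern = rng.random((5, 5)) > 0.5            # made-up coded aperture: open/blocked cells
    pattern[0, 0] = True                          # make sure at least one cell is open
    kernel = np.zeros((64, 64))
    kernel[:5, :5] = pattern
    kernel /= kernel.sum()

    captured = blur(sharp, kernel)                # what the camera records for a defocused scene
    recovered = wiener_deconvolve(captured, kernel)
    print(np.abs(captured - sharp).mean(), np.abs(recovered - sharp).mean())
    # the deconvolved estimate is far closer to the original scene than the blurred capture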

The second approach instead extracts the extent of the blur while bypassing the recovery of the blur-free image, and therefore without performing reverse convolution. Using a principal component analysis (PCA) based technique, the method learns off-line a bank of filters that uniquely identify each size of blur; these filters are then applied directly to the captured image, as a normal convolution. The most important advantage of this approach is that no information about the coded aperture pattern is required. Because of its efficiency, this algorithm has also been extended to video sequences with moving and deformable objects.
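
A toy, one-dimensional rendition of the key property of the second approach: filters applied directly to the captured signal, with no deconvolution, reveal the blur size. Instead of filters learned off-line with PCA, the sketch uses box blurs, for which a filter nulled by exactly one blur size can be written down analytically (a sinusoid at a frequency that a box of that length suppresses completely); the learned filters in the real method play the same role. All signals are synthetic.

    import numpy as np

    N = 60                                         # signal length
    rng = np.random.default_rng(3)
    scene = rng.random(N)                          # unknown sharp signal

    def box_blur(signal, size):
        """Circular blur with a box of `size` samples (stand-in for a defocus blur of that size)."""
        kernel = np.zeros(len(signal))
        kernel[:size] = 1.0 / size
        return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

    sizes = [3, 4, 5]
    captured = box_blur(scene, 5)                  # the blur size is unknown to the estimator

    # One filter per candidate size: a sinusoid at a frequency that a box of that
    # length wipes out completely, so its response vanishes only for the matching
    # blur size. (In the actual method these filters are learned with PCA.)
    filters = {s: np.exp(-2j * np.pi * (N // s) * np.arange(N) / N) for s in sizes}
    responses = {s: abs(np.dot(captured, f)) for s, f in filters.items()}

    print(min(responses, key=responses.get))       # 5: identified without any deconvolution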

Since the depth of a point is inferred from the extent of its blurring, which is caused by the light spreading out from the corresponding point in the scene, arriving across the entire surface of the aperture and being distorted according to this spread, this is a complex form of stereo triangulation: each point in the image is effectively spatially sampled across the width of the aperture.

This technology has lately been used in the iPhone X. Many other phones from Samsung and computers from Microsoft have tried to use this technology, but they do not use it for 3D mapping.

See also

3D scanner
Depth map
Intensified CCD camera
Kinect
Laser Dynamic Range Imager
Laser rangefinder
Lidar
Light-field camera (plenoptic camera)
Optical flow
Photogrammetry
Structure from motion
Structured-light 3D scanner
Time-of-flight camera

References

Bernd Jähne (1997). Practical Handbook on Image Processing for Scientific Applications. CRC Press. ISBN 0-8493-8906-2.
Linda G. Shapiro and George C. Stockman (2001). Computer Vision. Prentice Hall. ISBN 0-13-030796-3.
David A. Forsyth and Jean Ponce (2003). Computer Vision, A Modern Approach. Prentice Hall. ISBN 0-12-379777-2.
Jens Busck and Henning Heiselberg (2004). "High accuracy 3D laser radar". Danmarks Tekniske Universitet.
Anat Levin, Rob Fergus, Frédo Durand and William T. Freeman (MIT). "Image and depth from a conventional camera with a coded aperture".
Martinello, Manuel; Favaro, Paolo (2011). "Single Image Blind Deconvolution with Higher-Order Texture Statistics". Video Processing and Computational Video. Lecture Notes in Computer Science. Vol. 7082. Springer-Verlag. pp. 124–151. doi:10.1007/978-3-642-24870-2_6. ISBN 978-3-642-24869-6.
Martinello, Manuel (2012). Coded Aperture Imaging. Heriot-Watt University.
Martinello, Manuel; Favaro, Paolo (2012). "Depth estimation from a video sequence with moving and deformable objects". IET Conference on Image Processing (IPR 2012). p. 131. doi:10.1049/cp.2012.0425. ISBN 978-1-84919-632-1.