Depth from Defocus via Active Multispectral Quasi-random Point Projections using Deep Learning

  • Avery Ma
  • Alexander Wong
  • David Clausi

Abstract

A novel approach for inferring depth measurements via multispectral
active depth from defocus and deep learning has been designed,
implemented, and successfully tested. The scene is actively
illuminated with a multispectral quasi-random point pattern,
and a conventional RGB camera is used to acquire images of the
projected pattern. The projection points in the captured image of
the projected pattern are analyzed using an ensemble of deep neural
networks to estimate the depth at each projection point. A final
depth map is then reconstructed algorithmically based on the point
depth estimates. Experiments using different test scenes with different
structural characteristics show that the proposed approach
can produce improved depth maps compared to prior deep learning
approaches using monospectral projection patterns.
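The abstract's pipeline (project a quasi-random multispectral point pattern, estimate depth per point with a model ensemble) can be illustrated with a minimal sketch. This is not the authors' implementation: the rejection-sampled point pattern, the random channel assignment, and the placeholder per-patch models are all illustrative assumptions standing in for the paper's pattern generator and trained deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def quasi_random_points(n, width, height, min_dist):
    """Rejection-sample a point pattern with a minimum spacing between
    points (a simple stand-in for the paper's quasi-random pattern)."""
    pts = []
    while len(pts) < n:
        p = rng.uniform([0.0, 0.0], [width, height])
        if all(np.linalg.norm(p - q) >= min_dist for q in pts):
            pts.append(p)
    return np.array(pts)

def multispectral_labels(pts, channels=3):
    """Assign each projection point one of the spectral channels (e.g.
    R/G/B) so a conventional RGB camera can separate nearby points."""
    return rng.integers(0, channels, size=len(pts))

def ensemble_depth(patches, models):
    """Average per-point depth estimates over an ensemble of models.
    Each 'model' is any callable mapping an image patch to a depth value;
    in the paper these would be trained deep neural networks."""
    preds = np.stack([np.array([m(p) for p in patches]) for m in models])
    return preds.mean(axis=0)
```

A final dense depth map would then be reconstructed from the sparse per-point estimates, for example by interpolation over the point locations.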

Published: 2017-10-15

How to Cite: Ma, A., Wong, A., & Clausi, D. (2017). Depth from Defocus via Active Multispectral Quasi-random Point Projections using Deep Learning. Journal of Computational Vision and Imaging Systems, 3(1). https://doi.org/10.15353/vsnl.v3i1.165

Section: Articles