Depth from Defocus via Active Multispectral Quasi-random Point Projections using Deep Learning

Avery Ma, Alexander Wong, David Clausi


A novel approach for inferring depth measurements via multispectral
active depth from defocus and deep learning has been designed,
implemented, and successfully tested. The scene is actively
illuminated with a multispectral quasi-random point pattern,
and a conventional RGB camera is used to acquire images of the
projected pattern. The projection points in the captured image of
the projected pattern are analyzed using an ensemble of deep neural
networks to estimate the depth at each projection point. A final
depth map is then reconstructed algorithmically based on the point
depth estimates. Experiments on test scenes with varying
structural characteristics show that the proposed approach
can produce improved depth maps compared to prior deep-learning
approaches that use monospectral projection patterns.
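The core pipeline described above — per-point depth estimates from an ensemble of networks, followed by dense reconstruction from the sparse points — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `models` are stand-in callables for the trained ensemble members, and nearest-neighbor filling is a hypothetical stand-in for the paper's reconstruction step.

```python
import numpy as np

def ensemble_point_depths(patches, models):
    """Average depth estimates over an ensemble of per-point models.

    patches: list of image patches around each projection point.
    models:  callables mapping a patch to a scalar depth (placeholders
             for the trained deep networks in the paper).
    """
    preds = np.stack([[m(p) for p in patches] for m in models])
    return preds.mean(axis=0)  # one fused depth per projection point

def reconstruct_depth_map(points, depths, shape):
    """Fill a dense depth map from sparse point depths by assigning
    each pixel the depth of its nearest projection point (a simple
    stand-in for the algorithmic reconstruction step)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)
    # squared distance from every pixel to every projection point
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return depths[d2.argmin(axis=1)].reshape(shape)

# Toy usage: two dummy "models" and two projection points.
patches = [np.zeros((5, 5)) for _ in range(3)]
models = [lambda p: 1.0, lambda p: 3.0]
fused = ensemble_point_depths(patches, models)        # → [2.0, 2.0, 2.0]

points = np.array([[0, 0], [7, 7]])
point_depths = np.array([1.0, 5.0])
depth_map = reconstruct_depth_map(points, point_depths, (8, 8))
```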
