
Depth from Defocus via Active Multispectral Quasi-random Point Projections using Deep Learning

Abstract

A novel approach for inferring depth measurements via multispectral
active depth from defocus and deep learning has been designed,
implemented, and successfully tested. The scene is actively
illuminated with a multispectral quasi-random point pattern,
and a conventional RGB camera is used to acquire images of the
projected pattern. The projection points in the captured image of
the projected pattern are analyzed using an ensemble of deep neural
networks to estimate the depth at each projection point. A final
depth map is then reconstructed algorithmically based on the point
depth estimates. Experiments using different test scenes with different
structural characteristics show that the proposed approach
can produced improved depth maps compared to prior deep learning
approaches using monospectral projection patterns.
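
For illustration, the two inference steps described above (per-point depth estimation by an ensemble of networks, followed by algorithmic reconstruction of a dense depth map from the sparse point estimates) could be sketched as below. This is a minimal sketch only: the function names, the Keras-style `predict` interface, and the use of SciPy `griddata` interpolation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def ensemble_point_depths(patches, models):
    """Average the depth predicted by each model for every projection-point patch.

    patches: (N, H, W, C) image patches centred on detected projection points
    models:  iterable of trained networks exposing a Keras-style predict()
    """
    preds = np.stack([m.predict(patches).ravel() for m in models], axis=0)  # (M, N)
    return preds.mean(axis=0)  # one depth estimate per projection point

def reconstruct_depth_map(point_xy, point_depths, image_shape):
    """Interpolate sparse point depths into a dense depth map.

    Uses linear interpolation inside the convex hull of the projection points
    and nearest-neighbour fill elsewhere (an assumed reconstruction scheme).
    """
    h, w = image_shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    dense = griddata(point_xy, point_depths, (grid_x, grid_y), method="linear")
    holes = np.isnan(dense)
    dense[holes] = griddata(point_xy, point_depths,
                            (grid_x[holes], grid_y[holes]), method="nearest")
    return dense
```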
