Learning scene and blur model for active chromatic depth from defocus.

In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e., depth) information. Depth is then estimated from a single image patch using a maximum likelihood criterion defined with the learned covariance. This method is applied here within a new active DFD setup that combines a dense textured projection with a chromatic lens for image acquisition. The projector adds texture to low-textured objects, which are usually a limitation of DFD, and the chromatic aberration extends the estimated depth range with respect to conventional DFD. We provide quantitative evaluations of the depth estimation performance of our method on simulated and real data of fronto-parallel untextured scenes. The proposed method is then qualitatively evaluated on a 3D-printed benchmark.
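The patch-wise maximum likelihood step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes zero-mean Gaussian patch statistics with one learned covariance per candidate depth, and both function names are hypothetical.

```python
import numpy as np

def learn_covariance(patches, reg=1e-6):
    """Estimate a patch covariance from a stack of calibration patches
    (hypothetical helper; the paper learns covariance directly from a
    limited set of calibration images)."""
    X = np.asarray(patches, dtype=float).reshape(len(patches), -1)
    X -= X.mean(axis=0)                      # zero-mean patch model
    cov = X.T @ X / len(patches)
    return cov + reg * np.eye(cov.shape[0])  # regularize for invertibility

def ml_depth(patch, covariances, depths):
    """Return the candidate depth whose covariance maximizes the
    zero-mean Gaussian log-likelihood of the image patch."""
    y = np.ravel(patch).astype(float)
    y -= y.mean()
    best_depth, best_ll = None, -np.inf
    for depth, cov in zip(depths, covariances):
        _, logdet = np.linalg.slogdet(cov)   # log det of covariance
        ll = -0.5 * (logdet + y @ np.linalg.solve(cov, y))
        if ll > best_ll:
            best_depth, best_ll = depth, ll
    return best_depth
```

In the paper, each per-depth covariance would jointly encode scene texture and the (chromatic) defocus blur at that depth; here the per-depth covariances are simply supplied as inputs.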

Keywords: depth; learning scene; method; scene blur; defocus; depth from defocus

Journal Title: Applied optics
Year Published: 2021
