D-Net: A Generalised and Optimised Deep Network for Monocular Depth Estimation

Depth estimation is an essential component of computer vision systems for achieving 3D scene understanding. Efficient and accurate depth map estimation has numerous applications, including self-driving vehicles and virtual reality tools. This paper presents a new deep network, called D-Net, for depth estimation from a single RGB image. The proposed network can be trained end-to-end, and its structure can be customised to meet different requirements in model size, speed, and prediction accuracy. Our approach gathers strong global and local contextual features at multiple resolutions, then propagates them to higher resolutions to produce sharper depth maps. For the encoder backbone, D-Net can use many state-of-the-art models, including EfficientNet, HRNet, and the Swin Transformer, to obtain dense depth maps. D-Net is designed to have minimal parameters and reduced computational complexity. Extensive evaluations on the NYUv2 and KITTI benchmark datasets show that our model is highly accurate across multiple backbones and achieves state-of-the-art performance on both benchmarks when combined with the Swin Transformer and HRNet.
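
The abstract describes an encoder-decoder design: a swappable backbone extracts features at multiple resolutions, and a decoder fuses and upsamples them into a dense depth map. The PyTorch sketch below illustrates that general pattern only; every module name (TinyEncoder, FusionDecoder, DepthNet) and all layer choices are hypothetical stand-ins, not the actual D-Net architecture from the paper.

```python
# Minimal sketch of a generic encoder-decoder monocular depth network.
# Hypothetical illustration only; not the D-Net architecture itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """Stand-in backbone emitting features at 1/2, 1/4 and 1/8 resolution.

    In spirit, this slot could be filled by EfficientNet, HRNet or a
    Swin Transformer, as the abstract suggests.
    """
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        chans = [3] + list(widths)
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )
            for cin, cout in zip(chans, chans[1:])
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # fine-to-coarse list of multi-resolution features


class FusionDecoder(nn.Module):
    """Progressively upsamples coarse features and fuses finer skips."""
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        ws = list(widths)
        self.fuse = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(coarse + fine, fine, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for fine, coarse in zip(ws[:-1][::-1], ws[1:][::-1])
        )
        self.head = nn.Conv2d(ws[0], 1, 3, padding=1)  # 1-channel depth

    def forward(self, feats):
        x = feats[-1]  # start from the coarsest features
        for skip, fuse in zip(feats[-2::-1], self.fuse):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                              align_corners=False)
            x = fuse(torch.cat([x, skip], dim=1))
        return self.head(x)  # depth at the finest encoder resolution


class DepthNet(nn.Module):
    """End-to-end trainable encoder-decoder wrapper."""
    def __init__(self):
        super().__init__()
        self.encoder = TinyEncoder()
        self.decoder = FusionDecoder()

    def forward(self, x):
        depth = self.decoder(self.encoder(x))
        # Upsample to full input resolution for a dense depth map.
        return F.interpolate(depth, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    net = DepthNet()
    out = net(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 1, 224, 224])
```

In this sketch, depth is predicted at the finest encoder scale and bilinearly upsampled to the input size; the paper's actual multi-resolution fusion and upsampling scheme will differ and should be taken from the full text.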

Keywords: depth estimation; deep network

Journal Title: IEEE Access
Year Published: 2021
