Studies over the past decades have shown that the quality of depth maps can be significantly improved by introducing guidance from intensity images that describe the same scenes. With the rise of deep convolutional neural networks, the performance of guided depth map super-resolution has been further improved. Existing variants typically focus on deeper structures, optimized gradient flow, and feature reuse. Nevertheless, it is difficult to obtain sufficient and appropriate guidance from intensity features without any prior. In fact, features in the gradient domain, e.g., edges, exhibit strong correlations between the intensity image and the corresponding depth map, so guidance in the gradient domain can be exploited more efficiently. In this paper, the depth features are iteratively upsampled by 2×. In each upsampling stage, the low-quality depth features and the corresponding gradient features are iteratively refined by guidance from the intensity features via two parallel streams. Then, to make full use of depth features in both the image and gradient domains, the depth features and gradient features alternately complement each other. Extensive experimental results show improvements over state-of-the-art counterparts in both objective and subjective assessments. The code is available at https://github.com/Yifan-Zuo/MIG-net-gradient_guided_depth_enhancement.
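To make the described stage structure concrete, below is a minimal PyTorch sketch of one 2× upsampling stage with two parallel intensity-guided streams and alternating cross-domain complementing. It is not the authors' implementation (see the linked repository for that); all module and parameter names (DualStreamStage, GuidedRefineBlock, n_feats, n_iters) are hypothetical assumptions, and the exact fusion and upsampling operators are placeholders.

```python
# Hypothetical sketch of one 2x guided upsampling stage, assuming 64-channel
# feature maps and sub-pixel (PixelShuffle) upsampling. Not the authors' code.
import torch
import torch.nn as nn


class GuidedRefineBlock(nn.Module):
    """Refine one feature stream (depth or gradient) with intensity guidance."""

    def __init__(self, n_feats: int = 64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * n_feats, n_feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
        )

    def forward(self, feats, guide):
        # Concatenate the stream with the intensity guidance, refine residually.
        return feats + self.fuse(torch.cat([feats, guide], dim=1))


class DualStreamStage(nn.Module):
    """One 2x stage: parallel guided refinement of depth and gradient features,
    alternating cross-domain complementing, then 2x upsampling of both streams."""

    def __init__(self, n_feats: int = 64, n_iters: int = 2):
        super().__init__()
        self.depth_refine = nn.ModuleList(GuidedRefineBlock(n_feats) for _ in range(n_iters))
        self.grad_refine = nn.ModuleList(GuidedRefineBlock(n_feats) for _ in range(n_iters))
        # Exchange layers let each domain complement the other.
        self.grad_to_depth = nn.Conv2d(2 * n_feats, n_feats, 1)
        self.depth_to_grad = nn.Conv2d(2 * n_feats, n_feats, 1)
        # 2x upsampling via sub-pixel convolution (an assumption for this sketch).
        self.up_depth = nn.Sequential(nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1), nn.PixelShuffle(2))
        self.up_grad = nn.Sequential(nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, depth_feats, grad_feats, intensity_feats):
        for d_ref, g_ref in zip(self.depth_refine, self.grad_refine):
            # Two parallel streams, each refined under intensity guidance.
            depth_feats = d_ref(depth_feats, intensity_feats)
            grad_feats = g_ref(grad_feats, intensity_feats)
            # Alternating complementing between the image and gradient domains.
            depth_feats = self.grad_to_depth(torch.cat([depth_feats, grad_feats], dim=1))
            grad_feats = self.depth_to_grad(torch.cat([grad_feats, depth_feats], dim=1))
        return self.up_depth(depth_feats), self.up_grad(grad_feats)


if __name__ == "__main__":
    stage = DualStreamStage()
    d = torch.randn(1, 64, 32, 32)   # low-resolution depth features
    g = torch.randn(1, 64, 32, 32)   # gradient-domain features
    i = torch.randn(1, 64, 32, 32)   # intensity guidance features at the same scale
    d2, g2 = stage(d, g, i)
    print(d2.shape, g2.shape)        # both torch.Size([1, 64, 64, 64])
```

Stacking several such stages would realize the iterative 2× upsampling described in the abstract, with the intensity features re-extracted at each scale.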