This paper presents a novel Deep Learning (DL) model that estimates camera parameters, including camera rotations, field of view, and a distortion parameter, from single-view images. The classical approach typically analyzes explicit geometric cues such as vanishing points, and is therefore limited to images in which such cues are present. To relax this constraint, we use DL and exploit implicit geometric cues, which reflect inter-image changes of camera parameters and can be observed far more frequently in images. Our geometric cues are motivated by two key intuitions: 1) geometric appearance changes caused by camera parameters are most prominent at object edges; 2) spatially consistent objects (in size and shape) reflect inter-image changes of camera parameters more reliably. To realize this approach, we propose a weighted edge-attention mechanism that assigns higher weights to the edges of spatially consistent objects. Our experiments show that this edge-driven geometric emphasis significantly improves camera parameter estimation accuracy over existing DL-based approaches.
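As an illustration of what such edge-driven reweighting could look like in practice, the following is a minimal PyTorch sketch, not the paper's actual architecture: it approximates object edges with a fixed Sobel operator, estimates a per-pixel "spatial consistency" weight with a learned 1x1 convolution, and uses their product to emphasize edge features before regressing rotation, field of view, and a distortion coefficient. All module and parameter names (EdgeWeightedAttention, CameraParamHead, consistency) are hypothetical assumptions for this sketch.

```python
# Illustrative sketch only, NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeWeightedAttention(nn.Module):
    """Reweights CNN features so that edge regions (approximated here with a
    fixed Sobel operator) receive higher attention, modulated by a learned
    per-pixel consistency weight (hypothetical design)."""

    def __init__(self, channels: int):
        super().__init__()
        # Fixed Sobel kernels to approximate edges in the feature map.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)  # (2, 1, 3, 3)
        self.register_buffer("sobel", kernel)
        # Learned 1x1 conv that scores how "spatially consistent" a region is.
        self.consistency = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Edge magnitude from the channel-averaged feature map.
        gray = feat.mean(dim=1, keepdim=True)                  # (B, 1, H, W)
        grad = F.conv2d(gray, self.sobel, padding=1)           # (B, 2, H, W)
        edge = grad.pow(2).sum(dim=1, keepdim=True).sqrt()     # (B, 1, H, W)
        edge = edge / (edge.amax(dim=(2, 3), keepdim=True) + 1e-6)
        # Higher weights on edges of regions judged spatially consistent.
        weight = torch.sigmoid(self.consistency(feat)) * edge  # (B, 1, H, W)
        return feat * (1.0 + weight)                           # residual reweighting


class CameraParamHead(nn.Module):
    """Toy regression head: predicts rotation (3), field of view (1),
    and one distortion coefficient (1) from pooled backbone features."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = EdgeWeightedAttention(channels)
        self.fc = nn.Linear(channels, 5)  # [roll, pitch, yaw, fov, distortion]

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        feat = self.attn(feat)
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        return self.fc(pooled)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)     # stand-in for CNN backbone features
    params = CameraParamHead(64)(feat)
    print(params.shape)                   # torch.Size([2, 5])
```

In this sketch the edge map merely gates a learned consistency score; how the paper actually derives and combines these weights may differ.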
               