Texture mapping (TM), a fundamental task in 3-D modeling, is well established for carefully acquired aerial assets under consistent illumination, yet it remains a challenge when scaled to large datasets with images captured under varying views and illuminations. A well-performing TM algorithm must efficiently select views and fuse and map textures from these views onto mesh models, while at the same time achieving consistent radiometry over the entire model. Existing approaches achieve efficiency either by limiting texturing to one view per face or by simplifying global inference to achieve only local color consistency. In this article, we break this trade-off by proposing a novel and efficient TM framework that uses multiple texture views per face while achieving global color consistency. The proposed method leverages a loopy belief propagation algorithm to perform efficient, global-level probabilistic inference that ranks candidate views per face, which enables face-level multiview texture fusion and blending. Being nonparametric, the texture fusion algorithm brings a further advantage over typical parametric post-hoc color correction methods: improved robustness to nonlinear illumination differences. Experiments on three different types of datasets (i.e., a satellite dataset, an unmanned aerial vehicle (UAV) dataset, and a close-range dataset) show that the proposed method produces visually pleasant and texturally consistent results in all scenarios, with the added advantage of lower running time compared to state-of-the-art methods, especially for large-scale datasets such as satellite-derived models.
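The abstract names loopy belief propagation as the inference engine for ranking candidate views per face, but gives no formulation. The sketch below is a generic min-sum loopy BP on a face-adjacency graph, not the authors' actual model: the unary view costs, the Potts smoothness term, and all function and parameter names (`loopy_bp_view_ranking`, `smoothness`, `iters`) are illustrative assumptions introduced here.

```python
import numpy as np

def loopy_bp_view_ranking(unary, edges, smoothness=1.0, iters=20):
    """Min-sum loopy belief propagation on a face-adjacency graph.

    unary: (F, V) array of per-face, per-view costs (hypothetical costs,
           e.g. derived from viewing angle or resolution).
    edges: list of (i, j) undirected face adjacencies.
    Returns per-face beliefs (F, V); lower is better, so the argmin row-wise
    gives each face's top-ranked view, and sorting ranks the candidates.
    """
    F, V = unary.shape
    # Potts pairwise cost: free if neighbouring faces pick the same view.
    potts = smoothness * (1.0 - np.eye(V))
    neighbours = {f: [] for f in range(F)}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    # One message vector per directed edge, initialised to zero.
    msgs = {(i, j): np.zeros(V) for a, b in edges for i, j in ((a, b), (b, a))}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Local cost at i: unary plus all incoming messages except from j.
            h = unary[i] + sum(msgs[(k, i)] for k in neighbours[i] if k != j)
            # Minimise over i's labels for each label of j.
            m = np.min(h[:, None] + potts, axis=0)
            new[(i, j)] = m - m.min()  # normalise for numerical stability
        msgs = new
    beliefs = unary.astype(float).copy()
    for f in range(F):
        for k in neighbours[f]:
            beliefs[f] += msgs[(k, f)]
    return beliefs
```

With `smoothness=0` the beliefs reduce to the unary costs (pure per-face selection); increasing it trades per-face photometric quality for seam coherence between adjacent faces.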