Natural environments usually have a larger dynamic range than an optical camera can capture in a single shot. In this paper, we propose a multi-exposure fusion method that directly fuses differently exposed images of a high-dynamic-range scene into a single high-quality image. First, we present a joint weight that combines exposure-level measurements of the local and global luminance components of the input images. Second, we introduce a multiscale edge-preserving smoothing (MEPS) model that directly represents the weight maps. Third, two scale-aware factors of the MEPS model are determined adaptively, without manual tuning, to obtain an optimal representation at each scale of the weight maps. The proposed adaptive MEPS model requires no preliminary Gaussian filtering of the weight maps and significantly reduces spatial artifacts in the fused image. We compare the proposed method with eight existing methods on 30 sequences from two databases with different characteristics. The experimental results indicate that the proposed approach outperforms existing state-of-the-art methods in both quantitative and qualitative evaluation while maintaining high computational efficiency.
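The joint-weight idea described above can be illustrated with a minimal sketch. This is not the paper's actual measure or its MEPS smoothing model; it assumes a classic well-exposedness score (closeness of luminance to mid-gray, as in Mertens-style fusion), applied to both the local (per-pixel) and global (image-mean) luminance, followed by per-pixel normalization across exposures:

```python
import numpy as np

def exposure_weights(images, sigma_local=0.2, sigma_global=0.2):
    """Illustrative joint exposure weight (NOT the paper's measure).

    Each exposure is scored by how close its local (per-pixel) and
    global (mean) luminance are to mid-gray (0.5); the two scores are
    multiplied to form a joint weight, then normalized per pixel.
    """
    weights = []
    for img in images:  # img: float array in [0, 1], shape (H, W)
        local = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma_local ** 2))
        glob = np.exp(-((img.mean() - 0.5) ** 2) / (2.0 * sigma_global ** 2))
        weights.append(local * glob)
    w = np.stack(weights)                               # (N, H, W)
    return w / (w.sum(axis=0, keepdims=True) + 1e-12)   # normalize over exposures

def fuse(images):
    """Weighted per-pixel blend of the exposure stack."""
    w = exposure_weights(images)
    return (w * np.stack(images)).sum(axis=0)
```

In this simplified form, the well-exposed middle image dominates the blend; the paper instead refines such weight maps with its adaptive multiscale edge-preserving model before fusion, which is what suppresses the spatial artifacts a naive blend can produce.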