In human and computer vision, color constancy is the ability to perceive the true color of objects despite changing illumination conditions. Color constancy substantially benefits human- and computer-vision tasks such as human tracking, object and human detection, and scene understanding. Traditional color constancy approaches based on the gray-world assumption fall short of being universal predictors, while recent color constancy methods have progressed greatly with the introduction of convolutional neural networks (CNNs). Yet shallow CNN-based methods face limits on learning capacity. Accordingly, this article proposes a novel color constancy method that uses a multi-stream deep neural network (MSDNN)-based convoluted mixture of deep experts (CMoDE) fusion technique to perform deep learning and estimate local illumination. In the proposed method, the CMoDE fusion technique extracts and learns spatial and spectral features in an image space. Unlike previous works, in which residual networks stack multiple layers linearly and concatenate multiple paths, the proposed method distinctively piles up layers both in series and in parallel, and selects and concatenates effective paths in the CMoDE-based DCNN. As a result, the proposed CMoDE-based DCNN yields significant gains in both computational efficiency and illuminant-estimation accuracy. In the experiments, Shi's Reprocessed, Gray-Ball, and NUS-8 Camera datasets are used to demonstrate invariance to illumination and camera. The experimental results establish that the new method surpasses its conventional counterparts.
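The gray-world baseline that the abstract contrasts against can be sketched in a few lines. This is a minimal illustration of the classical assumption (the scene's average reflectance is achromatic), not the paper's CMoDE method; the function names, the unit-norm illuminant convention, and the diagonal (von Kries) correction model are assumptions made for this sketch.

```python
import numpy as np

def gray_world_illuminant(image: np.ndarray) -> np.ndarray:
    """Estimate the scene illuminant under the gray-world assumption:
    if the average reflectance is achromatic, the per-channel mean of
    the image is proportional to the illuminant color."""
    # image: H x W x 3 array of linear RGB values in [0, 1]
    est = image.reshape(-1, 3).mean(axis=0)
    # Return a unit-norm illuminant direction (an illustrative convention)
    return est / np.linalg.norm(est)

def correct_image(image: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
    """Divide out the estimated illuminant with a diagonal (von Kries)
    model, scaling so the overall brightness is roughly preserved."""
    gains = illuminant.mean() / illuminant
    return np.clip(image * gains, 0.0, 1.0)

# Example: a flat white patch lit by a reddish illuminant
img = np.ones((4, 4, 3)) * np.array([0.8, 0.5, 0.4])
L = gray_world_illuminant(img)
corrected = correct_image(img, L)
```

On this toy image the correction maps every pixel back to gray, which is exactly the failure mode the abstract alludes to: real scenes rarely average to gray, so the gray-world estimate is not a universal predictor.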