We present an efficient combination strategy for color constancy algorithms. We define a compact neural network architecture that processes and combines the illuminant estimations of individual algorithms, which may be based on different assumptions about the input scene content. Our solution can be specialized to the image domain, expecting a single input frame, and to the video domain, where a Long Short-Term Memory (LSTM) module handles sequences of varying length. To prove the effectiveness of our combination method, we restrict ourselves to combining only learning-free color constancy algorithms based on simple image statistics. We experiment on the standard Shi-Gehler and NUS datasets for still images, and on the recent Burst Color Constancy dataset for videos. Experimental results show that our method outperforms other combination strategies and, when the standard dataset split is used, reaches an illuminant estimation accuracy comparable to more sophisticated and computationally demanding solutions. Furthermore, our solution proves effective even when the number of available training instances is reduced. As a further analysis, we assess the individual contribution of each underlying method to the final illuminant estimation.
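To make the described architecture concrete, the following is a minimal sketch of what such a combiner could look like. The layer sizes, the use of PyTorch, the unit-norm output normalization, and the exact fusion scheme are all illustrative assumptions, not the authors' specification; the abstract only states that a compact network combines per-algorithm illuminant estimates, with an LSTM variant for videos.

```python
# Hedged sketch: hidden sizes, normalization, and fusion details are
# assumptions for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn


class IlluminantCombinerMLP(nn.Module):
    """Combines per-algorithm RGB illuminant estimates for a single image."""

    def __init__(self, num_algorithms: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_algorithms * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, estimates: torch.Tensor) -> torch.Tensor:
        # estimates: (batch, num_algorithms, 3) -> one flat vector per sample
        x = self.net(estimates.flatten(start_dim=1))
        # Normalize to a unit-norm illuminant direction (assumed convention)
        return x / x.norm(dim=1, keepdim=True).clamp_min(1e-8)


class IlluminantCombinerLSTM(nn.Module):
    """Video variant: an LSTM aggregates per-frame estimate vectors."""

    def __init__(self, num_algorithms: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(num_algorithms * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, estimates: torch.Tensor) -> torch.Tensor:
        # estimates: (batch, num_frames, num_algorithms, 3)
        b, t = estimates.shape[:2]
        seq = estimates.reshape(b, t, -1)
        _, (h_n, _) = self.lstm(seq)  # last hidden state summarizes the burst
        x = self.head(h_n[-1])
        return x / x.norm(dim=1, keepdim=True).clamp_min(1e-8)


if __name__ == "__main__":
    # Hypothetical usage: 4 statistics-based estimators (e.g., Gray-World,
    # White-Patch, ...) feeding the combiner.
    image_model = IlluminantCombinerMLP(num_algorithms=4)
    single = torch.rand(8, 4, 3)       # batch of 8 images
    print(image_model(single).shape)   # -> torch.Size([8, 3])

    video_model = IlluminantCombinerLSTM(num_algorithms=4)
    burst = torch.rand(8, 10, 4, 3)    # batch of 10-frame sequences
    print(video_model(burst).shape)    # -> torch.Size([8, 3])
```

Because the LSTM consumes only the low-dimensional per-frame estimate vectors rather than raw frames, this kind of combiner stays compact and naturally accepts sequences of varying length, matching the efficiency claims in the abstract.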