Background extraction is generally the first step in many computer vision and augmented reality applications. Most existing methods assume the existence of a clean background during the reconstruction period, and are therefore unsuitable for video sequences, such as highway traffic surveillance videos, whose complex foreground movements violate that assumption. We therefore propose a novel joint Gaussian conditional random field (JGCRF) background extraction algorithm for estimating the optimal frame-composition weights for a fixed-view video sequence. A maximum a posteriori problem is formulated to describe the intra- and inter-frame relationships among all pixels of all frames based on their contrast distinctness and their spatial and temporal coherence. Because all background objects and elements are assumed to be static, motionless patches are good candidates for the background. Accordingly, a motionless extractor is designed that computes the pixel-wise differences between consecutive frames and thresholds the accumulated variation across frames to remove possible moving patches. The proposed JGCRF framework can flexibly link the extracted motionless patches with the desired fusion weights as extra observable random variables, constraining the optimization process for more consistent and robust background extraction. Quantitative and qualitative experiments demonstrated the effectiveness and robustness of the proposed algorithm compared with several state-of-the-art algorithms; the proposed algorithm also produced fewer artifacts and had a lower computational cost.
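The motionless extractor described above can be sketched as follows. This is a minimal illustrative NumPy implementation, not the authors' code: the function name, the per-pixel difference threshold, and the motion-count threshold are all assumptions chosen for clarity.

```python
import numpy as np

def motionless_mask(frames, diff_thresh=10.0, motion_thresh=1):
    """Sketch of a motionless-patch extractor: compute pixel-wise
    differences between consecutive frames, accumulate how often each
    pixel changes significantly, and keep pixels whose accumulated
    variation stays below a threshold (background candidates).

    frames: array of shape (T, H, W), grayscale intensities.
    Returns a boolean (H, W) mask; True marks motionless pixels.
    """
    frames = np.asarray(frames, dtype=np.float32)
    # Pixel-wise absolute differences between consecutive frames: (T-1, H, W)
    diffs = np.abs(np.diff(frames, axis=0))
    # Accumulate, per pixel, how many inter-frame changes exceed the threshold
    motion_count = (diffs > diff_thresh).sum(axis=0)
    # Pixels that rarely changed are candidate background (motionless) patches
    return motion_count <= motion_thresh
```

In the full JGCRF framework, the pixels selected by such a mask would serve as the extra observable random variables that constrain the fusion-weight optimization; this sketch covers only the thresholded-accumulation step.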