Background modeling techniques for embedded computer vision applications must balance accuracy, speed, and power. Basic background modeling techniques run quickly, but their accuracy is insufficient for computer vision problems involving dynamic backgrounds. In contrast, adaptive background modeling techniques are more robust but run more slowly. Because of its high inherent fine-grain parallelism, robust adaptive background modeling has been implemented on GPUs with significant performance improvements over CPUs. However, these implementations are infeasible in embedded applications due to the high power ratings of the targeted general-purpose GPU platforms. This paper focuses on exploiting fine-grain data parallelism and optimizing memory access patterns to map a low-cost adaptive background modeling algorithm, multimodal mean (MMM), onto a low-power GPU with a thermal design power (TDP) of only 12 watts. The algorithm achieves accuracy comparable to the Gaussian mixture model (GMM) algorithm at lower computational and memory cost. It reaches a frame rate of 392 fps at full VGA resolution (640x480) on the low-power integrated NVIDIA ION GPU, a 20x speedup of the MMM algorithm over an Intel Atom embedded CPU platform of comparable TDP. In addition, the MMM algorithm attains a 5-6x speedup over the GMM implementation on the ION GPU platform.
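To illustrate the kind of fine-grain data parallelism the abstract describes, the sketch below assigns one CUDA thread per pixel and maintains a small set of per-pixel modes (running mean and observation count), matching the general multimodal-mean idea. The struct layout, the mode count K, the thresholds MATCH_T and BG_COUNT, and the mode-major memory layout are all illustrative assumptions, not the paper's actual implementation.

```cuda
#include <cuda_runtime.h>
#include <stdint.h>

// Hypothetical per-pixel mode: a running sum and an observation count
// for one grayscale mode (illustrative only; not the paper's layout).
struct Cell {
    float sum;      // running sum of matched pixel values
    uint32_t count; // number of frames this mode has matched
};

#define K 4           // assumed number of modes per pixel
#define MATCH_T 30.f  // assumed match threshold (intensity units)
#define BG_COUNT 10   // assumed count above which a mode is background

// One thread per pixel. Cells are stored mode-major (all mode-0 cells,
// then all mode-1 cells, ...) so that reads within a warp stay
// contiguous, a common coalescing pattern on GPUs.
__global__ void mmm_update(const uint8_t* frame, uint8_t* fgMask,
                           Cell* cells, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    float v = (float)frame[idx];
    int nPixels = width * height;

    int matched = -1;
    int weakest = 0;
    uint32_t weakestCount = 0xFFFFFFFFu;

    for (int k = 0; k < K; ++k) {
        Cell c = cells[k * nPixels + idx];
        float mean = (c.count > 0) ? c.sum / c.count : 0.f;
        if (matched < 0 && c.count > 0 && fabsf(v - mean) < MATCH_T) {
            matched = k;          // first mode within threshold wins
            c.sum += v;           // update running mean statistics
            c.count += 1;
            cells[k * nPixels + idx] = c;
        }
        if (c.count < weakestCount) { weakestCount = c.count; weakest = k; }
    }

    if (matched < 0) {
        // No mode matched: replace the least-observed mode.
        Cell fresh = { v, 1u };
        cells[weakest * nPixels + idx] = fresh;
        fgMask[idx] = 255;        // foreground
    } else {
        fgMask[idx] = (cells[matched * nPixels + idx].count >= BG_COUNT)
                          ? 0 : 255;
    }
}
```

Because each pixel's model is updated independently, the kernel has no inter-thread communication, which is why this class of algorithm maps so well to GPUs. The full MMM algorithm also ages the per-mode statistics over time so the background model adapts; that step is omitted from this sketch.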