The advent of depth sensing technologies means that extracting object contours in images, a common and important pre-processing step for higher-level computer vision tasks such as object detection and human action recognition, has become easier. However, captured depth images contain acquisition noise, and the detected contours suffer from errors as a result. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out the aforementioned application-specific tasks using the decoded contours as input. First, we prove theoretically that, in general, a joint denoising/compression approach can outperform a separate two-stage approach that first denoises and then lossily encodes the contours. Adopting a joint approach, we propose a burst error model that captures the typical errors encountered in an observed string of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability of an estimated string, given the observed string, against its code rate. We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called the total suffix tree that dramatically reduces the complexity of the algorithm. To the best of our knowledge, we are the first in the literature to study the problem of joint denoising/compression of image contours and to offer a computationally efficient optimization algorithm. Experimental results show that our joint denoising/compression scheme can reduce the bitrate by up to 18% compared with a competing separate scheme at comparable visual quality.
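For concreteness, the rate-constrained MAP formulation described above can plausibly be written as a Lagrangian objective. The notation here (observed string $\mathbf{y}$, candidate string $\mathbf{x}$, rate $R$, multiplier $\lambda$) is our own shorthand rather than notation taken from the paper:

$$
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; \big[ -\log P(\mathbf{x} \mid \mathbf{y}) + \lambda \, R(\mathbf{x}) \big],
$$

where $P(\mathbf{x} \mid \mathbf{y})$ is the posterior probability of the estimated edge string $\mathbf{x}$ given the observed string $\mathbf{y}$, $R(\mathbf{x})$ is the rate needed to encode $\mathbf{x}$, and $\lambda$ controls the trade-off between posterior probability and code rate.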
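The dynamic program can be sketched in a few lines of Python. The following is a minimal toy illustration of the idea, not the paper's algorithm: it assumes a made-up three-symbol edge alphabet, substitutes a simple memoryless channel for the burst error model, a first-order context model for the total-suffix-tree contexts, and ideal code lengths (negative log2 of the model probability) for actual entropy coding.

```python
import math

# Toy joint denoising/compression DP: a minimal sketch, NOT the paper's
# algorithm. Alphabet, probabilities, and models are illustrative placeholders.

ALPHABET = ['l', 's', 'r']  # hypothetical directional edges: left/straight/right

def channel_prob(obs, sym):
    """P(observed | true): memoryless error model (stand-in for the
    paper's burst error model)."""
    return 0.8 if obs == sym else 0.1

def context_prob(sym, prev):
    """P(symbol | previous symbol): toy first-order context model that
    favors continuing in the same direction."""
    return 0.6 if prev == sym else 0.2

def joint_denoise_compress(y, lam=1.0):
    """DP over positions; the state is the last decoded symbol. The stage
    cost is -log P(obs|sym) - log P(sym|prev) + lam * ideal code length."""
    best = {None: (0.0, [])}  # state -> (cumulative cost, decoded prefix)
    for obs in y:
        nxt = {}
        for prev, (cost, path) in best.items():
            for sym in ALPHABET:
                p_ctx = context_prob(sym, prev)
                neg_log_post = -math.log(channel_prob(obs, sym) * p_ctx)
                rate = -math.log2(p_ctx)  # ideal code length in bits
                c = cost + neg_log_post + lam * rate
                if sym not in nxt or c < nxt[sym][0]:
                    nxt[sym] = (c, path + [sym])
        best = nxt
    return min(best.values(), key=lambda t: t[0])[1]

# Example: an isolated flip in the observation that the prior smooths out.
print(''.join(joint_denoise_compress(list('sslsss'))))
```

With contexts of length $k$ the state space grows as $|\mathcal{A}|^k$, which is presumably the kind of blow-up the paper's total suffix tree representation is designed to prune.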