The recent popularity of remote desktop software and live streaming of composited video has given rise to a growing number of applications that make use of so-called screen content images, which contain a mixture of text, graphics, and photographic imagery. Automatic quality assessment (QA) of screen content images is necessary to enable tasks such as quality monitoring, parameter adaptation, and other optimizations. Although QA of natural images has been heavily researched over the last several decades, QA of screen content images is a relatively new topic. In this paper, we present a QA algorithm called convolutional neural network-based screen content image quality estimator (CNN-SQE), which operates via a fuzzy classification of screen content images into plain-text, computer-graphics/cartoon, and natural-image regions. The first two classes are considered to contain synthetic content (text/graphics), and the latter two classes are considered to contain naturalistic content (graphics/photographs), where the overlap of the classes allows the computer-graphics/cartoon segments to be analyzed by both text-based and natural-image-based features. We present a CNN-based approach for the classification, an edge-structure-based quality degradation model, and a region-size-adaptive quality-fusion strategy. As we will demonstrate, the proposed CNN-SQE algorithm achieves performance better than or competitive with other state-of-the-art QA algorithms.
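To make the overall pipeline more concrete, the following is a minimal sketch of the two ideas named in the abstract: a CNN that assigns each patch soft (fuzzy) memberships over the three region classes, and a region-size-adaptive fusion in which graphics/cartoon patches contribute to both the text-oriented and natural-image-oriented quality branches. The network architecture, the `RegionClassifier` and `fuse_quality` names, the weighting scheme, and the use of PyTorch are all illustrative assumptions, not the published CNN-SQE design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionClassifier(nn.Module):
    """Toy 3-class patch classifier (text / graphics-cartoon / natural).

    Illustrative only: shows how fuzzy (soft) class memberships could be
    produced per patch; it is not the authors' network.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # text, graphics/cartoon, natural

    def forward(self, patches):
        z = self.features(patches).flatten(1)
        return F.softmax(self.head(z), dim=1)  # fuzzy memberships, rows sum to 1


def fuse_quality(memberships, q_text, q_natural, region_pixels):
    """Region-size-weighted fusion of two per-patch quality scores.

    memberships   : (N, 3) soft class probabilities per patch
    q_text        : (N,) quality from a text/graphics-oriented measure
    q_natural     : (N,) quality from a natural-image-oriented measure
    region_pixels : (N,) patch areas, used as size-adaptive weights
    """
    # Graphics/cartoon patches (class index 1) contribute to BOTH branches,
    # mirroring the overlapping-class idea described in the abstract.
    w_synthetic = (memberships[:, 0] + memberships[:, 1]) * region_pixels
    w_natural = (memberships[:, 1] + memberships[:, 2]) * region_pixels
    num = (w_synthetic * q_text).sum() + (w_natural * q_natural).sum()
    den = w_synthetic.sum() + w_natural.sum()
    return num / den


if __name__ == "__main__":
    patches = torch.rand(8, 3, 32, 32)            # 8 dummy 32x32 RGB patches
    m = RegionClassifier()(patches)
    q = fuse_quality(m,
                     q_text=torch.rand(8),        # placeholder branch scores
                     q_natural=torch.rand(8),
                     region_pixels=torch.full((8,), 32 * 32.0))
    print(float(q))
```

In this sketch the size-adaptive weighting is simply the patch area; the paper's actual degradation model and fusion rule would replace the placeholder per-branch scores and weights.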
               