Abstract 3D video quality assessment (3D-VQA) is essential to various 3D video processing applications. However, how to exploit perceptual multi-channel video information to improve 3D-VQA under different distortion categories and degrees, especially under asymmetric distortions, has not been well investigated. In this paper, we propose a new blind 3D-VQA metric that jointly learns perceptually heterogeneous features. First, a binocular spatio-temporal internal generative mechanism (BST-IGM) is proposed to decompose the views of a 3D video into multi-channel videos. Then, we extract perceptually heterogeneous features with the proposed multi-channel natural video statistics (MNVS) model, which characterize the 3D video information. Finally, a robust AdaBoosting Radial Basis Function (RBF) neural network maps the features to the overall quality of the 3D video. Extensive evaluations on two benchmark databases demonstrate that the proposed algorithm significantly outperforms several state-of-the-art quality metrics in terms of prediction accuracy and robustness.
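For intuition, here is a minimal Python sketch of the kind of pipeline the abstract describes, under stated assumptions: since the exact MNVS formulation is not given, generalized-Gaussian fits to MSCN coefficients (a standard natural-video-statistics technique) stand in for the feature extractor, and scikit-learn's AdaBoostRegressor over RBF-kernel ridge regressors stands in for the paper's AdaBoosting RBF neural network. The BST-IGM decomposition is not reproduced; the synthetic frames below merely stand in for its multi-channel outputs.

```python
# Minimal sketch (not the paper's code): NSS-style features plus boosted
# RBF-kernel regression, standing in for the MNVS + AdaBoosting RBF
# network pipeline described in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm
from sklearn.ensemble import AdaBoostRegressor
from sklearn.kernel_ridge import KernelRidge

def mscn(frame, sigma=7 / 6):
    """Mean-subtracted contrast-normalized (MSCN) coefficients,
    a standard natural-scene-statistics front end."""
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame ** 2, sigma) - mu ** 2
    return (frame - mu) / (np.sqrt(np.clip(var, 0.0, None)) + 1.0)

def nvs_features(channels):
    """Fit a generalized Gaussian to each channel's MSCN map and
    pool the fitted shape and scale parameters as a feature vector."""
    feats = []
    for ch in channels:
        beta, _, scale = gennorm.fit(mscn(ch).ravel())
        feats.extend([beta, scale])
    return np.asarray(feats)

# Toy usage: 8 synthetic "channels" per video (standing in for the
# BST-IGM multi-channel decomposition) and mock opinion scores.
rng = np.random.default_rng(0)
X = np.stack([nvs_features(rng.standard_normal((8, 64, 64)))
              for _ in range(20)])
y = rng.uniform(1.0, 5.0, size=20)  # mock MOS labels
model = AdaBoostRegressor(estimator=KernelRidge(kernel="rbf"),
                          n_estimators=30, random_state=0)
model.fit(X, y)
print(model.predict(X[:2]))  # predicted quality scores
```

KernelRidge with an RBF kernel is chosen here only because it is a readily available kernel regressor that AdaBoost can boost; the paper's actual learner is an RBF neural network, which scikit-learn does not provide out of the box.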