In this paper, we propose a pyramidal spatiotemporal feature hierarchy (PSFH)-based no-reference (NR) video quality assessment (VQA) method using transfer learning. First, we generate simulated videos with a generative adversarial network (GAN)-based image restoration model. The residual maps between the distorted frames and the simulated frames, which capture rich degradation information, serve as one input to the quality regression network. Second, we use 3D convolution operations to construct a PSFH network with five stages. The spatiotemporal features, incorporating shared features transferred from the pretrained image restoration model, are fused stage by stage. Third, guided by the transferred knowledge, each stage generates multiple feature mapping layers that encode different semantic and degradation information using 3D convolution layers and gated recurrent units (GRUs). Finally, five approximate perceptual quality scores and one precise prediction score are produced by fully connected (FC) networks. The whole model is trained under a carefully designed loss function that combines a pseudo-Huber loss with a Pearson linear correlation coefficient (PLCC) loss to improve robustness and prediction accuracy. Extensive experiments show that the proposed method outperforms other state-of-the-art methods. Both the source code and the models are available online.
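The combined objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weighting `alpha` and the pseudo-Huber scale `delta` are assumed hyperparameters, and the functions operate on plain Python lists of predicted and ground-truth quality scores.

```python
import math

def pseudo_huber(pred, target, delta=1.0):
    # Pseudo-Huber loss, averaged over samples: quadratic near zero,
    # approximately linear for large errors, so outliers are down-weighted.
    return sum(
        delta ** 2 * (math.sqrt(1.0 + ((p - t) / delta) ** 2) - 1.0)
        for p, t in zip(pred, target)
    ) / len(pred)

def plcc(pred, target):
    # Pearson linear correlation coefficient between two score lists.
    n = len(pred)
    mp, mt = sum(pred) / n, sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return cov / (sp * st)

def combined_loss(pred, target, alpha=0.5, delta=1.0):
    # Weighted sum of the robust regression term (pseudo-Huber) and a
    # correlation term (1 - PLCC) that rewards linear agreement with
    # the subjective scores. alpha and delta are assumed hyperparameters.
    return alpha * pseudo_huber(pred, target, delta) \
        + (1.0 - alpha) * (1.0 - plcc(pred, target))
```

When predictions match the targets exactly, the pseudo-Huber term is zero and the PLCC is one, so the combined loss vanishes; anti-correlated predictions push the correlation term toward its maximum of two.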