
Viewport-Based Omnidirectional Video Quality Assessment: Database, Modeling and Inference


This article first provides a new Viewport-based OmniDirectional Video Quality Assessment (VOD-VQA) database, which includes eighteen salient viewport videos extracted from original OmniDirectional Videos (ODVs) and 774 corresponding impaired samples generated by compressing the raw viewports with various combinations of their Spatial (frame size $s$), Temporal (frame rate $t$), and Amplitude (quantization stepsize $q$) Resolutions (STAR). A total of 160 subjects assessed the processed viewport videos rendered on a head-mounted display (HMD) while stabilizing their fixations. We then formulated an analytical model, denoted the $Q^{\mathsf{VP}}_{\mathtt{STAR}}$ index, that connects the perceptual quality of a compressed viewport video with its STAR variables. All four model parameters can be predicted from linearly weighted content features, which generalizes the proposed metric to diverse contents. The model correlates well with the mean opinion scores (MOSs) collected for the processed viewport videos, achieving both a Pearson Correlation Coefficient and a Spearman's Rank Correlation Coefficient (SRCC) of 0.95 in an independent validation test and yielding state-of-the-art performance compared with popular objective metrics (e.g., Weighted-to-Spherically-uniform (WS) Peak Signal-to-Noise Ratio (PSNR), WMS-SSIM, Video Multimethod Assessment Fusion (VMAF), Feature SIMilarity Index (FSIM), and Visual Saliency-based IQA Index (VSI)). Furthermore, this viewport-based quality index $Q^{\mathsf{VP}}_{\mathtt{STAR}}$ is extended to infer the overall ODV quality, a.k.a. $Q^{\mathsf{ODV}}_{\mathtt{STAR}}$, by linearly weighting the saliency-aggregated qualities of salient viewports and the quality of the quick-scanning (or non-salient) area. Experiments show that the inferred $Q^{\mathsf{ODV}}_{\mathtt{STAR}}$ accurately predicts the MOS with performance competitive with the state-of-the-art algorithm on another four independent, third-party ODV assessment datasets. All related materials are publicly accessible at https://vision.nju.edu.cn/20/86/c29466a467078/page.htm for reproducible research.
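
As the abstract states, the overall ODV quality $Q^{\mathsf{ODV}}_{\mathtt{STAR}}$ is inferred by linearly weighting the saliency-aggregated qualities of the salient viewports and the quality of the non-salient (quick-scanning) area. The Python sketch below illustrates only that aggregation step; the weight `alpha`, the saliency normalization, and all function and variable names are illustrative assumptions, not the authors' released implementation or calibrated parameters.

```python
import numpy as np

def aggregate_odv_quality(viewport_qualities, viewport_saliency,
                          non_salient_quality, alpha=0.8):
    """Hypothetical sketch of viewport-to-ODV quality aggregation.

    viewport_qualities : per-viewport Q^VP_STAR scores (1-D array-like)
    viewport_saliency  : saliency weights of the same viewports
    non_salient_quality: quality estimate of the quick-scanning area
    alpha              : assumed linear weight between the salient and
                         non-salient terms (not taken from the paper)
    """
    q = np.asarray(viewport_qualities, dtype=float)
    s = np.asarray(viewport_saliency, dtype=float)

    # Saliency-aggregated quality of the salient viewports:
    # a saliency-weighted average of the per-viewport scores.
    salient_quality = np.sum(s * q) / np.sum(s)

    # Linear combination of the salient and non-salient contributions.
    return alpha * salient_quality + (1.0 - alpha) * non_salient_quality


# Example usage with made-up numbers (an MOS-like 1-5 scale is assumed).
if __name__ == "__main__":
    q_vp = [4.1, 3.6, 4.4]   # Q^VP_STAR of three salient viewports
    sal = [0.5, 0.2, 0.3]    # normalized saliency weights
    q_ns = 3.2               # quality of the non-salient area
    print(aggregate_odv_quality(q_vp, sal, q_ns))
```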

Keywords: viewport

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2022



