
Reference-Free DIBR-Synthesized Video Quality Metric in Spatial and Temporal Domains



Depth image-based rendering (DIBR) techniques play an important role in free viewpoint videos (FVVs), which have a wide range of applications including immersive entertainment, remote monitoring, and education. FVVs are usually synthesized by DIBR techniques in a "blind" setting (without a reference video), so an effective reference-free synthesized video quality assessment (VQA) metric is vital. Many image quality assessment (IQA) algorithms for DIBR-synthesized images have been proposed, but little research has addressed the quality assessment of DIBR-synthesized videos. To this end, this paper proposes a novel reference-free VQA method for synthesized videos that operates in the Spatial and Temporal Domains, dubbed STD. The design of the proposed STD metric considers the effects of the two major distortions that DIBR techniques introduce into synthesized videos. First, since the geometric distortion introduced by DIBR can increase the high-frequency content of a synthesized frame, its influence on visual quality can be evaluated effectively by estimating the high-frequency energy of each synthesized frame in the spatial domain. Second, the temporal inconsistency caused by DIBR techniques produces temporal flicker, one of the most annoying artifacts in DIBR-synthesized videos. In the temporal domain, we quantify temporal inconsistency by measuring motion differences between consecutive frames: an optical flow method first estimates the motion field between adjacent frames; we then compute the structural similarity of adjacent optical flow fields and use that similarity value to weight the pixel differences between them. Experiments show that these two features capture the visual quality of DIBR-synthesized videos well.
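The two features described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the high-frequency energy here is a simple FFT high-pass ratio with an assumed radial `cutoff`, and the flow comparison uses a crude global structural-similarity value (the paper's windowed formulation and optical flow estimation are not reproduced), applied to flow fields assumed to be precomputed.

```python
import numpy as np

def spatial_hf_energy(frame, cutoff=0.25):
    """Fraction of a grayscale frame's spectral energy beyond a radial
    cutoff (fraction of Nyquist) -- a stand-in for the spatial feature."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    yy, xx = np.ogrid[:h, :w]
    # radial distance from the DC component, normalized to [0, 1]
    r = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    power = np.abs(F) ** 2
    return float(power[r > cutoff].sum() / power.sum())

def flow_similarity_weighted_diff(flow_a, flow_b, c=1e-3):
    """Temporal feature sketch: a global structural-similarity value between
    two adjacent optical-flow fields, used to weight their mean pixel
    difference (the paper computes SSIM locally, not globally)."""
    mu_a, mu_b = flow_a.mean(), flow_b.mean()
    var_a, var_b = flow_a.var(), flow_b.var()
    cov = ((flow_a - mu_a) * (flow_b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c) * (2 * cov + c)) / (
        (mu_a ** 2 + mu_b ** 2 + c) * (var_a + var_b + c))
    return float(np.abs(flow_a - flow_b).mean() * ssim)
```

A heavily distorted synthesized frame (extra high-frequency content) would score higher on `spatial_hf_energy` than a smooth one, while identical consecutive flow fields yield a zero temporal difference.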
Furthermore, since the two features are extracted from the spatial and temporal domains, respectively, we integrate them with a linear weighting strategy to obtain the STD metric, which outperforms its two individual components as well as competing state-of-the-art I/VQA methods. The source code is available at https://github.com/wgc-vsfm/DIBR-video-quality-assessment.
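The linear weighting strategy amounts to a convex combination of the two feature scores. A minimal sketch, assuming a single scalar weight `alpha` (a hypothetical placeholder; the paper tunes its own weight and any normalization of the features):

```python
def std_score(spatial_feature, temporal_feature, alpha=0.5):
    """Hypothetical linear fusion of the spatial and temporal features.
    alpha in [0, 1] trades off the two domains; alpha=0.5 weighs them
    equally."""
    return alpha * spatial_feature + (1.0 - alpha) * temporal_feature
```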

Keywords: synthesized video; quality; reference free; dibr synthesized; video quality; dibr

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2022
