Virtual reality (VR) has been adopted in various fields, such as entertainment, education, healthcare, and the military, due to its ability to provide an immersive experience to users. However, 360° images, one of the main components of VR systems, are bulky and thus require effective transmission and rendering solutions. One potential solution is to use foveated technologies, which take advantage of the foveation characteristic of the human eye. Foveated technologies can significantly reduce the amount of data required for transmission and the computational complexity of rendering. However, understanding of the impact of foveated 360° images on perceived quality is still limited. This paper addresses these problems by proposing an accurate machine-learning-based quality assessment model for foveated 360° images. The proposed model is shown to outperform three state-of-the-art machine-learning-based models, which apply deep learning techniques, as well as 25 traditional-metric-based models (or analytical-function-based models), which rely on analytical functions. We also expect our model to help evaluate and improve 360° content streaming and rendering solutions, further reducing data sizes while preserving user experience. In addition, this model could serve as a building block for constructing quality assessment methods for 360° videos, which is reserved for future work. The source code is available at https://github.com/telagment/FoVGCN.
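As a rough illustration of the foveation principle the abstract refers to (and not the authors' FoVGCN model itself), the sketch below computes a foveation-weighted PSNR for an equirectangular 360° image: pixels far from an assumed gaze direction contribute less to the score, mirroring the eye's declining acuity outside the fovea. The weighting function, fovea size, and falloff rate are hypothetical choices made purely for illustration.

import numpy as np

def foveation_weights(h, w, gaze_lon, gaze_lat, fovea_deg=7.5, falloff=0.05):
    """Per-pixel weights in [0, 1] based on angular distance from the gaze direction.

    The fovea size (fovea_deg) and exponential falloff rate are hypothetical values.
    """
    # Longitude/latitude (degrees) of each pixel centre in the equirectangular grid
    lon = (np.arange(w) + 0.5) / w * 360.0 - 180.0
    lat = 90.0 - (np.arange(h) + 0.5) / h * 180.0
    lon, lat = np.meshgrid(np.radians(lon), np.radians(lat))
    g_lon, g_lat = np.radians(gaze_lon), np.radians(gaze_lat)
    # Great-circle (angular) distance between each pixel direction and the gaze direction
    cos_d = np.sin(lat) * np.sin(g_lat) + np.cos(lat) * np.cos(g_lat) * np.cos(lon - g_lon)
    ang = np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))
    # Full weight inside the fovea, exponential falloff outside it
    return np.where(ang <= fovea_deg, 1.0, np.exp(-falloff * (ang - fovea_deg)))

def foveated_psnr(ref, dist, gaze_lon=0.0, gaze_lat=0.0):
    """Foveation-weighted PSNR between reference and distorted 8-bit equirectangular images."""
    w = foveation_weights(ref.shape[0], ref.shape[1], gaze_lon, gaze_lat)
    weights = np.broadcast_to(w[..., None], ref.shape) if ref.ndim == 3 else w
    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    wmse = np.average(err, weights=weights)          # weighted mean squared error
    return 10.0 * np.log10(255.0 ** 2 / max(wmse, 1e-12))

if __name__ == "__main__":
    # Synthetic example: a random reference image and a mildly distorted copy
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (512, 1024, 3), dtype=np.uint8)
    noise = rng.integers(-10, 11, ref.shape, dtype=np.int16)
    dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"Foveated PSNR: {foveated_psnr(ref, dist, gaze_lon=30.0, gaze_lat=10.0):.2f} dB")

Such an analytical-function-based metric is in the spirit of the 25 traditional baselines mentioned above; the paper's own contribution is a learned model, for which the linked repository should be consulted.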