Despite its recent advances and increasing industrial interest, cloud gaming's high bandwidth usage remains one of its major challenges. In this paper, we demonstrate how incorporating visual attention into cloud gaming helps reduce bitrate without negatively affecting the player's quality of experience. We show that current visual attention models, which work well for ordinary videos, underperform on cloud gaming videos. Hence, we propose a novel skill-based visual attention model, developed on a cloud gaming dataset. First, we demonstrate that players' attention maps are correlated with their skill levels and that this correlation can be exploited to improve the accuracy of visual attention modeling. We then use this fact to cluster attention maps according to the player's skill level. A simple yet effective method is introduced to predict a player's skill level from their in-game performance. Finally, the models are incorporated into the video encoder to perceptually optimize bitrate allocation. Incorporating the player's skill level into our model improves the accuracy of saliency maps by 14% with respect to the baseline, and 24% with respect to competing methods, in terms of Normalized Scanpath Saliency (NSS). Furthermore, we show that the maximum achievable video bitrate reduction depends on the player's skill level: experimental results show 13%, 5%, and 15% bitrate reductions for beginner, intermediate, and expert players, respectively.
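For readers unfamiliar with the evaluation metric, the following is a minimal sketch of how Normalized Scanpath Saliency (NSS) is commonly computed: the predicted saliency map is normalized to zero mean and unit standard deviation, and the normalized values are averaged at the ground-truth fixation locations. This is a generic illustration of the metric, not the authors' implementation; the function name and the small epsilon guard are our own choices.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency (generic sketch, not the paper's code).

    saliency_map: predicted saliency values (2D float array).
    fixation_map: binary map, nonzero where human fixations occurred.
    """
    # Normalize the prediction to zero mean, unit standard deviation.
    # The epsilon avoids division by zero for a constant (flat) map.
    normalized = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    # Average the normalized saliency at the fixated pixels.
    return float(normalized[fixation_map.astype(bool)].mean())
```

A higher NSS means the model assigns above-average saliency to the locations players actually fixated, so a 14% NSS gain over the baseline corresponds to attention maps that concentrate more probability mass on true fixation points.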
               