Streaming high-quality 360-degree video over constrained networks with low latency is very challenging due to its high bandwidth requirements. Tile-based viewport-adaptive streaming, which proactively delivers the predicted visible field at higher quality, is bandwidth-friendly, but the limited accuracy of head-movement prediction degrades viewport quality. In this letter, we propose a perception-based pseudo-motion response strategy to mitigate this degradation of viewport quality, exploiting human perception thresholds for head-rotation losses and gains in virtual environments. The strategy applies imperceptible virtual rotation losses when the imminent physical viewport may exceed the high-quality region, and immediately compensates the accumulated losses once prediction performs well. Experimental results show that the proposed strategy achieves an additional average coding gain of 1.43% over traditional tile-based video streaming. Notably, the proposed method is compatible with any tile-based video streaming scheme.
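To make the idea concrete, the following is a minimal Python sketch of one possible pseudo-motion response loop. The gain/loss factors, the accumulated-loss bookkeeping, and the `inside_hq_region` test are all illustrative assumptions; the abstract does not specify the actual thresholds or interfaces used in the letter.

```python
# Illustrative sketch of a pseudo-motion response loop (not the authors' code).
# The loss/gain factors below are hypothetical placeholders standing in for
# the human perception thresholds the letter refers to.

LOSS_GAIN = 0.9   # assumed imperceptible rotation-loss factor (<1 slows virtual rotation)
COMP_GAIN = 1.1   # assumed imperceptible rotation-gain factor (>1 repays withheld rotation)

def virtual_yaw_update(virtual_yaw, physical_delta_yaw, accumulated_loss, inside_hq_region):
    """Advance the virtual viewport yaw for one head-tracking sample.

    virtual_yaw        -- current virtual camera yaw (degrees)
    physical_delta_yaw -- physical head rotation since the last sample (degrees);
                          for simplicity this sketch assumes one rotation direction
    accumulated_loss   -- rotation withheld so far that must be paid back (degrees)
    inside_hq_region   -- True if the predicted viewport stays within the
                          high-quality tile region
    """
    if not inside_hq_region:
        # Apply an imperceptible rotation loss: render slightly less virtual
        # rotation than the physical rotation, keeping the viewport inside
        # the high-quality region.
        applied = physical_delta_yaw * LOSS_GAIN
        accumulated_loss += physical_delta_yaw - applied
    elif accumulated_loss > 0.0:
        # Prediction is performing well: compensate the withheld rotation
        # with an imperceptible gain until the debt is repaid.
        extra = min(physical_delta_yaw * (COMP_GAIN - 1.0), accumulated_loss)
        applied = physical_delta_yaw + extra
        accumulated_loss -= extra
    else:
        applied = physical_delta_yaw  # no manipulation needed
    return virtual_yaw + applied, accumulated_loss
```

Because the manipulation only rescales how physical rotation maps to virtual rotation, a scheme like this can sit between the head tracker and any tile-based streaming client, which is consistent with the compatibility claim above.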