The usability of telepresence applications is strongly affected by the communication delay between the user and the remote system. Special attention is required when the remote scene is experienced through a head-mounted display. A high motion-to-photon latency, i.e., the time needed to fully reflect the user's motion on the display, results in a poor feeling of presence. Further consequences include severe motion sickness, general discomfort, and, in the worst case, termination of the telepresence session. In this letter, we present our low-cost MAVI telepresence system, which is equipped with a stereoscopic 360° vision system and high-payload manipulation capabilities. Special emphasis is placed on the stereoscopic vision system and its delay compensation. More specifically, we propose velocity-based dynamic field-of-view adaptation techniques to reduce the onset of simulator sickness and to improve the achievable level of delay compensation. The proposed delay compensation approach relies on deep learning to predict the prospective head motion. We use our previously published head motion dataset for training, validation, and testing. To demonstrate the general validity of our approach, we perform cross-validation with an additional independent dataset. We evaluate the system with both qualitative measures and subjective experiments. Our results show that the proposed approach achieves mean compensation rates of around 99.9% for latencies between 0.1 and 0.5 s.
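The abstract does not specify how the velocity-based field-of-view adaptation is computed. A minimal sketch of the general idea, with entirely illustrative parameter names and values (not the authors' implementation), could reduce the displayed FOV linearly with head angular velocity and clamp it at a lower bound:

```python
def adapt_fov(base_fov_deg: float, angular_velocity_deg_s: float,
              min_fov_deg: float = 70.0, sensitivity: float = 0.15) -> float:
    """Shrink the displayed field of view as head angular velocity grows.

    All constants here (base FOV, minimum FOV, sensitivity) are
    hypothetical assumptions chosen for illustration only.
    """
    reduction = sensitivity * abs(angular_velocity_deg_s)
    return max(min_fov_deg, base_fov_deg - reduction)

# Slow head motion leaves the FOV nearly untouched; fast motion
# restricts it, which is commonly reported to mitigate simulator sickness.
print(adapt_fov(110.0, 10.0))   # near-full FOV
print(adapt_fov(110.0, 400.0))  # clamped at the minimum FOV
```

The linear mapping and the clamp are design choices for the sketch; the actual adaptation law and thresholds would be tuned experimentally.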