Several advanced redirected walking techniques have been proposed in recent years to improve natural walking in virtual environments. One active and important research challenge in redirected walking is the alignment of virtual and physical environments through redirection gains. When both environments are aligned, physical objects appear at the same positions as their virtual counterparts, so a user who reaches a virtual object can touch the corresponding physical object and receive passive haptic feedback. When multiple transferable virtual or physical target positions exist, the alignment can exploit these additional options, but finding a good matching becomes considerably more complex. In this paper, we study the problem of virtual-physical environment alignment with multiple transferable target positions and introduce a novel reinforcement learning-based redirected walking method. We design a comprehensive reward function that dynamically determines the virtual-physical target matching and updates the virtual target weights used in the reward computation. We evaluate our method through a range of simulated experiments as well as real user tests. The results show that our method achieves lower physical distance error for environmental alignment and requires fewer resets than state-of-the-art techniques.
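The abstract does not give the reward formulation itself, so the following is only a minimal illustrative sketch of how such an alignment reward could be structured: each weighted virtual target is greedily matched to its best physical option, the residual virtual-physical offset error is penalized, and the target weights are updated dynamically. The function names, the greedy nearest-target matching, the softmax weight update, and the reset penalty value are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def update_weights(virtual_targets, user_virtual_pos, temperature=1.0):
    """Assumed weighting scheme: nearer virtual targets get larger weights
    via a softmax over negative distances (illustrative, not from the paper)."""
    dists = np.array([np.linalg.norm(v - user_virtual_pos) for v in virtual_targets])
    w = np.exp(-dists / temperature)
    return w / w.sum()

def alignment_reward(virtual_targets, physical_targets,
                     user_virtual_pos, user_physical_pos,
                     weights, reset_triggered, reset_penalty=10.0):
    """Sketch of a reward that combines virtual-physical alignment error at
    dynamically matched targets with a penalty for triggering a reset."""
    total_error = 0.0
    for w, v in zip(weights, virtual_targets):
        # Compare the virtual target's offset from the user with each physical
        # target's offset, and keep the best (greedy) physical match.
        v_offset = v - user_virtual_pos
        errors = [np.linalg.norm((p - user_physical_pos) - v_offset)
                  for p in physical_targets]
        total_error += w * min(errors)
    reward = -total_error
    if reset_triggered:
        reward -= reset_penalty  # discourage collisions with physical boundaries
    return reward

# Example usage with hypothetical 2D positions (meters).
virtual_targets = [np.array([2.0, 1.0]), np.array([-1.5, 3.0])]
physical_targets = [np.array([1.8, 1.1]), np.array([-1.0, 2.5])]
user_v, user_p = np.array([0.0, 0.0]), np.array([0.2, -0.1])
w = update_weights(virtual_targets, user_v)
print(alignment_reward(virtual_targets, physical_targets, user_v, user_p, w, reset_triggered=False))
```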