Perceiving their surroundings from a global perspective remains a great challenge for visually impaired people, which makes it difficult for them to interact with unfamiliar environments. The reason is that conventional assistive devices address only the obstacle avoidance problem; they do not provide visually impaired people with a global perception of the surrounding environment. In this article, a new generative adversarial network (GAN) model is developed to effectively transform ground images into tactile signals, which can be rendered by an off-the-shelf vibration device. The algorithm module and the hardware are integrated into a portable device that provides visually impaired people with effective surrounding-perception capability. In addition, a visual-tactile cross-modal data set is constructed to train the proposed deep-learning architecture. Experimental results show that the proposed system can help visually impaired people sense the ground and bring them a better traveling experience.

Note to Practitioners—This article presents a portable device that provides tactile recognition assistance for visually impaired people. Such technology can be integrated into devices such as tactile mice and white canes, and the developed approach extends to various industrial applications, such as surroundings monitoring and manipulation. The proposed work demonstrates the promising ability of artificial intelligence in healthcare applications. The generated tactile signals are expected to be used in many human-centered systems, and we believe that our contribution is an important step toward the development of a more comprehensive assistive technology for visually impaired people.
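To make the image-to-tactile idea concrete, the sketch below illustrates the kind of mapping the GAN generator would learn: a grayscale ground-image patch is projected to a small grid of vibration intensities, one value per actuator of the display. This is a minimal toy sketch, not the authors' architecture; all shapes, names, and the single linear layer standing in for the trained generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(image_patch, weights):
    """Toy stand-in for a trained GAN generator: a single linear layer
    plus a sigmoid that maps a flattened grayscale patch to a 4x4 grid
    of vibration intensities in [0, 1]."""
    h = image_patch.flatten() @ weights            # linear projection
    return (1.0 / (1.0 + np.exp(-h))).reshape(4, 4)

# A 16x16 grayscale ground patch; random weights stand in for training.
patch = rng.random((16, 16))
W = rng.normal(scale=0.1, size=(256, 16))

tactile_map = generator(patch, W)
print(tactile_map.shape)  # (4, 4): one intensity per vibration actuator
```

In the actual system, a convolutional generator trained adversarially against a discriminator on the visual-tactile cross-modal data set would replace the linear layer, and the output grid would drive the vibration device directly.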