
PVGAN: a generative adversarial network for object simplification in prosthetic vision


Objective. By means of electrical stimulation of the visual system, visual prostheses provide a promising solution for blind patients through partial restoration of their vision. Despite the great success achieved so far in this field, the limited resolution of the vision perceived through these devices hinders the ability of users to correctly recognize viewed objects. Accordingly, we propose a deep learning approach based on generative adversarial networks (GANs), termed prosthetic vision GAN (PVGAN), to enhance object recognition for implanted patients by representing each object in the field of view with a corresponding simplified clip-art version. Approach. To assess performance, an axon map model was used to simulate prosthetic vision in experiments involving normally sighted participants. Four types of image representation were examined. The first two comprised presenting a phosphene simulation of the real image containing the actual high-resolution object, and presenting a phosphene simulation of the real image followed by the clip-art image, respectively. The other two types evaluated performance under electrode dropout: the third comprised phosphene simulations of clip-art images without electrode dropout, while the fourth used clip-art images with electrode dropout. Main results. Performance was measured through three evaluation metrics: the accuracy of the participants in recognizing the objects, the time taken to correctly recognize an object, and the participants' confidence in the recognition. Results demonstrate that representing objects using clip-art images generated by the PVGAN model significantly enhances the speed and confidence of subjects in recognizing objects. Significance. These results demonstrate the utility of GANs in enhancing the quality of images perceived through prosthetic vision.
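To give a feel for the kind of simulation the experiments rely on, the sketch below renders a crude phosphene simulation of an input image on an electrode grid, with optional electrode dropout. This is a minimal illustrative sketch only: it uses simple Gaussian-blob phosphenes, whereas the paper uses an axon map model that produces elongated, axon-shaped percepts; the function name, grid layout, and parameters are all hypothetical.

```python
import numpy as np

def simulate_phosphenes(image, grid=8, dropout=0.0, sigma=2.0, seed=0):
    """Render a crude phosphene simulation of `image` (2-D array in [0, 1])
    on a `grid` x `grid` electrode array. Each active electrode becomes a
    Gaussian blob whose brightness samples the image at the electrode's
    location; `dropout` is the fraction of electrodes randomly disabled.

    NOTE: a simplified Gaussian-blob model for illustration only -- the
    paper's axon map model instead yields elongated, axon-shaped phosphenes.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    # Electrode centres on a regular grid spanning the image.
    ys = np.linspace(0, h - 1, grid)
    xs = np.linspace(0, w - 1, grid)
    # Random electrode-dropout mask: True = electrode still works.
    active = rng.random((grid, grid)) >= dropout
    yy, xx = np.mgrid[0:h, 0:w]
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            if not active[i, j]:
                continue  # dead electrode: contributes no phosphene
            brightness = image[int(cy), int(cx)]
            out += brightness * np.exp(
                -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)
            )
    return np.clip(out, 0.0, 1.0)

# Example: a bright square rendered with and without 30% electrode dropout.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
full = simulate_phosphenes(img, grid=8, dropout=0.0)
degraded = simulate_phosphenes(img, grid=8, dropout=0.3, seed=1)
```

Comparing `full` and `degraded` side by side makes the paper's motivation concrete: dropout deletes phosphenes and thins the percept, which is exactly the condition under which simplified clip-art inputs are hypothesized to remain recognizable.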

Keywords: clip art; presenting phosphene; generative adversarial; prosthetic vision; vision

Journal Title: Journal of Neural Engineering
Year Published: 2022


