
Navigating in simulated prosthetic vision: contribution of computer vision to augment low-resolution prosthetic renderings

Abstract: Blindness affects thirty-nine million people worldwide and creates numerous difficulties in everyday life. In particular, navigation abilities (which include wayfinding and mobility) are heavily diminished, leading blind people to limit, and eventually stop, walking outside. Visual neuroprostheses are being developed to restore a form of "visual" perception and help them regain some autonomy. These implants deliver electrical micro-stimulations to the retina, the optic nerve, or the visual cortex. The stimulations elicit blurry dots called "phosphenes", which are mostly white, grey, or yellow. A complete stimulation device comprises a wearable camera, a small computer, and the implant connected to that computer. The implant's resolution and position directly affect the quality of the restored visual perception. Current implants include fewer than a hundred electrodes, so the resolution of the visual stream must be drastically reduced to match the implant's resolution. For instance, the Argus II implant from the company Second Sight (Sylmar, California), the leading commercialized visual implant worldwide, uses only sixty electrodes, meaning that blind Argus II users can perceive at most sixty phosphenes simultaneously. This restored vision is therefore quite poor, and the signal must be optimized to reach functional implant usage. Blind people with implants are involved in restricted clinical trials and are difficult to reach. However, the possibilities offered by such implants can still be studied by simulating prosthetic vision and displaying it in a head-mounted display to sighted subjects; this is the field of simulated prosthetic vision (SPV). Navigation has never been studied with implanted patients, and only a few studies have approached this topic in SPV. In this thesis, we focus on the study of navigation in SPV.
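The downsampling step described above can be sketched in a few lines: a grayscale camera frame is average-pooled down to the electrode count of the implant, with each cell driving one phosphene. The 6x10 grid (sixty phosphenes, matching the Argus II electrode count) and the four brightness levels are illustrative assumptions, not the thesis's actual rendering pipeline.

```python
# Hypothetical sketch: reduce a camera frame to a low-resolution
# phosphene grid, as required to match an implant's electrode count.
# Grid size (6x10) and gray-level count are illustrative assumptions.

def to_phosphenes(frame, grid_rows=6, grid_cols=10, levels=4):
    """Average-pool a grayscale frame (list of lists, values 0-255)
    into a grid_rows x grid_cols grid of quantized phosphene intensities."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid_rows, w // grid_cols
    grid = []
    for r in range(grid_rows):
        row = []
        for c in range(grid_cols):
            # Mean brightness of the image block feeding this electrode.
            block = [frame[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            mean = sum(block) / len(block)
            # Quantize to a few gray levels, since phosphenes only
            # convey coarse brightness steps.
            row.append(round(mean / 255 * (levels - 1)))
        grid.append(row)
    return grid

# Example: a 60x60 frame, dark on the left half, bright on the right.
frame = [[0] * 30 + [255] * 30 for _ in range(60)]
grid = to_phosphenes(frame)
print(grid[0])  # → [0, 0, 0, 0, 0, 3, 3, 3, 3, 3]
```

Each of the sixty cells would then be rendered as a blurry dot at the corresponding position in the head-mounted display, which is the essence of an SPV rendering.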
Computer vision allowed us to select which scene elements to display in order to help subjects navigate and build a spatial representation of the environment. We used psychological models of navigation to design and evaluate SPV renderings. In a navigation task in SPV inspired by video games for the blind, subjects had to find their way and collect items. To evaluate their performance, we used a performance index based on completion time; to evaluate their mental representation, we asked them to draw the layout of the environment after completing the task with each rendering. This double evaluation led us to identify which elements can and should be displayed in low-resolution SPV for navigation. In particular, the results show that to be understandable in low vision, a scene must be simple and the structure of the environment must not be hidden. When implanted blind participants become available, we will be able to confirm or refute these results by evaluating their navigation in virtual and real environments.

Cited literature: [79 references]
Contributor: ABES STAR
Submitted on: Monday, August 29, 2016 - 12:08:08 PM
Last modification on: Monday, July 4, 2022 - 9:38:56 AM
Long-term archiving on: Wednesday, November 30, 2016 - 12:57:00 PM


Version validated by the jury (STAR)


  • HAL Id: tel-01357149, version 1


Victor Vergnieux. Naviguer en vision prothétique simulée : apport de la vision par ordinateur pour augmenter les rendus prothétiques de basse résolution. Environnements Informatiques pour l'Apprentissage Humain. Université Paul Sabatier - Toulouse III, 2015. Français. ⟨NNT : 2015TOU30323⟩. ⟨tel-01357149⟩


