
Environmental Localization Mapping and Guidance for Visual Prosthesis Users

Phase
Not Applicable
Status
Completed
Conditions
Retinitis Pigmentosa
Visual Prosthesis
Visual Impairment
Interventions
Device: Navigation System for Users of a Visual Prosthesis
Registration Number
NCT04359108
Lead Sponsor
Johns Hopkins University
Brief Summary

This study is driven by the hypothesis that independent navigation by blind users of visual prosthetic devices can be greatly aided by use of an autonomous navigational aid that provides information about the environment and guidance for navigation through multimodal sensory cues. For this study, the investigators developed a navigation system that uses on-board sensing to map the user's environment and compute navigable paths to desired destinations in real-time. Information regarding obstacles and directional guidance is communicated to the user via a combination of sensory modalities including limited vision (through the user's visual prosthesis), haptic, and audio cues. This study evaluates how effectively this navigational aid improves prosthetic vision users' ability to perform navigational tasks. The participants for this study include both retinal prosthesis users of the Argus II Retinal Prosthesis System (Argus II) and normally sighted individuals who use a virtual reality headset to simulate the limited vision of the Argus II system.

Detailed Description

About 1.3 million Americans aged 40 and older are legally blind, a majority because of diseases with onset later in life, such as glaucoma and age-related macular degeneration. Second Sight Medical Products (SSMP) has developed the world's first FDA-approved retinal implant, the Argus II, intended to restore some functional vision to people suffering from retinitis pigmentosa (RP).

In this era of smart devices, generic navigation technology, such as GPS mapping apps for smartphones, can provide directions to help guide a blind user from point A to point B. However, these navigational aids do little to help blind users form an egocentric understanding of the surroundings, are not suited to indoor navigation, and do nothing to assist in avoiding obstacles to mobility. The Argus II, on the other hand, provides blind users with a limited visual representation of the user's surroundings that improves their ability to orient themselves and negotiate obstacles, yet lacks features for high-level navigation and semantic interpretation of the surroundings. The proposed study aims to address these limitations of the Argus II through a synergy of state-of-the-art simultaneous localization and mapping (SLAM) and scene recognition technologies.

This study is driven by the hypothesis that independent navigation by blind users of visual prosthetic devices can be greatly aided by use of an autonomous navigational aid that provides information about the environment and guidance for navigation through multimodal sensory cues. The investigators developed a navigation system that uses on-board sensing and SLAM-based algorithms to continuously construct a map of the user's environment and locate the user within that map in real-time. On-board path planning algorithms compute optimal navigation routes to reach desired destinations based on the constructed map. The system then communicates obstacle locations and navigational cues to the user while navigating via a combination of sensory modalities. The participants for this study include blind Argus II users, who use their retinal implant for vision, and normally sighted individuals, who use a virtual reality headset to simulate the limited vision of a retinal prosthesis.
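The record does not specify the planning algorithm itself. Purely as an illustration of the kind of computation a grid-based planner performs over a SLAM-derived map, a minimal A* search might look like the following sketch (Python; the occupancy-grid encoding, unit step costs, and Manhattan heuristic are our assumptions, not details from the study):

```python
import heapq

def plan_path(grid, start, goal):
    """A* search over a 2D occupancy grid (0 = free, 1 = obstacle).

    start and goal are (row, col) tuples; returns the list of cells from
    start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    heur = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan distance
    frontier = [(heur(start), 0, start, None)]  # (priority, cost, cell, parent)
    came_from = {}
    while frontier:
        _, cost, cell, parent = heapq.heappop(frontier)
        if cell in came_from:        # already expanded via a cheaper route
            continue
        came_from[cell] = parent
        if cell == goal:             # reconstruct the path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and nxt not in came_from:
                heapq.heappush(frontier, (cost + 1 + heur(nxt), cost + 1, nxt, cell))
    return None
```

In a system like the one described, the planner would be re-run as the SLAM map and the user's estimated pose are updated, with the first few cells of the returned path driving the directional cues described next.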

The sensory modalities used by the navigational aid to communicate information back to the user include:

* Limited vision is provided via the user's visual prosthesis, with the Argus II retinal implant supporting an image size of 10 x 6 pixels spanning approximately 18 x 11 degrees field-of-view. Images sent to the visual implant are derived from video frames provided by forward-facing cameras integrated into headgear worn by the user. Three forms of vision feedback are evaluated in this study: 1) the standard vision output of the Argus II (which uses texture-based processing, including difference-of-Gaussians and contrast enhancement filters), 2) an enhanced depth-based vision mode that uses the depth sensing capabilities of the navigational aid to highlight above-ground obstacles, and 3) a high field-of-view depth-based vision mode that doubles the pixel resolution and field of view of the visual feedback in each dimension. The depth-vision modes display only above-ground obstacles, with brightness increasing as distance to the user decreases; the ground and obstacles beyond a threshold distance are not displayed in order to declutter the visual scene (a minimal sketch of this mapping appears after this list). The high field-of-view depth-vision mode is used only by normally sighted participants, as it exceeds the vision capabilities of the Argus II implant.

* Haptic cues indicate the direction in which the user should advance in order to follow the path computed by the navigational aid to reach a target destination. The haptic cues are generated by vibrators built into the user-worn headgear at five positions arranged left-to-right along the user's forehead. The five vibration points direct the user to turn: "far left", "slight left", "straight ahead", "slight right", and "far right" (this five-zone quantization is also sketched after this list).

* Audio cues provide an audible alert when the user approaches an obstacle within 1.5 feet and provide verbal updates on the remaining distance to reach the destination along the path computed by the navigational aid.
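To make the depth-vision and haptic channels concrete, the following minimal sketch (Python with NumPy) shows one way a metric depth image could be reduced to the 10 x 6 brightness image described above, and how a signed heading error could be quantized into the five vibration positions. The output size comes from the description above; the depth cutoff, the ground-mask input, and the angular zone boundaries are hypothetical, as the record does not state them.

```python
import numpy as np

PHOSPHENE_SHAPE = (6, 10)  # rows x cols of the implant image (from the text)
MAX_DEPTH_M = 3.0          # hypothetical display cutoff; not stated in the record

def depth_to_phosphenes(depth_m, ground_mask):
    """Convert a metric depth image to a low-resolution brightness image.

    Above-ground obstacles grow brighter as they get closer; ground pixels
    and anything beyond MAX_DEPTH_M are blanked to declutter the scene.
    depth_m (float meters) and ground_mask (bool) are same-shape arrays.
    """
    # Suppress the ground plane and far-away structure.
    obstacles = np.where(ground_mask | (depth_m > MAX_DEPTH_M), np.inf, depth_m)
    # Nearer -> brighter on a 0..1 scale (inf maps to 0 after clipping).
    brightness = np.clip(1.0 - obstacles / MAX_DEPTH_M, 0.0, 1.0)
    # Block-average down to the implant's resolution.
    rows, cols = PHOSPHENE_SHAPE
    h, w = brightness.shape
    cropped = brightness[: h - h % rows, : w - w % cols]
    blocks = cropped.reshape(rows, cropped.shape[0] // rows,
                             cols, cropped.shape[1] // cols)
    return blocks.mean(axis=(1, 3))

def heading_to_vibrator(heading_error_deg):
    """Quantize signed heading error (positive = path bends right) into one
    of the five forehead motors: 0 = far left, 1 = slight left,
    2 = straight ahead, 3 = slight right, 4 = far right.
    The degree thresholds are illustrative only."""
    bounds = (-30.0, -10.0, 10.0, 30.0)
    return sum(heading_error_deg > b for b in bounds)
```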

This study compares participants' performance in completing navigation tasks using five different modes and combinations of the foregoing sensory modalities as follows: 1) Argus vision, 2) depth vision, 3) depth vision with haptic and audio, 4) haptic and audio (without vision), and 5) high field-of-view depth vision.

The navigation tasks performed by the participants using these modalities include navigating through a dense obstacle field and navigating between rooms within an indoor facility that requires successful traversal of non-trivial paths.

In addition, a third experiment evaluates the effect of the retinal implant's resolution and field-of-view on participants' ability to visually discern relative distances to different obstacles, based on the optical flow patterns induced by the participant's motion when approaching obstacles situated at different distances ahead. For this experiment, the following four vision settings are evaluated: 1) low resolution / low field-of-view, 2) low resolution / high field-of-view, 3) high resolution / low field-of-view, and 4) high resolution / high field-of-view. The "low" settings correspond to the values of the Argus II system, whereas the "high" settings correspond to a doubling of the "low" values along each dimension. For Argus user participants, only the low resolution / low field-of-view setting is evaluated, since the Argus II retinal implant cannot support the higher vision settings.
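For concreteness, assuming the Argus II values stated earlier (10 x 6 pixels over approximately 18 x 11 degrees), the four settings would be as follows; the "high" values are simply a doubling of the stated "low" values and are not independently listed in the record:

Setting | Resolution (pixels) | Field of view (degrees)
1) Low resolution / low FOV | 10 x 6 | 18 x 11
2) Low resolution / high FOV | 10 x 6 | 36 x 22
3) High resolution / low FOV | 20 x 12 | 18 x 11
4) High resolution / high FOV | 20 x 12 | 36 x 22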

Recruitment & Eligibility

Status
COMPLETED
Sex
All
Target Recruitment
26
Inclusion Criteria

Not provided

Exclusion Criteria

Not provided

Study & Design

Study Type
INTERVENTIONAL
Study Design
SINGLE_GROUP
Arms & Interventions

Group: Navigation system for users of a visual prosthesis
Intervention: Device: Navigation System for Users of a Visual Prosthesis
Description: This intervention will assess the feasibility of using a navigation system to aid blind users of a visual prosthesis with navigation tasks, by using the navigation system to provide navigational cues through multiple sensory modalities, including vision and audition. The navigation system will be designed and developed as part of the proposed research and will interface with the Argus II retinal prosthesis system, an FDA-approved visual prosthesis. Existing blind users of the Argus II device will be recruited for this study, and the navigation system will interface with these subjects' existing Argus II systems. Sighted subjects will also be recruited; for these subjects, the navigation system will interface with a head-mounted display (such as the Oculus Rift) that simulates the visual sensory information experienced by blind users of the Argus II retinal implant.
Primary Outcome Measures
Duration of Time to Complete Navigational Tasks as Assessed by the Mean Duration of Travel in Seconds Normalized by the Nominal Path Distance in Meters
Time Frame: Up to 7 minutes for each navigation task, on Day 1 or Day 2

Participants will be asked to complete the navigation tasks of crossing an obstacle field approximately 6 meters in length and navigating to a target destination located about 20 - 30 meters away. The normalized duration of time to complete each navigation task will be measured in seconds. To account for variability in the path distances of different trials, the measured time will be normalized by a reference (nominal) path distance in meters associated with the configuration of each trial. The mean normalized time across all trials will be computed for each intervention.
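Expressed as a formula (our notation; not part of the record): for trial $i$ with completion time $t_i$ in seconds and nominal path distance $d_i$ in meters, the measure is

$$\bar{T} = \frac{1}{N}\sum_{i=1}^{N}\frac{t_i}{d_i}\quad\text{(seconds per meter)},$$

and the distance and obstacle-contact measures below follow the same pattern, with $t_i$ replaced by meters traveled or the number of obstacle contacts, respectively.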

Distance Traversed to Complete Navigational Tasks as Assessed by the Mean Distance Traveled in Meters Normalized by the Nominal Path Distance in Meters
Time Frame: Up to 7 minutes for each navigation task, on Day 1 or Day 2

Participants will be asked to complete the navigation tasks of crossing an obstacle field approximately 6 meters in length and navigating to a target destination located about 20 - 30 meters away. The normalized distance traversed to complete each navigation task will be measured. To account for variability in the path distances of different trials, the measured distance will be normalized by a reference (nominal) path distance in meters associated with the configuration of each trial. The mean normalized distance across all trials will be computed for each intervention.

Obstacle Avoidance While Completing Navigational Tasks as Assessed by the Mean Number of Contacts With Obstacles Normalized by the Nominal Path Distance in Meters
Time Frame: Up to 7 minutes for each navigation task, on Day 1 or Day 2

Participants will be asked to complete the navigation tasks of crossing an obstacle field approximately 6 meters in length and navigating to a target destination located about 20 - 30 meters away. The number of contacts with obstacles encountered while completing each navigation task will be measured. To account for variability in the path distances of different trials, the number of contacts will be normalized by a reference (nominal) path distance in meters associated with the configuration of each trial. The mean normalized number of contacts with obstacles across all trials will be computed for each intervention.

Success in Completing Navigational Tasks as Assessed by the Mean Percentage of Trials Where Participants Successfully Reached the Intended Destination
Time Frame: Up to 7 minutes for each navigation task, on Day 1 or Day 2

Participants will be asked to navigate to a target destination located about 20 - 30 meters away. We will measure success rate as the number of trials in which a participant successfully reached the intended destination divided by the total number of trials performed by that participant for each intervention.

Accuracy of Relative Distance Judgments as Assessed by the Percentage of Correct Responses
Time Frame: Up to 60 seconds for each discrimination task, on Day 1 or Day 2

Participants will be presented with two physical objects and asked to identify which object is nearer after walking a controlled distance down a centerline oriented between the objects. We will measure accuracy as the percentage of correct responses (the total number of correct responses divided by the total number of trials, multiplied by 100).

Secondary Outcome Measures
Not provided

Trial Locations

Locations (2)

Johns Hopkins Medicine - Wilmer Eye Institute
Baltimore, Maryland, United States

Johns Hopkins Applied Physics Laboratory
Laurel, Maryland, United States
