MedPath

Object Finder for a Retinal Prosthesis

Not Applicable
Completed
Conditions
Retinitis Pigmentosa
Interventions
Device: Object recognition subsystem
Registration Number
NCT04319809
Lead Sponsor
Minnesota HealthSolutions
Brief Summary

The proposed project seeks to add object recognition as a feature of a retinal implant system. Participants will be able to direct an object recognition application to find a desired object in the field of view of the head-mounted camera; the application will then guide the participant's view toward the object by presenting a recognizable icon. A prototype system will be developed and evaluated in human subjects in phase I. A full system implementation and a second phase of the trial will be completed in phase II.

Detailed Description

The investigators propose to add an object-finding feature to a retinal prosthesis system. To use this feature, the participant will enable a special mode and input the desired object from a set of pre-programmed object types. Imagery from the visible light camera in the system eyeglasses will be processed using object recognition software as the participant scans their head across the room scene. When the object is identified in the scene by the processor, a flashing icon will be output to the epiretinal array in the appropriate position to guide the participant to the physical location of the object. Once located, the system will track the location of the object.
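The pipeline described above (recognize an object in the camera frame, then cue the participant by flashing an icon at the corresponding position on the epiretinal array) can be sketched in outline. This is an illustrative sketch only: the 6x10 grid reflects the Argus II's 60-electrode layout, but the function names, data shapes, and the stand-in detector are assumptions, not the actual recognition software or VPU interface.

```python
# Illustrative sketch of the object-finding feature: quantize a detected
# object's bounding-box center (camera pixel coordinates) to a coarse
# electrode-grid cell where a guidance icon could be flashed.

def map_to_grid(cx, cy, frame_w, frame_h, rows=6, cols=10):
    """Quantize a pixel coordinate to an electrode-grid cell (row, col)."""
    row = min(rows - 1, int(cy * rows / frame_h))
    col = min(cols - 1, int(cx * cols / frame_w))
    return row, col

def find_object(frames, target):
    """Scan successive frames' detections for the target object type.

    Each frame is (width, height, detections), where detections is a list
    of (label, center_x, center_y) tuples from a recognizer (stand-in for
    the system's object recognition software). Returns the grid cell of
    the first match, or None if the target never appears.
    """
    for frame_w, frame_h, detections in frames:
        for label, cx, cy in detections:
            if label == target:
                return map_to_grid(cx, cy, frame_w, frame_h)
    return None

# Example: the target appears centered in the second 640x480 frame,
# so the icon would be cued at the middle of the grid.
frames = [(640, 480, []), (640, 480, [("cell phone", 320, 240)])]
print(find_object(frames, "cell phone"))  # (3, 5)
```

In a live system this loop would run continuously as the participant scans their head across the scene, re-mapping the detection each frame to track the object's location once found.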

There will be two phases to the human subjects evaluation, each run initially through simulations in sighted human subjects, followed by tests in Argus II participants. In phase 1, system evaluation in human subjects at Johns Hopkins University (JHU) will explore performance in representative tasks and compare prosthetic visual performance without and with the new object-finding feature. An important aspect of the evaluation will be the comparison of different icons and presentation modes to assist participants in locating and reaching objects. In phase 2, the system will be integrated into the Argus II video processing unit (VPU), and JHU will conduct human trials that include functional testing of the integrated prototype in representative environments and optimizing the ergonomics of the system, e.g., simultaneous finding and tracking of multiple objects/icons.

Recruitment & Eligibility

Status
COMPLETED
Sex
All
Target Recruitment
9
Inclusion Criteria
  • For healthy volunteers: Vision corrected to 20/25, good general health
  • For retinitis pigmentosa (RP) patients: End-stage retinitis pigmentosa, recipient of an Argus II retinal prosthesis system
Exclusion Criteria
  • None

Study & Design

Study Type
INTERVENTIONAL
Study Design
SINGLE_GROUP
Arms & Interventions
Group: Device feasibility
Intervention: Object recognition subsystem
Description: To investigate the utility of a device adaptation allowing Argus II users to detect the presence and location of desired objects. Performance of the unaided Argus II system will be compared with performance using the system augmented with object recognition.
Primary Outcome Measures
Name: Performance (Completion Time) Locating a Cell Phone and a Person
Time: Time in seconds to complete the task. Stationary task time was the time to place a hand on the cell phone's location on the table; mobility task time was the time to a handshake with the target person.
Method: This outcome measure compares time to task completion without and with modalities of the subsystem for both a stationary and a mobility task. For the stationary task, participants were seated in front of a table and a cell phone was placed randomly at the center of one of ten rectangular zones. Participants were asked to find and place their hand on the location of the cell phone. The response time was recorded (along with the distance from the cell phone, a separate primary outcome). For the mobility task, participants were asked to find a target person in an otherwise empty room with dark walls. Once the participant came within arm's length, the target person initiated a handshake. The time and number of steps (a separate primary outcome) were recorded. If the person was not found within 5 minutes, the task was stopped and scored as incomplete.

Name: Accuracy (Distance From Target)
Time: Measuring distance to the cell phone required up to 30 minutes per mode; measuring distance to a person required up to 45 minutes per mode.
Method: This outcome measure compares task completion (accuracy to a target) without and with modalities of the subsystem for both a stationary and a mobility task. For the stationary task, participants were seated in front of a table and a cell phone was placed randomly at the center of one of ten rectangular zones. Participants were asked to find and place their hand on the location of the cell phone. The distance to the cell phone in centimeters was recorded. For the mobility task, participants were asked to find a target person in an otherwise empty room with dark walls. Once the participant came within arm's length, the target person initiated a handshake. The number of steps was recorded. If the person was not found within 5 minutes, the task was stopped and scored as incomplete.

Secondary Outcome Measures
None reported.

Trial Locations

Locations (1)

Johns Hopkins Hospital

Baltimore, Maryland, United States
