Artificial Intelligence and Augmentative and Alternative Communication (AAC)
- Conditions
- Cerebral Palsy
- Cortical Visual Impairment
- Interventions
- Device: Testing artificial intelligence algorithms for interpreting gestures
- Registration Number
- NCT06599996
- Lead Sponsor
- Penn State University
- Brief Summary
The overarching objective of this project is to transform access to assistive communication technologies (augmentative and alternative communication) for individuals with motor disabilities and/or visual impairment, for whom natural speech is not meeting their communicative needs. These individuals often cannot access traditional augmentative and alternative communication because of their restricted movement or visual function. However, most such individuals have idiosyncratic body-based means of communication that are reliably interpreted by familiar communication partners. The project will test artificial intelligence algorithms that gather information from sensors or camera feeds about these idiosyncratic movement patterns of the individual with motor/visual impairments. Based on the sensor or camera feed information, the artificial intelligence algorithms will interpret the individual's gestures and translate the interpretation into speech output. For instance, if an individual waves their hand as their means of communicating "I want", the artificial intelligence algorithm will detect that gesture and prompt the speech-generating technology to produce the spoken message "I want." This will allow individuals with restricted but idiosyncratic movements to access the augmentative and alternative communication technologies that are otherwise out of reach.
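The gesture-to-speech pipeline described above can be illustrated with a minimal sketch. This is purely illustrative and is not the project's actual implementation: the names (`classify_gesture`, `GESTURE_MESSAGES`), the nearest-template classifier, and the two-value sensor reading are all assumptions standing in for the real sensor/camera feeds and artificial intelligence algorithms.

```python
# Illustrative sketch of the pipeline: sensor reading -> gesture label ->
# spoken message. All names and values here are hypothetical.

# Per-user mapping from an idiosyncratic gesture to its intended message,
# as programmed by the user or a communication partner.
GESTURE_MESSAGES = {
    "hand_wave": "I want",
    "head_tilt": "yes",
}

# Toy stand-in for the trained classifier: one stored sensor "template"
# per gesture, matched by squared distance to the incoming reading.
TEMPLATES = {
    "hand_wave": [1.0, 0.2],
    "head_tilt": [0.1, 0.9],
}

def classify_gesture(sensor_window):
    """Return the gesture label whose template is closest to the reading."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda label: sq_dist(TEMPLATES[label], sensor_window))

def gesture_to_speech(sensor_window):
    """Interpret the gesture and return the message to speak aloud."""
    label = classify_gesture(sensor_window)
    return GESTURE_MESSAGES.get(label, "")

# A hand-wave-like sensor reading produces the programmed message "I want".
print(gesture_to_speech([0.95, 0.25]))
```

In practice the classifier would be a model trained on each individual's own movement patterns, and the returned message would be sent to a speech-generating device rather than printed.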
- Detailed Description
As noted in the Communication Bill of Rights from the National Joint Committee on the Communication Needs of Persons with Severe Disabilities, "All people with a disability of any extent or severity have a basic right to affect, through communication, the conditions of their existence." Access to speech-language therapies that promote optimal communication outcomes is also noted to be a fundamental right under Article 19 of the United Nations Convention on the Rights of Persons with Disabilities. Yet many individuals with physical or intellectual disabilities have language limitations that prevent them from using speech as their primary mode of communication. For these individuals, assistive communication technologies (augmentative and alternative communication) offer an important set of supports for realizing this critical human right.
Although augmentative and alternative communication is widely used and evidence-based, there are particular challenges in designing augmentative and alternative communication for individuals with visual and concomitant motor impairments. Unlike spoken language, in much of aided augmentative and alternative communication the vocabulary items are visual (letters, words, symbols) and only a limited number of items can be displayed at a time, since they must be presented on an external device (such as a tablet or a dedicated device). To maximize available vocabulary, clinicians often place many symbols onto the small display. Although this strategy can be useful for some people - and does maximize vocabulary visible on any given page - it is a substantial problem for individuals with visual impairments who cannot see (ocular impairment) or process (cortical impairment) the visual information. In addition, access to these vocabulary items often involves use of a finger or eye gaze to select a symbol or a limb to activate a switch. These types of repetitive selections may be difficult and fatiguing for individuals with motor disabilities. As a consequence, traditional methods of accessing augmentative and alternative communication that work for other individuals are selectively more difficult for those with visual impairment and motor disabilities. There is an urgent need to develop augmentative and alternative communication technologies that reduce the visual and motoric burden for such individuals.
This project seeks to substantially increase the flexibility of aided augmentative and alternative communication access in part through a reconsideration of the traditional distinction made between aided (i.e., technology assisted) and unaided (i.e., body-based) communication modes. Aided communication modes offer the power of symbolic communication that is readily understood by many communication partners, even those who are unfamiliar with the individual using augmentative and alternative communication. However, aided modes can be quite limiting in terms of the vocabulary available, speed of message preparation, environmental constraints, and ability to support natural conversations. Unaided communication modes, on the other hand, can involve a diverse range of natural movements that are well within the skill set of the user, and can be rapidly produced with low effort. The drawback of unaided modes is that they are often difficult for unfamiliar partners to understand, thus limiting the range of potential communication partners and necessitating the proximity of a communication partner to the augmentative and alternative communication user to observe the body-based communication.
Given contemporary technology, it is both theoretically and practically possible to substantially increase access to aided augmentative and alternative communication by leveraging the ability of technology to sense and interpret unaided input ranging from natural air gestures to facial expressions and/or other intentional movement patterns. Harnessing unaided inputs as a supplemental means for access to technology will marry the power of the aided symbolic communication with the ease, speed, and unique movements employed by individual users. In so doing, it will shift the burden of access from the user (at least in part) onto the aided augmentative and alternative communication technologies themselves. Indeed, building flexible technologies that are tailored to the motor and visual skills of individuals with disabilities is well within the capabilities of modern devices and is an active area of research in Human-Computer Interaction and accessible computing.
This project will test artificial intelligence algorithms that are capable of interpreting idiosyncratic, individual-specific unaided gestures for augmentative and alternative communication access. This proposed system is intended to be human-centered, use-inspired, and readily-programmed, to empower both the user and their communication partners who may be involved in augmentative and alternative communication services. The project will solicit individuals with a wide range of motor disabilities to ensure the algorithms are widely applicable.
Recruitment & Eligibility
- Status
- NOT_YET_RECRUITING
- Sex
- All
- Target Recruitment
- 15
- Inclusion Criteria
- Have motor impairment, which can present in diverse/multiple ways, including spasticity, ataxia, or dystonia (these types of movement disorders are different from one another, can result from diverse genetic conditions or from injury to the brain before or shortly after birth, and generally all fall under the umbrella term cerebral palsy or movement disorder). Note: Presence of intellectual disability in addition to motor disability is not an exclusionary criterion; the study will include both individuals with intact intellectual ability and those with intellectual disability
- Can/will tolerate a small biosensor (about the size of a medallion) attached to a limb (for instance, wrist or elbow) embedded within a soft wrist band
- Exclusion Criteria
- Do not have motor disabilities
- Cannot tolerate a small biosensor (about the size of a medallion) attached to a limb (for instance, wrist or elbow) with a soft wrist band
Study & Design
- Study Type
- INTERVENTIONAL
- Study Design
- SINGLE_GROUP
- Arms & Interventions
Group: Evaluation of learnability and utility of artificial intelligence algorithm
Intervention: Testing artificial intelligence algorithms for interpreting gestures
Description: Participants will learn to use the artificial intelligence algorithms and test them for ease of use and efficiency
- Primary Outcome Measures
- Time taken for programming of artificial intelligence algorithms by users/personal care aides
- Time: average of 12 months
- Method: How long it takes participants to learn to program the algorithms (number of minutes taken)
- Number of messages programmed by users/personal care aides
- Time: average of 12 months
- Method: How many messages the user/personal care aide can program in the session
- Secondary Outcome Measures