MedPath

Vocal Emotion Communication With Cochlear Implants

Early Phase 1
Recruiting
Conditions
Cochlear Hearing Loss
Interventions
Behavioral: Perception of acoustic cues to emotion
Behavioral: Production of acoustic cues to emotion
Registration Number
NCT05486637
Lead Sponsor
Father Flanagan's Boys' Home
Brief Summary

Patients with hearing loss who use cochlear implants (CIs) show significant deficits, and strong unexplained inter-subject variability, in their perception and production of emotions in speech. This project will investigate the hypothesis that "cue-weighting", or how patients utilize the different acoustic cues to emotion, accounts for significant variance in emotional communication with CIs. The studies will focus on children with CIs, but parallel measures will be made in postlingually deaf adults with CIs, ensuring that the results benefit social communication by CI patients across the lifespan by informing the development of technological innovations and improved clinical protocols.

Detailed Description

Emotion communication is a fundamental part of spoken language. For patients with hearing loss who use cochlear implants (CIs), detecting emotions in speech poses a significant challenge. Deficits in vocal emotion perception observed in both children and adults with CIs have been linked with poor self-reported quality of life. For young children, learning to identify others' emotions and express one's own emotions is a fundamental aspect of social development. Yet, little is known about the mechanisms and factors that shape vocal emotion communication by children with CIs. Primary cues to vocal emotions (voice characteristics such as pitch) are degraded in CI hearing, but secondary cues such as duration and intensity remain accessible to patients. It is proposed that individual CI users' auditory experience with their device plays an important role in how they utilize these different cues and map them to corresponding emotions.

In previous studies, the Principal Investigator (PI) and the PI's team conducted foundational research that provided valuable information about key predictors of vocal emotion perception and production by pediatric CI recipients. The work proposed here will use novel methodologies to investigate how the specific acoustic cues used in emotion recognition by CI patients change with increasing device experience (Aim 1) and how the specific cues emphasized in vocal emotion productions by CI patients change with increasing device experience (Aim 2). Studies will include both a cross-sectional and a longitudinal approach.

The team's long-term goal is to improve emotional communication by CI users. The overall objectives of this application are to address critical gaps in knowledge by elucidating how cue utilization (the reliance on different acoustic cues) for vocal emotion perception (Aim 1) and production (Aim 2) is shaped by CI experience. The knowledge gained from these studies will provide the evidence base for clinical protocols that support emotional communication by pediatric CI recipients, and will thus benefit quality of life for CI users.

The hypotheses to be tested are: [H1] that cue-weighting accounts significantly for inter-subject variations in vocal emotion identification by CI users; [H2] that optimization of cue-weighting patterns is the mechanism by which predictors such as the duration of device experience and age at implantation benefit vocal emotion identification; and [H3] that in children with CIs, the ability to utilize voice pitch cues to emotion, together with early auditory experience (e.g., age at implantation and/or presence of usable hearing at birth), accounts significantly for inter-subject variation in emotional productions. The two Specific Aims will test these hypotheses while taking into account other factors such as cognitive and socioeconomic status, theory of mind, and psychophysical sensitivity to individual prosodic cues.

This is a prospective design involving human subjects, both children and adults. Participants will perform two kinds of tasks: 1) listening tasks, in which participants listen to speech or nonspeech sounds and make a judgment about them by interacting with a software program on a computer screen; and 2) speaking tasks, in which participants either read aloud a list of simple sentences in a happy way and a sad way, or converse with a member of the research team, retelling a picture-book story or describing an activity of their choosing. Participants' speech will be recorded, analyzed acoustically, and also used as stimuli for listening tasks. In addition to these tasks, participants will be invited to complete tests of cognition, vocabulary, and theory of mind.

Participants will not be assigned to groups, and no control group will be assigned, in any of the Aims. In parallel with cochlear implant patients, the team will test normally hearing listeners spanning a similar age range to provide information on how the intact auditory system processes emotional cues in speech in perception and in production. Effects of patient factors such as their hearing history, experience with their cochlear implant, and cognition will be investigated using regression-based models. All patients will be invited to participate in all studies, with no assignment, until the sample size target is met for the particular study. The order of tests will be randomized as appropriate to avoid order effects.

Recruitment & Eligibility

Status
RECRUITING
Sex
All
Target Recruitment
255
Inclusion Criteria
  • Prelingually deaf children with cochlear implants
  • Postlingually deaf adults with cochlear implants
  • Normally hearing children
  • Normally hearing adults
Exclusion Criteria
  • Non-native speakers of American English
  • Prelingually deaf individuals who receive cochlear implants after age 12
  • Adults unable to pass a basic cognitive screen

Study & Design

Study Type
INTERVENTIONAL
Study Design
SINGLE_GROUP
Arms & Interventions

Group: Vocal emotion communication by children and adults with cochlear implants or normal hearing
Interventions: Behavioral: Perception of acoustic cues to emotion; Behavioral: Production of acoustic cues to emotion
Description: Participants will be native speakers of American English and include pediatric cochlear implant recipients with unilateral or bilateral devices aged 6-19 years, children with normal hearing aged 6-19 years, postlingually deaf adults with cochlear implants, and adults with normal hearing. In Aim 1, participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2, participants will be invited to produce emotional speech by reading scripted materials aloud or in a more naturalistic conversational setting.
Primary Outcome Measures
Vocal emotion recognition accuracy (Years 1-5)
Percent correct scores in vocal emotion recognition

Vocal emotion recognition sensitivity (Years 1-5)
Sensitivity (d') in vocal emotion recognition

Voice pitch (fundamental frequency) of vocal productions (Years 1-5)
Voice pitch (Hz) measured from acoustic analyses of recorded speech

Intensity of vocal productions (Years 1-5)
Intensity (dB) measured from acoustic analyses of recorded speech

Duration of vocal productions (Years 1-5)
Duration (the inverse of speaking rate) measured from acoustic analyses of recorded speech
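To illustrate how pitch, intensity, and duration can be extracted from a recording, here is a minimal sketch of a generic acoustic analysis. The autocorrelation pitch estimator, the RMS-based decibel intensity, and the function name are illustrative assumptions; the record does not specify the study's actual analysis software (research pipelines typically use far more robust tools such as Praat).

```python
import numpy as np

def analyze_utterance(x, fs, f0_min=50.0, f0_max=500.0, ref=1.0):
    """Toy acoustic analysis of a mono signal x sampled at fs Hz.

    Returns (f0_hz, intensity_db, duration_s). Pitch is estimated from
    the autocorrelation peak, intensity as RMS level in dB re `ref`,
    and duration from the sample count.
    """
    x = np.asarray(x, dtype=float)
    duration_s = len(x) / fs

    # Intensity: root-mean-square level in decibels relative to `ref`.
    rms = np.sqrt(np.mean(x ** 2))
    intensity_db = 20.0 * np.log10(rms / ref)

    # Pitch: lag of the autocorrelation peak within the plausible F0 range.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / f0_max), int(fs / f0_min)
    lag = lo + np.argmax(ac[lo:hi])
    f0_hz = fs / lag
    return f0_hz, intensity_db, duration_s

# Example: a 1-second, 200 Hz tone at half amplitude.
fs = 16000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 200.0 * t)
f0, level, dur = analyze_utterance(tone, fs)
```

For the pure tone above, the estimator recovers a pitch near 200 Hz, a level near -9 dB (RMS of a half-amplitude sine), and a 1-second duration.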

Recognition of recorded speech emotions by listeners: percent correct scores (Years 1-5)
Accuracy in listeners' ability to identify the emotions conveyed in participants' recorded speech

Recognition of recorded speech emotions by listeners: d' values (sensitivity measure) (Years 1-5)
Sensitivity (d', based on hit rates and false alarm rates) in listeners' ability to identify the emotions conveyed in participants' recorded speech
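The d' sensitivity measure compares hit and false-alarm rates on a z-score (inverse normal CDF) scale. A minimal sketch of the computation, assuming the common 1/(2N) correction for rates of exactly 0 or 1 (a convention not stated in this record):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are nudged by 1/(2N) (a common convention,
    assumed here) so the inverse normal CDF stays finite.
    """
    z = NormalDist().inv_cdf

    def rate(k, n):
        r = k / n
        if r == 0.0:
            r = 1.0 / (2 * n)
        elif r == 1.0:
            r = 1.0 - 1.0 / (2 * n)
        return r

    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Example: 85% hits and 15% false alarms over 100 trials each
# yields d' of about 2.07.
dp = d_prime(85, 15, 15, 85)
```

Unlike percent correct, d' separates sensitivity from response bias, which is why the record reports both measures.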

Secondary Outcome Measures
Reaction times (seconds) for vocal emotion identification (Years 1-5)
Time between the end of the stimulus recording and the participant's response (button press)

Trial Locations

Locations (2)

Boys Town National Research Hospital
Omaha, Nebraska, United States

Arizona State University
Tempe, Arizona, United States

© Copyright 2025. All Rights Reserved by MedPath