MedPath

Behavioral and Neural Characteristics of Adaptive Speech Motor Control

Phase
Not Applicable
Status
Recruiting
Conditions
Speech
Interventions
Behavioral: Visual feedback perturbation during reaching
Other: DBS stimulation ON/OFF
Behavioral: Auditory feedback perturbation during speech
Registration Number
NCT06164717
Lead Sponsor
University of Washington
Brief Summary

This study meets the NIH definition of a clinical trial, but is not a treatment study. Instead, the goal of this study is to investigate how hearing ourselves speak affects the planning and execution of speech movements. The study investigates this topic in both typical speakers and in patients with Deep Brain Stimulation (DBS) implants. The main questions it aims to answer are:

* Does the way we hear our own speech while talking affect future speech movements?

* Can the speech of DBS patients reveal which brain areas are involved in adjusting speech movements?

Participants will read words, sentences, or series of random syllables from a computer monitor while their speech is being recorded. For some participants, an electrode cap is also used to record brain activity during these tasks. For DBS patients, the tasks will be performed both with the stimulator ON and with the stimulator OFF.

Detailed Description

Not available

Recruitment & Eligibility

Status
RECRUITING
Sex
All
Target Recruitment
507
Inclusion Criteria

Not provided

Exclusion Criteria

Not provided


Study & Design

Study Type
INTERVENTIONAL
Study Design
FACTORIAL
Arms & Interventions

Group: Visual feedback perturbation during reaching
Intervention: Visual feedback perturbation during reaching
Description: The intervention consists of manipulating real-time visual feedback during upper limb reaching movements. In our lab, such feedback perturbations can be implemented with a virtual reality display system.

Group: Deep brain stimulation
Intervention: DBS stimulation ON/OFF
Description: This intervention consists of toggling the deep brain stimulation (DBS) implant ON/OFF prior to participation in the speech auditory-motor learning tasks and speech sequence learning tasks. This intervention can be implemented by the subjects themselves, as all patients have a hand-held controller that they use to switch stimulation ON/OFF.

Group: Auditory feedback perturbation during speech
Intervention: Auditory feedback perturbation during speech
Description: The intervention consists of manipulating real-time auditory feedback during speech production. In our lab, such feedback perturbations can be implemented with either a stand-alone digital vocal processor (a device commonly used by singers and the music industry) or with software-based signal processing routines (see Equipment section for details). Note that the study does not investigate the efficacy of these hardware or software methods to induce behavioral change in subjects' speech. Rather, the study addresses basic experimental questions regarding the general role of auditory feedback in the central nervous system's control of articulatory speech movements.
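As an illustration of the kind of auditory feedback manipulation described above, the sketch below applies a pitch shift to a recorded utterance offline using the librosa library. This is a conceptual example only, not the study's real-time hardware/software pipeline; the file name, one-semitone shift, and use of librosa are assumptions for illustration.

    # Conceptual sketch (not the study's real-time system): pitch-shift a recorded
    # utterance to mimic an auditory feedback perturbation.
    # Assumption: "utterance.wav" is any mono speech recording.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("utterance.wav", sr=None)                   # load at native sampling rate
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)   # shift up by one semitone
    sf.write("utterance_shifted.wav", y_shifted, sr)                 # the altered feedback signal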
Primary Outcome Measures
Name: Reach direction for arm movements
Time frame: Outcome measures will be made only during a single data recording session (~2 hours).

Method: Measuring the initial reach direction of arm movements allows us to estimate the movement direction that was planned before movement onset.
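For illustration only, a minimal sketch of how initial reach direction could be computed from 2-D hand position samples, assuming the heading is evaluated 150 ms after movement onset (an assumed interval, not the study's parameter); the numpy-based helper below is hypothetical, not the study's analysis code.

    import numpy as np

    def initial_reach_direction(x, y, t, onset_time, window_s=0.15):
        """Angle (degrees) of the hand's heading shortly after movement onset.

        x, y, t are 1-D arrays of hand position and time (s); onset_time is the
        detected movement onset; window_s (assumed 150 ms) sets how far after
        onset the heading is evaluated.
        """
        start = np.searchsorted(t, onset_time)              # sample index at onset
        probe = np.searchsorted(t, onset_time + window_s)   # sample index ~150 ms later
        dx = x[probe] - x[start]
        dy = y[probe] - y[start]
        return np.degrees(np.arctan2(dy, dx))               # 0 deg = rightward, CCW positive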

Name: Amplitude of long-latency auditory evoked potential responses (from EEG recordings)
Time frame: Measurements will be made only from electroencephalography (EEG) recordings made during the test session (~2 hours).

Method: Amplitude of the N1 component (in microvolts) will be measured in response to both probe tones and to a subject's own speech onset.
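A minimal sketch of how an N1 amplitude could be extracted from epoched EEG at a single electrode, assuming the data are already segmented into an array of shape (n_trials, n_samples) time-locked to tone or speech onset; the 80-150 ms search window and pre-onset baseline are assumed parameters, not the study's settings.

    import numpy as np

    def n1_amplitude(epochs, fs, tmin=-0.2, search=(0.08, 0.15)):
        """Peak N1 amplitude (microvolts) from epoched EEG at one electrode.

        epochs : array (n_trials, n_samples), time-locked to tone/speech onset
        fs     : sampling rate in Hz
        tmin   : time (s) of the first sample relative to onset
        search : window (s) in which to look for the negative-going N1 peak
        """
        evoked = epochs.mean(axis=0)                # average across trials (ERP)
        times = tmin + np.arange(evoked.size) / fs
        evoked = evoked - evoked[times < 0].mean()  # subtract pre-onset baseline
        mask = (times >= search[0]) & (times <= search[1])
        return evoked[mask].min()                   # N1 is a negative deflection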

Name: Local field potentials recorded by neural implants
Time frame: Measurements will be made only from DBS implant recordings made during the test session (~1-2 hours).

Method: Local field potentials (LFPs) will be recorded by the Percept PC DBS implants and used to measure changes in power spectral density across different phases of the tasks. Additionally, LFPs will be used to conduct event-related analyses.
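As a sketch of this kind of spectral analysis, the power spectral density of an LFP segment from one task phase could be estimated with Welch's method and summarized within a frequency band; the 250 Hz sampling rate, 1-s windows, and beta band below are assumed values, not the study's actual pipeline.

    import numpy as np
    from scipy.signal import welch

    def lfp_band_power(lfp, fs=250.0, band=(13.0, 30.0)):
        """Average PSD (power/Hz) of an LFP segment within a frequency band.

        lfp  : 1-D array of LFP samples from one task phase
        fs   : sampling rate in Hz (250 Hz is an assumed value)
        band : frequency band of interest, e.g. beta (13-30 Hz)
        """
        freqs, psd = welch(lfp, fs=fs, nperseg=int(fs))      # 1-s Welch segments
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()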

Name: Temporal measures of speech syllable sequence learning
Time frame: Outcome measures will be made only during a single data recording session (~0.5 hours).

Method: 1. Speech onset time (in milliseconds); 2. Average syllable duration (in milliseconds).
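As a rough sketch of how speech onset time might be derived from the acoustic signal, the function below finds the first analysis frame whose short-time energy exceeds a threshold, assuming the recording starts at the go cue; the 10 ms frames and 5% threshold are assumptions for illustration, not the study's measurement procedure. Average syllable duration could then be obtained by dividing the utterance duration by the number of syllables produced.

    import numpy as np

    def speech_onset_ms(signal, fs, frame_ms=10.0, threshold_ratio=0.05):
        """Speech onset time (ms) relative to the start of the recording.

        signal          : 1-D audio samples, recording assumed to start at the go cue
        fs              : audio sampling rate in Hz
        frame_ms        : analysis frame length (assumed 10 ms)
        threshold_ratio : onset threshold as a fraction of peak frame energy
        """
        frame_len = int(fs * frame_ms / 1000)
        n_frames = len(signal) // frame_len
        frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
        energy = (frames ** 2).mean(axis=1)                       # short-time energy per frame
        onset_frame = np.argmax(energy > threshold_ratio * energy.max())
        return onset_frame * frame_ms                             # first supra-threshold frame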

Name: Accuracy during speech syllable sequence learning
Time frame: Outcome measures will be made only during a single data recording session (~0.5 hours).

Method: Sequence accuracy (in percent).

Name: Speech formant frequencies
Time frame: Measurements will be made only from acoustic recordings made during the test session (~1 hour).

Method: The frequencies of the subject's first two formants (F1, F2) for each test word will be measured from spectrographic displays with overlaid Linear Predictive Coding (LPC) formant tracks.
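For illustration, F1 and F2 can be estimated from a steady vowel portion by solving for the roots of an LPC polynomial; the librosa-based sketch below is an assumed, simplified alternative to the interactive spectrographic measurement described above, and the LPC order of 12 is an assumed setting.

    import numpy as np
    import librosa

    def first_two_formants(segment, fs, order=12):
        """Rough F1/F2 estimates (Hz) from a vowel segment via LPC root-solving.

        segment : 1-D audio samples covering a steady vowel portion
        fs      : audio sampling rate in Hz
        order   : LPC order (assumed 12; a common rule of thumb is ~fs/1000 + 2)
        """
        a = librosa.lpc(segment.astype(float), order=order)    # LPC coefficients
        roots = np.roots(a)
        roots = roots[np.imag(roots) > 0]                      # keep one of each conjugate pair
        freqs = np.sort(np.angle(roots) * fs / (2.0 * np.pi))  # pole angles -> Hz
        freqs = freqs[freqs > 90.0]                            # discard near-DC poles
        return freqs[0], freqs[1]                              # F1, F2 estimates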

Secondary Outcome Measures
Not provided

Trial Locations

Locations (1)

University of Washington

Seattle, Washington, United States
