Articulation and coordination of speech after treatment for oral cancer
- Conditions
- dysarthria
- speech pathology
- 10019190
- Registration Number
- NL-OMON51026
- Lead Sponsor
- Universitair Medisch Centrum Groningen
- Brief Summary
Not available
- Detailed Description
Not available
Recruitment & Eligibility
- Status
- Completed
- Sex
- Not specified
- Target Recruitment
- 40
Inclusion Criteria
• At least 18 years of age and able to make an informed consent
• Native speaker of Dutch
• Diagnosed with an oral tumor and having undergone major surgical resection of
a TNM stage T3 or T4 tumor, in which parts of the tongue or jaw were removed and
possibly replaced by other tissue, at least 12 months before (patient subject);
or a volunteer without disturbed speech.
Exclusion Criteria
• Recurrence of disease
• History of neurological or psychological disorders
• Self-reported signs of depression
• Stuttering or other pre-existing speech and language problems
• Anatomy that prevents attaching sensors to the tongue (e.g., trismus or
tongue immobility)
• Problems with sight or hearing that impede reading or understanding
instructions. When glasses or a hearing aid resolve these problems, there is no
impediment to participation.
• Non-removable metal on, in, or close to the head (e.g., piercings, dental
braces, medical devices such as deep brain stimulation electrodes) or medical
devices (e.g., a pacemaker) incompatible with electromagnetic fields.
Study & Design
- Study Type
- Observational invasive
- Study Design
- Not specified
- Primary Outcome Measures
- Displacement of the tongue tip, the tongue body, the lips, and the jaw at consonant and vowel targets
- Velocity of the tongue tip, the tongue body, the lips, and the jaw at key landmarks of speech gestures
- Duration of speech gestures produced with the tongue tip, the tongue body, the lips, and the jaw
- Speech rate (syllables/min and words/min)
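The kinematic primary outcomes above (displacement, velocity, and duration of speech gestures) are typically derived from sampled articulator trajectories recorded with electromagnetic articulography (EMA). The sketch below is illustrative and not taken from the study protocol; the sampling rate and the 20% peak-velocity threshold for gesture on/offset are assumptions.

```python
def kinematics(positions_mm, fs_hz=100.0, onset_frac=0.2):
    """Hypothetical sketch: displacement, peak velocity, and gesture duration
    for one 1-D articulator trajectory (e.g., tongue-tip height in mm).
    fs_hz and onset_frac are illustrative assumptions, not study parameters."""
    # Central-difference velocity in mm/s
    vel = [(positions_mm[i + 1] - positions_mm[i - 1]) * fs_hz / 2.0
           for i in range(1, len(positions_mm) - 1)]
    speed = [abs(v) for v in vel]
    peak = max(speed)
    # Gesture on/offset: first/last sample whose speed exceeds a fraction
    # of the peak speed (a common kinematic landmarking convention)
    above = [i for i, s in enumerate(speed) if s >= onset_frac * peak]
    duration_s = (above[-1] - above[0]) / fs_hz if above else 0.0
    displacement = max(positions_mm) - min(positions_mm)
    return displacement, peak, duration_s
```

In practice the raw EMA signals would first be low-pass filtered and segmented per gesture; this sketch only shows how the three outcome measures relate to a single trajectory.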
- Secondary Outcome Measures
- Acoustic measures of speech
- Models linking articulation and speech: both speech synthesis from EMA and EMA estimation from speech (machine learning)
- Changes in the variability and complexity of articulators' movement under masked auditory feedback compared to normal feedback