
Eyes On Lips? Speechreading Skills and Facial Expression Discrimination in Children With and Without Impaired Hearing

Phase
Not Applicable
Status
Recruiting
Conditions
Hearing Impaired Children
Hearing Impairment
Interventions
Behavioral: Speechreading training with an application
Registration Number
NCT05854719
Lead Sponsor
University of Oulu
Brief Summary

The goal of this clinical trial is to determine the role of background factors and gaze use in children's speechreading performance.

The main questions it aims to answer are:

* Which background factors and eye gaze patterns are associated with the best speechreading results in hearing children and those with hearing impairment/loss?

* Are children's gaze patterns and facial expression discrimination associated with interpretation of emotional contents of verbal messages in speechreading?

* What is the efficacy of an intervention based on the use of a speechreading application that will be developed?

Participants will be

* tested with linguistic and cognitive tests and tasks

* tested with a speechreading test and tasks with or without simultaneous eye-tracking

* about half of the participants with hearing impairment/loss will train speechreading with an application

Researchers will compare the different age groups and the results of hearing children to those of children with impaired hearing to see if there are differences.

Detailed Description

1. Aim The objectives of this project are to 1) gain information about the speechreading abilities, in Finnish, of children with and without hearing impairment, 2) obtain information about the association between gaze behaviour and speechreading accuracy, 3) develop a speechreading test for Finnish-speaking children, 4) find out whether emotion discrimination (discrimination of facial expressions) helps children with hearing impairment in speechreading, 5) find out whether speechreading can effectively be trained with a smart device application, and 6) explore and further train artificial intelligence algorithms with the help of data on children's use of gaze and speechreading skills (this part does not belong to the clinical trial part of the study).

2. Study design Controlled clinical trial. Study arms: 1) hearing children serving as controls, 2) children with impaired hearing participating in speechreading training, and 3) children with impaired hearing serving as controls.

Altogether 70 children (some of them with impaired hearing) will be tested remotely (via the Zoom application), and about 80 children on site (since eye-tracking is used in their data collection).

Caregivers will be requested to fill out a background form using REDCap, an online survey platform with strong data protection. They will provide information about their child's hearing ability/level based on the most recent audiogram, overall health (e.g., possible medical diagnoses, vision) and development. Caregivers' profession and educational level will also be surveyed.

Child outcomes:

Children will be tested with linguistic, cognitive and social cognitive tests and tasks and 80 children also with eye-tracking.

Of the linguistic tests and tasks, a validated Finnish version (Laine et al., 1997) of the Boston Naming Test (Kaplan et al., 1983) is used to test the child's expressive vocabulary. In the nonword repetition subtest of the NEPSY Test (Korkman, 1998), the child is asked to repeat 16 nonwords presented as an audio recording. The phonological processing subtest of the NEPSY II Test (Kemp & Korkman, 2008) is composed of two phonological processing tasks designed to assess phonemic awareness; it explores the identification of words from word segments. Children aged 7 to 8 years are asked to repeat a word and then to point to the correct picture after the test administrator has pronounced only a part of the word as a cue. Children aged 9 to 11 years are asked to create a new word by omitting a part of a compound word or a phoneme, with the test administrator first pronouncing the part to be omitted. Children's reading skills are explored with three subtests of the ALLU Test (Lindeman, 1998) measuring technical reading ability (TL2B, TL3B and TL4B). The child is asked to select the right alternative out of four line drawings to match a single word or sentence, or to judge whether a written sentence is true or false.

Children's speechreading skills will be assessed with the Children's speechreading test (Huttunen & Saalasti), which contains single words and short sentences, as well as a task in which facial expressions and sentence-level speechreading need to be combined.

Firstly, a novel computerized Children's speechreading test (Huttunen & Saalasti) will be constructed for children acquiring Finnish. In addition to the piloting results of hearing children aged 8 to 11 years, information about the receptive vocabulary of 8- to 11-year-old children with hearing impairment and the visual analogues of spoken phonemes (visemes) in Finnish will be used as the central basis for choosing the items for the multiple-choice word-level part of the test. Two- to three-word sentences will be included in the sentence-level part of the test.

When giving their responses in the Children's speechreading test, after watching each video clip, children need to identify the word or sentence expressed by choosing from alternatives given as drawings illustrating various persons, objects, or events. To validate the novel speechreading test, that is, to obtain age norms for it and to explore its psychometric properties, 120 children with normal hearing and typical development (30 children per age group) and about 30 children with hearing impairment (about eight children per age group) will be tested with it. A reading level sufficient for selecting the alternatives for the meaning of short sentences in the sentence-level part of the speechreading test is required of the participants.

In addition to the Children's speechreading test, an emotion + speechreading task has been designed to see whether children can make use of additional information from facial expressions to discriminate the sentences they speechread. For this purpose, a speaker expresses classic basic emotions (happiness, sadness, anger) in 10 sets of four sentences constructed for the task. The ten video recordings are presented to the children without voice, always with four written alternative choices.

Children will also be tested with cognitive and social cognitive tests and tasks: reaction time (Reaction time task), first-order Theory of Mind (Sally Anne Test), second-order Theory of Mind skills (modified Ice Cream Van Task), auditory short-term memory (ITPA auditory serial memory subtest), visual short-term memory (ITPA visual serial memory subtest), visuo-spatial short-term memory (Corsi Block Test), emotion discrimination (discrimination of facial expressions from photographs, video clips and the FEFA 2 test).

Of the cognitive and social cognitive tests and tasks, the Reaction time task follows the classic principles of a two-choice reaction time test: two numbers appear randomly on a computer screen, within 1 to 3 seconds, either on the left or on the right side of the screen. The child's task is to strike the left or right arrow key on the keyboard as soon as the number has appeared on the screen. The task takes less than two minutes to perform, and after 40 numbers have been shown the software produces the results (mean reaction time in milliseconds, SD, min, max and the number of correct answers as a relative percentage score).
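For illustration, the summary the software produces could be computed roughly as in the sketch below. The per-trial record format, and the choice to average reaction times over correct trials only, are assumptions for the example rather than details of the actual software.

```python
# Minimal sketch (not the study's software): summarizing a two-choice
# reaction-time session, assuming one record per trial with the measured
# latency in milliseconds and whether the pressed arrow key was correct.
from statistics import mean, stdev

def summarize_rt(trials):
    """trials: list of (rt_ms, correct) tuples, e.g. 40 trials per child."""
    correct_rts = [rt for rt, ok in trials if ok]  # assumption: stats over correct trials only
    return {
        "mean_ms": mean(correct_rts),
        "sd_ms": stdev(correct_rts) if len(correct_rts) > 1 else 0.0,
        "min_ms": min(correct_rts),
        "max_ms": max(correct_rts),
        "percent_correct": 100.0 * sum(ok for _, ok in trials) / len(trials),
    }

# Example with three illustrative trials
print(summarize_rt([(412, True), (388, True), (455, False)]))
```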

As the first-order Theory of Mind task, the classic Sally-Anne Test (Baron-Cohen, Leslie & Frith, 1985) will be used, and to assess second-order Theory of Mind skills, a modified version of the Ice Cream Van Task (Perner & Wimmer, 1985; Doherty, 2009). In the second-order task, four purpose-drawn pictures are used to help the child understand and remember the story told by the task administrator.

Short-term memory skills are assessed using the auditory and visual short-term sequential memory subtests of the validated Finnish version (Kuusinen & Blåfield, 1974) of the Illinois Test of Psycholinguistic Abilities (ITPA; Kirk et al., 1968). In the auditory short-term memory subtest of the ITPA, the child is asked to orally repeat digit series; in the visual short-term memory subtest, the test administrator first shows a series of symbols, and the child retains it in short-term memory and reproduces the series by arranging the right symbols in the right order. Visuo-spatial short-term memory is assessed with the Corsi Block Test (Corsi, 1972; Kessels, van Zandvoort, Postma, Kappelle & de Haan, 2000) included in PsyToolkit (Professor Gijsbert Stoet). In the online test, nine blocks are shown, arranged in fixed positions on a screen in front of the participant. The software flashes a sequence of blocks, for example three different blocks, one after another. As a response, the participant uses a mouse to tap the blocks on the screen in the same order the test showed them. The test takes less than 30 seconds to perform. The Corsi span is defined as the longest sequence a participant can correctly repeat.
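The span scoring just described is simple to make concrete: the Corsi span is the longest sequence length the participant reproduced correctly. The sketch below assumes a per-trial record of sequence length and correctness; it is an illustration, not the PsyToolkit scoring code.

```python
# Illustrative sketch (assumed data format, not PsyToolkit output):
# the Corsi span is the longest sequence length reproduced correctly.

def corsi_span(trials):
    """trials: list of (sequence_length, reproduced_correctly) tuples."""
    correct_lengths = [length for length, ok in trials if ok]
    return max(correct_lengths) if correct_lengths else 0

# Example: the child succeeds up to length 5 and fails at length 6.
trials = [(3, True), (4, True), (5, True), (6, False), (6, False)]
print(corsi_span(trials))  # -> 5
```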

Discrimination of facial expressions from photographs and video clips is assessed with self-constructed tasks (Huttunen, 2015; first described in Huttunen, Kosonen, Laakso & Waaramaa, 2018). In the first computerized task, a set of 12 photographs depicting four different emotions (three basic emotions and a neutral expression) is shown. Four verbal labels are given as written response choices. To test facial emotion recognition skills using dynamic input, the same set of emotions expressed by the same persons is presented as video clips of two seconds each, with four verbal labels as response choices.

The computerized "Faces" submodule of the Finnish version of the FEFA 2 test (the Frankfurt Test and Training of Facial Affect Recognition; Bölte et al., 2013; Bölte & Poustka, 2003) is used as a standardized task to assess children's facial emotion recognition skills. The test consists of 50 photographs depicting seven different emotions, with the emotion labels as response choices (joy, anger, sadness, fear, surprise, disgust and neutral). The child selects the alternative that matches the facial expression (emotion) presented. The FEFA 2 software summarizes the results (total score, confusion matrices, response time).
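The confusion matrices mentioned above simply count which label was chosen for each emotion shown. The sketch below illustrates the idea with the seven FEFA 2 emotion categories; the response-record format is an assumption, not the FEFA 2 software's own data format.

```python
# Illustrative confusion-matrix computation for a forced-choice emotion task.
from collections import Counter

EMOTIONS = ["joy", "anger", "sadness", "fear", "surprise", "disgust", "neutral"]

def confusion_matrix(responses):
    """responses: list of (shown_emotion, chosen_emotion) pairs."""
    counts = Counter(responses)
    return {shown: {chosen: counts[(shown, chosen)] for chosen in EMOTIONS}
            for shown in EMOTIONS}

matrix = confusion_matrix([("joy", "joy"), ("fear", "surprise"), ("fear", "fear")])
print(matrix["fear"])  # fear recognized once, confused with surprise once
```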

Children's gaze use will be explored with eye-tracking (EyeLink 1000 Plus device) during the facial expression and speechreading tests and tasks. Eye-tracking is used with 80 children (50 with normal hearing and 30 with hearing impairment/loss). Their gaze use is explored during the Children's speechreading test, during the tasks in which facial expressions need to be discriminated from photographs and video clips, and during the emotions + speechreading task. A chin rest is used to stabilize the child's position and secure successful data collection. Fixations, dwell time (the time the gaze stays on a certain place on the screen) and gaze path are analysed to find out which areas of interest on the face attract the children's gaze the most. The eye-tracking data are used to explore what kind of gaze use and gaze patterns are optimal for children's speechreading and emotion discrimination performance.
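Conceptually, the area-of-interest analysis assigns each fixation to a region of the speaker's face and accumulates fixation counts and dwell time per region. The sketch below illustrates this with hypothetical rectangular AOIs in screen coordinates; the actual analysis is done with the EyeLink toolchain, so this is only a conceptual outline.

```python
# Conceptual sketch of AOI-based dwell-time analysis. Fixations are assumed to
# be exported as (x, y, duration_ms); the AOI rectangles below are hypothetical.

AOIS = {                       # (x0, y0, x1, y1) in screen pixels
    "left_eye":  (350, 200, 450, 260),
    "right_eye": (500, 200, 600, 260),
    "nose":      (430, 260, 520, 340),
    "mouth":     (410, 340, 540, 410),
}

def dwell_times(fixations):
    """fixations: list of (x, y, duration_ms); returns dwell (ms) and fixation counts per AOI."""
    dwell = {name: 0 for name in AOIS}
    dwell["other"] = 0
    counts = {name: 0 for name in dwell}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dur
                counts[name] += 1
                break
        else:                       # fixation fell outside all face AOIs
            dwell["other"] += dur
            counts["other"] += 1
    return dwell, counts

dwell, n_fix = dwell_times([(480, 370, 240), (370, 230, 180), (700, 500, 90)])
print(dwell)  # mouth: 240 ms, left_eye: 180 ms, other: 90 ms (remaining AOIs 0)
```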

3. Sample size Hearing children: 120, aged 8 to 11 years (30 children per age group). Children with hearing impairment: about 30, aged 8 to 11 years (about eight children per age group).

4. Blinding and randomization None

5. Follow-up protocols

The aim is to test about 30 to 35 children with impaired hearing twice:

1. After the initial testing, 30 children with impaired hearing will be asked to train speechreading at home with a smart device application. After two months, their emotion discrimination and speechreading skills, and their gaze use during the emotion discrimination and speechreading tasks, will be examined again (on-site testing). Their use of the speechreading application will be explored by transferring the user data (how much they have used the application and how their speechreading skills have developed, as indicated by the scoring system built into the software) over a cable or Bluetooth connection; a sketch of how such exported data could be summarized follows this list.

2. A small group of children with impaired hearing will serve as controls; they will be tested remotely (via the Zoom application) two months after the initial testing, with no intervention between the initial and final assessments.
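As noted in item 1 above, summarizing the exported application data could look roughly like the sketch below. The export format (a per-session log with date, minutes trained and the built-in score) is purely hypothetical, since the protocol does not specify it.

```python
# Hypothetical sketch only: the application's real export format is not
# specified in the protocol. Assumes one CSV row per training session.
import csv
from io import StringIO

SAMPLE_EXPORT = """date,minutes_trained,score_percent
2024-01-10,15,40
2024-02-20,20,65
"""

def summarize_training(csv_text):
    rows = list(csv.DictReader(StringIO(csv_text)))
    total_minutes = sum(int(r["minutes_trained"]) for r in rows)
    first, last = int(rows[0]["score_percent"]), int(rows[-1]["score_percent"])
    return {"sessions": len(rows), "total_minutes": total_minutes,
            "score_change_points": last - first}

print(summarize_training(SAMPLE_EXPORT))
# {'sessions': 2, 'total_minutes': 35, 'score_change_points': 25}
```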

Recruitment & Eligibility

Status
RECRUITING
Sex
All
Target Recruitment
155
Inclusion Criteria

Normally hearing children:

  • age 8-11 years
  • being born full-term (at 37 weeks of gestation or later)
  • Finnish-speaking (Finnish is the language the child's family uses at home, and the child attends a school where Finnish is the language of instruction)
  • normal hearing and vision
  • typically developing, mainstream education curriculum at school
  • for those tested remotely: computer available at home for remote testing

Children with hearing impairment/loss:

  • age 8-11 years
  • diagnosed bilateral hearing impairment
  • being born full-term (at 37 weeks of gestation or later)
  • Finnish-speaking (Finnish is the language the child's family uses at home, and the child attends a school where Finnish is the language of instruction)
  • normal vision
  • (mainly) typically developing
  • for those tested remotely: computer available at home for remote testing
Exclusion Criteria

Normally hearing children: psychiatric and neurodevelopmental disorders, including ADHD (Attention Deficit Hyperactivity Disorder)

Children with hearing impairment/loss: psychiatric and neurodevelopmental disorders (excluding ADHD if medication helps the child to concentrate well during testing)


Study & Design

Study Type
INTERVENTIONAL
Study Design
PARALLEL
Arms & Interventions

Group: Children with hearing impairment/loss with intervention
Intervention: Speechreading training with an application
Description: These children participate in speechreading training for two months at home.
Primary Outcome Measures
Name: Level of performance in the emotions (facial expressions) + sentence-level speechreading task
Time: Two months
Method: Score obtained (percent correct, min 0, max 100; a higher score indicates a better result)

Name: Level of speechreading skill
Time: Two months
Method: Score obtained in the Children's speechreading test (percent correct, min 0, max 100; a higher score indicates a better result)

Secondary Outcome Measures
Name: Eye gaze use
Time: Two months
Method: The child's eye gaze use during speechreading is defined as the areas of the speaker's face attracting the most eye fixations (stops, i.e., periods when the eyes are relatively stationary) and their duration, and as the gaze path, i.e., which way the eyes move (from where to where) on the speaker's face while watching it. The areas of interest are the left eye, the right eye, the nose or nose/chin, the mouth, and other locations. The number of eye fixations and their duration are calculated: the higher the number of fixations and the longer the dwell time (fixation duration), the greater the interest in a certain part of the face (area of interest).

Trial Locations

Locations (1)

University of Oulu

🇫🇮 Oulu, Finland
