
Comutti - A Research Project Dedicated to Finding Smart Ways of Using Technology for a Better Tomorrow for Everyone, Everywhere.

Phase
Not Applicable
Status
Completed
Conditions
Autism Spectrum Disorder
Interventions
  • Diagnostic Test: Clinical evaluation of participants by means of the Autism Diagnostic Observation Schedule
  • Behavioral: audio signal dataset creation and validation; machine learning analysis, empirical evaluations
Registration Number
NCT05149144
Lead Sponsor
IRCCS Eugenio Medea
Brief Summary

According to the World Health Organization, worldwide one in 160 children has an autism spectrum disorder (ASD). Around 25% to 30% of these children are unable to use verbal language to communicate (non-verbal ASD) or are minimally verbal, i.e., use fewer than 10 words (mv-ASD). The ability to communicate is a crucial life skill, and difficulties with communication can have a range of negative consequences, such as poorer quality of life and behavioural difficulties. Communication interventions generally aim to improve children's ability to communicate either through speech or by supplementing speech with other means (e.g., sign language, pictures, or AAC - Augmentative and Alternative Communication tools). Individuals with non-verbal ASD or mv-ASD often communicate with people through vocalizations that in some cases have a self-consistent phonetic association with concepts (e.g., "ba" to mean "bathroom") or are onomatopoeic expressions (e.g., "woof" to refer to a dog). In most cases, however, vocalizations sound arbitrary; even though they vary in tone, pitch, and duration, it is extremely difficult to interpret the intended message or the emotional or physical state they convey. This creates a barrier between persons with ASD and the rest of the world that generates stress and frustration. Only caregivers with long-term acquaintance with these individuals are able to decode such wordless sounds and assign them unique meanings.

This project aims to define algorithms, methods, and technologies that identify the communicative intent of vocal expressions generated by children with mv-ASD, and to create tools that help people who are not familiar with these individuals understand them during spontaneous conversations.

Detailed Description

Not available

Recruitment & Eligibility

Status
COMPLETED
Sex
All
Target Recruitment
33
Inclusion Criteria
  • having a clinical diagnosis of autism spectrum disorder according to DSM-5 criteria
  • using fewer than 10 words
Exclusion Criteria
  • using any stimulant or non-stimulant medication affecting the central nervous system
  • having an identified genetic disorder
  • having vision or hearing problems
  • suffering from chronic or acute medical illness

Study & Design

Study Type
INTERVENTIONAL
Study Design
SINGLE_GROUP
Arms & Interventions
Group (Experimental): audio signal dataset creation and machine learning analysis
Interventions:
  • Diagnostic Test: Clinical evaluation of participants by means of the Autism Diagnostic Observation Schedule
  • Behavioral: audio signal dataset creation and validation; machine learning analysis, empirical evaluations
Description: audio signal dataset creation and processing; machine learning analysis, empirical evaluations
Primary Outcome Measures
Name: Accuracy of machine learning prediction
Time: immediately after the intervention
Method:

The classification accuracy of the machine learning analysis, i.e., the number of correct predictions divided by the total number of predictions, evaluated on a retained (held-out) test set of recorded audio signal samples.

This outcome measure will estimate the usability/utility of the developed tool for vocalization interpretation based on a machine learning analysis of the recorded audio signal samples.
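
As a rough illustration of this metric, the sketch below trains a classifier and scores it on a retained test split. The model choice (a random forest), the split ratio, and the feature/label arrays X and y are hypothetical placeholders; the registry does not specify any of them.

```python
# Illustrative sketch only: computes held-out classification accuracy.
# The classifier (a random forest), split ratio, and the feature/label
# arrays X and y are hypothetical -- the registry does not specify them.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def held_out_accuracy(X, y, test_size=0.2, seed=0):
    # Retain a stratified test set so every label is represented.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    clf = RandomForestClassifier(random_state=seed).fit(X_train, y_train)
    # Accuracy = correct predictions / total predictions.
    return accuracy_score(y_test, clf.predict(X_test))
```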

Name: Frequency of audio signal samples and their associated labels
Time: immediately after the intervention
Method:

Frequency (measured in number per hour) of audio signal samples (sounds and verbalizations) produced by each participant, recorded during hospital stays in various contexts (i.e., during educational interventions and/or moments of unstructured play) and labeled as self-talk, delight, dysregulation, frustration, request, or social exchange.

A small wireless recorder (Sony TX800 Digital Voice Recorder, TX Series) will be attached to the participant's clothing using strong magnets. Adults (caregivers and/or operators) will then associate the sounds produced by the child with an affective state and/or the probable meaning of the vocalization (the labels) through a web app.
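
A minimal sketch of how this per-hour frequency could be computed from the web-app annotations is shown below; the (timestamp, label) event format and the session-duration argument are assumptions, not details from the registry record.

```python
# Hypothetical sketch of the frequency measure: counts labeled events
# per hour. The (timestamp_s, label) event format and the session
# duration argument are assumptions, not part of the registry record.
from collections import Counter

def label_frequency_per_hour(events, session_duration_s):
    """events: iterable of (timestamp_s, label) pairs from the web app."""
    counts = Counter(label for _, label in events)
    hours = session_duration_s / 3600.0
    return {label: n / hours for label, n in counts.items()}

# Example: three labeled vocalizations in a one-hour recording session.
events = [(12.4, "request"), (95.0, "delight"), (130.2, "request")]
print(label_frequency_per_hour(events, 3600))  # {'request': 2.0, 'delight': 1.0}
```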

Name: Participant-specific harmonic features derived from the audio signal samples
Time: immediately after the intervention
Method:

Temporal and spectral audio features - i.e., pitch-related features, formant features, energy-related features, timing features, and articulation features - extracted from the samples and subsequently used for supervised and unsupervised machine learning analysis.

The collected audio signal samples will be segmented around the temporal locations of the labels and associated with the temporally adjacent labels (affective states or probable meanings of vocalizations). Audio harmonic features (temporal/phonetic characteristics) will then be identified for each participant using supervised/unsupervised machine learning analysis of the audio signal samples. Through this process, participant-specific patterns corresponding to specific communicative purposes or emotional states will be identified.
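
For illustration, the sketch below extracts a few of the named feature families (pitch-related, energy-related, timing) from one segmented clip using librosa; the toolchain, function choices, and parameters are assumptions, and formant/articulation features would require an additional tool such as Praat.

```python
# Illustration of the feature-extraction step for one segmented clip,
# using librosa (an assumed toolchain; the registry names no software).
# Formant and articulation features are omitted -- they would need an
# additional tool such as Praat via parselmouth.
import numpy as np
import librosa

def pitch_energy_features(path):
    y, sr = librosa.load(path, sr=None)      # keep the native sample rate
    f0, _, _ = librosa.pyin(                 # frame-wise fundamental frequency
        y, fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"), sr=sr)
    rms = librosa.feature.rms(y=y)[0]        # frame-wise energy
    f0 = f0[~np.isnan(f0)]                   # keep voiced frames only
    return {
        "f0_mean": float(np.mean(f0)) if f0.size else 0.0,   # pitch-related
        "f0_std": float(np.std(f0)) if f0.size else 0.0,
        "rms_mean": float(np.mean(rms)),                     # energy-related
        "duration_s": len(y) / sr,                           # timing
    }
```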

Secondary Outcome Measures
Not available

Trial Locations

Locations (1)

Scientific Institute, IRCCS Eugenio Medea


Bosisio Parini, Lecco, Italy
