MedPath

Interpretation of Fetal Echocardiography by Artificial Intelligence

Conditions
Congenital Heart Disease
Interventions
Diagnostic Test: Routine ultrasound examination
Registration Number
NCT05090306
Lead Sponsor
University of Medicine and Pharmacy Craiova
Brief Summary

This study aims to design and develop an automated Intelligent Decision Support System for fetal echocardiography that can significantly assist the obstetric physician in improving the detection of fetal congenital heart disease compared with the current standard of care.

Detailed Description

Introduction:

Worldwide, Congenital Heart Disease (CHD) is the most common fetal malformation. The incidence of congenital heart disease is about 1 per 100 live-born infants and is even higher in infants who die before term (1). Fetal echocardiography (FE) has evolved from a mere description of the anatomical abnormalities of the heart toward quantitative assessment of its function, dimensions, and shape (2). Presently, FE is performed manually by the sonographer during the second trimester investigation. However, only half of the babies undergoing surgery within the first year of life have a prenatal diagnosis (3), explaining the need for improved fetal cardiac assessment. Many studies have shown a significant discrepancy between the pre- and postnatal diagnoses of CHD obtained by manually performed FE (4, 5).

Intelligent Decision Support Systems (ISs) are frameworks that can gather and analyze data, communicate with other systems, learn from experience, and adapt to new cases. Technically speaking, ISs are advanced machines that observe and respond to the environment they are exposed to using Artificial Intelligence (AI) (6). This project aims to foster a cross-fertilization of FE and ISs, which offers enormous potential for developing new fundamental theories and practical methods that rise above the boundaries of the disciplines involved and lead to new impactful methods that assist medical practice and discovery.

Methods and analysis:

The study to be performed is a cross-sectional study divided into two separate parts: the training of the machine learning approaches within the proposed framework, and the testing phase on previously unseen frames and eventually on actual video scans. All pregnant women in their first and second trimester are considered eligible for the study. Pregnant women will be admitted for their routine ultrasound examination and monitoring, first between 12-13+6 weeks of pregnancy (the first trimester anomaly scan) and/or between 18-24 weeks of pregnancy (the second trimester anomaly scan). Two-dimensional evaluation of each fetal heart will include a cine loop sweep obtained from the four-chamber view plane by moving the transducer cranially towards the upper mediastinum, allowing visualization of the following planes: four-chamber view, left and right ventricular outflow tracts, and three vessels and trachea view. All video files saved from the US devices will be collected into the cloud. Each ultrasound sweep will be split into frames by the OB-GYN/Cardio (OBC) department. The Data Science / IT department (DSIT) will process the frames to comply with the anonymization regulations. For key feature identification, the OBC will group the frames into classes that represent the plane views for each trimester. DSIT will apply different state-of-the-art pre-trained deep learning (DL) algorithms to the data set of plane views. All the recent DL entries will be tailored and tested on the two current scenarios: key view identification and semantic segmentation. Their performance (prediction against the ground truth marked by the OBC) will be compared by means of a statistical test. The accuracy-speed trade-off will be taken into account when ranking the approaches, since the system will ultimately run on video.
Once a new video becomes available in practice, the model chosen for the respective task will highlight the key feature or the segmented region on the video and also provide a degree of confidence in its recognition. The OBC physicians will validate all intermediary findings at frame level, as well as the meaningfulness of the video labelling and segmentation. The outcomes of the model on the first and second trimester videos of the same patient will be compared to assess any discrepancy.

AI (ARTIFICIAL INTELLIGENCE) ANALYSIS

Database construction:

The data set for AI analysis will be constructed from images extracted from ultrasound scans taken in the apical plane. Consequently, a classification problem with four categories corresponding to the given key views is considered. An additional Other class is added and populated with frames of no diagnostic interest from the scan.

Image preprocessing for fetal heart scans:

The first step is the extraction of the region of interest, performed by converting the image to gray scale and applying a threshold; in this way, the background noise is discarded. To eliminate the small bridges between the area of interest and other areas in the image, erosion is applied using a 10×10-pixel kernel. After erosion, besides the area of interest, several smaller "islands" may remain in the image. The zone of interest has two properties that distinguish it from the other islands: it always covers the central area of the image and it has the largest area. To eliminate the non-relevant islands, the spots that cover the central area of the image are first identified, then the spot with the highest surface coverage is selected. The detected spot is filled, and its dimension is restored by applying dilation with the same 10×10-pixel kernel. To avoid losing the fine details around the dilated spot, its convex hull is drawn and filled. Finally, the generated spot is used as a mask to extract the area of interest from the original image.
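The preprocessing steps above can be sketched in pure NumPy (the morphology is written out by hand to stay dependency-light). The kernel size, the threshold value, and the centre-pixel test for "covers the central area" are simplifying assumptions of this sketch, and the convex-hull step is omitted:

```python
import numpy as np

def erode(mask, k):
    # Binary erosion with a k x k square kernel (pure-NumPy sketch).
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k):
    # Binary dilation with the same square kernel.
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def label_components(mask):
    # 4-connected component labelling via an explicit stack.
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        count += 1
        stack = [(y, x)]
        labels[y, x] = count
        while stack:
            cy, cx = stack.pop()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    stack.append((ny, nx))
    return labels, count

def extract_roi(gray, thresh=30, k=3):
    # 1. threshold the grayscale image to discard background noise
    mask = (gray > thresh).astype(np.uint8)
    # 2. erode to break thin bridges between the ROI and other regions
    mask = erode(mask, k)
    # 3. keep the island covering the image centre (falling back to the
    #    largest island when none contains the centre pixel)
    labels, n = label_components(mask)
    h, w = gray.shape
    keep = labels[h // 2, w // 2]
    if not keep and n:
        keep = 1 + int(np.argmax([(labels == i).sum() for i in range(1, n + 1)]))
    spot = (labels == keep).astype(np.uint8) if keep else np.zeros_like(mask)
    # 4. dilate to restore the original extent (the convex-hull fill
    #    described above is omitted here)
    spot = dilate(spot, k)
    # 5. use the spot as a mask over the original image
    return gray * spot
```

A production version would use OpenCV's `cv2.erode`/`cv2.dilate` and `cv2.convexHull` with the 10×10 kernel; the hand-rolled versions here only illustrate the order of operations.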

Experimental results:

Three variants of the data collection are considered: the original double-sided (standard + Doppler) samples, the Doppler crops alone, and merged image pairs of the resulting standard and Doppler crops.
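Assuming the standard view fills the left half of each double-sided frame and the Doppler view the right half (a layout assumption of this sketch, as is merging the pair as two channels), the three variants can be derived per frame as:

```python
import numpy as np

def make_variants(frame):
    """Derive the three data-set variants from one double-sided sample:
    the untouched side-by-side frame, the Doppler crop alone, and the
    standard + Doppler crops merged as a two-channel pair."""
    h, double_w = frame.shape
    w = double_w // 2
    standard = frame[:, :w]        # left half: standard grayscale view
    doppler = frame[:, w:2 * w]    # right half: Doppler view
    merged = np.stack([standard, doppler])   # shape (2, h, w)
    return frame, doppler, merged
```

Each variant keeps a one-to-one correspondence with the original frames, which is what allows the three data sets to share the same class structure.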

The ResNet18 and ResNet50 architectures, with similar setups, are applied to each data set in turn to evaluate the suitability of the preliminary processing.

Since the data collections with cropped images contain representatives of all the initial images, all sets have the same number of items and the same structure per class, as well as the same training, validation, and test separations. For an objective evaluation, all images extracted from a given patient lie within the same separation of the data set, i.e. training, validation, or test. This holds even when multiple video files were recorded for a patient at different moments (e.g. weeks apart).
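The patient-level separation described above can be sketched with a grouped split (the split ratios, the `(patient_id, frame)` item shape, and the function name are illustrative assumptions):

```python
import random

def patient_level_split(items, train=0.7, valid=0.15, seed=0):
    """Partition (patient_id, frame) pairs so that every frame from one
    patient lands in the same separation (train / valid / test), even
    when the frames come from videos recorded weeks apart."""
    patients = sorted({pid for pid, _ in items})
    rng = random.Random(seed)
    rng.shuffle(patients)          # shuffle patients, not frames
    n_train = int(len(patients) * train)
    n_valid = int(len(patients) * valid)
    part = {}
    for i, pid in enumerate(patients):
        part[pid] = ("train" if i < n_train
                     else "valid" if i < n_train + n_valid else "test")
    return {name: [(p, f) for p, f in items if part[p] == name]
            for name in ("train", "valid", "test")}
```

Splitting at the patient level rather than the frame level prevents near-duplicate frames of the same heart from leaking between training and test sets, which would inflate the measured accuracy.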

The images are resized to 224 × 224 pixels. The 1cycle policy is used and the implementation relies on the fastai and PyTorch libraries. The initial model weights are obtained through transfer learning, pretrained on the ImageNet data set. The default options for data augmentation are used. The training process involves 2 steps, each containing only 10 epochs. Within the first training session, all layers are frozen except for the batch normalization layers and the head of the model. The learning rate controls how the weights of the network are adjusted with respect to the loss gradient; selecting a proper value is essential for making the model converge to a local minimum and thus reach improved accuracy in fewer epochs. The batch size is set to 32. The model that achieves the highest accuracy on the validation data is applied to the test set. For each of the 2 architectures and each separate data set, 5 repeated runs are made and the reported results are computed as the average over the 5 outcomes. The runs are made in Google Colab, using a Tesla T4 GPU. The gradient-weighted class activation mapping (Grad-CAM) approach is used for outlining the decisions of the model for the Aorta and Other labels for each data set. The plots are derived from the 5 runs computed for each distinct (architecture, data set) setup.
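The freeze-all-but-BatchNorm-and-head stage and the 1cycle schedule can be illustrated in plain PyTorch. This is a minimal sketch, not the actual setup: a toy convolutional model stands in for the pretrained ResNet, random tensors stand in for the ultrasound frames, and a smaller 64×64 input size and step count are used so the snippet runs quickly:

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained backbone: a small conv "body" followed
# by a linear "head" for the 5 classes (4 key views + Other).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 5),
)

# Stage 1: freeze everything except batch-norm layers and the head.
for module in model:
    trainable = isinstance(module, (nn.BatchNorm2d, nn.Linear))
    for p in module.parameters():
        p.requires_grad = trainable

steps, batch_size = 5, 32
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
# 1cycle learning-rate schedule, as used by fastai's fit_one_cycle
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-2, total_steps=steps)
loss_fn = nn.CrossEntropyLoss()

for _ in range(steps):
    x = torch.randn(batch_size, 3, 64, 64)   # stand-in for resized frames
    y = torch.randint(0, 5, (batch_size,))   # stand-in for view labels
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                          # frozen layers get no gradient
    opt.step()
    sched.step()

# Stage 2 would unfreeze all layers and repeat with a lower maximum rate.
```

In fastai itself, the frozen stage and the subsequent full fine-tuning stage are handled by `Learner.freeze()`, `fit_one_cycle`, and `Learner.unfreeze()`.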

Recruitment & Eligibility

Status
UNKNOWN
Sex
Female
Target Recruitment
1000
Inclusion Criteria
  • pregnant women in their first and second trimester
  • signed informed consent for the study
Exclusion Criteria
  • unknown outcome of pregnancy
  • age under 18 years old

Study & Design

Study Type
OBSERVATIONAL
Study Design
Not specified
Arm & Interventions
Group: Pregnant women in their first and second trimester
Intervention: Routine ultrasound examination
Description: Pregnant women in their first and second trimester will be examined using two-dimensional echocardiography of the fetal heart.
Primary Outcome Measures
Name: Development of an Intelligent Decision Support System for fetal echocardiography
Time: 24 months

The primary objective of this study is the design and development of the IS that can significantly assist the physician in improving the detection of fetal congenital heart disease compared with the common standard of care.

Secondary Outcome Measures
Name: Counseling aid for newly trained sonographers
Time: 36 months

The first secondary outcome is to improve the performance of inexperienced and newly trained sonographers regarding the quality of image acquisition.

Name: Improving the prenatal diagnosis
Time: 36 months

Another secondary outcome is to reduce the rate of diagnosis discrepancies between first and second trimester evaluations and between pre- and postnatal cardiac assessment.

Trial Locations

Locations (1)

University Emergency County Hospital


Craiova, Dolj, Romania
