AI-Based Radiographic Detection of Periodontal Defects
- Conditions
- Periodontitis
- Registration Number
- NCT07086625
- Lead Sponsor
- University of Cagliari
- Brief Summary
The primary objective of the study is to develop and validate a machine learning model for the automatic identification of periodontal vertical bone defects, improving diagnostic accuracy and efficiency.
The study comprises three phases:
1. Public dataset annotation: Approximately 7,000 intraoral radiographs will be manually annotated by experts to classify periodontal bone defects (1-wall, 2+ walls, craters, furcation involvement).
2. Model training: A deep learning algorithm will be trained on the annotated images to learn automatic recognition of the defects.
3. Clinical validation: The model will be tested on a dataset of 150 anonymized radiographs from 20-30 patients treated at AOU (Azienda Ospedaliero Universitaria) Cagliari, comparing its performance to expert dental evaluations.
- Detailed Description
To address the challenge of detecting periodontal osseous defects, the study will employ the YOLOv8 (You Only Look Once, version 8) framework, a state-of-the-art deep learning model optimized for object detection tasks. This architecture is known for its balance between accuracy and inference speed, making it suitable for clinical applications that require efficient processing.
The YOLOv8l (large) variant will be selected to maximize detection accuracy, given the complexity of the task. The architecture will include (see the loading sketch after this list):
* Backbone: a convolutional neural network (CNN) for multi-scale feature extraction;
* Neck: a feature pyramid network (FPN) to integrate spatial and semantic information across layers;
* Head: a detection module responsible for class probabilities and bounding box predictions.
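For orientation, the pretrained YOLOv8l weights can be loaded through the Ultralytics Python package, which assembles the backbone, neck, and head automatically. The snippet below is a minimal sketch under that assumption; the weight file name is the library's standard checkpoint, not a study artifact.

```python
from ultralytics import YOLO

# Load the large YOLOv8 variant (COCO-pretrained weights, assumed starting point).
# The CNN backbone, FPN-style neck, and detection head are built by the framework.
model = YOLO("yolov8l.pt")

# Summarize layers and parameter counts to verify the architecture.
model.info()
```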
Model Training
The training will be performed on a dataset consisting of approximately 406 images for training, 58 for validation, and 117 for testing. Annotations will include bounding boxes for four types of defects: 1-wall, 2+ walls, craters, and furcation involvement. The dataset will be formatted according to YOLO standards.
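YOLO-format datasets pair each radiograph with a plain-text label file of normalized bounding boxes and are described by a small YAML configuration. The sketch below illustrates one possible layout for the four defect classes; directory names, file names, and class indices are hypothetical.

```python
from pathlib import Path

# Hypothetical dataset configuration in the Ultralytics YOLO format;
# the four class indices correspond to the annotated defect types.
data_yaml = """\
path: perio_defects      # dataset root (assumed layout)
train: images/train      # ~406 training images
val: images/val          # 58 validation images
test: images/test        # 117 test images
names:
  0: one_wall
  1: two_plus_walls
  2: crater
  3: furcation
"""
Path("perio_defects.yaml").write_text(data_yaml)

# Each image has a matching .txt file with one line per annotated defect:
# "<class_id> <x_center> <y_center> <width> <height>", all normalized to [0, 1],
# e.g. "3 0.512 0.430 0.180 0.095" for a furcation involvement box.
```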
Key training parameters will include (see the training sketch after this list):
* Input resolution: 640 × 640 pixels
* Batch size: 16
* Optimizer: AdamW with a learning rate of 0.0014
* Epochs: 100
* Loss function: a combination of box, class, and distribution focal loss (DFL)
* Hardware: NVIDIA GeForce RTX 4090 with mixed precision training
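Assuming training is run through the Ultralytics API, the listed hyperparameters map onto a single training call roughly as follows. Argument names follow the public Ultralytics interface; the dataset file and run name are placeholders, and the box, class, and DFL loss terms are combined automatically by the framework's detection loss.

```python
from ultralytics import YOLO

model = YOLO("yolov8l.pt")

# Training configuration mirroring the protocol's hyperparameters
# (dataset path and run name are hypothetical).
results = model.train(
    data="perio_defects.yaml",  # dataset config sketched above
    imgsz=640,                  # 640 x 640 input resolution
    batch=16,                   # batch size
    optimizer="AdamW",          # optimizer
    lr0=0.0014,                 # initial learning rate
    epochs=100,                 # training epochs
    device=0,                   # single NVIDIA GPU
    amp=True,                   # mixed precision training
    name="perio_yolov8l",       # run directory (placeholder)
)
```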
Inference and Evaluation
Inference will be conducted on the test set, and model performance will be evaluated using standard object detection metrics (an illustrative IoU computation is sketched after the list below). These will include:
* Intersection over Union (IoU) to measure the overlap between predicted and ground truth bounding boxes
* Precision and recall, computed based on true positives, false positives, and false negatives
* Confidence threshold (initially set at 0.25) to filter out low-confidence predictions
* Precision-recall curves, to visualize the trade-off between precision and recall at varying thresholds
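For reference, IoU can be computed directly from the corner coordinates of a predicted and a ground truth box. The helper below is an illustrative sketch, not part of the study's evaluation pipeline.

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction is typically counted as a true positive when its IoU with a
# ground truth box meets the chosen threshold (e.g. 0.5) and its confidence
# exceeds the cutoff (0.25 in this protocol).
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14
```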
Performance will be summarized using mean Average Precision (mAP):
* mAP@0.5: mean average precision at a fixed IoU threshold of 0.5
* mAP@0.5:0.95: mean average precision averaged across multiple IoU thresholds (from 0.5 to 0.95 in 0.05 increments)
The model's detection capabilities will be assessed across all four classes of periodontal bone defects, providing a comprehensive evaluation of its diagnostic potential.
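Assuming the Ultralytics validation utilities are used for this step, precision, recall, mAP@0.5, and mAP@0.5:0.95 on the held-out test split can be read from the returned metrics object as sketched below; the checkpoint path and dataset file are hypothetical.

```python
from ultralytics import YOLO

# Load the trained weights (placeholder path for the best checkpoint).
model = YOLO("runs/detect/perio_yolov8l/weights/best.pt")

# Evaluate on the test split; conf=0.25 matches the protocol's confidence cutoff.
metrics = model.val(data="perio_defects.yaml", split="test", conf=0.25)

print("Precision:    ", metrics.box.mp)     # mean precision over classes
print("Recall:       ", metrics.box.mr)     # mean recall over classes
print("mAP@0.5:      ", metrics.box.map50)
print("mAP@0.5:0.95: ", metrics.box.map)
print("Per-class mAP@0.5:0.95:", metrics.box.maps)  # one value per defect class
```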
Recruitment & Eligibility
- Status
- COMPLETED
- Sex
- All
- Target Recruitment
- 500
- Intraoral radiographs showing presence of periodontal infrabony defects
- Intraoral radiographs without detectable periodontal infrabony defects
Study & Design
- Study Type
- OBSERVATIONAL
- Study Design
- Not specified
- Primary Outcome Measures
* Intersection over Union (IoU) (Time: Baseline): Measures the overlap between a predicted bounding box and a ground truth bounding box, defined as IoU = Area of Overlap / Area of Union, where the area of overlap is the intersection of the predicted and ground truth boxes and the area of union is the total area covered by both boxes.
* Precision (P) (Time: Baseline): The fraction of true positives (TP) among all predictions, P = TP / (TP + FP). High precision indicates that the model makes few false positive (FP) predictions.
* Recall (R) (Time: Baseline): The fraction of true positives among all ground truth objects, R = TP / (TP + FN), where FN denotes false negatives. High recall indicates that the model detects most ground truth objects.
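As a small numerical illustration of the precision and recall definitions above (the counts are invented for demonstration, not study results):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts: 80 correct detections, 10 spurious boxes, 20 missed defects.
p, r = precision_recall(tp=80, fp=10, fn=20)
print(round(p, 3), round(r, 3))  # 0.889 0.8
```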
- Secondary Outcome Measures
Not specified
Trial Locations
- Locations (1)
Università degli Studi di Cagliari
🇮🇹Cagliari, Italy