A recent study published in Scientific Reports has uncovered inconsistencies in the reporting of adverse events (AEs) in glaucoma intervention trials registered on ClinicalTrials.gov. The retrospective analysis, encompassing trials completed between September 2009 and November 2023, compared safety data reported in the registry with corresponding publications. The findings raise concerns about the transparency and completeness of safety reporting in clinical trials for glaucoma treatments.
The study focused on Phase 3 and 4 interventional trials, crucial for confirming efficacy and monitoring safety in larger populations. Researchers analyzed data from ClinicalTrials.gov, PubMed, Web of Science, Scopus, and Google Scholar, using the National Clinical Trial (NCT) identifier to match registry entries with published articles. The primary goal was to assess the consistency of AE reporting, including serious adverse events (SAEs), deaths, and other adverse events (OAEs).
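Matching registry entries to publications by NCT identifier is, in principle, a straightforward lookup. A minimal sketch of that step is shown below; the field names and data structures are hypothetical, since the study's actual screening was carried out through the databases' own search interfaces rather than a script like this.

```python
# Minimal sketch of linking registry records to publications via the NCT identifier.
# The dict keys ("nct_id", "title", "abstract") are assumptions for illustration only.
import re
from typing import Optional

NCT_PATTERN = re.compile(r"NCT\d{8}")

def extract_nct_id(text: str) -> Optional[str]:
    """Return the first NCT identifier found in free text, if any."""
    match = NCT_PATTERN.search(text)
    return match.group(0) if match else None

def match_publications(registry_trials: list[dict], publications: list[dict]) -> dict[str, list[dict]]:
    """Group publications under the registry trial whose NCT id they cite."""
    matches: dict[str, list[dict]] = {trial["nct_id"]: [] for trial in registry_trials}
    for pub in publications:
        nct = extract_nct_id(pub.get("title", "") + " " + pub.get("abstract", ""))
        if nct in matches:
            matches[nct].append(pub)
    return matches
```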
Discrepancies in Adverse Event Reporting
The analysis revealed significant discrepancies between the safety data reported on ClinicalTrials.gov and in the corresponding publications. These inconsistencies included differences in the number of affected participants, the total number of AEs reported, and the descriptions of AEs. The researchers noted that complete reporting in the registry should include tables summarizing the number of affected participants out of those at risk for each AE, as mandated by the Final Rule.
In publications, complete reporting of AEs was defined as an explicit statement regarding the occurrence of SAEs, deaths, or OAEs, in line with the CONSORT extension for improved reporting of harms in randomized trials. Any disparity in completeness, the number of affected participants, the total number of AEs, or the description of AEs between ClinicalTrials.gov and the publication was classified as inconsistent reporting of AEs.
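The consistency criterion described above amounts to a simple element-by-element comparison. The following sketch illustrates that logic under assumed data structures; the AESummary fields are hypothetical and are not taken from the study's actual extraction form.

```python
from dataclasses import dataclass

@dataclass
class AESummary:
    # Hypothetical summary of one AE category (SAEs, deaths, or OAEs) from one source.
    reported: bool              # was the category explicitly addressed?
    affected_participants: int  # participants affected, out of those at risk
    total_events: int           # total number of events reported
    description: str            # free-text description of the events

def is_inconsistent(registry: AESummary, publication: AESummary) -> bool:
    """Flag a trial as inconsistently reported if any element disagrees
    between the ClinicalTrials.gov entry and the matched publication."""
    return (
        registry.reported != publication.reported
        or registry.affected_participants != publication.affected_participants
        or registry.total_events != publication.total_events
        or registry.description.strip().lower() != publication.description.strip().lower()
    )
```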
Methodological Rigor and Statistical Analysis
To mitigate potential bias, two investigators independently extracted data from the trial cohort and the corresponding publications. Inter-rater reliability was high for SAE reporting in both ClinicalTrials.gov and the publications (kappa range 0.83 to 1.00). For OAE reporting, agreement was more variable but still substantial (kappa range 0.69 to 1.00). Disagreements were resolved through consensus discussion.
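Cohen's kappa, the agreement statistic quoted above, can be computed with standard libraries. The sketch below uses made-up extraction labels purely for illustration, since the raw double-extraction data are not published.

```python
# Illustrative inter-rater agreement check with Cohen's kappa.
# The label sequences are invented; they stand in for the two investigators'
# independent judgments on the same set of trials.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["complete", "complete", "incomplete", "not reported", "complete"]
rater_2 = ["complete", "incomplete", "incomplete", "not reported", "complete"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate near-perfect agreement
```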
Differences between the registry and the publications were tested with the chi-square test or Fisher exact test, or the Mann–Whitney U test, as appropriate. Statistical analyses were conducted in JASP, with significance set at P < 0.05.
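The authors ran these tests in JASP; for readers who prefer a scriptable environment, the equivalent tests look roughly like this in Python's scipy, with invented counts used purely for illustration.

```python
# Equivalent hypothesis tests in scipy; the contingency table and group values
# below are made up and do not come from the study.
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

# 2x2 table, e.g. consistent vs inconsistent SAE reporting across two trial groups
table = [[12, 8],
         [5, 15]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when expected cell counts are small

# Comparing a count outcome (e.g. number of reported AEs) between two groups
group_a = [3, 7, 2, 9, 4]
group_b = [10, 12, 6, 15, 8]
u_stat, p_mwu = mannwhitneyu(group_a, group_b)

print(f"chi-square p={p_chi2:.3f}, Fisher p={p_fisher:.3f}, Mann-Whitney p={p_mwu:.3f}")
```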
Implications for Clinical Practice and Research
The study's findings underscore the importance of rigorous safety data reporting in clinical trials. Inconsistent reporting of adverse events can hinder the accurate assessment of treatment risks and benefits, potentially impacting clinical decision-making. The researchers emphasize the need for standardized terminology and improved data extraction methods to enhance the reliability and transparency of safety reporting.
The authors suggest that stricter enforcement of regulatory requirements, such as the FDA Amendments Act of 2007 (FDAAA 801) and the Final Rule, is crucial to ensure comprehensive and accurate reporting of adverse events in clinical trials. They also recommend the use of standardized terminology systems, such as MedDRA, to facilitate consistent reporting across trials and publications.
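As a rough illustration of what terminology standardization buys, the sketch below maps free-text AE descriptions to a single preferred term before counts are compared; the mapping table is invented for illustration and is not drawn from the licensed MedDRA dictionary.

```python
# Hypothetical mapping of verbatim AE terms to MedDRA-style preferred terms.
# A shared preferred term lets registry and publication counts be compared directly.
VERBATIM_TO_PREFERRED = {
    "eye pain": "Eye pain",
    "ocular discomfort": "Eye pain",
    "raised iop": "Intraocular pressure increased",
    "elevated intraocular pressure": "Intraocular pressure increased",
}

def normalize_ae_term(verbatim: str) -> str:
    """Map a free-text AE description to a preferred term, falling back to the input."""
    return VERBATIM_TO_PREFERRED.get(verbatim.strip().lower(), verbatim)

print(normalize_ae_term("Raised IOP"))  # -> "Intraocular pressure increased"
```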