
FDA to Release Guidance on AI Use in Drug Development Amidst Rapid Technological Advancements


Key Insights

  • The FDA is expected to release new guidance by year's end on the use of AI in clinical trials and drug development to address rapid technological advancements.

  • AI's potential to improve drug efficacy, optimize dosing, and predict adverse effects is balanced against concerns about data quality, transparency, and patient safety.

  • The FDA emphasizes the need for ongoing AI performance monitoring and specialized tools to assess large language models in clinical settings, ensuring continuous improvement.

The U.S. Food and Drug Administration (FDA) is preparing to release new guidance on the use of artificial intelligence (AI) in clinical trials and drug development by the end of the year. This move comes as the rapid advancement of AI technology presents both opportunities and challenges for the pharmaceutical industry.

Addressing AI's Transformative Potential

AI and machine learning have demonstrated the potential to extract data from electronic health records and other sources, making inferences that can optimize drug dosing and predict adverse effects in specific patient populations. Khair ElZarrad, director of the Office of Medical Policy at the FDA's Center for Drug Evaluation and Research, noted that approximately 300 drug submissions to the FDA since 2016 have referenced AI use in some form.
AI can also improve clinical trial recruitment, screen compounds, and enhance post-market safety surveillance. However, these advancements also raise concerns about patient safety, data quality, and the reliability of AI algorithms.

Key Considerations for AI Implementation

Sarah Thompson Schick, counsel at Reed Smith, highlighted the importance of ensuring AI is "fit for the purposes of what you're doing." The anticipated guidance is likely to address how to keep AI models fit for purpose as they are continuously trained and improved for use in essential research and development activities, and how to mitigate the associated risks.
The FDA also published a special communication in the Journal of the American Medical Association (JAMA) outlining concerns about AI use in clinical research, medical product development, and clinical care. The agency emphasized the need for specialized tools to thoroughly assess large language models in their specific contexts and settings, as well as the importance of ongoing AI performance monitoring.

Ensuring Data Quality and Transparency

ElZarrad highlighted the variability in the quality, size, and representativeness of data sets used to train AI models. He stressed that responsible use of AI demands that the data used to develop these models be fit for purpose and fit for use. He also noted the difficulty of understanding how AI models are developed and how they arrive at their conclusions, suggesting the need for new approaches to transparency.
Data privacy issues, particularly those involving patient data and compliance with HIPAA and other federal and state laws, are also a significant concern. Schick noted that patient data used in AI development is generally aggregated and de-identified.

Industry's Proactive Approach

Despite the anticipation of FDA guidance, life sciences leaders are not waiting idly. Schick noted, "I don't think companies are waiting on the FDA, necessarily," indicating a proactive approach within the industry to address the challenges and opportunities presented by AI.