Safe and Explainable AI
- Conditions
- Artificial Intelligence, Cardiology, Breast Cancer, Sepsis
- Registration Number
- NCT06694181
- Lead Sponsor
- Abramson Cancer Center at Penn Medicine
- Brief Summary
While current AI technology is suitable for automating some repetitive clinical tasks, technical challenges remain in solving critical, high-impact problems in patient and disease management. The proposed research seeks to address key issues in medical AI: integrating medical knowledge effectively, making AI recommendations explainable to clinicians, and establishing safety guarantees.
- Detailed Description
Not available
Recruitment & Eligibility
- Status
- NOT_YET_RECRUITING
- Sex
- All
- Target Recruitment
- 300000
Inclusion Criteria
Cardiology: Patients 18 years of age and older admitted to any of the Penn Medicine hospitals from 2017 to the present.
Sepsis: Patients 18 years of age and older at the time of presentation to an emergency department or admission to any Penn Medicine hospital from July 1, 2017, onward, as this represents the population at risk for acquiring sepsis.
Oncology: Patients 18 years of age and older with a diagnosis of invasive breast cancer (Stage 1-4) in the Penn Cancer Registry.
Exclusion Criteria
All prediction models will exclude patients under the age of 18 from their patient data sets.
Cardiology: Patients whose primary admission diagnosis was cardiac arrest.
Sepsis: Patients with pre-existing limitations on life-sustaining therapy will be excluded, because their eligibility for sepsis definitions, the care they receive, and their outcomes may be significantly and variably affected by those pre-existing limitations.
Oncology: No other exclusions.
Not provided
Study & Design
- Study Type
- OBSERVATIONAL
- Study Design
- Not specified
- Primary Outcome Measures
Neurosymbolic Learning Algorithms
- Time: Prototype and develop new learning algorithms (18 months); benchmark and evaluate the learning algorithms (24 months); publish research results (24 months).
- Method: Develop and evaluate novel algorithms for training neurosymbolic models. We will develop data- and compute-efficient algorithms for end-to-end training of neurosymbolic models. This task will reduce the burden on clinician experts to provide fine-grained labels on voluminous EHR data.
Explanation Methods
- Time: Prototype and develop new explanation algorithms (18 months); derive certified guarantees for explanations (18 months); benchmark and evaluate the explanation algorithms (24 months); extend certificates to new properties and tasks (30 months); Publ
- Method: We will develop new explainable AI techniques that come with verifiable guarantees. These guarantees will enable trust and transparency in AI at a fundamental level.
Methods for Safety Guarantees
- Time: Prototype and develop new rule learning algorithms (30 months); scale rule learning algorithms to larger data settings (36 months); incorporate new primitives to express complex rules (36 months); implement rule learning algorithms on baseline tasks.
- Method: We will develop new algorithms that can scalably extract, with statistical guarantees, the complex logical rules governing safety within the data. These techniques will be rooted in statistical analysis and will assist users in identifying out-of-distribution data and detecting anomalies.
- Secondary Outcome Measures
Name Time Method