Use of Artificial Intelligence to Assess Trainee Communication Compared to Human Assessment
- Conditions
- Communication Skills
- Registration Number
- NCT07107880
- Lead Sponsor
- Stanford University
- Brief Summary
This study will evaluate whether a web-based artificial intelligence (AI) platform (Clinical Mind AI \[CMAI\], Stanford, CA) can assess communication skills in anesthesiology trainees, including residents and fellows, in the setting of disclosing medical errors. All participants will complete an AI-generated simulation remotely on the platform, and CMAI will assess trainee performance immediately after the simulation.
- Detailed Description
The goal of this study is to evaluate the accuracy of the CMAI platform in assessing participant performance following a voice-based AI simulation designed to help trainees practice disclosing medical errors. The platform will feature a custom clinical case, created using CMAI's patient creation tool, involving a discussion with the parent of a child who suffered a dental injury during intubation. The platform will provide an audio-based simulated encounter with the parent for the participant to interact with, assess trainee communication performance, and then deliver questionnaires to the trainee to determine usability and satisfaction. A human evaluator will also assess the trainees' performance using the same scales, and the investigators will compare the AI performance evaluation to the human evaluation. This study will allow the investigators to determine:
1. Reliability of the performance assessments of the CMAI platform compared to human raters
2. Usability of the CMAI audio voice model for simulated patient encounters
3. Satisfaction related to an innovative, educational technique
By evaluating these domains, we aim to determine the educational value of using simulated voice communication for training in emotionally complex clinical scenarios.
Recruitment & Eligibility
- Status
- NOT_YET_RECRUITING
- Sex
- All
- Target Recruitment
- 45
- Inclusion Criteria
- Participants must be 18 years of age or older
- Graduate medical trainees, such as residents and fellows at all levels of training (excluding PGY-1 trainees), at the Department of Anesthesiology, Perioperative, and Pain Medicine at Stanford University
- Able to speak and understand English
- Willing and able to provide consent to participate in research
- Able to participate in the artificial intelligence communication simulation
- Exclusion Criteria
- Non-English speakers
- Unable to provide consent to participate in research
- Individuals with no access to the necessary technology (internet, computer/smart device) required to participate in the digital, voice-based interaction with the AI platform
Study & Design
- Study Type
- INTERVENTIONAL
- Study Design
- SINGLE_GROUP
- Primary Outcome Measures
Name Time Method Reliability of AI Conversational Performance Assessment Immediately after the simulation The primary outcome is to evaluate the reliability of the Clinical Mind AI (CMAI) platform in accurately assessing the communication skills of medical education trainees during a simulated interaction. To measure this outcome, CMAI will use the Breaking Bad News Assessment Schedule (BBAS), a validated tool assessing participants' communication skills in five domains: 1) Setting the scene, 2) Breaking the news, 3) Eliciting concerns, 4) Information giving, and 5) Empathy and support. It includes 17 items, each with sub-questions, rated on a 5-point Likert scale (1-5), where score meanings may vary by question.
The scoring will be guided by the Breaking Bad News Rubric, which outlines performance criteria for each item on the BBAS. In addition, two trained human evaluators will independently assess the participants' communication skills using the same BBAS tool and rubric. Lastly, the CMAI and human scores will be compared to determine the reliability of the CMAI platform.
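The registry text does not specify which agreement statistic will be used to compare CMAI and human scores. As one plausible illustration for ordinal Likert ratings, a quadratically weighted Cohen's kappa can be computed between the AI's and a human rater's item scores; the function and all scores below are hypothetical, not part of the study protocol.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, categories=5):
    """Cohen's quadratically weighted kappa for two raters scoring the
    same items on an ordinal 1..categories scale (e.g., 5-point Likert)."""
    assert len(rater_a) == len(rater_b) and rater_a, "paired, non-empty ratings"
    n, k = len(rater_a), categories
    obs = Counter(zip(rater_a, rater_b))      # observed joint counts
    pa, pb = Counter(rater_a), Counter(rater_b)  # marginal counts per rater
    num = den = 0.0
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            w = (i - j) ** 2 / (k - 1) ** 2   # quadratic disagreement weight
            num += w * obs.get((i, j), 0) / n               # observed disagreement
            den += w * (pa.get(i, 0) / n) * (pb.get(j, 0) / n)  # chance disagreement
    return 1.0 - num / den

# Hypothetical AI and human scores on the 17 BBAS items (illustrative only)
ai    = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 3, 4, 4, 2, 5, 3, 4]
human = [4, 4, 3, 4, 3, 5, 4, 3, 5, 5, 3, 4, 4, 2, 4, 3, 4]
print(round(quadratic_weighted_kappa(ai, human), 3))  # → 0.847
```

A kappa of 1.0 indicates perfect agreement and 0 indicates chance-level agreement; the quadratic weights penalize large score discrepancies more than adjacent-category ones, which suits ordinal Likert data.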
- Secondary Outcome Measures
Name Time Method Usability of CMAI Simulation Platform Immediately after the simulation The perceived usability of the CMAI platform will be evaluated using a usability questionnaire. This questionnaire consists of 14 items, each offering five response options. Participants will rate each item on a scale from 1-5, where score meanings may vary by question.
Satisfaction with CMAI Simulation Immediately after the simulation Participants' satisfaction levels will be evaluated using a modified version of the Questionnaire on Satisfaction with Teaching Innovation (QSTI) survey. The survey consists of five items, each rated on a scale of 1-5, where 1 = Strongly Disagree and 5 = Strongly Agree. Higher scores indicate greater satisfaction.