Generating Fast and Slow for Entry-Level Medical Knowledge
- Conditions
- Patient Satisfaction
- Interventions
- Other: Answers generated by ChatGPT
- Registration Number
- NCT06247475
- Lead Sponsor
- National Taiwan University Hospital
- Brief Summary
The generative artificial intelligence tool ChatGPT has garnered widespread interest since its launch. This innovative platform has the potential to enhance medical communication and health education, thereby improving access to medical information and reducing the burden on healthcare professionals. Some studies have indicated that ChatGPT achieves higher satisfaction in counseling than human healthcare professionals, and its performance in answering objective structured clinical examination questions has been shown to be comparable to that of typical medical students. In both scenarios, however, its answers still require editing by professionals before use. Moreover, a recent meta-analysis evaluating ChatGPT's performance on various types of medical examinations reported inconsistent results. Before ChatGPT is applied and actually integrated into clinical practice, the investigators need to understand its advantages, disadvantages, and relevant limitations in the field of medical communication.

This study aims to simulate virtual consultations between ChatGPT, acting as a health professional, and study participants, serving as patients, and to evaluate participants' satisfaction with the answers to virtual consultation questions categorized by level of cognition according to Bloom's taxonomy. The study plans to recruit medical professionals, healthcare-related professionals, and medical students from National Taiwan University Hospital, as well as members of the general public. Two researchers will select 20 questions from the USMLE Step 3 practice tests and the second stage of the Taiwan Medical Licensing Examination and categorize them by level of cognition (knowledge, comprehension, application, analysis, synthesis, or evaluation). ChatGPT 3.5 and 4.0 will answer these 20 questions without being told each question's level of cognition. Each participant will review the answers to the 20 questions and assign a satisfaction score based on the appropriateness of each answer.
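The registry does not describe how the model answers will be collected; the sketch below only illustrates that step, assuming the OpenAI Python client with gpt-3.5-turbo and gpt-4 as stand-ins for "ChatGPT 3.5 and 4.0", hypothetical question records, and the Bloom's level recorded for later analysis but never shown to the model.

```python
# Illustrative sketch (assumed workflow, not specified in the registry):
# collect answers to the 20 exam questions from two model versions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical question records; the Bloom's level is kept for analysis
# but is not included in the prompt sent to the model.
questions = [
    {"id": 1, "bloom_level": "knowledge", "text": "..."},
    # ... 19 more items drawn from USMLE Step 3 practice tests and the
    # second stage of the Taiwan Medical Licensing Examination
]

answers = []
for model in ("gpt-3.5-turbo", "gpt-4"):
    for q in questions:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": q["text"]}],  # question text only
        )
        answers.append({
            "model": model,
            "question_id": q["id"],
            "answer": resp.choices[0].message.content,
        })
```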
- Detailed Description
Not available
Recruitment & Eligibility
- Status
- NOT_YET_RECRUITING
- Sex
- All
- Target Recruitment
- 120
- Inclusion Criteria
  - National Taiwan University Hospital
    - Medical professionals
    - Healthcare-related professionals
    - Medical students
  - Taiwan Emergency Medical Technician Association
    - Volunteers
- Exclusion Criteria
  - N/A
Study & Design
- Study Type
- OBSERVATIONAL
- Study Design
- Not specified
- Arms & Interventions
| Group | Intervention | Description |
| --- | --- | --- |
| General public | Answers generated by ChatGPT | Volunteers from the Taiwan Emergency Medical Technician Association |
| Medical students | Answers generated by ChatGPT | Medical students of National Taiwan University College of Medicine who have passed the first stage of the Taiwan Medical Licensing Exam |
| Medical professionals | Answers generated by ChatGPT | Attending physicians of National Taiwan University Hospital |
| Healthcare-related professionals | Answers generated by ChatGPT | Medical radiation technologists of National Taiwan University Hospital |
- Primary Outcome Measures
| Name | Time | Method |
| --- | --- | --- |
| Correlation between satisfaction and Bloom's taxonomy | Baseline (at completion of study questionnaire) | Divided by the four participant groups |
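The registry names the outcome but not the statistical test; one plausible analysis, sketched below under the assumption of ordinal coding of the six Bloom's levels and per-answer satisfaction ratings, is a Spearman rank correlation computed separately for each of the four participant groups.

```python
# Illustrative analysis sketch (the test and coding are assumptions, not
# stated in the registry): Spearman rank correlation between Bloom's level
# and satisfaction, computed per participant group.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical long-format ratings: one row per (participant group, question).
ratings = pd.DataFrame({
    "group": ["Medical professionals", "Medical students",
              "Healthcare-related professionals", "General public"] * 5,
    "bloom_level": [1, 2, 3, 4, 5] * 4,   # knowledge=1 ... evaluation=6 (assumed ordinal coding)
    "satisfaction": [4, 5, 3, 2, 4] * 4,  # assumed Likert-style score per answer
})

for group, sub in ratings.groupby("group"):
    rho, p = spearmanr(sub["bloom_level"], sub["satisfaction"])
    print(f"{group}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```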
- Secondary Outcome Measures
| Name | Time | Method |
| --- | --- | --- |
| Correctness of answers | Baseline (at completion of study questionnaire) | Answers from ChatGPT 3.5/4.0 compared with the answers provided by the Medical Licensing Exam institutions |
Trial Locations
- Locations (1)
National Taiwan University
🇹🇼 Taipei, Taiwan