Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
Original Paper
Keywords: Artificial Intelligence in Medicine; Medical Education; Medicine (General); Radiology, Nuclear Medicine and Imaging; Family Medicine; Health Informatics; Applications of Deep Learning in Medical Imaging; Special Aspects of Education; Chatbots; Artificial Intelligence; Health Sciences; Artificial Intelligence in Service Industry; Computer Science; Physical Sciences; Pathology; Medicine; Psychology; Perception; Cross-Sectional Study; Neuroscience
DOI: 10.2196/50658
Publication Date: 2023-12-22
AUTHORS (3)
ABSTRACT
Background
ChatGPT is a well-known large language model–based chatbot with many potential applications in medicine. However, some physicians remain unfamiliar with ChatGPT and are uncertain about its benefits and risks.
Objective
We aimed to evaluate the perceptions of physicians and medical students regarding the use of ChatGPT in the medical field.
Methods
A web-based questionnaire was sent to medical students, interns, residents, and attending staff, with questions about their perceptions of using ChatGPT in clinical practice and medical education. Participants were also asked to rate a ChatGPT-generated response about knee osteoarthritis.
Results
Participants included 124 medical students, 46 interns, 37 residents, and 32 attending staff. After reading ChatGPT’s response, 132 of the 239 (55.2%) participants rated the use of ChatGPT for clinical practice positively. The proportion of positive ratings was significantly lower among graduated physicians (48/115, 42%) than among medical students (84/124, 68%; P<.001). Participants cited the lack of patient-specific treatment plans, the lack of updated evidence, and the language barrier as ChatGPT’s pitfalls. Regarding the use of ChatGPT for medical education, the proportion of positive responses was also significantly lower among graduated physicians (71/115, 62%) than among medical students (103/124, 83.1%; P<.001). Participants were concerned that ChatGPT’s responses were too superficial, might lack scientific evidence, and might require expert verification.
Conclusions
Medical students generally had a positive perception of using ChatGPT for guiding treatment and for medical education, whereas graduated physicians were more cautious in this regard. Nonetheless, both medical students and graduated physicians positively perceived the use of ChatGPT for creating patient education materials.