Performance Comparison of Junior Residents and ChatGPT in the Objective Structured Clinical Examination (OSCE) for Medical History Taking and Documentation of Medical Records: Development and Usability Study.
- Author(s): Huang TY; Hsieh PH; Chang YC
- Source:
JMIR medical education [JMIR Med Educ] 2024 Nov 21; Vol. 10, pp. e59902. Date of Electronic Publication: 2024 Nov 21.
- Publication Type:
Journal Article; Comparative Study
- Language:
English
- Additional Information
- Source:
Publisher: JMIR Publications; Country of Publication: Canada; NLM ID: 101684518; Publication Model: Electronic; Cited Medium: Internet; ISSN: 2369-3762 (Electronic); Linking ISSN: 2369-3762; NLM ISO Abbreviation: JMIR Med Educ; Subsets: MEDLINE
- Publication Information:
Original Publication: Toronto, ON : JMIR Publications, [2015]-
- Subject Terms:
- Abstract:
Background: This study explores the cutting-edge abilities of large language models (LLMs) such as ChatGPT in medical history taking and medical record documentation, with a focus on their practical effectiveness in clinical settings, an area vital to the progress of medical artificial intelligence.
Objective: Our aim was to assess the capability of ChatGPT versions 3.5 and 4.0 in performing medical history taking and medical record documentation in simulated clinical environments. The study compared the performance of nonmedical individuals using ChatGPT with that of junior medical residents.
Methods: A simulation involving standardized patients was designed to mimic authentic medical history-taking interactions. Five nonmedical participants used ChatGPT versions 3.5 and 4.0 to take medical histories and document medical records, mirroring the tasks performed by 5 junior residents in identical scenarios. A total of 10 diverse scenarios were examined.
Results: The medical documentation created by laypersons with ChatGPT assistance and that created by junior residents was evaluated by 2 senior emergency physicians using the audio recordings and the final medical records. The assessment used the Objective Structured Clinical Examination benchmarks in Taiwan as a reference. ChatGPT-4.0 exhibited substantial enhancements over its predecessor and met or exceeded the performance of its human counterparts in both checklist and global assessment scores. Although the overall quality of human consultations remained higher, ChatGPT-4.0's proficiency in medical documentation was notably promising.
Conclusions: The performance of ChatGPT-4.0 was on par with that of human participants in Objective Structured Clinical Examination evaluations, signifying its potential in medical history taking and medical record documentation. Despite this, the superiority of human consultations in terms of quality was evident. The study underscores both the promise and the current limitations of LLMs in the realm of clinical practice.
(© Ting-Yun Huang, Pei Hsing Hsieh, Yung-Chun Chang. Originally published in JMIR Medical Education (https://mededu.jmir.org).)
- Contributed Indexing:
Keywords: LLM; OSCE standards; clinical documentation; large language model; medical history taking; simulation-based evaluation
- Publication Date:
Date Created: 20241202 Date Completed: 20241202 Latest Revision: 20241205
- Publication Date:
20241209
- PMCID:
PMC11612517
- DOI:
10.2196/59902
- PMID:
39622713