Lai, U hin, Wu, Keng sam, Hsu, Ting-Yu and Kan, Jessie kai ching (2023). Evaluating the performance of ChatGPT-4 on the United Kingdom Medical Licensing Assessment. Frontiers in Medicine, 10.
Abstract
Introduction: Recent developments in artificial intelligence large language models (LLMs), such as ChatGPT, have allowed for the understanding and generation of human-like text. Studies have found that LLMs can perform well in various examinations, including law, business and medicine. This study aims to evaluate the performance of ChatGPT in the United Kingdom Medical Licensing Assessment (UKMLA).

Methods: Two publicly available UKMLA papers consisting of 200 single-best-answer (SBA) questions were screened. Nine SBAs were omitted as they contained images that were not suitable for input. Each question was assigned a specialty based on the UKMLA content map published by the General Medical Council. A total of 191 SBAs were input into ChatGPT-4 across three attempts over the course of 3 weeks (once per week).

Results: ChatGPT scored 74.9% (143/191), 78.0% (149/191) and 75.6% (145/191) on the three attempts, respectively. The average across all three attempts was 76.3% (437/573), with a 95% confidence interval of 74.46% to 78.08%. ChatGPT answered 129 SBAs correctly and 32 SBAs incorrectly on all three attempts. Across the three attempts, ChatGPT performed well in mental health (8/9 SBAs), cancer (11/14 SBAs) and cardiovascular (10/13 SBAs), and performed less well in clinical haematology (3/7 SBAs), endocrine and metabolic (2/5 SBAs) and gastrointestinal including liver (3/10 SBAs). Regarding response consistency, ChatGPT provided consistently correct answers in 67.5% (129/191) of SBAs, consistently incorrect answers in 12.6% (24/191) and inconsistent responses in 19.9% (38/191) of SBAs.

Discussion and conclusion: This study suggests that ChatGPT performs well in the UKMLA. Performance may correlate with specialty. LLMs' ability to correctly answer SBAs suggests that they could be utilised as a supplementary learning tool in medical education, with appropriate medical educator supervision.
Publication DOI: https://doi.org/10.3389/fmed.2023.1240915

Divisions: College of Health & Life Sciences > Aston Medical School

Additional Information: © 2023 Lai, Wu, Hsu and Kan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Uncontrolled Keywords: ChatGPT, Medical Licensing Examination, United Kingdom Medical Licensing Assessment, assessment, examination, medical education, medicine, Medicine(all)

Publication ISSN: 2296-858X

Last Modified: 03 May 2024 07:20

Date Deposited: 20 Sep 2023 09:09

Related URLs:
https://www.fro ... 0170000_ARTICLE (Publisher URL)
http://www.scop ... tnerID=8YFLogxK (Scopus URL)

PURE Output Type: Article

Published Date: 2023-09-19

Accepted Date: 2023-08-30

Authors:
Lai, U hin
Wu, Keng sam
Hsu, Ting-Yu
Kan, Jessie kai ching