Assessing the utility of ChatGPT as an artificial intelligence‐based large language model for information to answer questions on myopia

Abstract

Purpose: ChatGPT is an artificial intelligence language model that uses natural language processing to simulate human conversation. It has seen a wide range of applications, including healthcare education, research and clinical practice. This study evaluated the ability of ChatGPT to provide accurate, good-quality information in answer to questions on myopia.

Methods: A series of 11 questions (spanning nine categories: general summary, cause, symptom, onset, prevention, complication, natural history, treatment and prognosis) was generated for this cross-sectional study. Each question was entered five times into fresh ChatGPT sessions (free from the influence of prior questions). The responses were evaluated by a five-member team of optometry teaching and research staff. The evaluators individually rated the accuracy and quality of the responses on a Likert scale, where a higher score indicated greater quality of information (1: very poor; 2: poor; 3: acceptable; 4: good; 5: very good). Median scores for each question were estimated and compared between evaluators. Agreement between the five evaluators and the reliability statistics of the questions were estimated.

Results: Of the 11 questions on myopia, ChatGPT provided good-quality information (median score: 4.0) for 10 questions and acceptable responses (median score: 3.0) for one question. Of 275 responses in total, 66 (24%) were rated very good, 134 (49%) good, 60 (22%) acceptable, 10 (3.6%) poor and 5 (1.8%) very poor. A Cronbach's α of 0.807 indicated a good level of agreement between test items. Evaluators' ratings demonstrated 'slight agreement' (Fleiss's κ = 0.005), with a significant difference in scoring among the evaluators (Kruskal–Wallis test, p < 0.001).

Conclusion: Overall, ChatGPT generated good-quality information in answer to questions on myopia. Although ChatGPT shows great potential for rapidly providing information on myopia, the presence of inaccurate responses demonstrates that further evaluation and awareness of its limitations are crucial to avoid potential misinterpretation.
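The statistical workflow outlined in the abstract (median Likert scores per question, Cronbach's α as a reliability measure, Fleiss' κ for inter-rater agreement and a Kruskal–Wallis comparison of evaluators) can be illustrated with a minimal Python sketch. This is not the authors' analysis code: the 55 × 5 rating matrix, the grouping of rows into questions and the choice to treat the five evaluators as the "items" for Cronbach's α are assumptions made purely for illustration.

```python
# Minimal sketch of the analysis described in the abstract, run on synthetic data.
# Assumptions (illustrative only, not taken from the paper): ratings are held in a
# 55 x 5 integer array -- 11 questions x 5 fresh ChatGPT sessions = 55 responses,
# each scored 1-5 by 5 evaluators -- and evaluators are treated as the "items"
# when computing Cronbach's alpha.
import numpy as np
from scipy.stats import kruskal
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(55, 5))        # placeholder for the real Likert scores

# Median score per question: rows come in blocks of 5 repeated sessions.
per_question = ratings.reshape(11, 5, 5)          # question x session x evaluator
median_per_question = np.median(per_question, axis=(1, 2))

# Cronbach's alpha across the five evaluator columns.
k = ratings.shape[1]
item_variances = ratings.var(axis=0, ddof=1).sum()
total_variance = ratings.sum(axis=1).var(ddof=1)
cronbach_alpha = k / (k - 1) * (1 - item_variances / total_variance)

# Fleiss' kappa: convert raw rater scores into per-response category counts.
counts, _ = aggregate_raters(ratings)             # 55 x n_categories table
kappa = fleiss_kappa(counts, method="fleiss")

# Kruskal-Wallis test comparing the five evaluators' rating distributions.
h_stat, p_value = kruskal(*(ratings[:, j] for j in range(k)))

print(median_per_question, cronbach_alpha, kappa, h_stat, p_value)
```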

Publication DOI: https://doi.org/10.1111/opo.13207
Divisions: College of Health & Life Sciences > School of Optometry > Optometry
College of Health & Life Sciences
College of Health & Life Sciences > School of Optometry > Optometry & Vision Science Research Group (OVSRG)
College of Health & Life Sciences > School of Optometry > Vision, Hearing and Language
Additional Information: Copyright © 2023 The Authors. Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of the College of Optometrists. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Uncontrolled Keywords: ChatGPT, artificial intelligence, chatbot response, myopia, patient information, Sensory Systems, Ophthalmology, Optometry
Publication ISSN: 1475-1313
Last Modified: 05 Feb 2024 09:18
Date Deposited: 24 Jul 2023 13:42
Related URLs: https://onlinel ... .1111/opo.13207 (Publisher URL)
http://www.scop ... tnerID=8YFLogxK (Scopus URL)
PURE Output Type: Article
Published Date: 2023-07-21
Published Online Date: 2023-07-21
Accepted Date: 2023-07-11
Authors: Biswas, Sayantan (ORCID Profile 0000-0001-6011-0365)
Logan, Nicola S. (ORCID Profile 0000-0002-0538-9516)
Davies, Leon N. (ORCID Profile 0000-0002-1554-0566)
Sheppard, Amy L. (ORCID Profile 0000-0003-0035-8267)
Wolffsohn, James S. (ORCID Profile 0000-0003-4673-8927)

