Variation in performance on common content items at UK medical schools

Abstract

Background: Because assessment systems differ across UK medical schools, meaningful cross-school comparisons of undergraduate students’ performance in knowledge tests are difficult. Ahead of the introduction of a national licensing assessment in the UK, we evaluated schools’ performance on a shared pool of “common content” knowledge test items, comparing candidates at different schools and evaluating whether they would pass under different standard-setting regimes. Such information can help develop a cross-school consensus on standard setting shared content.

Methods: We undertook a cross-sectional study in the academic sessions 2016-17 and 2017-18. Sixty “best of five” multiple choice “common content” items were delivered each year, with five used in both years. In 2016-17, 30 (of 31 eligible) medical schools undertook a mean of 52.6 items with 7,177 participants. In 2017-18, the same 30 medical schools undertook a mean of 52.8 items with 7,165 participants, giving a full sample of 14,342 medical students sitting common content prior to graduation. Using mean scores, we compared performance across items, carried out a “like-for-like” comparison of schools that used the same set of items, and then modelled the impact of different passing standards on these schools.

Results: Schools varied substantially in candidate total score, with large between-school differences in performance (Cohen’s d around 1). A passing standard at which 5% of candidates at high-scoring schools would fail left low-scoring schools with fail rates of up to 40%, whereas a passing standard at which 5% of candidates at low-scoring schools would fail would see virtually no candidates from high-scoring schools fail.

Conclusions: Candidates at different schools exhibited significant differences in scores in two separate sittings. Performance varied by enough that standards producing realistic fail rates at one medical school may produce substantially different pass rates at other medical schools, despite identical content and candidates being governed by the same regulator. Regardless of which hypothetical standards are “correct” as judged by experts, large institutional differences in pass rates must be explored and understood by medical educators before shared standards are applied. The study results can assist cross-school groups in developing a consensus on standard setting for a future licensing assessment.
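The modelled effect of applying a single shared passing standard across schools can be made concrete with a short simulation. The sketch below is purely illustrative and uses simulated normal score distributions, not the study’s data; the means, spread, and cohort sizes are hypothetical, chosen so the between-school gap is roughly Cohen’s d = 1, as reported in the abstract. It shows how a cut score calibrated to a 5% fail rate at a high-scoring school produces a much higher fail rate when applied unchanged at a low-scoring school.

```python
# Illustrative sketch only: simulated scores, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical percentage scores for a high- and a low-scoring school,
# with means about one pooled SD apart (Cohen's d ~ 1).
high = rng.normal(loc=70, scale=8, size=1000)
low = rng.normal(loc=62, scale=8, size=1000)

# Cohen's d using the pooled standard deviation.
pooled_sd = np.sqrt((high.var(ddof=1) + low.var(ddof=1)) / 2)
d = (high.mean() - low.mean()) / pooled_sd

# Cut score chosen so that 5% of the high-scoring school fails...
cut = np.percentile(high, 5)
# ...then applied unchanged to the low-scoring school.
fail_low = np.mean(low < cut) * 100

print(f"Cohen's d: {d:.2f}")
print(f"Cut score (5% fail at high-scoring school): {cut:.1f}")
print(f"Fail rate at low-scoring school: {fail_low:.1f}%")
```

With these assumed distributions, the same cut score fails roughly a quarter of the low-scoring cohort; larger between-school gaps push this toward the 40% figure reported in the study.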

Publication DOI: https://doi.org/10.1186/s12909-021-02761-1
Divisions: College of Health & Life Sciences > Aston Medical School
Additional Information: © The Author(s). 2021 Open Access - This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funding: The Medical Schools Council Assessment Alliance funded this research.
Uncontrolled Keywords: Cross-Sectional Studies; Education, Medical, Undergraduate; Educational Measurement; Humans; Schools, Medical; United Kingdom; Education
Publication ISSN: 1472-6920
Last Modified: 19 Mar 2024 08:28
Date Deposited: 18 Jun 2021 08:26
Related URLs: http://www.scop ... tnerID=8YFLogxK (Scopus URL)
https://bmcmede ... 21-02761-1#Abs1 (Publisher URL)
PURE Output Type: Article
Published Date: 2021-06-05
Accepted Date: 2021-05-17
Authors: Hope, David
Kluth, David
Homer, Matthew
Dewar, Avril
Fuller, Richard
Cameron, Helen (ORCID Profile 0000-0002-2798-2177)

Download

Version: Published Version

License: Creative Commons Attribution
