Evaluating Accountability, Transparency, and Bias in AI-Assisted Healthcare Decision-Making: A Qualitative Study of Healthcare Professionals’ Perspectives in the UK

Abstract

Background: While artificial intelligence (AI) has emerged as a powerful tool for enhancing diagnostic accuracy and streamlining workflows, key ethical questions remain insufficiently explored—particularly around accountability, transparency, and bias. These challenges become especially critical in domains such as pathology and blood sciences, where opaque AI algorithms and non-representative datasets can impact clinical outcomes. The present work focuses on a single NHS context and does not claim broader generalization.

Methods: We conducted a local qualitative study across multiple healthcare facilities in a single NHS Trust in the West Midlands, United Kingdom, to investigate healthcare professionals’ experiences and perceptions of AI-assisted decision-making. Forty participants—including clinicians, healthcare administrators, and AI developers—took part in semi-structured interviews or focus groups. Transcribed data were analyzed using Braun and Clarke’s thematic analysis framework, allowing us to identify core themes relating to the benefits of AI, ethical challenges, and potential mitigation strategies.

Results: Participants reported notable gains in diagnostic efficiency and resource allocation, underscoring AI’s potential to reduce turnaround times for routine tests and enhance detection of abnormalities. Nevertheless, accountability surfaced as a pervasive concern: while clinicians felt ultimately liable for patient outcomes, they also relied on AI-generated insights, prompting questions about liability if systems malfunctioned. Transparency emerged as another major theme, with clinicians emphasizing the difficulty of trusting “black box” models that lack clear rationale or interpretability—particularly for rare or complex cases. Bias was repeatedly cited, especially when algorithms underperformed in minority patient groups or in identifying atypical presentations. These issues raised doubts about the fairness and reliability of AI-assisted diagnoses.
Conclusions: Although AI demonstrates promise for improving efficiency and patient care, unresolved ethical complexities around accountability, transparency, and bias may erode stakeholder confidence and compromise patient safety. Participants called for clearer regulatory frameworks, inclusive training datasets, and stronger clinician–developer collaboration. Future research should incorporate patient perspectives, investigate long-term impacts of AI-driven clinical decisions, and refine ethical guidelines to ensure equitable, responsible AI deployment.

Publication DOI: https://doi.org/10.1186/s12910-025-01243-z
Divisions: College of Business and Social Sciences > Aston Business School > Operations & Information Management
Additional Information: Copyright The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Uncontrolled Keywords: Electronic Health Record, Artificial Intelligence, Clinical Decision-Making, Accountability, Transparency, Bias, Qualitative Research, Healthcare Ethics, Health (social science), SDG 3 - Good Health and Well-being
Data Access Statement: The qualitative datasets (interview transcripts, focus group data) generated and analyzed during the current study are not publicly available to protect participant confidentiality. However, de-identified versions of the transcripts may be made available from the corresponding author upon reasonable request and with appropriate institutional approvals.
Last Modified: 23 Jul 2025 07:12
Date Deposited: 23 Jun 2025 11:13
Related URLs: https://bmcmede ... 910-025-01243-z (Publisher URL)
http://www.scop ... tnerID=8YFLogxK (Scopus URL)
PURE Output Type: Article
Published Date: 2025-07-08
Accepted Date: 2025-06-11
Authors: Nouis, Saoudi CE
Uren, Victoria (ORCID Profile 0000-0002-1303-5574)
Jariwala, Srushti
