Uren, Victoria, Nouis, Saoudi CE and Jariwala, Srushti (2025). Evaluating Accountability, Transparency, and Bias in AI-Assisted Healthcare Decision-Making: A Qualitative Study of Healthcare Professionals’ Perspectives in the UK. BMC Medical Ethics (In Press)
Abstract
Background: While artificial intelligence (AI) has emerged as a powerful tool for enhancing diagnostic accuracy and streamlining workflows, key ethical questions remain insufficiently explored—particularly around accountability, transparency, and bias. These challenges become especially critical in domains such as pathology and blood sciences, where opaque AI algorithms and non-representative datasets can impact clinical outcomes. The present work focuses on a single NHS context and does not claim broader generalization.

Methods: We conducted a local qualitative study across multiple healthcare facilities in a single NHS Trust in the West Midlands, United Kingdom, to investigate healthcare professionals’ experiences and perceptions of AI-assisted decision-making. Forty participants—including clinicians, healthcare administrators, and AI developers—took part in semi-structured interviews or focus groups. Transcribed data were analyzed using Braun and Clarke’s thematic analysis framework, allowing us to identify core themes relating to the benefits of AI, ethical challenges, and potential mitigation strategies.

Results: Participants reported notable gains in diagnostic efficiency and resource allocation, underscoring AI’s potential to reduce turnaround times for routine tests and enhance detection of abnormalities. Nevertheless, accountability surfaced as a pervasive concern: while clinicians felt ultimately liable for patient outcomes, they also relied on AI-generated insights, prompting questions about liability if systems malfunctioned. Transparency emerged as another major theme, with clinicians emphasizing the difficulty of trusting “black box” models that lack clear rationale or interpretability—particularly for rare or complex cases. Bias was repeatedly cited, especially when algorithms underperformed in minority patient groups or in identifying atypical presentations. These issues raised doubts about the fairness and reliability of AI-assisted diagnoses.
Conclusions: Although AI demonstrates promise for improving efficiency and patient care, unresolved ethical complexities around accountability, transparency, and bias may erode stakeholder confidence and compromise patient safety. Participants called for clearer regulatory frameworks, inclusive training datasets, and stronger clinician–developer collaboration. Future research should incorporate patient perspectives, investigate long-term impacts of AI-driven clinical decisions, and refine ethical guidelines to ensure equitable, responsible AI deployment.
Divisions: College of Business and Social Sciences > Aston Business School > Operations & Information Management

Uncontrolled Keywords: Electronic Health Record, Artificial Intelligence, Clinical Decision-Making, Accountability, Transparency, Bias, Qualitative Research, Healthcare Ethics, Health (social science), SDG 3 - Good Health and Well-being
Last Modified: 17 Jun 2025 10:47
Date Deposited: 12 Jun 2025 09:05
PURE Output Type: Article
Published Date: 2025-06-11
Accepted Date: 2025-06-11
Authors:
Uren, Victoria
Nouis, Saoudi CE
Jariwala, Srushti