A critical exploration of ethical discourses around the adoption of an algorithmic tool by a criminal justice case study in Europe: A Foucauldian perspective

Abstract

As the use of algorithmic technologies for key organisational and work processes grows, the AI/algorithm ethics literature continues to raise important concerns about the potential moral risks associated with their use, such as the unintended biases that may stem from algorithmic decision-making. More recently, research has begun to highlight the roles of human self-reflexivity and resistance, calling for further work in this burgeoning stream of AI/algorithm ethics. This thesis therefore builds upon these underexplored ethical nuances in algorithmic work practice, utilising a Foucauldian lens – in particular, drawing on Foucault’s theories of discourse, governmentality, and resistance/ethics – to explore the ethical discourses and actions that emerged when a large, complex criminal justice organisation (based in a European country) adopted algorithmic tools to aid its key decision-making activities. Data were collected through 38 semi-structured qualitative interviews with different organisational actors, supplemented by a range of organisational documents. Using Foucauldian Discourse Analysis, I found that the adoption of algorithmic technologies in this particular service was steered and supported by the scientific power/knowledge of data scientists. I also found that whilst, for data scientists and senior leaders, transparent (and ethical) work practice is achieved through the use of algorithms and data-driven tools, there is a nascent discursive shift amongst many frontline practitioners. This discursive shift highlights practitioners’ agency, self-reflexivity and awareness of the shortcomings and potential ethical risks of algorithms. I argue that practitioners’ awareness and, in some cases, subtle resistance against algorithms exemplify how ethical practice is crystallised in algorithmic work environments. By applying a Foucauldian lens, this thesis contributes to the organisational ethics and AI/algorithm ethics literatures, highlighting ethical nuances in relation to the marginalisation of employee voices (discourses) through the governmentality of algorithmic work. Moreover, this research furthers understanding of how those marginalised discourses shift towards subtle, active resistance and an expansion of the space for ethical practice.

Publication DOI: https://doi.org/10.48780/publications.aston.ac.uk.00047816
Divisions: College of Business and Social Sciences > Aston Business School
Additional Information: Copyright © Ali Gordjahanbeiglou, 2024. Ali Gordjahanbeiglou asserts their moral right to be identified as the author of this thesis. This copy of the thesis has been supplied on condition that anyone who consults it is understood to recognise that its copyright rests with its author and that no quotation from the thesis and no information derived from it may be published without appropriate permission or acknowledgement. If you have discovered material in Aston Publications Explorer which is unlawful, e.g. breaches copyright (either yours or that of a third party) or any other law, including but not limited to those relating to patent, trademark, confidentiality, data protection, obscenity, defamation, or libel, then please read our Takedown Policy and contact the service immediately.
Institution: Aston University
Uncontrolled Keywords: AI, Algorithmic work practice, Ethical discourses, Foucauldian perspective, Governmentality, Criminal justice organisation
Last Modified: 15 Jul 2025 16:40
Date Deposited: 15 Jul 2025 16:38
Completed Date: 2024-09
Authors: Gordjahanbeiglou, Ali