Remedies Against Bias In Analytics Systems


Advances in information technology make it possible to develop ever more complex predictive and prescriptive systems based on analytics. Organizations are beginning to rely on the outputs of these systems without inspecting them, especially when the systems are embedded in the organization’s operational systems. This reliance is misplaced if the systems contain bias: acting on biased outputs is at best unethical and at worst illegal. Data, algorithms and machine learning methods are all potentially subject to bias. In this article we explain the ways in which bias can arise in analytics systems, present examples where this has happened, and suggest how to prevent or reduce it. We use a framework inspired by the work of Hammond, Keeney and Raiffa (1998, reprinted 2006) on psychological traps in human decision-making. Each of these traps “translates” into a potential type of bias in an analytics-based system. Fortunately, this also means that remedies for bias in human decision-making translate into potential remedies for algorithmic systems.
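As a hedged illustration of the kind of bias the abstract refers to (our own sketch, not a method from the article), the following Python function computes the "demographic parity difference": the gap in positive-prediction rates between two groups served by a predictive system. A large gap is one simple warning sign that an analytics system's outputs should be inspected rather than relied on blindly. All names here are illustrative assumptions.

```python
# Hypothetical illustration (not from the article): a minimal check for one
# common symptom of bias in a predictive system -- unequal rates of positive
# predictions across two groups ("demographic parity difference").

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g. "A"/"B"), same length
    """
    counts = {}  # group -> (total seen, positive predictions)
    for pred, grp in zip(predictions, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + pred)
    (t1, p1), (t2, p2) = counts.values()
    return abs(p1 / t1 - p2 / t2)

# Example: group "A" is approved 3 times out of 4, group "B" only once.
preds  = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near 0 suggests the two groups receive positive predictions at similar rates; whether any remaining gap is acceptable is a judgment the article argues organizations must make explicitly rather than by default.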

Divisions: College of Business and Social Sciences > Aston Business School > Operations & Information Management
Additional Information: This is an Accepted Manuscript of an article published by Taylor & Francis Group in Journal of Business Analytics on 29 June 2019, available online at:
Publication ISSN: 2573-2358
Last Modified: 29 Nov 2023 12:22
Date Deposited: 20 Jun 2019 08:10
Full Text Link: 10.1080/2573234X.2019.1633890
Related URLs: https://www.tan ... 4X.2019.1633890 (Publisher URL)
PURE Output Type: Article
Published Date: 2019
Published Online Date: 2019-06-29
Accepted Date: 2019-06-10
Authors: Edwards, John S
Rodriguez, Eduardo

Version: Accepted Version
