In "Algorithmic discrimination in the credit domain: what do we know about it?", Ana Cristina Bicharra Garcia et al. explore a salient issue in today's world of ubiquitous artificial intelligence (AI) and machine learning (ML) applications: the intersection of algorithmic decision-making, fairness, and discrimination in the credit domain. Undertaking a systematic literature review drawing on five data sources, the study categorizes, analyzes, and synthesizes a wide array of existing work on the topic. Of an initial 1320 papers identified, 78 were ultimately selected for detailed review.
The research identifies and critically assesses the inherent biases and potential discriminatory practices in algorithmic credit decision systems, particularly regarding race, gender, and other sensitive attributes. A key observation is that existing studies tend to examine discriminatory effects of single sensitive attributes in isolation. The authors, however, highlight the relevance of Kimberlé Crenshaw's intersectionality theory, which emphasizes the compounded layers of discrimination that can emerge when multiple attributes intersect. The study further underscores discriminatory lending practices ranging from redlining, in which credit is denied on the basis of specific attributes, to 'reverse redlining', in which the same groups are instead targeted with exploitative, high-interest loans.
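To make the intersectionality point concrete, the toy Python sketch below uses purely hypothetical approval counts (not data from the reviewed study) to show how a credit system can appear balanced when gender and race are audited one at a time, yet strongly disadvantage applicants at their intersection.

```python
# Toy illustration with hypothetical counts (not data from the reviewed study).
# Each record is (gender, race, approved?).
decisions = (
    [("woman", "white", True)] * 45 + [("woman", "white", False)] * 5
    + [("woman", "black", True)] * 5 + [("woman", "black", False)] * 45
    + [("man", "white", True)] * 5 + [("man", "white", False)] * 45
    + [("man", "black", True)] * 45 + [("man", "black", False)] * 5
)

def approval_rate(records):
    """Fraction of the given applicant records that were approved."""
    return sum(approved for _, _, approved in records) / len(records)

# Audited one attribute at a time, the system looks perfectly balanced:
print(approval_rate([r for r in decisions if r[0] == "woman"]))   # 0.5
print(approval_rate([r for r in decisions if r[0] == "man"]))     # 0.5
print(approval_rate([r for r in decisions if r[1] == "black"]))   # 0.5
print(approval_rate([r for r in decisions if r[1] == "white"]))   # 0.5

# Audited at the intersection, a stark disparity appears:
print(approval_rate([r for r in decisions if r[:2] == ("woman", "black")]))  # 0.1
print(approval_rate([r for r in decisions if r[:2] == ("woman", "white")]))  # 0.9
```

Metrics aggregated over a single attribute can thus mask exactly the layered discrimination that Crenshaw's framework describes.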
In addition to mapping the landscape of algorithmic fairness and discrimination, the authors offer a critical examination of fairness definitions, of the technical limitations of fair algorithms, and of the difficult balance between broadening data sources and preserving data privacy. Their exploration of fairness reveals a lack of consensus on its definition; indeed, the many available metrics often lead to contradictory outcomes. Technical measures, the authors assert, have limits: a genuinely discrimination-free environment requires not just fair algorithms but also structural and societal change.
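The contradiction between metrics can likewise be shown in a few lines of code. The sketch below (again with hypothetical counts and standard metric definitions, unconnected to the paper's data) evaluates the same toy credit decisions against demographic parity and equal opportunity: because the two groups have different base rates of creditworthiness, the decisions satisfy one criterion while violating the other.

```python
# Hypothetical counts, not results from the reviewed paper: the same toy credit
# decisions satisfy demographic parity yet violate equal opportunity.

def positive_rate(y_pred):
    """Share of applicants approved (the basis of demographic parity)."""
    return sum(y_pred) / len(y_pred)

def true_positive_rate(y_true, y_pred):
    """Share of genuinely creditworthy applicants approved (equal opportunity)."""
    creditworthy = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    return sum(p for _, p in creditworthy) / len(creditworthy)

# Group A: 80 of 100 applicants are creditworthy; 50 of them are approved.
y_true_a = [1] * 80 + [0] * 20
y_pred_a = [1] * 50 + [0] * 30 + [0] * 20

# Group B: 40 of 100 are creditworthy; all 40 plus 10 others are approved.
y_true_b = [1] * 40 + [0] * 60
y_pred_b = [1] * 40 + [1] * 10 + [0] * 50

print(positive_rate(y_pred_a), positive_rate(y_pred_b))    # 0.5 vs 0.5: parity holds
print(true_positive_rate(y_true_a, y_pred_a),               # 0.625
      true_positive_rate(y_true_b, y_pred_b))               # 1.0: opportunity violated
```

Which group counts as fairly treated therefore depends on which definition is adopted, which is one reason the authors argue that a purely technical resolution is out of reach.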
In a broader philosophical context, the paper's exploration of algorithmic fairness and discrimination in the credit domain harks back to a fundamental question in the philosophy of technology: what is the impact of technology on society and on individual human beings? Algorithmic decision-making systems, as exemplified in this research, are not neutral tools; they are imbued with the biases and prejudices of the society from which they emerge, raising significant ethical concerns. The credit domain, with its inherent power dynamics and implications for individuals' livelihoods, is a potent illustration of how algorithmic biases can exacerbate societal inequalities. The philosophical debate around the agency of technology, the moral responsibilities of developers and users, and the consequences of technologically mediated discrimination is therefore highly relevant.
As for future research directions, the study presents multiple avenues. A pressing need is to examine the scope of discrimination beyond race, gender, and other commonly studied categories. A more nuanced understanding of intersectionality in algorithmic discrimination, including the simultaneous examination of multiple sensitive attributes, is equally vital. Further exploration of 'reverse redlining', particularly in the Global South, is also warranted. A compelling challenge is to arrive at a globally accepted definition of fairness that accounts for the cultural differences shaping societal perceptions. Lastly, the ethical implications of expanding data sources for credit evaluation while preserving individuals' privacy merit in-depth scrutiny. Through these avenues, we can aspire to develop more ethical, fair, and inclusive algorithmic systems, thereby addressing the philosophical concerns highlighted above.

