Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Sinead O’Connor and Helen Liu investigate a pressing concern in contemporary artificial intelligence (AI) studies: the manifestation and amplification of gender bias within AI technologies. Through the close analysis and pairing of four case studies, the authors demonstrate the pervasiveness of gender bias across various forms of AI, focusing in particular on textual and visual algorithms. The highlighted studies underscore how AI, far from being an objective tool, can inadvertently perpetuate societal biases ingrained within training datasets, reproducing and at times amplifying existing societal asymmetries. Moreover, these studies reveal that even where de-biasing has been attempted, residual biases often persist due to the depth and complexity of discriminatory patterns.

In an innovative approach, the authors differentiate between bias perpetuation and bias mitigation, exploring this distinction in both text-based and image-based AI contexts. For text, they emphasize the problem of latent gendered word associations, where researchers must strike a delicate balance between retaining an algorithm’s utility and mitigating its bias. For image-based AI, they show that biases are entrenched not only within the algorithms themselves but also within the evaluative benchmarks used to test them; it is therefore important to scrutinize not merely the algorithms but also the standards used to assess their accuracy and bias perpetuation. The authors also offer an incisive critique of the methodological and conceptual issues underlying the treatment of bias in AI research, drawing attention to the often unaddressed question of what counts as ‘bias’ or ‘discrimination’.
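
To make the text-based problem concrete, the following is a minimal sketch, not drawn from the article, of how a latent gendered association can be read off word embeddings and then removed by projection (the ‘hard’ debiasing approach associated with Bolukbasi and colleagues). The vectors and word list are toy values invented for illustration; real analyses would use pretrained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Toy 3-dimensional "embeddings", invented for illustration only;
# real studies use pretrained vectors with hundreds of dimensions.
emb = {
    "he":       np.array([ 1.0, 0.1, 0.2]),
    "she":      np.array([-1.0, 0.1, 0.2]),
    "engineer": np.array([ 0.6, 0.8, 0.1]),
    "nurse":    np.array([-0.6, 0.7, 0.2]),
}

# A simple gender direction: the normalized difference he - she.
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

# Latent association: a word's projection onto the gender direction.
# Nonzero values mean the word carries gender signal.
for w in ("engineer", "nurse"):
    print(f"{w:9s} gender projection: {emb[w] @ g:+.3f}")

# 'Hard' debiasing: subtract each word's component along the gender
# direction, zeroing its direct gender association.
def debias(v, direction):
    return v - (v @ direction) * direction

for w in ("engineer", "nurse"):
    print(f"{w:9s} after debiasing:   {debias(emb[w], g) @ g:+.3f}")
```

Even after this projection step, debiased vectors can remain clustered by gender through their other dimensions, which is one concrete way the residual bias the authors describe can persist.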

The review then turns to policy guidelines that address these issues, citing initiatives such as the European Commission’s ‘Ethics Guidelines for Trustworthy AI’ and UNESCO’s report on AI and gender equality. Both initiatives aim to align AI with fundamental human rights and principles; the Commission’s guidelines, in particular, tie trustworthy AI to compliance with EU values and norms. The authors conclude with an insightful analysis of the dynamic relationship between gender bias in AI and broader societal structures, highlighting the need for regulatory efforts to manage this interplay.

Placed in a broader philosophical context, the article touches upon several key themes within the philosophy of technology. The first is the entwined relationship between technology and society. Drawing on scholars such as Orlikowski and Bryson, the authors illustrate how AI, as a socio-technical system, is deeply embedded within social structures and reflects societal biases. This notion challenges the conventional perception of technology as neutral and instead presents it as a socially constructed entity that both shapes and is shaped by society.

The second philosophical theme pertains to the ethics of AI. The authors highlight the necessity of ethical accountability and responsibility in the development and use of AI. This resonates with philosophical debates around morality in AI, raising questions about who should be held responsible for algorithmic biases and how they should be held accountable. By proposing cross-disciplinary and accessible approaches to AI research, the authors indirectly invoke the idea of “moral machines”: the notion that AI systems need to be designed with a nuanced understanding of human ethics.

Looking forward, it is essential to deepen the intersectional analysis of bias in AI systems. Future research could expand on the conceptualization and measurement of bias in AI, accounting for the diverse intersections of identities beyond gender, such as race, age, sexuality, and disability. There is also a critical need to explore how AI bias research can engage with non-binary and fluid conceptions of gender to provide a more comprehensive understanding of gender bias.

Abstract

Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper chooses to focus specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how understandings of bias could influence policy and outcomes. Building on a rich seam of literature from both technological and sociological fields, this article constructs an original framework through which to analyse both the perpetuation and mitigation of gender biases, choosing to categorize AI technologies based on whether their input is text or images. Through the close analysis and pairing of four case studies, the paper thus unites two often disparate approaches to the investigation of bias in technology, revealing the large and varied potential for AI to echo and even amplify existing human bias, while acknowledging the important role AI itself can play in reducing or reversing these effects. The conclusion calls for further collaboration between scholars from the worlds of technology, gender studies and public policy in fully exploring algorithmic accountability as well as in accurately and transparently exploring the potential consequences of the introduction of AI technologies.
