Lena Podoletz investigates the use of emotional Artificial Intelligence (AI) in law enforcement and criminal justice, critically examining the technology’s sociopolitical, legal, and ethical ramifications and situating the analysis within the broader landscape of technological trends and potential future applications.
The opening part of the article examines emotion recognition AI: its definition, its functionality, and the scientific foundations that inform its development. In dissecting these aspects, the author emphasizes the discrepancy between the common understanding of emotions and the way emotions are algorithmically conceptualized and processed. Central to this discussion is the recognition that emotional AI, at its current stage of development, relies heavily on theoretical constructs such as the ‘basic emotions theory’ (which posits a small set of universal emotions with distinct expressions) and the ‘circumplex model’ (which arranges emotional states along continuous dimensions of valence and arousal). The limitations and biases of these constructs can significantly undermine the technology’s effective and ethical application in law enforcement and criminal justice contexts.
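To make that algorithmic reduction concrete, the following is a minimal, hypothetical sketch of how a circumplex-style classifier might discretize continuous valence and arousal estimates into emotion labels. The function names, quadrant labels, and score are illustrative assumptions, not a description of any system discussed in the article.

```python
import math

# Hypothetical sketch of a circumplex-model classifier. The valence and
# arousal inputs (both in [-1, 1]) would in practice come from an
# upstream model analyzing faces, voice or text; here they are assumed.

# Coarse labels for the four quadrants of the valence/arousal plane.
QUADRANT_LABELS = {
    (True, True): "excited/happy",    # positive valence, high arousal
    (False, True): "angry/afraid",    # negative valence, high arousal
    (False, False): "sad/bored",      # negative valence, low arousal
    (True, False): "calm/content",    # positive valence, low arousal
}

def classify_affect(valence: float, arousal: float) -> tuple[str, float]:
    """Map a (valence, arousal) point to a coarse emotion label.

    Returns the label and the distance from the neutral origin, a
    crude stand-in for the system's 'confidence' in its inference.
    """
    label = QUADRANT_LABELS[(valence >= 0.0, arousal >= 0.0)]
    intensity = math.hypot(valence, arousal)
    return label, intensity

# A mildly negative, highly aroused reading is labeled "angry/afraid"
# even though the underlying state is ambiguous.
print(classify_affect(-0.2, 0.8))  # ('angry/afraid', 0.8246...)
```

Even this toy discretization shows how a rich, context-dependent inner state collapses into a coarse label and a single score, which is exactly the kind of reduction the author problematizes.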
Subsequent sections of the article provide a rigorous evaluation of four areas of concern: accuracy and performance; bias; accountability; and privacy alongside other rights and freedoms. The author underscores the need to distinguish between different uses of emotional AI, stressing that the challenges it presents in a law enforcement setting differ significantly from those arising in other contexts, such as private homes or smart health environments. The examination extends to bias in algorithmic decision-making, where existing societal biases can be reproduced and amplified, as sketched below. The complex issue of accountability in emotional AI is also dissected, particularly the difficulty of attributing responsibility for decisions made by such systems. Finally, the author explores the intersection of emotional AI technologies with privacy and other human rights, showing that deploying these systems can challenge individual autonomy and human dignity.
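To illustrate what ‘reproduced and amplified’ bias can mean operationally, here is a minimal, hypothetical audit sketch comparing false positive rates of an imagined emotion classifier across two demographic groups. The records, group names, and labels are invented for illustration and are not drawn from the article or any real system.

```python
# Hypothetical bias audit: compare how often truly neutral people in
# each group are wrongly labeled "angry". All data below is invented.

records = [
    # (group, predicted_label, true_label)
    ("group_a", "angry", "neutral"),
    ("group_a", "neutral", "neutral"),
    ("group_b", "angry", "neutral"),
    ("group_b", "angry", "neutral"),
    ("group_b", "neutral", "neutral"),
]

def false_positive_rate(records, group, label="angry"):
    """Share of cases in `group` whose true label is not `label`
    but which the classifier nevertheless predicted as `label`."""
    negatives = [r for r in records if r[0] == group and r[2] != label]
    if not negatives:
        return 0.0
    return sum(r[1] == label for r in negatives) / len(negatives)

for group in ("group_a", "group_b"):
    print(group, round(false_positive_rate(records, group), 2))
# group_a 0.5
# group_b 0.67
```

A disparity like this, once fed into stop-and-search or threat-assessment decisions, is one mechanism by which an ostensibly neutral system can entrench existing patterns of over-policing.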
The thematic concerns presented in the article echo the larger philosophical discourse on the role and implications of AI in society. The author’s evaluation of emotional AI aligns with post-humanist thought, which questions sharp dualisms between human and machine and problematizes the reduction of complex human behaviors and emotions into codified, algorithmic processes. The exploration of bias, accountability, and privacy ties into ongoing debates around the ethics of AI, especially concerning fairness, transparency, and justice in algorithmic decision-making. Moreover, the question of who bears responsibility when AI systems make mistakes or violate rights brings into focus the legal and philosophical concept of moral agency in the age of advanced AI.
Future research might examine how emotional AI within law enforcement and criminal justice systems could be better regulated or standardized to address the concerns highlighted here. It would be valuable to explore legislative and technical measures to mitigate bias, improve accuracy, and establish clear lines of accountability. Further philosophical work is also needed to unpack the implications of emotional AI for our understanding of human emotions, agency, and rights in an increasingly technologized society. Finally, in line with the philosophy of futures studies, it would be beneficial to conceive of alternative trajectories for the development and deployment of emotional AI that are anchored in ethical foresight and participatory decision-making, thereby helping to secure a future that upholds societal well-being and human dignity.
We have to talk about emotional AI and crime

Abstract

Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people using data sources such as facial (micro-)movements, body language, vocal tone or the choice of words. The performance of such systems is heavily debated, as are the underlying scientific methods that serve as the basis for many such technologies. In this article I engage with this new technology, and with the debates and literature that surround it. Working at the intersection of criminology, policing, surveillance and the study of emotional AI, this paper explores and offers a framework for understanding the various issues that these technologies present, particularly to liberal democracies. I argue that these technologies should not be deployed within public spaces: there is only a very weak evidence base for their effectiveness in a policing and security context, and, even more importantly, they represent a major intrusion into people’s private lives and a worrying extension of policing power, because of the possibility that intentions and attitudes may be inferred. Further to this, the danger in the use of such invasive surveillance for the purpose of policing and crime prevention in urban spaces is that it potentially leads to a highly regulated and control-oriented society. I argue that emotion recognition has severe impacts on the right to the city, not only by undertaking surveillance of existing situations but also by making inferences and probabilistic predictions about future events as well as emotions and intentions.

