Peter Mantello and Manh-Tung Ho examine the impact of emotional artificial intelligence (AI) technologies on employee-employer relationships, focusing on the case of Amazon Japan. The authors argue that adopting AI technologies to manage employee emotions can exacerbate pre-existing issues of worker precarity in an already dire global economic situation. Although emotional AI is touted as a way to combat stress-related work absences, it rests on the same neoliberal logic that creates these problems in the first place. The paper concludes that for emotional AI to play a positive role in the workplace, three essential steps must be taken: the technology must be designed to better understand human emotions, workers must have access to and control over their data, and a more pluralistic approach to devising regulatory frameworks must be adopted.
The authors begin by discussing the growth of precarity and the worsening global economic situation, noting that these factors have led to an increased demand for emotion-sensing technologies. They examine the case of Amazon Japan, which has been embroiled in legal disputes due to its culturally insensitive performance improvement plan and general hostility towards collective bargaining. The authors argue that emotional AI is being uncritically adopted as a tool for combating stress-related work absences, without considering the underlying neoliberal logic and efficiency practices that contribute to these problems.
The authors then turn to the traditional Japanese work culture, which values loyalty over productivity and focuses on solidarity, consensus, long-term trust, and human growth. They argue that the adoption of AI-driven management systems signifies a lack of trust in workers, which challenges this traditional work culture. The authors suggest that emotional AI companies and policy makers would benefit from embracing a more pluralistic approach to devising regulatory frameworks that draw from both Eastern and Western value traditions.
This paper raises important questions about the role of emotional AI technologies in the workplace and their impact on employee-employer relationships. It also highlights the need to better understand the complexity of human emotions and to incorporate a greater range of modulators to account for diversity and particularity. Philosophers and researchers interested in the ethics of AI and its impact on society will find this paper to be a valuable contribution to the ongoing debate.
Future research could explore the impact of emotional AI on other aspects of the workplace, such as employee creativity and innovation. It could also examine the potential for emotional AI to exacerbate issues of bias and discrimination. Finally, future research could explore the implications of emotional AI technologies for the broader philosophical debate about the relationship between humans and machines.
Emotional AI and the future of wellbeing in the post-pandemic workplace

Abstract
This paper interrogates the growing pervasiveness of affect recognition tools as an emerging layer of human-centric automated management in the global workplace. While vendors tout the neoliberal incentives of emotion-recognition technology as a pre-eminent tool of workplace wellness, we argue that emotional AI recalibrates the horizons of capital not by expanding outward into the consumer realm, as surveillance capitalism does, but by turning inward: as a new genus of digital Taylorism, it passes through the corporeal exterior to extract greater surplus value and managerial control from the affective states of workers. Thus, empathic surveillance signals a profound shift in the ontology of human labor relations. In the emotionally quantified workplace, employees are no longer seen simply as physical capital, but as conduits of actuarial and statistical intelligence gleaned from their most intimate subjective states. As a result, affect-driven automated management often privileges actuarial over human-centered managerial decisions.