(Featured) The Future of Work: Augmentation or Stunting?

Markus Furendal and Karim Jebari present a nuanced exploration of the implications of artificial intelligence (AI) for the future of work, straddling the philosophical, political, and economic realms. The authors distinguish between two paradigms of AI’s impact on work – ‘human-augmenting’ and ‘human-stunting’. Augmentation refers to scenarios where AI and humans work collaboratively, enhancing human capabilities and making work more fulfilling. Stunting, on the other hand, implies a diminishment of human capabilities as AI takes over, reducing humans to mere overseers or executors of pre-programmed tasks. Using Amazon fulfillment centers as a case study, the authors show how the application of AI could lead to stunting, thereby negating the potential goods of work.

The authors address four objections to their perspective: challenges to their interpretation of the ‘goods of work’, doubts about the feasibility of political intervention, criticism of their assessment of the augmentation-stunting dichotomy, and concerns about its potentially paternalistic implications. The paper refrains from advocating particular policy interventions, but stresses the moral obligation to treat human stunting as an issue of concern. The authors point out that workers might be pushed into stunting roles by higher pay or collective action problems, and that state intervention could rectify such situations. They also acknowledge the possibility of exploring alternative, non-labor paths to human flourishing, but emphasize their focus on immediate and medium-term impacts rather than long-term societal transformations.

The conclusion of the paper underscores the critical need for an augmenting-stunting distinction in future work debates. The authors acknowledge the potential for AI to augment human capabilities, but caution that the rise of AI technologies could also lead to widespread human stunting, affecting the quality of work and its associated moral goods. They argue that while AI could theoretically enable more stimulating work experiences, it could also degrade human capabilities, detrimentally impacting large swaths of the workforce. As such, the paper calls for additional empirical research to better understand the real-world implications of human-AI collaboration in the workplace.

In the broader philosophical context, this paper instigates a profound discourse on the ethical dimensions of AI and the concept of ‘human flourishing’. By invoking notions of ‘goods of work’, it brings the discourse on AI and work into the arena of moral philosophy, questioning the essence of work and its role in the human condition. The researchers’ debate on the ‘augmentation-stunting’ dichotomy in human-AI interaction is reminiscent of classical deliberations on the dual nature of technology – as both an enabler and a potential detriment to human existence. Furthermore, their contemplation of the role of the state in regulating AI adoption underscores the inherent tension between technological progress and societal welfare, a theme that has persisted throughout technological history.

Future research on this topic could delve deeper into the effects of AI technologies on different labor markets, depending on workers’ skill levels, institutional frameworks, and reskilling policies. More case studies from diverse sectors could enhance understanding of the augmentation-stunting paradigm in practical settings. Furthermore, the idea of ‘human flourishing’ outside of work, in the context of AI’s transformative potential, presents a fascinating area for exploration. The role of political institutions in shaping this future of work would also be an interesting research avenue, bridging the gap between philosophy, political science, and technology studies. The authors’ call for empirical research in workplaces further suggests the potential for cross-disciplinary studies that combine philosophical inquiry with sociological and anthropological methodologies.

Abstract

The last decade has seen significant improvements in artificial intelligence (AI) technologies, including robotics, machine vision, speech recognition, and text generation. Increasing automation will undoubtedly affect the future of work, and discussions on how the development of AI in the workplace will impact labor markets often include two scenarios: (1) labor replacement and (2) labor enabling. The former involves replacing workers with machines, while the latter assumes that human–machine cooperation can significantly improve worker productivity. In this context, it is often argued that (1) could lead to mass unemployment and that (2) therefore would be more desirable. We argue, however, that the labor-enabling scenario conflates two distinct possibilities. On the one hand, technology can increase productivity while also promoting “the goods of work,” such as the opportunity to pursue excellence, experience a sense of community, and contribute to society (human augmentation). On the other hand, higher productivity can also be achieved in a way that reduces opportunities for the “goods of work” and/or increases “the bads of work,” such as injury, reduced physical and mental health, reduction of autonomy, privacy, and human dignity (human stunting). We outline the differences of these outcomes and discuss the implications for the labor market in the context of contemporaneous discussions on the value of work and human wellbeing.

(Featured) Research Ethics in the Age of Digital Platforms

José Luis Molina et al. explore the ethical implications of microwork, a novel form of labor facilitated by digital platforms. The authors articulate the nuanced dynamics of this field, focusing primarily on the asymmetrical power relations between microworkers, clients, and platform operators. The piece scrutinizes the transactional nature of microwork, where workers are subject to the platform’s regulations and risk the arbitrary denial of payment or termination of their accounts. Microworkers’ reputation, determined by their prior task success rate, often dictates the quality and quantity of tasks they receive, creating a system of algorithmic governance that perpetuates an exploitative dynamic.

The authors further illustrate this situation by examining the biomedical research standards developed in the aftermath of World War II, which they argue are ill-equipped to address the ethical quandaries posed by microwork. They argue that the conditions of microwork, such as lack of payment floors and the potential for anonymity and segmentation, exacerbate the vulnerability of these workers, aligning them more closely with the exploitation of vulnerable populations in traditional research contexts. They propose a reconceptualization of microworkers as “guest workers” in “digital autocracies,” where the platforms exercise a quasi-governmental control over the working conditions, identity, and compensation of the microworkers.

The authors posit that these digital autocracies extract value through “heteromation” – a process where labor is mediated between cheap human labor and computers, and through the appropriation of workers’ rights to privacy and personal data protection. They argue that microwork platforms, due to their transnational nature and lack of comprehensive regulation, can impose conditions on their workforce that would be unacceptable in traditional employment contexts. They stress the importance of recognizing microworkers as vulnerable populations in research ethics reviews and propose a set of criteria for researchers to ensure the protection of these workers’ rights.

Positioning microwork within the broader philosophical discourse, the authors’ analysis suggests a reevaluation of labor, autonomy, and ethical standards in the digital age. The “digital autocracies” mirror Foucault’s concept of biopower, where power is exerted not merely through coercion but through the management and control of life processes, in this case, the economic existence of microworkers. The situation also reflects Marx’s concept of alienation, as microworkers are distanced from the fruits of their labor, the process of their work, and their fellow workers. The algorithmic governance system also raises questions about agency and autonomy, echoing concerns raised by philosophers such as Hannah Arendt and Jürgen Habermas regarding the instrumentalization of human beings.

Future research in this domain could explore multiple avenues. First, a more extensive empirical study could be conducted to quantify and analyze the conditions of microworkers across different platforms and geographical regions. Second, a comparative study could be undertaken to examine how different regulatory environments impact the working conditions and rights of microworkers. Lastly, a philosophical exploration of notions such as autonomy, justice, and dignity within the digital labor context could provide a more profound understanding of this emerging labor paradigm. The complex interplay of labor, ethics, technology, and globalization, as exemplified by microwork, provides a rich and crucial area for futures studies.

Abstract

Scientific research is increasingly reliant on “microwork” or “crowdsourcing” provided by digital platforms to collect new data. Digital platforms connect clients and workers, charging a fee for an algorithmically managed workflow based on Terms of Service agreements. Although these platforms offer a way to make a living or complement other sources of income, microworkers lack fundamental labor rights and basic safe working conditions, especially in the Global South. We ask how researchers and research institutions address the ethical issues involved in considering microworkers as “human participants.” We argue that current scientific research fails to treat microworkers in the same way as in-person human participants, producing de facto a double morality: one applied to people with rights acknowledged by states and international bodies (e.g., the Helsinki Declaration), the other to guest workers of digital autocracies who have almost no rights at all. We illustrate our argument by drawing on 57 interviews conducted with microworkers in Spanish-speaking countries.

(Featured) The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work

In the realm of artificial intelligence (AI) deployment, a neglected ethical concern is the impact of AI on meaningful work. Sarah Bankins and Paul Formosa focus on this critical aspect, emphasizing that understanding the consequences of AI on meaningful work for the remaining workforce is as significant as examining the impact of AI-induced unemployment. Meaningful work plays a crucial role in human well-being, autonomy, and flourishing, rendering it an essential ethical dimension.

The authors investigate three paths of AI deployment (replacing tasks, ‘tending the machine’, and amplifying human skills) across five dimensions of meaningful work: task integrity, skill cultivation and use, task significance, autonomy, and belongingness. Through this framework, they identify ways in which AI may both enhance and undermine experiences of meaningful work across the dimensions. Additionally, the authors assess the ethical implications using five key ethical AI principles, providing practical guidance for organizations and suggesting opportunities for future research.

The paper concludes that AI has the potential to make work more meaningful for some workers by taking over less meaningful tasks and amplifying their capabilities. However, it also highlights the risk of making work less meaningful for others by generating monotonous tasks, restricting worker autonomy, and directing the benefits of AI away from less-skilled workers. This dual impact suggests that AI’s future effects on meaningful work will be both significant and uneven.

The authors’ analysis of AI and meaningful work raises broader philosophical issues. One such issue pertains to the value of work in the context of human dignity, self-realization, and social connection. As AI technologies advance, society will need to reflect on the meaning of work and redefine it in response to the changes brought about by these innovations. Furthermore, the ethical principles guiding AI development and deployment must not only ensure fair and equitable distribution of benefits but also preserve the essence of human engagement in work.

Future research in this area could explore the potential impact of AI on work’s existential value and its influence on the human experience. Researchers may also delve into the development of ethical frameworks that ensure AI technologies foster more meaningful work and equitable distribution of benefits. Finally, the potential outcomes and implications of artificial general intelligence (AGI) on meaningful work should be considered, as AGI could dramatically alter the landscape of human labor and the very nature of work itself.

Abstract

The increasing workplace use of artificially intelligent (AI) technologies has implications for the experience of meaningful human work. Meaningful work refers to the perception that one’s work has worth, significance, or a higher purpose. The development and organisational deployment of AI is accelerating, but the ways in which this will support or diminish opportunities for meaningful work and the ethical implications of these changes remain under-explored. This conceptual paper is positioned at the intersection of the meaningful work and ethical AI literatures and offers a detailed assessment of the ways in which the deployment of AI can enhance or diminish employees’ experiences of meaningful work. We first outline the nature of meaningful work and draw on philosophical and business ethics accounts to establish its ethical importance. We then explore the impacts of three paths of AI deployment (replacing some tasks, ‘tending the machine’, and amplifying human skills) across five dimensions constituting a holistic account of meaningful work, and finally assess the ethical implications. In doing so we help to contextualise the meaningful work literature for the era of AI, extend the ethical AI literature into the workplace, and conclude with a range of practical implications and future research directions.

(Featured) Emotional AI and the future of wellbeing in the post-pandemic workplace

Peter Mantello and Manh-Tung Ho examine the impact of emotional artificial intelligence (AI) technologies on employee-employer relationships, focusing on the case of Amazon Japan. The authors argue that adopting AI technologies to manage employee emotions can exacerbate pre-existing precarity amid an already dire global economic situation. Although emotional AI is touted as a way to combat stress-related work absences, it rests on the same neoliberal logic that creates these problems in the first place. The paper concludes that for emotional AI to play a positive role in the workplace, three essential steps must be taken: the technology must be designed to better understand human emotions, workers must have access to and control over their data, and a more pluralistic approach to devising regulatory frameworks must be adopted.

The authors begin by discussing the growth of precarity and the worsening global economic situation, noting that these factors have led to an increased demand for emotion-sensing technologies. They examine the case of Amazon Japan, which has been embroiled in legal disputes due to its culturally insensitive performance improvement plan and general hostility towards collective bargaining. The authors argue that emotional AI is being uncritically adopted as a tool for combating stress-related work absences, without considering the underlying neoliberal logic and efficiency practices that contribute to these problems.

The authors then turn to the traditional Japanese work culture, which values loyalty over productivity and focuses on solidarity, consensus, long-term trust, and human growth. They argue that the adoption of AI-driven management systems signifies a lack of trust in workers, which challenges this traditional work culture. The authors suggest that emotional AI companies and policy makers would benefit from embracing a more pluralistic approach to devising regulatory frameworks that draw from both Eastern and Western value traditions.

This paper raises important questions about the role of emotional AI technologies in the workplace and their impact on employee-employer relationships. It also highlights the need to better understand the complexity of human emotions and to incorporate a greater range of modulators to account for diversity and particularity. Philosophers and researchers interested in the ethics of AI and its impact on society will find this paper to be a valuable contribution to the ongoing debate.

Future research could explore the impact of emotional AI on other aspects of the workplace, such as employee creativity and innovation. It could also examine the potential for emotional AI to exacerbate issues of bias and discrimination. Finally, future research could explore the implications of emotional AI technologies for the broader philosophical debate about the relationship between humans and machines.

Abstract

This paper interrogates the growing pervasiveness of affect recognition tools as an emerging layer of human-centric automated management in the global workplace. While vendors tout the neoliberal incentives of emotion-recognition technology as a pre-eminent tool of workplace wellness, we argue that emotional AI recalibrates the horizons of capital not by expanding outward into the consumer realm (like surveillance capitalism). Rather, as a new genus of digital Taylorism, it turns inward, passing through the corporeal exterior to extract greater surplus value and managerial control from the affective states of workers. Thus, empathic surveillance signals a profound shift in the ontology of human labor relations. In the emotionally quantified workplace, employees are no longer simply seen as physical capital, but conduits of actuarial and statistical intelligence gleaned from their most intimate subjective states. As a result, affect-driven automated management means that priority is often given to actuarial rather than human-centered managerial decisions.
