(Featured) The Future of Work: Augmentation or Stunting?


Markus Furendal and Karim Jebari present a nuanced exploration of the implications of artificial intelligence (AI) for the future of work, straddling the philosophical, political, and economic realms. The authors distinguish between two paradigms of AI’s impact on work – ‘human-augmenting’ and ‘human-stunting’. Augmentation refers to scenarios where AI and humans work collaboratively, enhancing human capabilities and making work more fulfilling. Stunting, by contrast, implies a diminishment of human capabilities as AI takes over, reducing humans to mere overseers or executors of pre-programmed tasks. Using Amazon fulfillment centers as a case study, the authors show how the application of AI could lead to stunting, thereby negating the potential goods of work.

The authors address four objections to their perspective: challenges to their interpretation of the ‘goods of work’, doubts about the feasibility of political intervention, criticisms of their assessment of the augmentation-stunting dichotomy, and the charge that their view has paternalistic implications. The paper refrains from advocating particular policy interventions, but stresses the moral obligation to treat human stunting as a matter of concern. The authors point out that workers might be forced to accept stunting roles because of higher pay or collective action problems, and that state intervention could potentially rectify such situations. They also acknowledge the possibility of exploring alternative, non-labor paths to human flourishing, but emphasize that their focus is on immediate and medium-term impacts rather than long-term societal transformations.

The conclusion of the paper underscores the critical need for the augmentation-stunting distinction in debates about the future of work. The authors acknowledge the potential for AI to augment human capabilities, but caution that the rise of AI technologies could also lead to widespread human stunting, affecting the quality of work and its associated moral goods. They argue that while AI could theoretically enable more stimulating work experiences, it could also degrade human capabilities, detrimentally impacting large swaths of the workforce. As such, the paper calls for additional empirical research to better understand the real-world implications of human-AI collaboration in the workplace.

In the broader philosophical context, this paper prompts a profound discourse on the ethical dimensions of AI and the concept of ‘human flourishing’. By invoking notions of the ‘goods of work’, it brings the discourse on AI and work into the arena of moral philosophy, questioning the essence of work and its role in the human condition. The authors’ discussion of the ‘augmentation-stunting’ dichotomy in human-AI interaction is reminiscent of classical deliberations on the dual nature of technology – as both an enabler and a potential detriment to human existence. Furthermore, their contemplation of the role of the state in regulating AI adoption underscores the inherent tension between technological progress and societal welfare, a theme that has persisted throughout technological history.

Future research on this topic could potentially delve deeper into the effects of AI technologies on different labor markets, depending on workers’ skill levels, institutional frameworks, and reskilling policies. More case studies from diverse sectors could enhance understanding of the augmentation-stunting paradigm in practical settings. Furthermore, the idea of ‘human flourishing’ outside of work, in the context of AI’s transformative potential, presents a fascinating area for exploration. The role of political institutions in shaping this future of work would also be an interesting research avenue, bridging the gap between philosophy, political science, and technology studies. The authors’ call for empirical research in workplaces further suggests the potential for cross-disciplinary studies that combine philosophical inquiry with sociological and anthropological methodologies.

Abstract

The last decade has seen significant improvements in artificial intelligence (AI) technologies, including robotics, machine vision, speech recognition, and text generation. Increasing automation will undoubtedly affect the future of work, and discussions on how the development of AI in the workplace will impact labor markets often include two scenarios: (1) labor replacement and (2) labor enabling. The former involves replacing workers with machines, while the latter assumes that human–machine cooperation can significantly improve worker productivity. In this context, it is often argued that (1) could lead to mass unemployment and that (2) therefore would be more desirable. We argue, however, that the labor-enabling scenario conflates two distinct possibilities. On the one hand, technology can increase productivity while also promoting “the goods of work,” such as the opportunity to pursue excellence, experience a sense of community, and contribute to society (human augmentation). On the other hand, higher productivity can also be achieved in a way that reduces opportunities for the “goods of work” and/or increases “the bads of work,” such as injury, reduced physical and mental health, reduction of autonomy, privacy, and human dignity (human stunting). We outline the differences of these outcomes and discuss the implications for the labor market in the context of contemporaneous discussions on the value of work and human wellbeing.


(Featured) Algorithmic Nudging: The Need for an Interdisciplinary Oversight


Christian Schmauder et al. critically assess the implications and risks of employing “black box” AI systems for the development and implementation of personalized nudges in various domains of life. They begin by outlining the power and promise of algorithmic nudging, drawing attention to how AI-driven nudges could bring about widespread benefits in areas such as health, finance, and sustainability. However, they contend that outsourcing nudging to opaque AI systems poses challenges in terms of understanding the underlying reasons for their effectiveness and addressing potential unintended consequences.

The authors delve deeper into the nuances of algorithmic nudging by examining the role of personalized advice in influencing human decision-making. They highlight a key concern that arises when AI systems attempt to maximize user satisfaction: the tendency of the algorithms to exploit cognitive biases in order to achieve desired outcomes. Consequently, the effectiveness of the AI-developed nudges might come at the cost of truthfulness, ultimately undermining the very goals they were designed to achieve.

To address this issue, the authors advocate for the need to look “under the hood” of AI systems, arguing that understanding the underlying cognitive processes harnessed by these systems is crucial for mitigating unintended side effects. They emphasize the importance of interdisciplinary collaboration between computer scientists, cognitive scientists, and psychologists in the development, monitoring, and refinement of AI systems designed to influence human decision-making.

The authors’ exploration of the limitations and risks of “black box” AI nudges raises broader philosophical concerns, particularly in relation to the ethics of autonomy, transparency, and accountability. These concerns call into question the balance between leveraging AI-driven nudges to benefit society and preserving individual autonomy and freedom of choice. Furthermore, the analysis highlights the tension between relying on AI’s predictive power and fostering a deeper understanding of the mechanisms driving human behavior.

This paper provides a valuable foundation for future research on the ethical and philosophical implications of AI-driven nudging. Further investigation could delve into the possible approaches to designing more transparent and explainable AI systems, exploring how such systems might enhance, rather than hinder, human decision-making processes. Additionally, researchers could examine the moral responsibilities of AI developers and regulators, studying the ethical frameworks necessary to guide the development and deployment of AI nudges that respect human autonomy, values, and dignity. Ultimately, a deeper understanding of these complex philosophical questions will be instrumental in realizing the full potential of AI-driven nudges while safeguarding against their potential pitfalls.

Abstract

Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods new possibilities emerge of how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to “black box” AI systems means that the ultimate reasons for why such nudges work, that is, the underlying human cognitive processes that they harness, will often be unknown. In this paper, we unpack this concern by considering a series of examples and case studies that demonstrate how AI systems can learn to harness biases in human judgment to reach a specified goal. Drawing on an analogy in a philosophical debate concerning the methodology of economics, we call for the need of an interdisciplinary oversight of AI systems that are tasked and deployed to nudge human behaviours.


(Featured) Philosophical foundation of the right to mental integrity in the age of neurotechnologies


Andrea Lavazza and Rodolfo Giorgi argue that the development and use of neurotechnology present new challenges to privacy, mental integrity, and autonomy. These challenges, they contend, necessitate a reevaluation of existing ethical frameworks and the introduction of new rights to protect individuals against potential threats to these fundamental aspects of human dignity.

The authors first examine the concept of intentionality, highlighting its importance for understanding the subjective and first-person perspective of mental experiences. They argue that neurotechnology poses a risk to intentionality by potentially manipulating or monitoring individuals’ mental processes. This risk extends to the first-person perspective, as the development of brain-computer interfaces could blur the boundaries between the self and external entities, undermining the sense of ownership and agency that is integral to personal identity.

The paper further discusses the significance of autonomy in moral decision-making and identity-building. Drawing upon moral constructivism, the authors contend that privacy and mental integrity are crucial for individuals to engage in the process of moral self-determination. They assert that neurotechnology has the potential to interfere with this process, leading to misinterpretations of mental states and behaviors, and ultimately hindering individuals’ ability to make autonomous choices and form their own moral judgments.

This research contributes to broader philosophical issues by shedding light on the complex relationship between emerging neurotechnology and fundamental aspects of human nature, such as intentionality, autonomy, and personal identity. It underscores the importance of establishing a right to mental integrity in order to protect these essential elements of human dignity in a world increasingly influenced by advancements in neuroscience and technology.

For future research, it is vital to investigate the ethical and legal implications of the right to mental integrity, delineating its scope and limitations in relation to neurotechnology. This may include examining the potential consequences of different types of interventions, ranging from non-invasive monitoring to direct manipulation of brain states. Additionally, interdisciplinary collaboration between philosophers, neuroscientists, and policymakers will be crucial to developing comprehensive ethical guidelines that address the profound challenges posed by the ongoing development and implementation of neurotechnology in various domains of human life. By bridging these disciplines, we can ensure that the protection of mental integrity remains a central consideration as we navigate the uncharted territory of human-machine interaction.

Abstract

Neurotechnologies broadly understood are tools that have the capability to read, record and modify our mental activity by acting on its brain correlates. The emergence of increasingly powerful and sophisticated techniques has given rise to the proposal to introduce new rights specifically directed to protect mental privacy, freedom of thought, and mental integrity. These rights, also proposed as basic human rights, are conceived in direct relation to tools that threaten mental privacy, freedom of thought, mental integrity, and personal identity. In this paper, our goal is to give a philosophical foundation to a specific right that we will call right to mental integrity. It encapsulates both the classical concepts of privacy and non-interference in our mind/brain. Such a philosophical foundation refers to certain features of the mind that hitherto could not be reached directly from the outside: intentionality, first-person perspective, personal autonomy in moral choices and in the construction of one’s narrative, and relational identity. A variety of neurotechnologies or other tools, including artificial intelligence, alone or in combination can, by their very availability, threaten our mental integrity. Therefore, it is necessary to posit a specific right and provide it with a theoretical foundation and justification. It will be up to a subsequent treatment to define the moral and legal boundaries of such a right and its application.


(Featured) Mobile health technology and empowerment


Karola V. Kreitmair critically evaluates the notion of empowerment that has become pervasive in the discourse surrounding direct-to-consumer (DTC) mobile health technologies. The author argues that while these technologies claim to empower users by providing knowledge, enabling control, and fostering responsibility, the actual outcome is often not genuine empowerment but merely the perception of empowerment. This distinction has significant implications for individuals seeking to effect behavior change and improve their health and well-being.

The paper meticulously breaks down the concept of empowerment into five key features: knowledgeability, control, responsibility, availability of good choices, and healthy desires. The author presents a thorough review of the evidence related to the efficacy, privacy, and security concerns surrounding the use of m-health technologies. They demonstrate that these technologies, while marketed as empowering tools, often fail to live up to their promises and, in some cases, even contribute to negative health outcomes or exacerbate existing issues such as disordered eating.

The core of the argument lies in the distinction between genuine empowerment and the mere perception of empowerment. The author posits that, rather than fostering true empowerment, DTC m-health technologies often create a psychological illusion of control and knowledgeability. This illusion can lead users to form unrealistic expectations and place undue burden on themselves to effect change when the necessary conditions for change are not met. This “empowerment paradox” ultimately calls into question the purported benefits of DTC m-health technologies and the societal narrative around personal responsibility and control over one’s health.

This paper’s findings resonate with broader philosophical discussions around individual autonomy, agency, and the role of technology in shaping our lives. The empowerment paradox highlights the complex interplay between the individual and the structural factors that shape health outcomes. It raises crucial questions about the ethical implications of profit-driven technologies and the responsibilities of technology developers, marketers, and users in navigating an increasingly technologically driven healthcare landscape. The insights from this paper contribute to ongoing debates about the nature of empowerment and the limits of individual autonomy in an age where our lives are increasingly mediated by technology.

Future research should focus on the prevalence and consequences of the empowerment paradox in the context of DTC m-health technologies. A deeper understanding of how individuals make decisions around their health in the presence of perceived empowerment could inform the development of more effective and ethically responsible technologies. Additionally, examining the social and cultural factors that influence the marketing and adoption of these technologies may provide insight into how the industry can foster genuine empowerment, rather than perpetuating an illusion of control. Ultimately, a more nuanced understanding of the relationship between DTC m-health technologies and empowerment will pave the way for a more responsible and equitable approach to healthcare in the digital age.

Abstract

Mobile Health (m-health) technologies, such as wearables, apps, and smartwatches, are increasingly viewed as tools for improving health and well-being. In particular, such technologies are conceptualized as means for laypersons to master their own health, by becoming “engaged” and “empowered” “managers” of their bodies and minds. One notion that is especially prevalent in the discussions around m-health technology is that of empowerment. In this paper, I analyze the notion of empowerment at play in the m-health arena, identifying five elements that are required for empowerment. These are (1) knowledge, (2) control, (3) responsibility, (4) the availability of good choices, and (5) healthy desires. I argue that at least sometimes, these features are not present in the use of these technologies. I then argue that instead of empowerment, it is plausible that m-health technology merely facilitates a feeling of empowerment. I suggest this may be problematic, as it risks placing the burden of health and behavior change solely on the shoulders of individuals who may not be in a position to affect such change.


(Featured) Trojan technology in the living room?


Franziska Sonnauer and Andreas Frewer explore the delicate balance between self-determination and external determination in the context of older adults using assistive technologies, particularly those incorporating artificial intelligence (AI). The authors introduce the concept of a “tipping point” to delineate the transition between self-determination and external determination, emphasizing the importance of considering the subjective experiences of older adults when employing such technologies. To this end, the authors adopt self-determination theory (SDT) as a theoretical framework to better understand the factors that may influence this tipping point.

The paper argues that the tipping point is intrapersonal and variable, suggesting that fulfilling the three basic psychological needs outlined in SDT—autonomy, competence, and relatedness—can potentially shift the tipping point towards self-determination. The authors propose various strategies to achieve this, such as providing alternatives for assistance in old age, promoting health technology literacy, and prioritizing social connectedness in technological development. They also emphasize the need to include older adults’ perspectives in decision-making processes, as understanding their subjective experiences is crucial to recognizing and respecting their autonomy.

Moreover, the authors call for future research to explore the tipping point and factors affecting its variability in different contexts, including assisted suicide, health deterioration, and the use of living wills and advance care planning. They contend that understanding the tipping point between self-determination and external determination may enable the development of targeted interventions that respect older adults’ autonomy and allow them to maintain self-determination for as long as possible.

In a broader philosophical context, this paper raises important ethical questions concerning the role of technology in shaping human agency, autonomy, and decision-making processes. It challenges us to reflect on the ethical implications of increasingly advanced assistive technologies and the potential consequences of their indiscriminate use. The issue of the tipping point resonates with broader debates on the nature of free will, the limits of self-determination, and the moral implications of human-machine interactions. As AI continues to become more integrated into our lives, the question of how to balance self-determination and external determination takes on greater urgency and complexity.

For future research, it would be valuable to explore the concept of the tipping point in different cultural contexts, as perceptions of autonomy and self-determination may vary across societies. Additionally, interdisciplinary approaches that combine insights from philosophy, psychology, and technology could shed light on the complex interplay between human values and AI-driven systems. Finally, empirical research investigating the experiences of older adults using assistive technologies would provide valuable data to help refine our understanding of the tipping point and inform the development of more ethically sound technologies that respect individual autonomy and promote well-being.

Abstract

Assistive technologies, including “smart” instruments and artificial intelligence (AI), are increasingly arriving in older adults’ living spaces. Various research has explored risks (“surveillance technology”) and potentials (“independent living”) to people’s self-determination from technology itself and from the increasing complexity of sociotechnical interactions. However, the point at which self-determination of the individual is overridden by external influences has not yet been sufficiently studied. This article aims to shed light on this point of transition and its implications.
