Future value change: Identifying realistic possibilities and risks

Rapid technological development has prompted philosophical investigation into how societal values might adapt or evolve in response to changing circumstances. One such approach is axiological futurism, a discipline that endeavors to anticipate potential shifts in value systems. The research article at hand makes a significant contribution to this developing field, proposing innovative methods for predicting potential trajectories of value change. This article by Jeroen Hopster underscores the complexity and nuance inherent in such a task, acknowledging the myriad factors that influence the evolution of societal values.

His research presents an interdisciplinary approach to advancing axiological futurism, drawing parallels between the philosophy of technology and climate scholarship, two distinct yet surprisingly complementary fields. Both, it argues, share an anticipatory character: each is oriented toward the future and must operate under substantial uncertainty. Notably, the article positions climate science’s sophisticated modelling techniques as instructive for philosophical studies, promoting the use of similar predictive models in axiological futurism. The approach suggested in the article enriches the discourse on futures studies by integrating strategies from climate science and principles from historical moral change, presenting an enlightened perspective on the anticipatory framework.

Theoretical Framework

The theoretical framework of the article is rooted in the concept of axiological possibility spaces, a means to anticipate future moral change based on a deep historical understanding of past transformations in societal values. The researcher proposes that these spaces represent realistic possibilities of value change, where ‘realism’ is a function of historical conditioning. To illustrate, processes of moralisation and demoralisation are considered historical markers that offer predictive insights into future moral transitions. Moralisation is construed as the phenomenon wherein previously neutral or non-moral issues acquire moral significance, while demoralisation refers to the converse. As the research paper posits, these processes are essential to understanding how technology could engender shifts in societal values.

In particular, the research identifies two key factors—technological affordances and the emergence of societal challenges—as instrumental in driving moralisation or demoralisation processes. The author suggests that these factors collectively engender realistic possibilities within the axiological possibility space. Notably, the concept of technological affordances serves to underline how new technologies, by enabling or constraining certain behaviors, can precipitate changes in societal values. Societal challenges, for their part, are posited to stimulate moral transformations in response to shifting social dynamics. Together, these elements make the theoretical framework an innovative schema for anticipating future moral change, thereby contributing to the discourse of axiological futurism.

Axiological Possibility Space and Lessons from Climate Scholarship

The concept of an axiological possibility space, as developed in the research article, operates as a predictive instrument for anticipating future value change in societal norms and morals. This space is not a projection of all hypothetical future moral changes, but rather a compilation of realistic possibilities. The author defines these realistic possibilities as those rooted in the past and present, inextricably tied to the historical conditioning of moral trends. Utilizing historical patterns of moralisation and demoralisation, the author contends that these processes, in concert with the introduction of new technologies and arising societal challenges, provide us with plausible trajectories for future moral change. As such, the axiological possibility space serves as a tool to articulate these historically grounded projections, offering a valuable contribution to the field of anticipatory ethics and, more broadly, to the philosophy of futures studies.
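
As a purely illustrative sketch (the article itself offers no formalism; the data structure and the realism filter below are hypothetical), the axiological possibility space can be pictured as a set of candidate value-change scenarios from which a ‘realistic’ subset is selected by demanding historical grounding and an identifiable driver:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A candidate future value change."""
    value: str
    direction: str              # "moralisation" or "demoralisation"
    historical_precedent: bool  # does a comparable past transition exist?
    driver: str                 # technological affordance or societal challenge, if any

def is_realistic(s: Scenario) -> bool:
    # Hypothetical realism filter: only historically conditioned shifts
    # with an identifiable driver count as realistic possibilities.
    return s.historical_precedent and bool(s.driver)

possibility_space = [
    Scenario("privacy", "moralisation", True, "ubiquitous sensing"),
    Scenario("meat eating", "moralisation", True, "affordances of cultured meat"),
    Scenario("arbitrary taboo", "moralisation", False, ""),  # far-fetched: filtered out
]

realistic = [s for s in possibility_space if is_realistic(s)]
```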

A central insight of the article emerges at the intersection of futures studies and climate scholarship. The author skillfully extracts lessons from the way climate change prediction models operate, particularly the CMIP models used by the IPCC, and from the shortcomings of those models in the face of substantial uncertainty. The idea that the intricacies of predictive modeling can sometimes overshadow the focus on potentially disastrous outcomes is critically assessed. The author contends that axiological futurism could face similar issues and should take heed. Notably, the article calls for a shift from prediction-centric frameworks to a scenario approach that can articulate the spectrum of realistic possibilities. This scenario approach, currently being developed in climate science under the name “storyline approach,” underlines the importance of compound risks and maintains a robust focus on potentially high-impact events. The author suggests that the axiological futurist could profitably adopt a similar strategy, exploring value change through technomoral scenarios, to navigate the deep uncertainties intrinsic to predicting future moral norms.

Integration into Practical Fields and Relating to Broader Philosophical Discourse

The transfer of the theoretical discussion into pragmatic fields is achieved in the research with a thoughtful examination of its potential applications, primarily in value-sensitive design. By suggesting that engineers should consider the dynamics of moralisation and demoralisation, the author not only proposes a shift in perspective, but also creates a bridge between theoretical discourse and practical implementation. Importantly, it is argued that a future-proof design requires an assessment of the probability that embedded values will shift in moral significance over time. The research paper goes further, introducing a risk-based approach to the design process, whereby engineers should not merely identify likely value changes but rather seek out those changes that render the design most vulnerable from a moral perspective. The mitigation of these high-risk value changes then becomes a priority in design adaptation, solidifying the article’s argument that axiological futurism is an essential tool in technological development.
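
To make this risk-based reading concrete, the following minimal sketch (not drawn from the article; all names and numbers are hypothetical) shows how a design team might rank candidate value changes by combining an estimated likelihood with an estimated moral impact on the design, giving mitigation priority to the highest-scoring scenarios rather than merely the most probable ones:

```python
from dataclasses import dataclass

@dataclass
class ValueChangeRisk:
    """A candidate shift in the moral significance of an embedded value."""
    description: str
    likelihood: float    # subjective probability estimate in [0, 1]
    moral_impact: float  # how severely the shift would undermine the design, in [0, 1]

    @property
    def risk(self) -> float:
        # Expected-impact score: likelihood weighted by severity.
        return self.likelihood * self.moral_impact

# Hypothetical scenarios for, say, a data-sharing platform.
candidates = [
    ValueChangeRisk("privacy becomes further moralised", 0.6, 0.9),
    ValueChangeRisk("norms of data ownership relax", 0.3, 0.4),
    ValueChangeRisk("demands for transparency intensify", 0.5, 0.7),
]

# Mitigation priority goes to the value changes that leave the design
# most vulnerable, not merely to the most likely ones.
for c in sorted(candidates, key=lambda c: c.risk, reverse=True):
    print(f"{c.risk:.2f}  {c.description}")
```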

The author’s analysis also makes a substantial contribution to the broader philosophical discourse, notably the philosophy of futures studies and the ethics of technology. By integrating concepts from climatology and axiology, the work demonstrates an interdisciplinary approach that enriches philosophical discourse, emphasizing how diverse scientific fields can illuminate complex ethical issues in technology. Importantly, the work builds on and critiques the ideas of prominent thinkers like John Danaher, pushing for a more diversified and pragmatic approach to axiological futurism rather than a singular reliance on model-based projections. The research also introduces the critical notion of “realistic possibilities” into the discourse, enriching our understanding of anticipatory ethics. It advocates a shift in focus towards salient normative risks, drawing parallels to climate change scholarship and highlighting the necessity for anticipatory endeavours to be both scientifically plausible and ethically insightful. This approach has the potential for significant impact on philosophical studies concerning value change and the ethical implications of future technologies.

Future Research Directions

The study furnishes ample opportunities for future research in the philosophy of futures studies, particularly concerning the integration of its insights into practical fields and its implications for anticipatory ethics. The author’s exploration of axiological possibility spaces remains an open-ended endeavor; further work could be conducted to investigate the specific criteria or heuristic models that could guide ethical assessments within these spaces. The potential application of these concepts in different technological domains, beyond AI and climate change, also presents an inviting avenue of inquiry. Moreover, as the author has adopted lessons from climate scholarship, similar interdisciplinary approaches could be employed to incorporate insights from other scientific disciplines. Perhaps most intriguingly, the research introduces a call for a critical exploration of “realistic possibilities,” an area that is ripe for in-depth theoretical and empirical examination. Future research could build upon this foundational concept, investigating its broader implications, refining its methodological underpinnings, and exploring its potential impact on policy making and technological design.

Abstract

The co-shaping of technology and values is a topic of increasing interest among philosophers of technology. Part of this interest pertains to anticipating future value change, or what Danaher (2021) calls the investigation of ‘axiological futurism’. However, this investigation faces a challenge: ‘axiological possibility space’ is vast, and we currently lack a clear account of how this space should be demarcated. It stands to reason that speculations about how values might change over time should exclude farfetched possibilities and be restricted to possibilities that can be dubbed realistic. But what does this realism criterion entail? This article introduces the notion of ‘realistic possibilities’ as a key conceptual advancement to the study of axiological futurism and offers suggestions as to how realistic possibilities of future value change might be identified. Additionally, two slight modifications to the approach of axiological futurism are proposed. First, axiological futurism can benefit from a more thoroughly historicized understanding of moral change. Secondly, when employed in service of normative aims, the axiological futurist should pay specific attention to identifying realistic possibilities that come with substantial normative risks.

Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers

As our technological capabilities advance at an accelerating pace, the hypothetical conundrum posed by super-intelligent artificial intelligence (AI), and its implications for human freedom, grows ever more pertinent. Robert Sparrow examines these implications, drawing extensively from political philosophy and conceptions of agency, and provides an analysis of the societal implications of super-intelligence from a uniquely philosophical standpoint. The author adopts a nuanced perspective, proposing that even benevolent, friendly AI may threaten human freedom through its capacity to dominate, deliberately or not, its human counterparts. It is this paradox, situated within the broader philosophical discourse of freedom versus domination, that provides the nucleus of this analysis.

The research is grounded in the seminal work of philosopher Philip Pettit, particularly his doctrine of republican freedom. This doctrine centers on the belief that freedom is not merely the absence of interference (negative liberty) but, critically, the absence of domination, understood as another’s capacity to interfere at will. Pettit famously encapsulated this concept in his metaphor of the “eyeball test,” positing that people are free only when they can look others in the eye without fear or subservience. As we explore the intersection of Pettit’s philosophy and the hypothetical reality of a super-intelligent AI, the profound significance of this test in determining the future of human freedom in a world shared with AI comes sharply into focus.

The “Friendly AI” Problem

Robert Sparrow draws an acute distinction between an AI’s “friendliness” and its potential to dominate humanity. The Friendly AI problem stems from the plausible notion that super-intelligent AI, regardless of its benevolence or adherence to human values, may still pose a significant threat to human freedom due to its inherent capacity for domination. A benevolent AI could feasibly operate in a dictatorial manner, modulating its interference in human life based on its own determination of human interests. The critical point is that a benevolent dictator, even though acting in our interests, is still a dictator. As the author pointedly remarks, to be “free” to act as one wishes only at the behest of another entity, even a well-meaning one, is not true freedom.

Herein lies the crux of the Friendly AI problem: the ability of an AI entity to act in accordance with human interests does not automatically guarantee human freedom. Freedom, as delineated by Pettit’s republicanism, requires resilience; it must not dissolve upon the whims of a more powerful entity. Thus, for the exercise of power by AI to be compatible with human freedom, it must be possible for humans to resist it. One might propose that a genuinely Friendly AI would solicit human input before interfering in our affairs, serving as an efficient executor of our will rather than as a prescriptive entity. Yet this proposition does not satisfactorily resolve the core tension between the AI’s power and our freedom. Ultimately, any freedom we might enjoy under a superintelligent AI would be contingent upon the AI’s will, a position which reveals the vulnerability and potential for domination inherent in the Friendly AI concept.

Superintelligence

Bostrom’s notion of superintelligence, as outlined by Sparrow, posits an AI entity capable of outperforming the best human brains in nearly every economically relevant field. The potential for domination by such an entity forms the bedrock of the philosophical conflict between benevolence and domination. Drawing on Pettit’s republican theory, the author makes clear that benevolence alone, even if perfectly calibrated to human interests, does not suffice to guarantee freedom. The very ability of a superintelligent AI to interfere unilaterally in human affairs, regardless of its intent, embodies the antithesis of Pettit’s non-domination principle. The analysis further draws attention to the paradox inherent in relying on an external, powerful entity for the regulation of our interests, effectively highlighting the existential risk associated with superintelligent AI. While a superintelligent AI may act in line with human interests, its potential for domination raises questions about the plausibility of achieving a truly “Friendly AI”, a challenge that resonates with the larger discourse on freedom and domination in philosophical studies.

Freedom, Status, and the ‘Eyeball Test’

The question of human freedom in the context of a superintelligent AI intersects with Pettit’s conceptualization of the ‘eyeball test’. In his philosophy, the notion of freedom pivots on the individual’s status within society – a status conferred when one can ‘look others in the eye without reason for fear or deference’. This perspective becomes especially poignant when viewed in the light of a superintelligent entity’s potential dominion. Under such circumstances, the capacity for humans to pass the ‘eyeball test’ could be seriously undermined, as the superintelligent AI, by virtue of its cognitive superiority, could induce both fear and deference. The state of being subjected to the AI’s superior will could consequently impair our ability to ‘look it in the eye’, thereby eroding the human status required for true freedom. This analysis deepens the philosophical understanding of freedom and its inextricable link with status, while simultaneously challenging the concept of a ‘Friendly AI’ from the perspective of republican theory.

The Negative Liberty Doctrine and Technocratic Framing of AI

Isaiah Berlin’s bifurcation of liberty into negative and positive spheres finds particular resonance in the context of superintelligent AI, and as such provides a useful framework for interpreting the domination problem. From a negative liberty perspective – that is, the absence of coercion or interference – the advent of a superintelligent AI could be seen as promoting freedom. However, the technocratic framing of AI, often characterized by an overemphasis on instrumental logic and utility maximization, may inadvertently favor this negative liberty doctrine, potentially to the detriment of positive freedom. That is to say, while an AI’s superior decision-making capabilities could minimize human interference in various spheres of life, it could also inadvertently curtail positive freedom – the opportunity for self-realization and autonomy. This underscores the importance of incorporating broader philosophical considerations into AI research and development, beyond the narrow confines of technocratic perspectives.

This fusion of philosophy and AI research necessitates the introduction of considerations beyond the merely technical and into the sphere of ethics and moral philosophy. The potential for domination by superintelligent AI systems underscores the need for research that specifically targets these concerns, particularly in relation to upholding principles of human dignity, autonomy, and positive freedom. However, achieving this requires a re-evaluation of our current paradigms of AI development that often valorize utility maximization and efficiency. Instead, an approach that truly appreciates the full depth of the challenge must also involve a careful examination of the philosophical underpinnings that inform the design and operation of AI systems. As such, future research in this arena ought to be a collaborative effort between philosophers, ethicists, AI researchers, and policymakers, aimed at defining a coherent set of values and ethical guidelines for the development and use of superintelligent AI.

Abstract

When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what they call the “Friendly AI problem”. Roughly speaking this is the question of how we might ensure that the AI that will develop from the first AI that we create will remain sympathetic to humanity and continue to serve, or at least take account of, our interests. In this paper I draw on the “neo-republican” philosophy of Philip Pettit to argue that solving the Friendly AI problem would not change the fact that the advent of super-intelligent AI would be disastrous for humanity by virtue of rendering us the slaves of machines. A key insight of the republican tradition is that freedom requires equality of a certain sort, which is clearly lacking between pets and their owners. Benevolence is not enough. As long as AI has the power to interfere in humanity’s choices, and the capacity to do so without reference to our interests, then it will dominate us and thereby render us unfree. The pets of kind owners are still pets, which is not a status which humanity should embrace. If we really think that there is a risk that research on AI will lead to the emergence of a superintelligence, then we need to think again about the wisdom of researching AI at all.

Space not for everyone: The problem of social exclusion in the concept of space settlement

Konrad Szocik contests the arguments supporting space colonization and underscores overlooked dimensions of social justice and equity. The primary critique orbits around the arguments of Milan M. Ćirković, who previously dismissed skepticism concerning space colonization, but failed to consider arguments rooted in social justice and equal access. The author points out that the endeavors of space exploration and colonization could inadvertently amplify existing inequalities, transforming these ventures into projects that serve only a fraction of humanity.

The article challenges the comparison Ćirković makes between skepticism about space colonization and hypothetical skepticism about ancestral migrations, arguing that it overlooks the significant disparities between Earth’s physical conditions and those of outer space. Furthermore, the author urges an investigation into the potential impacts of space settlement on equality and access, arguing that the current discourse is dominated by Western perspectives, which may not account for the marginalized and excluded. The author worries that space colonization could simply replicate existing terrestrial injustices, serving only the most privileged while leaving the poorest and most vulnerable behind.

The paper highlights the fear that space settlement, seen as a refuge from Earth’s deteriorating conditions, could be exclusively reserved for the rich or citizens of spacefaring superpowers. This exclusive access could potentially undermine the very purpose of space settlement as a rescue for humanity. Moreover, the author suggests that this enterprise, given the current technical capabilities, might only be realistic for a relatively small number of people. This selectivity questions the moral value of such a venture, particularly if it detracts from efforts to mitigate climate change for the most disadvantaged.

Delving into the philosophical realm, this article brings to the fore the philosophical implications of space settlement, sparking a dialogue reminiscent of John Rawls’s “A Theory of Justice”. The highlighted concerns closely echo Rawls’s principles of fairness and equality in distribution, pointing to the relevance of a “veil of ignorance” in planning space colonization. Similarly, the author’s argument about the unjust distribution of access to space colonization echoes Thomas Pogge’s ideas on global justice and how the actions of some nations can profoundly affect others. This dialogue expands the scope of philosophy and underscores the importance of inclusive ethics in a rapidly advancing technological world.

The discourse of this article presents new pathways for future research in the field of futures studies. Future research could evaluate more inclusive methods of space colonization, investigating alternatives to the currently anticipated elitist selection process. It could also examine the potential of international regulations to ensure equitable access to space resources. Additionally, research could explore the feasibility and ethics of a globally cooperative effort in space colonization. Overall, these directions aim to ensure that the bold ambition of space colonization aligns with the principles of social justice, thereby propelling humanity forward without leaving anyone behind.

Abstract

The subject of this paper is a continuation of the discussion initiated by Milan M. Ćirković. Ćirković criticized a number of arguments skeptical of the idea of space settlement. However, he omitted arguments referring to social justice and equal access, which, as this paper tries to show, are arguably the most serious skeptical remarks against the idea of space colonization. The paper emphasizes that both space exploration and, ultimately, potential space colonization run the risk of exacerbating inequality and, as such, are not projects pursued for all of humanity.

Liars and Trolls and Bots Online: The Problem of Fake Persons

Keith Raymond Harris explores the role of ‘fake persons’—bots and trolls—in online spaces and their deleterious impact on our acquisition and distribution of knowledge. Situating his analysis in a technological ecosystem increasingly swamped by these artificial entities, the author dissects the intricate issues engendered by these ‘fake persons’ into three discernible yet interwoven threats: deceptive, skeptical, and epistemic.

The deceptive threat elucidates how bots and trolls propagate false information and craft misleading representations of consensus through manipulated metrics like shares, likes, and comments. This deceptive veneer engenders a distorted perception of reality, leading to the formulation of misguided beliefs. The skeptical threat, on the other hand, stems from the awareness of the online environment’s infestation with these deceitful entities. This awareness engenders a pervasive sense of skepticism, a defensive mechanism that could result in the dismissal of valid evidence, leading to an overall decrease in the trust placed in online information. This skepticism, though justifiable, can have the unintended effect of isolating individuals from genuine knowledge sources.

Further complicating this scenario is the epistemic threat. The author draws a striking analogy between the online world inhabited by ‘fake persons’ and a natural environment populated by ‘mimic species’. In the latter, the significance of certain traits, often used to identify species, diminishes due to the presence of mimics. Analogously, in an environment teeming with bots and trolls, the perceived value of certain forms of evidence depreciates, impairing the ability to discern ‘real’ persons. In this convoluted digital milieu, the credibility of evidence—along with the authenticity of users and the perceived consensus—becomes questionable.

Grounding these digital threats in the wider philosophical discourse, this research accentuates the intricate entanglement of epistemology and ontology in online spaces. It challenges traditional conceptions of identity, reality, and knowledge, echoing Baudrillard’s premonitions of hyperreality and simulation. The presence of ‘fake persons’ obfuscates the demarcation between the real and the artificial, leading to an epistemic crisis where distinguishing between genuine and fallacious information becomes a Herculean task. Furthermore, these digital distortions provoke a profound skepticism that resonates with Cartesian doubt, while simultaneously illustrating the pervasiveness of misinformation and disinformation, reflecting the post-truth era’s cynicism. This research, hence, not only deepens our understanding of the digital world’s complexities but also underscores the shifting epistemic and ontological paradigms in the internet age.

As we navigate through this rapidly mutating digital landscape, the author’s research underscores the urgent need for further exploration. While technological solutions might offer some respite, they cannot completely eradicate these pervasive threats. Future research, therefore, should venture into developing more robust epistemological frameworks that accommodate these digital complexities. It should aim to delve into the philosophy of digital identities, exploring how they are constructed, perceived, and interacted with. There’s also a pressing need for studies that examine the intersection of ethics, technology, and epistemology, especially in the context of ‘fake persons’. Such research would not only enrich the theoretical discourse but could also guide the creation of more ethical and reliable digital spaces.

Abstract

This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons.

The Future of Work: Augmentation or Stunting?

Markus Furendal and Karim Jebari present a nuanced exploration of the implications of artificial intelligence (AI) on the future of work, straddling the philosophical, political, and economic realms. The authors distinguish between two paradigms of AI’s impact on work – ‘human-augmenting’ and ‘human-stunting’. Augmentation refers to scenarios where AI and humans collaboratively work, enhancing the latter’s capabilities and providing more fulfilling work. Stunting, on the other hand, implies a diminishment of human capabilities as AI takes over, reducing humans to mere overseers or executors of pre-programmed tasks. Utilizing Amazon fulfillment centers as a case study, the authors elucidate how the application of AI could potentially lead to stunting, thereby negating the potential goods of work.

The authors address four objections to their perspective. The objections challenge their interpretation of the ‘goods of work’, the feasibility of political intervention, their assessment of the augmentation-stunting dichotomy, and the potential paternalistic implications thereof. The paper refrains from advocating particular policy interventions, but stresses the moral obligation to treat human stunting as an issue of concern. The authors point out that workers might be forced to accept stunting roles because of higher pay or collective action problems, and that state intervention could potentially rectify such situations. Furthermore, they acknowledge the possibility of exploring alternative, non-labor paths to human flourishing, but emphasize their focus on immediate and medium-term impacts rather than long-term societal transformations.

The conclusion of the paper underscores the critical need for an augmenting-stunting distinction in future work debates. The authors acknowledge the potential for AI to augment human capabilities, but caution that the rise of AI technologies could also lead to widespread human stunting, affecting the quality of work and its associated moral goods. They argue that while AI could theoretically enable more stimulating work experiences, it could also degrade human capabilities, detrimentally impacting large swaths of the workforce. As such, the paper calls for additional empirical research to better understand the real-world implications of human-AI collaboration in the workplace.

In the broader philosophical context, this paper instigates a profound discourse on the ethical dimensions of AI and the concept of ‘human flourishing’. By invoking notions of ‘goods of work’, it brings the discourse on AI and work into the arena of moral philosophy, questioning the essence of work and its role in the human condition. The researchers’ debate on the ‘augmentation-stunting’ dichotomy in human-AI interaction is reminiscent of classical deliberations on the dual nature of technology – as both an enabler and a potential detriment to human existence. Furthermore, their contemplation of the role of the state in regulating AI adoption underscores the inherent tension between technological progress and societal welfare, a theme that has persisted throughout technological history.

Future research on this topic could potentially delve deeper into the effects of AI technologies on different labor markets, depending on workers’ skill levels, institutional frameworks, and reskilling policies. More case studies from diverse sectors could enhance understanding of the augmentation-stunting paradigm in practical settings. Furthermore, the idea of ‘human flourishing’ outside of work, in the context of AI’s transformative potential, presents a fascinating area for exploration. The role of political institutions in shaping this future of work would also be an interesting research avenue, bridging the gap between philosophy, political science, and technology studies. The authors’ call for empirical research in workplaces further suggests the potential for cross-disciplinary studies that combine philosophical inquiry with sociological and anthropological methodologies.

Abstract

The last decade has seen significant improvements in artificial intelligence (AI) technologies, including robotics, machine vision, speech recognition, and text generation. Increasing automation will undoubtedly affect the future of work, and discussions on how the development of AI in the workplace will impact labor markets often include two scenarios: (1) labor replacement and (2) labor enabling. The former involves replacing workers with machines, while the latter assumes that human–machine cooperation can significantly improve worker productivity. In this context, it is often argued that (1) could lead to mass unemployment and that (2) therefore would be more desirable. We argue, however, that the labor-enabling scenario conflates two distinct possibilities. On the one hand, technology can increase productivity while also promoting “the goods of work,” such as the opportunity to pursue excellence, experience a sense of community, and contribute to society (human augmentation). On the other hand, higher productivity can also be achieved in a way that reduces opportunities for the “goods of work” and/or increases “the bads of work,” such as injury, reduced physical and mental health, reduction of autonomy, privacy, and human dignity (human stunting). We outline the differences of these outcomes and discuss the implications for the labor market in the context of contemporaneous discussions on the value of work and human wellbeing.

Beyond the hype: ‘acceptable futures’ for AI and robotic technologies in healthcare

Giulia De Togni et al. delve into the complex dynamics of technoscientific expectations surrounding the future of artificial intelligence (AI) and robotic technologies in healthcare. By focusing on surgery, pathology, and social care, they examine the strategies employed by scientists, clinicians, and other stakeholders to navigate and construct visions of an AI-driven future in healthcare. The authors illustrate the challenges faced by these stakeholders, who must balance promissory visions with more realistic expectations, while acknowledging the performative power of high expectations in attracting investment and resources.

The participants in the study engage in a balancing act between high and low expectations, drawing boundaries to maintain credibility for their research and practice while distancing themselves from the hype. They recognize that over-optimistic visions may create false hope and unrealistic expectations of performance, potentially harming AI and robotics research through deflated investment if the outcomes fail to match expectations. The authors demonstrate how the stakeholders negotiate the tension between sustaining and nurturing the hype while calling for the recalibration of expectations within an ethically and socially responsible framework.

Central to the participants’ visions of acceptable futures is the changing nature of human-machine relationships. Through balancing different social, ethical, and technoscientific demands, the participants articulate futures that are perceived as ethically and socially acceptable, as well as realistically achievable. They frame the present and future potential, and the limitations, of AI and robotics technologies within an ethics of expectations that positions normative considerations as central to how these expectations are expressed.

This research article contributes to broader philosophical debates concerning the role of expectations and imaginaries in shaping our understanding of technoscientific innovation, human-machine relationships, and the ethics of care. By exploring the dynamic interplay between these factors, the authors shed light on how the future of AI and robotics in healthcare is being constructed and negotiated. This study resonates with key themes in the philosophy of futures studies, including the co-constitution of technological visions and sociotechnical imaginaries, the performativity of expectations, and the ethical dimensions of forecasting and envisioning the future.

To further enrich our understanding of these complex dynamics, future research could explore the perspectives of additional stakeholders, such as patients and policymakers, to gain a more comprehensive picture of the expectations surrounding AI and robotics in healthcare. Additionally, cross-cultural and comparative studies could reveal how different cultural contexts and healthcare systems influence expectations and acceptance of these technologies. Ultimately, by continuing to examine the societal implications of AI and robotic technologies, including their impact on patient autonomy, privacy, and the human aspects of care, scholars can contribute to a more nuanced and ethically responsible vision of the future of healthcare.

Abstract

AI and robotic technologies attract much hype, including utopian and dystopian future visions of technologically driven provision in the health and care sectors. Based on 30 interviews with scientists, clinicians and other stakeholders in the UK, Europe, USA, Australia, and New Zealand, this paper interrogates how those engaged in developing and using AI and robotic applications in health and care characterize their future promise, potential and challenges. We explore the ways in which these professionals articulate and navigate a range of high and low expectations, and promissory and cautionary future visions, around AI and robotic technologies. We argue that, through these articulations and navigations, they construct their own perceptions of socially and ethically ‘acceptable futures’ framed by an ‘ethics of expectations.’ This imbues the envisioned futures with a normative character, articulated in relation to the present context. We build on existing work in the sociology of expectations, aiming to contribute towards better understanding of how technoscientific expectations are navigated and managed by professionals. This is particularly timely since the COVID-19 pandemic gave further momentum to these technologies.

Modifying the Environment or Human Nature? What is the Right Choice for Space Travel and Mars Colonisation?

Maurizio Balistreri and Steven Umbrello engage in a critical exploration of the philosophical, ethical, and practical implications of human space travel and extraterrestrial colonization. The authors offer an in-depth analysis of two main strategies proposed in the literature: terraforming (geoengineering) and human bioenhancement. The first approach involves transforming extraterrestrial environments, such as Mars, to make them habitable for human life. The second involves modifying the human genetic heritage to make us more resilient and adaptable to non-terrestrial environments. The authors meticulously scrutinize these alternatives, considering not only feasibility and cost but also the ethical and philosophical implications.

The authors underscore the potential of terraforming as a method to establish human settlements on Mars. This possibility, however, raises several ethical concerns, including the potential destruction of extraterrestrial life forms, the alteration of untouched landscapes, and the potential overreach of human dominion. Human bioenhancement, on the other hand, though a promising path, engenders its own set of ethical dilemmas. The authors caution against reckless enthusiasm for genetic modification, drawing attention to the potential creation of a new ‘human species’ and the consequent risk of divisions and misunderstandings.

A central theme in the article is the comparison of natural and artificial constructs. The authors challenge the assumption that the natural is always superior to the artificial. Drawing on posthumanist perspectives, they suggest that, given our influence on Earth’s environment, nature is already an artificial product. The argument is extended to other planets, indicating that the traditional dichotomy between the natural and the artificial may not hold in the context of extraterrestrial colonization.

The article contributes to broader philosophical discourses about the human relationship with nature and our place in the universe. It resonates with themes of transhumanism and posthumanism, contemplating the potential of technology to overcome human vulnerabilities and achieve a new evolutionary stage. The authors invite us to question and possibly redefine our notions of ‘natural’ and ‘artificial.’ This study, therefore, serves as a significant touchstone for futures studies, linking the practical considerations of space travel with philosophical reflections on human nature and our interaction with the environment.

For future research, the authors’ comparative analysis of terraforming and human bioenhancement opens several avenues. While the ethical implications of both strategies have been discussed, a more comprehensive ethical framework could be developed, perhaps drawing on principles of bioethics, environmental ethics, and space ethics. Additionally, the potential of hybrid approaches combining elements of both strategies could be explored. Lastly, given the increasing likelihood of extraterrestrial colonization, a more detailed analysis of the potential social, cultural, and psychological impacts on human populations in these new environments would be a valuable contribution.

Abstract

As space travel and intentions to colonise other planets are becoming the norm in public debate and scholarship, we must also confront the technical and survival challenges that emerge from these hostile environments. This paper aims to evaluate the various arguments proposed to meet the challenges of human space travel and extraterrestrial planetary colonisation. In particular, two primary solutions have been present in the literature as the most straightforward solutions to the rigours of extraterrestrial survival and flourishing: (1) geoengineering, where the environment is modified to become hospitable to its inhabitants, and (2) human (bio)enhancement where the genetic heritage of humans is modified to make them more resilient to the difficulties they may encounter as well as to permit them to thrive in non-terrestrial environments. Both positions have strong arguments supporting them but also severe philosophical and practical drawbacks when exposed to different circumstances. This paper aims to show that a principled stance where one position is accepted wholesale necessarily comes at the opportunity cost of the other where the other might be better suited, practically and morally. This paper concludes that case-by-case evaluations of the solutions to space travel and extraterrestrial colonisation are necessary to ensure moral congruency and the survival and flourishing of astronauts now and into the future.

Introducing a four-fold way to conceptualize artificial agency

Maud van Lier presents a methodological framework for understanding artificial agency in the context of basic research, particularly in AI-driven science. The Four-Fold Framework, as the author coins it, is a pluralistic and pragmatic approach that incorporates Gricean modeling, analogical modeling, theoretical modeling, and conceptual modeling. The motivation behind this framework lies in the increasingly active role that AI systems are taking on in scientific research, warranting the development of a robust conceptual foundation for these ‘agents.’

The author critically assesses Sarkia’s neo-Gricean framework, which offers three modeling strategies for conceptualizing artificial agency. While acknowledging its merits, the author identifies a crucial shortcoming in its lack of a semantic dimension, which is necessary to bridge the gap between theoretical models and practical implementation in basic research. To address this issue, the author proposes the addition of conceptual modeling as a fourth strategy, ultimately forming the Four-Fold Framework. This new framework aims to provide a comprehensive account of artificial agency in basic research by accommodating different interpretations and addressing the semantic dimension of artificial agency.

By implementing the Four-Fold Framework, the author posits that researchers will be able to develop a more inclusive and pragmatically plausible understanding of artificial agency in the context of AI-driven science. The framework sets the stage for a robust conceptual foundation that can accommodate the complexities and nuances of artificial agency as AI continues to evolve and expand its role in scientific research.

This paper’s exploration of artificial agency also contributes to the broader philosophical discourse on agency and autonomy in the context of artificial intelligence. As AI systems become more advanced, the distinction between human and artificial agents blurs, raising questions about the nature of agency, responsibility, and ethical considerations. The Four-Fold Framework provides a methodological tool to examine these complex issues, grounding the analysis of artificial agency within a rigorous and comprehensive structure.

Future research can expand upon the Four-Fold Framework by investigating its applicability to other emerging areas in AI, such as AI ethics, human-AI collaboration, and autonomous decision-making. Additionally, researchers can explore how the Four-Fold Framework might inform the development of AI-driven science policy and governance, ensuring that ethical, legal, and societal implications are considered in the integration of artificial agency in scientific research. By refining and extending the Four-Fold Framework, the academic community can better anticipate and navigate the challenges and opportunities that artificial agency presents in the rapidly evolving landscape of AI-driven science.

Abstract

Recent developments in AI-research suggest that an AI-driven science might not be that far off. The research of Melnikov et al. (2018) and that of Evans et al. (2018) show that automated systems can already have a distinctive role in the design of experiments and in directing future research. Common practice in many of the papers devoted to the automation of basic research is to refer to these automated systems as ‘agents’. What is this attribution of agency based on and to what extent is this an important notion in the broader context of an AI-driven science? In an attempt to answer these questions, this paper proposes a new methodological framework, introduced as the Four-Fold Framework, that can be used to conceptualize artificial agency in basic research. It consists of four modeling strategies, three of which were already identified and used by Sarkia (2021) to conceptualize ‘intentional agency’. The novelty of the framework is the inclusion of a fourth strategy, introduced as conceptual modeling, that adds a semantic dimension to the overall conceptualization. The strategy connects to the other strategies by modeling both the actual use of ‘artificial agency’ in basic research as well as what is meant by it in each of the other three strategies. This enables researchers to bridge the gap between theory and practice by comparing the meaning of artificial agency in both an academic as well as in a practical context.

The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare

James Johnson explores the ethical and psychological implications of integrating AI into warfare. The author argues that the use of autonomous weapons in warfare may create moral vacuums that eliminate meaningful ethical and moral deliberation in the quest for riskless and rational war. Moreover, the author argues that the human-machine integration process is part of a broader evolutionary dovetailing of humanity and technology. The logical end of this trajectory is an AI commander, which would effectively outsource ethical decision-making to machines that are ill-equipped to fill this ethical and moral void.

The author also explores the limitations of AI in distinguishing between legitimate and illegitimate targets in asymmetric conflicts, such as insurgencies and civil wars. He stresses the importance of recognizing the personhood of the enemy in warfare and argues that, until AI can achieve this moral standing, it will be unable to meet the requirements of jus in bello. Additionally, Johnson argues that human judgment and prediction, while imperfect, remain necessary in warfare because of the subtle cues that humans can recognize and machines cannot.

The paper highlights three key psychological insights regarding human-machine interactions and political-ethical dilemmas in future AI-enabled warfare. First, Johnson argues that human-machine integration is a socio-technical psychological process, part of a broader evolutionary dovetailing of humanity and technology. Second, he argues that biases associated with human-machine interactions can compound the “illusion of control” problem. Third, he suggests that coding human ethics into AI algorithms is technically, theoretically, ontologically, and psychologically problematic, as well as ethically and morally questionable.

This paper raises important philosophical questions about the relationship between technology and ethics. It highlights the risks associated with outsourcing ethical decision-making to machines and emphasizes the importance of recognizing the personhood of the enemy in warfare. The paper also underscores the limitations of AI in distinguishing between legitimate and illegitimate targets and the importance of human judgment in recognizing subtle cues that machines cannot. Ultimately, this paper challenges us to consider the role of technology in shaping our ethical and moral decision-making processes.

Future research in this area could explore the psychological and ethical implications of human-machine integration in other domains, such as healthcare or criminal justice. Additionally, research could focus on developing AI systems that are capable of understanding the complexities of human ethics and morality. This research could also explore ways to incorporate ethical decision-making into AI algorithms without sacrificing human agency and accountability. Finally, research could explore the broader philosophical implications of the use of AI in warfare and consider the ethical and moral implications of a world in which machines are increasingly integrated into our lives.

Abstract

Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights of human-machine interactions to elucidate how AI shapes our capacity to think about future warfare’s political and ethical dilemmas. It argues that through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become de facto strategic actors in warfare – the “AI commander problem.”

Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement

Cian Brennan argues for a version of transhumanism that incrementally applies moderate enhancements to future human beings, rather than pursuing radical enhancements in a more immediate and extreme manner. The paper begins by presenting the critique of transhumanism put forward by Nicholas Agar, which centers on the potential negative consequences of radical enhancement. The author argues that Agar’s critique is aimed at the effects of radical enhancement, rather than at the concept of radical enhancement itself. By assuming that radical enhancement will be applied gradually across future generations, the author argues that weak transhumanism can overcome Agar’s objections.

The author then discusses objections to weak transhumanism, including the potential for an eventual radical enhancement to emerge and the difficulty of identifying when an enhancement becomes radical. The author responds to these objections by proposing a checklist of characteristic features that can be used to identify radical enhancements, such as the creation of new or extended abilities, changes in moral status, and significant changes in vulnerability or relatability between the enhanced and unenhanced.

Overall, the paper provides a nuanced and detailed defense of weak transhumanism, offering a way to pursue radical enhancements while avoiding some of the potential negative consequences of more radical approaches. The paper engages with a range of objections and provides a thoughtful and well-supported response to each, drawing on both philosophical and scientific sources.

The paper has implications for broader philosophical issues surrounding the ethics of human enhancement, the relationship between technology and society, and the nature of human identity and personhood. By focusing on the incremental application of enhancements, the paper raises questions about the degree to which human beings can be transformed by technology without losing their essential human nature. It also highlights the role of societal values and norms in shaping the development and application of enhancement technologies.

Future research in this area could build on the author’s checklist of characteristic features of radical enhancements, exploring the extent to which these features are necessary and sufficient conditions for defining radical enhancements. Further research could also examine the potential consequences of weak transhumanism, including the ways in which incremental enhancements may interact with each other over time and the potential for unintended consequences. Finally, future research could explore the social and cultural dimensions of transhumanism, including the ways in which transhumanist values and practices may be shaped by factors such as gender, race, and socioeconomic status.

Abstract

Transhumanism aims to bring about radical human enhancement. In ‘Truly Human Enhancement’ Agar (2014) provides a strong argument against producing radically enhancing effects in agents. This leaves the transhumanist in a quandary—how to achieve radical enhancement whilst avoiding the problem of radically enhancing effects? This paper aims to show that transhumanism can overcome the worries of radically enhancing effects by instead pursuing radical human enhancement via incremental moderate human enhancements (Weak Transhumanism). In this sense, weak transhumanism is much like traditional transhumanism in its aims, but starkly different in its execution. This version of transhumanism is weaker given the limitations brought about by having to avoid radically enhancing effects. I consider numerous objections to weak transhumanism and conclude that the account survives each one. This paper’s proposal of ‘weak transhumanism’ has the upshot of providing a way out of the ‘problem of radically enhancing effects’ for the transhumanist, but this comes at a cost—the restrictive process involved in applying multiple moderate enhancements in order to achieve radical enhancement will most likely be dissatisfying for the transhumanist, however, it is, I contend, the best option available.
