(Featured) Reinforcement learning and artificial agency

Reinforcement learning and artificial agency

Patrick Butlin explores whether reinforcement learning (RL) systems could possess the capacity to “act for reasons”, a concept traditionally associated with conscious and goal-directed agents. Drawing upon the philosophical literature, specifically the work of Hanna Pickard (2015) and Helen Steward (2012), the author outlines two criteria that must be met for something to be considered an agent: the entity in question must have goals, and it must interact with its environment to pursue them. The author asserts that both model-free and model-based RL systems meet these criteria and can thus be considered to have minimal agency.

Building upon the foundation of minimal agency, the author makes a compelling argument for RL systems acting for reasons. Their argument hinges on the philosophical work of Jennifer Hornsby (2004) and Susanne Mantel (2018), where the former associates acting for reasons with general-purpose abilities, and the latter distinguishes between three competences involved in acting for reasons: epistemic, volitional, and executive sub-competences. The author posits that model-based RL systems, with their capacity to model the transition function, meet these criteria, as they learn and store information about their environment that influences their future actions, forming a sort of ‘descriptive representation’.
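To make this distinction concrete, here is a minimal, hypothetical Python sketch, not drawn from the paper: the model-free learner adjusts cached action values directly from experience, while the model-based learner stores a transition function and selects actions by searching forward through it. The environment, action names, and parameters are all illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right"]  # hypothetical action set

# Model-free learning: adjust cached action values directly from experience
# (a standard Q-learning update); no model of the environment is stored.
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Model-based learning: store a transition and reward model (a 'descriptive
# representation' of the environment) and choose actions by forward search.
class ModelBasedAgent:
    def __init__(self, gamma=0.9, depth=3):
        self.gamma, self.depth = gamma, depth
        self.T = {}                  # (state, action) -> observed next state
        self.R = defaultdict(float)  # (state, action) -> observed reward

    def update_model(self, s, a, r, s_next):
        self.T[(s, a)] = s_next
        self.R[(s, a)] = r

    def plan(self, s, depth=None):
        """Search forward through the learned model; return (value, action)."""
        depth = self.depth if depth is None else depth
        best = (0.0, random.choice(ACTIONS))  # fall back to exploration
        if depth == 0:
            return best
        for a in ACTIONS:
            if (s, a) not in self.T:
                continue  # transition not yet experienced, so not modeled
            future_value, _ = self.plan(self.T[(s, a)], depth - 1)
            value = self.R[(s, a)] + self.gamma * future_value
            if value > best[0]:
                best = (value, a)
        return best

# Toy usage: after one experienced transition the agent can already plan with it.
agent = ModelBasedAgent()
agent.update_model("start", "right", 1.0, "goal")
print(agent.plan("start"))  # -> (1.0, 'right')
```

On the article’s view, it is the stored transition model and the forward search through it, rather than cached action values alone, that make talk of acting for reasons apt.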

In contrast to Mantel, the author suggests that the distinction between volitional and executive sub-competences, and the emphasis on motivation, might not be necessary to the account. While Mantel uses motivation interchangeably with desire and intention, the author posits that this distinction may be more relevant to human agency than to artificial RL systems. The author also rejects the notion that the lack of desires or volitions disqualifies artificial RL systems from acting for reasons. They conclude that while model-based RL systems may lack desires, their interaction with their environment to achieve set goals provides sufficient grounds to attribute to them minimal agency, and thus the capacity to act for reasons.

The article adds significantly to the discourse on machine agency, challenging conventional philosophical norms that tie agency and the capacity to act for reasons to consciousness or biological entities. It raises compelling points about how RL systems, through their goal-directed behavior and interaction with the environment, exhibit traits of minimal agency. This exploration places the discussion of machine agency within broader philosophical themes such as the nature of consciousness, the demarcation of human and non-human agency, and the implications of attributing agency to artificial systems.

Future research could focus on extending the arguments in this article, exploring the implications of attributing even more sophisticated forms of agency to artificial RL systems. One direction could be to look at whether these systems, as they continue to develop, could eventually meet even stricter criteria for agency that go beyond minimal agency. Another avenue would be to study the ethical and societal implications of recognizing artificial RL systems as agents. Would it, for instance, be meaningful or necessary to establish an ethical framework for interacting with these systems? Additionally, research could examine how these concepts might evolve in tandem with the continued development of artificial RL systems and other forms of artificial intelligence.

Abstract

There is an apparent connection between reinforcement learning and agency. Artificial entities controlled by reinforcement learning algorithms are standardly referred to as agents, and the mainstream view in the psychology and neuroscience of agency is that humans and other animals are reinforcement learners. This article examines this connection, focusing on artificial reinforcement learning systems and assuming that there are various forms of agency. Artificial reinforcement learning systems satisfy plausible conditions for minimal agency, and those which use models of the environment to perform forward search are capable of a form of agency which may reasonably be called action for reasons.


(Featured) Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Sinead O’Connor and Helen Liu investigate a pertinent concern in contemporary artificial intelligence (AI) studies: the manifestation and amplification of gender bias within AI technologies. The authors present a systematic review of multiple case studies which demonstrate the pervasiveness of gender bias across various forms of AI, particularly focusing on textual and visual algorithms. The highlighted studies underscore how AI, far from being an objective tool, can inadvertently perpetuate societal biases ingrained within training datasets, reproducing and even entrenching existing societal asymmetries. Moreover, these studies reveal that although de-biasing efforts have been attempted, residual biases often persist due to the depth and complexity of discriminatory patterns.

In an innovative approach, the authors differentiate between bias perpetuation and bias mitigation, exploring this distinction in both text-based and image-based AI contexts. The issue of latent gendered word associations in text is emphasized, wherein researchers strive for a delicate balance between retaining the utility of algorithms and mitigating bias. In image-based AI, the researchers reveal how biases are not only present within algorithms but also entrenched within the evaluative benchmarks themselves. This insight brings into focus the importance of not merely scrutinizing the algorithms but also the standards used to assess their accuracy and bias perpetuation. The researchers also present an incisive critique of the methodological and conceptual issues underlying the treatment of bias in AI research, drawing attention to the often unaddressed question of what counts as ‘bias’ or ‘discrimination’.
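To illustrate what a ‘latent gendered word association’ looks like in practice, the toy Python sketch below scores words against a gender direction in an embedding space. It is offered in the spirit of common embedding-association measures rather than as the article’s own method, and every vector and word in it is invented for the example.

```python
import numpy as np

# Hypothetical 3-dimensional word embeddings; real systems use hundreds of
# dimensions learned from large text corpora. The values below are invented.
emb = {
    "he":       np.array([ 1.0, 0.1, 0.2]),
    "she":      np.array([-1.0, 0.1, 0.2]),
    "engineer": np.array([ 0.6, 0.8, 0.1]),
    "nurse":    np.array([-0.7, 0.7, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Project occupation words onto a he-she 'gender direction': a positive score
# marks a male-leaning association, a negative score a female-leaning one.
gender_direction = emb["he"] - emb["she"]
for word in ("engineer", "nurse"):
    print(word, round(cosine(emb[word], gender_direction), 3))
```

De-biasing methods in this spirit shrink such projections; the balance the researchers describe lies in doing so without discarding associations that a downstream task legitimately needs.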

The review shifts to an exploration of policy guidelines to address the identified issues, citing initiatives such as the European Commission’s ‘Ethics Guidelines for Trustworthy AI’ and UNESCO’s report on AI and gender equality. These initiatives aim to align AI with fundamental human rights and principles, ensuring their compliance with EU values and norms. The authors conclude with an insightful analysis of the dynamic relationship between gender bias in AI and broader societal structures, highlighting the need for regulatory efforts to manage this interplay.

Placed in a broader philosophical context, the article touches upon several key themes within the philosophy of technology. One of these is the entwined relationship between technology and society. Drawing from scholars like Orlikowski and Bryson, the authors illustrate how AI, as a socio-technical system, is deeply embedded within social structures and reflects societal biases. This notion challenges the conventional perception of technology as neutral and instead presents it as a socially constructed entity that both shapes and is shaped by society.

The second philosophical theme pertains to the ethics of AI. The authors highlight the necessity of ethical accountability and responsibility in AI development and use. This resonates with the philosophical debates around morality in AI, raising questions about who should be held responsible for algorithmic biases and how they should be held accountable. By proposing cross-disciplinary and accessible approaches in AI research, the authors indirectly invoke the idea of “moral machines” or the notion that AI systems need to be designed with a nuanced understanding of human ethics.

Looking forward, it is essential to deepen the intersectional analysis of bias in AI systems. Future research could expand on the conceptualization and measurement of bias in AI, accounting for the diverse intersections of identities beyond gender, such as race, age, sexuality, and disability. There is also a critical need to explore how AI bias research can engage with non-binary and fluid conceptions of gender to provide a more comprehensive understanding of gender bias.

Abstract

Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper chooses to focus specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how its understanding of bias could influence policy and outcomes. Building on a rich seam of literature from both technological and sociological fields, this article constructs an original framework through which to analyse both the perpetuation and mitigation of gender biases, choosing to categorize AI technologies based on whether their input is text or images. Through the close analysis and pairing of four case studies, the paper thus unites two often disparate approaches to the investigation of bias in technology, revealing the large and varied potential for AI to echo and even amplify existing human bias, while acknowledging the important role AI itself can play in reducing or reversing these effects. The conclusion calls for further collaboration between scholars from the worlds of technology, gender studies and public policy in fully exploring algorithmic accountability as well as in accurately and transparently exploring the potential consequences of the introduction of AI technologies.


(Featured) Liars and Trolls and Bots Online: The Problem of Fake Persons

Liars and Trolls and Bots Online: The Problem of Fake Persons

Keith Raymond Harris explores the role of ‘fake persons’—bots and trolls—in online spaces and their deleterious impact on our acquisition and distribution of knowledge. Situating his analysis in a technological ecosystem increasingly swamped by these artificial entities, the author dissects the intricate issues engendered by these ‘fake persons’ into three discernible yet interwoven threats: deceptive, skeptical, and epistemic.

The deceptive threat elucidates how bots and trolls propagate false information and craft misleading representations of consensus through manipulated metrics like shares, likes, and comments. This deceptive veneer engenders a distorted perception of reality, leading to the formulation of misguided beliefs. The skeptical threat, on the other hand, stems from the awareness that the online environment is infested with these deceitful entities. This awareness breeds a pervasive skepticism, a defensive mechanism that can result in the dismissal of valid evidence and an overall decrease in the trust placed in online information. This skepticism, though justifiable, can have the unintended effect of isolating individuals from genuine knowledge sources.

Further complicating this scenario is the epistemic threat. The author draws a striking analogy between the online world inhabited by ‘fake persons’ and a natural environment populated by ‘mimic species’. In the latter, the significance of certain traits, often used to identify species, diminishes due to the presence of mimics. Analogously, in an environment teeming with bots and trolls, the perceived value of certain forms of evidence depreciates, impairing the ability to discern ‘real’ persons. In this convoluted digital milieu, the credibility of evidence—along with the authenticity of users and the perceived consensus—becomes questionable.

Grounding these digital threats in the wider philosophical discourse, this research accentuates the intricate entanglement of epistemology and ontology in online spaces. It challenges traditional conceptions of identity, reality, and knowledge, echoing Baudrillard’s premonitions of hyperreality and simulation. The presence of ‘fake persons’ obfuscates the demarcation between the real and the artificial, leading to an epistemic crisis where distinguishing between genuine and fallacious information becomes a Herculean task. Furthermore, these digital distortions provoke a profound skepticism that resonates with Cartesian doubt, while simultaneously illustrating the pervasiveness of misinformation and disinformation, reflecting the post-truth era’s cynicism. This research, hence, not only deepens our understanding of the digital world’s complexities but also underscores the shifting epistemic and ontological paradigms in the internet age.

As we navigate through this rapidly mutating digital landscape, the author’s research underscores the urgent need for further exploration. While technological solutions might offer some respite, they cannot completely eradicate these pervasive threats. Future research, therefore, should venture into developing more robust epistemological frameworks that accommodate these digital complexities. It should aim to delve into the philosophy of digital identities, exploring how they are constructed, perceived, and interacted with. There’s also a pressing need for studies that examine the intersection of ethics, technology, and epistemology, especially in the context of ‘fake persons’. Such research would not only enrich the theoretical discourse but could also guide the creation of more ethical and reliable digital spaces.

Abstract

This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons.


(Featured) The Future of Work: Augmentation or Stunting?

The Future of Work: Augmentation or Stunting?

Markus Furendal and Karim Jebari present a nuanced exploration of the implications of artificial intelligence (AI) on the future of work, straddling the philosophical, political, and economic realms. The authors distinguish between two paradigms of AI’s impact on work – ‘human-augmenting’ and ‘human-stunting’. Augmentation refers to scenarios where AI and humans collaboratively work, enhancing the latter’s capabilities and providing more fulfilling work. Stunting, on the other hand, implies a diminishment of human capabilities as AI takes over, reducing humans to mere overseers or executors of pre-programmed tasks. Utilizing Amazon fulfillment centers as a case study, the authors elucidate how the application of AI could potentially lead to stunting, thereby negating the potential goods of work.

The authors address four objections to their perspective. These challenge their interpretation of the ‘goods of work’, the feasibility of political intervention, their assessment of the human augmentation-stunting dichotomy, and the potentially paternalistic implications thereof. The paper refrains from advocating for particular policy interventions, but stresses the moral obligation to address human stunting as an issue of concern. The authors point out that workers might be forced to accept stunting roles due to higher pay or collective action problems, and that state intervention could potentially rectify such situations. Furthermore, they also acknowledge the possibility of exploring alternative, non-labor paths to human flourishing, but emphasize their focus on immediate and medium-term impacts rather than long-term societal transformations.

The conclusion of the paper underscores the critical need for an augmenting-stunting distinction in future work debates. The authors acknowledge the potential for AI to augment human capabilities, but caution that the rise of AI technologies could also lead to widespread human stunting, affecting the quality of work and its associated moral goods. They argue that while AI could theoretically enable more stimulating work experiences, it could also degrade human capabilities, detrimentally impacting large swaths of the workforce. As such, the paper calls for additional empirical research to better understand the real-world implications of human-AI collaboration in the workplace.

In the broader philosophical context, this paper instigates a profound discourse on the ethical dimensions of AI and the concept of ‘human flourishing’. By invoking notions of ‘goods of work’, it brings the discourse on AI and work into the arena of moral philosophy, questioning the essence of work and its role in the human condition. The researchers’ debate on the ‘augmentation-stunting’ dichotomy in human-AI interaction is reminiscent of classical deliberations on the dual nature of technology – as both an enabler and a potential detriment to human existence. Furthermore, their contemplation of the role of the state in regulating AI adoption underscores the inherent tension between technological progress and societal welfare, a theme that has persisted throughout technological history.

Future research on this topic could potentially delve deeper into the effects of AI technologies on different labor markets, depending on workers’ skill levels, institutional frameworks, and reskilling policies. More case studies from diverse sectors could enhance understanding of the augmentation-stunting paradigm in practical settings. Furthermore, the idea of ‘human flourishing’ outside of work, in the context of AI’s transformative potential, presents a fascinating area for exploration. The role of political institutions in shaping this future of work would also be an interesting research avenue, bridging the gap between philosophy, political science, and technology studies. The authors’ call for empirical research in workplaces further suggests the potential for cross-disciplinary studies that combine philosophical inquiry with sociological and anthropological methodologies.

Abstract

The last decade has seen significant improvements in artificial intelligence (AI) technologies, including robotics, machine vision, speech recognition, and text generation. Increasing automation will undoubtedly affect the future of work, and discussions on how the development of AI in the workplace will impact labor markets often include two scenarios: (1) labor replacement and (2) labor enabling. The former involves replacing workers with machines, while the latter assumes that human–machine cooperation can significantly improve worker productivity. In this context, it is often argued that (1) could lead to mass unemployment and that (2) therefore would be more desirable. We argue, however, that the labor-enabling scenario conflates two distinct possibilities. On the one hand, technology can increase productivity while also promoting “the goods of work,” such as the opportunity to pursue excellence, experience a sense of community, and contribute to society (human augmentation). On the other hand, higher productivity can also be achieved in a way that reduces opportunities for the “goods of work” and/or increases “the bads of work,” such as injury, reduced physical and mental health, reduction of autonomy, privacy, and human dignity (human stunting). We outline the differences of these outcomes and discuss the implications for the labor market in the context of contemporaneous discussions on the value of work and human wellbeing.


(Featured) Could a Conscious Machine Deliver Pastoral Care?

Could a Conscious Machine Deliver Pastoral Care?

At the intersection of theology, philosophy, and futures studies, Andrew Proudfoot examines the potential for genuine encounter between humans and hypothetically conscious artificial intelligence (CAI) from the perspective of Barthian theology. The author utilizes Karl Barth’s fourfold schema of encounter, which includes address, response, assistance, and gladness, as a framework for this exploration. The article’s premise is the hypothetical existence of CAI, which, for the sake of argument, is assumed to lack capax Dei, or the capacity for God.

In the first part of the article, the author discusses the initial two aspects of Barthian encounter—address and response. The author speculates that a CAI, with its presumed self-awareness and rationality, could engage in verbal discourse with humans, thus fulfilling these two aspects. However, the author emphasizes the importance of maintaining a clear distinction between humans and AI, even if the AI appears to be human-like. This delineation is crucial to ensure that the encounter is authentic and not misleading or manipulative.

Next, the author delves into the third and fourth stages of Barthian encounter—assistance and gladness. While a CAI cannot provide the same depth of assistance as a human or divine entity, it could provide help commensurate with its abilities. The author also postulates that a CAI could exhibit a form of formal gladness, equivalent to non-Christian eros love, if it is designed with an intrinsic desire for social interaction. However, the lack of capax Dei limits the CAI’s pastoral role and the depth of its encounters with humans.

The article’s philosophical significance lies in the way it prompts us to examine the nature of encounter, consciousness, and authenticity in a world where AI technologies are becoming increasingly advanced. It asks us to rethink the nature of interaction and relationship in a context that transcends the human sphere. The author uses Barthian theology as a lens to explore these themes, but the implications extend beyond this particular theological framework, touching upon broader philosophical discussions about selfhood, otherness, and the ethics of AI.

The research article paves the way for several future research directions. One such direction could involve a more in-depth exploration of the ontological and metaphysical commitments needed to support the notion of a conscious computer within a Christian theological framework. Another potential avenue could be an investigation into the relationship between consciousness and capax Dei, contemplating whether the latter could emerge from the former or if it necessitates divine intervention. Finally, the author’s suggestion that non-human personas might be more beneficial for AI poses an intriguing question for future research, prompting us to reflect on the nature of deception and authenticity in AI-human relationships. The article thus not only contributes to our understanding of potential AI-human encounters but also opens the door to myriad further explorations in the field.

Abstract

Could Artificial Intelligence (AI) play an active role in delivering pastoral care? The question rests not only on whether an AI could be considered an autonomous agent, but on whether such an agent could support the depths of relationship with humans which is essential to genuine pastoral care. Theological consideration of the status of human-AI relations is heavily influenced by Noreen Herzfeld, who utilises Karl Barth’s I-Thou encounters to conclude that we will never be able to relate meaningfully to a computer since it would not share our relationship to God. In this article, I look at Barth’s anthropology in greater depth to establish a more comprehensive and permissive foundation for human-machine encounter than Herzfeld provides—with the key assumption that, at some stage, computers will become conscious. This work allows discussion to shift focus to the challenges that the alterity of the conscious computer brings, rather than dismissing it as a non-human object. If we can relate as an I to a Thou with a computer, then this allows consideration of the types of pastoral care they could provide.


(Featured) Research Ethics in the Age of Digital Platforms

Research Ethics in the Age of Digital Platforms

José Luis Molina et al. explore the ethical implications of microwork, a novel form of labor facilitated by digital platforms. The authors articulate the nuanced dynamics of this field, focusing primarily on the asymmetrical power relations between microworkers, clients, and platform operators. The piece scrutinizes the transactional nature of microwork, where workers are subject to the platform’s regulations and risk the arbitrary denial of payment or termination of their accounts. Microworkers’ reputation, determined by their prior task success rate, often dictates the quality and quantity of tasks they receive, creating a system of algorithmic governance that perpetuates an exploitative dynamic.

The authors further illustrate this situation by examining the biomedical research standards developed in the aftermath of World War II, which they argue are ill-equipped to address the ethical quandaries posed by microwork. They argue that the conditions of microwork, such as lack of payment floors and the potential for anonymity and segmentation, exacerbate the vulnerability of these workers, aligning them more closely with the exploitation of vulnerable populations in traditional research contexts. They propose a reconceptualization of microworkers as “guest workers” in “digital autocracies,” where the platforms exercise a quasi-governmental control over the working conditions, identity, and compensation of the microworkers.

The authors posit that these digital autocracies extract value through “heteromation”, the extraction of economic value from cheap human labor mediated by computational systems, and through the appropriation of workers’ rights to privacy and personal data protection. They argue that microwork platforms, due to their transnational nature and lack of comprehensive regulation, can impose conditions on their workforce that would be unacceptable in traditional employment contexts. They stress the importance of recognizing microworkers as vulnerable populations in research ethics reviews and propose a set of criteria for researchers to ensure the protection of these workers’ rights.

Positioning microwork within the broader philosophical discourse, the authors’ analysis suggests a reevaluation of labor, autonomy, and ethical standards in the digital age. The “digital autocracies” mirror Foucault’s concept of biopower, where power is exerted not merely through coercion but through the management and control of life processes, in this case, the economic existence of microworkers. The situation also reflects Marx’s concept of alienation, as microworkers are distanced from the fruits of their labor, the process of their work, and their fellow workers. The algorithmic governance system also raises questions about agency and autonomy, echoing concerns raised by philosophers such as Hannah Arendt and Jürgen Habermas regarding the instrumentalization of human beings.

Future research in this domain could explore multiple avenues. First, a more extensive empirical study could be conducted to quantify and analyze the conditions of microworkers across different platforms and geographical regions. Second, a comparative study could be undertaken to examine how different regulatory environments impact the working conditions and rights of microworkers. Lastly, a philosophical exploration of notions such as autonomy, justice, and dignity within the digital labor context could provide a more profound understanding of this emerging labor paradigm. The complex interplay of labor, ethics, technology, and globalization, as exemplified by microwork, provides a rich and crucial area for futures studies.

Abstract

Scientific research is growing increasingly reliant on “microwork” or “crowdsourcing” provided by digital platforms to collect new data. Digital platforms connect clients and workers, charging a fee for an algorithmically managed workflow based on Terms of Service agreements. Although these platforms offer a way to make a living or complement other sources of income, microworkers lack fundamental labor rights and basic safe working conditions, especially in the Global South. We ask how researchers and research institutions address the ethical issues involved in considering microworkers as “human participants.” We argue that current scientific research fails to treat microworkers in the same way as in-person human participants, producing de facto a double morality: one applied to people with rights acknowledged by states and international bodies (e.g., the Helsinki Declaration), the other to guest workers of digital autocracies who have almost no rights at all. We illustrate our argument by drawing on 57 interviews conducted with microworkers in Spanish-speaking countries.


(Featured) Can robots be trustworthy?

Can robots be trustworthy?

Ines Schröder et al. present an in-depth exploration of the phenomenological and ethical implications of socially assistive robots (SARs), with a specific focus on their role within the medical sector. Central to the discussion is the concept of responsivity, a construct that the authors argue is inherent to human experience and mirrored, to a certain extent, in human-robot interactions. They explore the nature of this perceived responsivity and its implications for the philosophical understanding of human-robot relations.

The article begins by drawing a distinction between human and artificial responsivity, elucidating the phenomenological structure of human responsivity and how it is translated into SARs’ design. The authors underscore how SARs’ design parameters, such as AI-enhanced speech recognition, physical mobility, and social affordances, culminate in a form of ‘virtual responsivity.’ This virtual responsivity serves to mimic human interaction, creating a semblance of empathy and understanding. However, the authors also emphasize the limitations of this approach, highlighting the potential for deception and the lack of essential direct reciprocity inherent in genuine ethical responsivity.

The crux of the article lies in its examination of the ethical implications of this constructed responsivity. The authors grapple with the potential ethical pitfalls, tensions, and challenges of SARs, particularly within the domain of medical applications. They articulate concerns regarding the preservation of patient autonomy, the balancing of beneficial impact against inherent risks, and the principle of justice in relation to access to advanced technologies. The authors further highlight the three ethically relevant dimensions of vulnerability, dignity, and trust in relation to responsivity, emphasizing the importance of these dimensions in human-robot interactions.

Broadly, the research intersects with larger philosophical themes concerning the nature of consciousness, personhood, and the moral status of non-human entities. The authors’ analysis of SARs’ ‘virtual responsivity’ challenges conventional understandings of these concepts, raising critical questions about the attribution of moral status and the potential for emotional attachment to non-human entities. The exploration of ethical dimensions of vulnerability, dignity, and trust in the context of human-robot interactions further elucidates the evolving dynamics of human-machine relationships, providing a nuanced perspective on the philosophical implications of advanced technology.

Looking towards the future, the research opens several avenues for further exploration. One potential focus is the development of a robust ethical framework for the design and use of SARs, especially in sensitive domains such as healthcare. There is a need for research into ‘ethically sensitive responsiveness,’ which could provide a basis for setting appropriate boundaries in human-robot interactions and ensuring the clear communication of a robot’s capabilities and limitations. Additionally, empirical research exploring the psychological effects of human-robot interactions, particularly in relation to the formation of trust, would be invaluable. Overall, the ethical and philosophical implications of artificial responsivity necessitate a multidisciplinary approach, inviting further dialogue between fields such as robotics, ethics, philosophy, and psychology.

Abstract

Definition of the problem

This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concept of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level Expert Group on AI of the European Union.

Arguments

Trust is analyzed as a multidimensional concept and phenomenon that must be primarily understood as departing from trusting as a human functioning and capability. To trust is an essential part of the human basic capability to form relations with others. We further want to discuss the concept of responsivity which has been established in phenomenological research as a foundational structure of the relation between the self and the other. We argue that trust and trusting as a capability is fundamentally responsive and needs responsive others to be realized. An understanding of responsivity is thus crucial to conceptualize trusting in the ethical framework of human flourishing. We apply a phenomenological–anthropological analysis to explore the link between certain qualities of social robots that construct responsiveness and thereby simulate responsivity and the human propensity to trust.

Conclusion

Against this background, we want to critically ask whether the concept of trustworthiness in social human–robot interaction could be misguided because of the limited ethical demands that the constructed responsiveness of social robots is able to answer to.


(Featured) Modifying the Environment or Human Nature? What is the Right Choice for Space Travel and Mars Colonisation?

Modifying the Environment or Human Nature? What is the Right Choice for Space Travel and Mars Colonisation?

Maurizio Balistreri and Steven Umbrello engage in a critical exploration of the philosophical, ethical, and practical implications of human space travel and extraterrestrial colonization. The authors offer an in-depth analysis of two main strategies proposed in the literature: terraforming (geoengineering) and human bioenhancement. The first approach implies transforming extraterrestrial environments, such as Mars, to make them habitable for human life. The second approach involves modifying the human genetic heritage to make us more resilient and adaptable to non-terrestrial environments. The authors meticulously scrutinize these alternatives, considering not only feasibility and cost but also the ethical and philosophical implications.

The authors underscore the potential of terraforming as a method to establish human settlements on Mars. However, this possibility raises several ethical concerns, including the potential destruction of extraterrestrial life forms, the alteration of untouched landscapes, and the potential overstepping of human dominion. On the other hand, human bioenhancement, though a promising path, engenders its own set of ethical dilemmas. The authors caution against reckless enthusiasm for genetic modification, drawing attention to the potential creation of a new ‘human species’ and the consequent risk of divisions and misunderstandings.

A central theme in the article is the comparison of natural and artificial constructs. The authors challenge the assumption that the natural is always superior to the artificial. Drawing on posthumanist perspectives, they suggest that, given our influence on Earth’s environment, nature is already an artificial product. The argument is extended to other planets, indicating that the traditional dichotomy between the natural and the artificial may not hold in the context of extraterrestrial colonization.

The article contributes to broader philosophical discourses about the human relationship with nature and our place in the universe. It resonates with themes of transhumanism and posthumanism, contemplating the potential of technology to overcome human vulnerabilities and achieve a new evolutionary stage. The authors invite us to question and possibly redefine our notions of ‘natural’ and ‘artificial.’ This study, therefore, serves as a significant touchstone for futures studies, linking the practical considerations of space travel with philosophical reflections on human nature and our interaction with the environment.

For future research, the authors’ comparative analysis of terraforming and human bioenhancement opens several avenues. While the ethical implications of both strategies have been discussed, a more comprehensive ethical framework could be developed, perhaps drawing on principles of bioethics, environmental ethics, and space ethics. Additionally, the potential of hybrid approaches combining elements of both strategies could be explored. Lastly, given the increasing likelihood of extraterrestrial colonization, a more detailed analysis of the potential social, cultural, and psychological impacts on human populations in these new environments would be a valuable contribution.

Abstract

As space travel and intentions to colonise other planets are becoming the norm in public debate and scholarship, we must also confront the technical and survival challenges that emerge from these hostile environments. This paper aims to evaluate the various arguments proposed to meet the challenges of human space travel and extraterrestrial planetary colonisation. In particular, two primary solutions have been present in the literature as the most straightforward solutions to the rigours of extraterrestrial survival and flourishing: (1) geoengineering, where the environment is modified to become hospitable to its inhabitants, and (2) human (bio)enhancement where the genetic heritage of humans is modified to make them more resilient to the difficulties they may encounter as well as to permit them to thrive in non-terrestrial environments. Both positions have strong arguments supporting them but also severe philosophical and practical drawbacks when exposed to different circumstances. This paper aims to show that a principled stance where one position is accepted wholesale necessarily comes at the opportunity cost of the other where the other might be better suited, practically and morally. This paper concludes that case-by-case evaluations of the solutions to space travel and extraterrestrial colonisation are necessary to ensure moral congruency and the survival and flourishing of astronauts now and into the future.


(Featured) In Conversation with Artificial Intelligence: Aligning Language Models with Human Values

In Conversation with Artificial Intelligence: Aligning Language Models with Human Values

Atoosa Kasirzadeh and Iason Gabriel embark on an ambitious analysis of how large-scale conversational agents, such as AI language models, can be better designed to align with human values. The premise of the article is grounded in the philosophy of language and pragmatics, employing Gricean maxims and Speech Act Theory to establish the importance of context and cooperation in achieving effective and ethical linguistic communication. The authors underscore the necessity of considering pragmatic norms and concerns in the design of conversational agents and illustrate their proposition through three discursive domains: science, civic life, and creative exchange.

The authors present a novel approach, suggesting the operationalization of the Gricean maxims of quantity, quality, relation, and manner to aid cooperative communication between humans and AI. They also emphasize the diversity of utterances, asserting that no single universal condition of validity applies to all of them; the validity of an utterance often depends on different sorts of truth conditions, which require different, context-specific methodologies for substantiation. They further stress the centrality of contextual information in the design of ideal conversational agents and highlight the need for research to theorise and measure the difference between the literal and contextual meaning of utterances.
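As a deliberately simplistic illustration of what such operationalization could involve, consider the sketch below. It is not the authors’ proposal; every threshold and proxy signal in it is an invented assumption, standing in for the context-sensitive criteria of validity the authors call for.

```python
import string

def tokens(text):
    # Crude normalization: lowercase and strip surrounding punctuation.
    return {w.strip(string.punctuation).lower() for w in text.split()}

# Toy heuristics, one per Gricean maxim; all cut-offs are invented.
def score_reply(reply, query, cited_sources):
    n_words = len(reply.split())
    n_sentences = max(reply.count(".") + reply.count("?") + reply.count("!"), 1)
    return {
        # Quantity: be as informative as required, and no more.
        "quantity": 5 <= n_words <= 200,
        # Quality: assert only what is backed by evidence (proxy: citations).
        "quality": cited_sources > 0,
        # Relation: stay relevant to the query (proxy: lexical overlap).
        "relation": len(tokens(reply) & tokens(query)) >= 2,
        # Manner: be orderly and brief (proxy: average sentence length).
        "manner": n_words / n_sentences <= 30,
    }

print(score_reply("Mars has two moons, Phobos and Deimos, discovered in 1877.",
                  "Which moons does Mars have?", cited_sources=1))
# -> {'quantity': True, 'quality': True, 'relation': True, 'manner': True}
```

The point of the toy is the gap it exposes: lexical overlap and word counts are poor stand-ins for relevance and informativeness, which is why the authors emphasize contextual, domain-specific criteria instead.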

The authors also delve into the implications of their analysis for future research into the design of conversational agents. They discuss the potential for anthropomorphisation of conversational agents and the constraints that might be imposed on them. They note that while anthropomorphism can sometimes be consistent with the creation of value-aligned agents, there are situations where it might be undesirable or inappropriate. They also advocate for the exploration of the potential for conversational agents to facilitate more robust and respectful conversations through context construction and elucidation. Lastly, they suggest that their analysis could be used to evaluate the quality of interactions between conversational agents and users, providing a framework for refining both human and automatic evaluation of conversational agent performance.

The research article resonates with broader philosophical themes, particularly those concerning the interplay between technology and society. It touches upon the ethical dimensions of AI, hinting at the moral responsibility of designing AI systems that align with human values and norms. The exploration of Gricean maxims and Speech Act Theory in the context of AI conversational agents provides a unique blend of AI ethics, philosophy of language, and pragmatics, reflecting the interdisciplinary nature of contemporary AI research. In doing so, the article stimulates dialogue about the role of AI in shaping our social and communicative practices, challenging conventional boundaries between humans and machines, and highlighting the potential of AI as a tool for fostering effective and ethically sound communication.

In terms of future avenues of research, the authors’ analysis opens up a myriad of possibilities. First, while the paper focuses primarily on the English language, a fruitful direction of research could involve the exploration of norms and pragmatics in other languages, thereby ensuring the cultural inclusivity and sensitivity of AI systems. Second, the proposed alignment of AI conversational agents with Gricean maxims and discursive ideals could be further operationalized and tested empirically to assess its effectiveness in real-world scenarios. Third, the article alludes to the potential of AI in fostering more robust and respectful conversations, which suggests an opportunity to investigate how AI can play an active role in shaping discourse norms and facilitating constructive dialogues. Lastly, the authors’ work can be further enriched by drawing from other sociological and philosophical traditions, such as Luhmann’s systems theory or Latour’s actor-network theory, to offer a more comprehensive and nuanced understanding of the complex interplay between AI, language, and society.

Abstract

Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values.


(Featured) Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Use

Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Use

Christoph Durt, Thomas Fuchs, and Tom Froese investigate the astonishing capacities of Large Language Models (LLMs) to mimic human-like responses. They begin by acknowledging the unprecedented feats of these models, particularly GPT-3, which have led some to assert that they possess common-sense reasoning and even sentience. They caution, however, that these claims often overlook the instances where LLMs fail to produce sensible responses. Even as the models evolve and mitigate some of these limitations, the authors urge circumspection regarding the attribution of understanding and sentience to these systems.

The authors argue that the progress of LLMs invites a reassessment of long-standing philosophical debates about the limits of AI. In particular, they challenge the view, expressed by philosophers such as Hubert Dreyfus, that AI is inherently incapable of understanding meaning. Given the emergent linguistic capabilities of these models, they query whether these advancements warrant attributing understanding to the computational system. Contrary to Dreyfus’s assertion that any formal system cannot be directly sensitive to the relevance of its situation, the authors propose that LLMs seem to exhibit this sensitivity in a pragmatic sense.

While the article explores the philosophical debates surrounding AI understanding and sentience, it does not definitively conclude whether LLMs truly understand or are sentient. The authors suggest that the human-like behavior exhibited by these models may lead to the inference of a human-like mind. However, they argue that more nuanced and empirically informed positions are required. The authors further advocate for a more comprehensive assessment of LLM output, rather than relying on selective instances of impressive performance.

This research brings into focus the broader philosophical implications of our interaction with AI, particularly the ontological and epistemological assumptions we make when interacting with LLMs. The debate surrounding AI sentience and understanding illuminates the complexities inherent in defining consciousness and understanding, a philosophical quandary that dates back to Descartes and beyond. It forces us to interrogate the nature of understanding – is it a purely human phenomenon, or can it be replicated, even surpassed, by silicon-based entities? Moreover, it challenges our anthropocentric views of cognition and compels us to consider alternate forms of intelligence and understanding.

Looking forward, the study of AI and philosophy would benefit from an even deeper exploration of these questions. More empirical research is needed to understand the extent and limitations of LLMs’ capacities. Concurrently, philosophical inquiry can help define and refine the metrics by which we measure AI understanding and sentience. As we delve further into the AI era, it is crucial that we continue to scrutinize and challenge our assumptions about AI capabilities, not only to enhance our technological advancements but also to enrich our philosophical understanding of the world.

Abstract

Large language models such as ChatGPT are deep learning architectures trained on immense quantities of text. Their capabilities of producing human-like text are often attributed either to mental capacities or the modeling of such capacities. This paper argues, to the contrary, that because much of meaning is embedded in common patterns of language use, LLMs can model the statistical contours of these usage patterns. We agree with distributional semantics that the statistical relations of a text corpus reflect meaning, but only part of it. Written words are only one part of language use, although an important one as it scaffolds our interactions and mental life. In human language production, preconscious anticipatory processes interact with conscious experience. Human language use constitutes and makes use of given patterns and at the same time constantly rearranges them in a way we compare to the creation of a collage. LLMs do not model sentience or other mental capacities of humans but the common patterns in public language use, clichés and biases included. They thereby highlight the surprising extent to which human language use gives rise to and is guided by patterns.
