(Featured) On Artificial Intelligence and Manipulation


Writing on the ethics of emerging technologies, Marcello Ienca critically examines the role of digital technologies, particularly artificial intelligence, in facilitating manipulation. The research offers a comprehensive analysis of the nature of manipulation, its manifestation in the digital realm, its impact on human agency, and its ethical ramifications. The findings illuminate the nuanced interplay between technology, manipulation, and ethics, situating the discussion of technology within the broader philosophical discourse.

Ienca distinguishes between the concepts of persuasion and manipulation, underscoring that manipulation is effective precisely because it bypasses the subject's rational defenses. Furthermore, they unpack how artificial intelligence and other digital technologies contribute to manipulation, with a detailed exploration of tactics such as personalization, emotional appeal, social influence, and repetition, alongside factors that modulate their effectiveness, such as trustworthiness, user awareness, and time constraints. Finally, they propose a set of mitigation strategies, spanning regulatory, technical, and ethical approaches, that aim to protect users from manipulation.

The Nature of Manipulation

Within the discourse on digital ethics, the issue of manipulation has garnered notable attention. Ienca begins with an account of manipulation, revealing its layered complexity. They distinguish manipulation from persuasion, contending that while both aim to alter behavior or attitudes, manipulation uniquely bypasses the rational defenses of the subject. They posit that manipulation’s unethical nature emerges from this bypassing, as it subverts the individual’s autonomy. While persuasion is predicated on providing reasons, manipulation strategically leverages non-rational influence to shape behavior or attitudes. The author thus highlights the ethical chasm between these two forms of influence.

Building on this, the author contends that manipulation becomes especially potent in digital environments, given the technological means at manipulators’ disposal. Digital technologies, such as AI, facilitate an unprecedented capacity to bypass rational defenses by harnessing a broad repertoire of tactics, including personalized messaging, emotional appeal, and repetition. These tactics, which exploit the cognitive vulnerabilities of individuals, are coupled with the broad reach and immediate feedback afforded by digital platforms, magnifying the scope and impact of manipulation. As such, Ienca’s research contributes to a deeper understanding of the nature of digital manipulation and its divergence from persuasion.

Digital Technologies and the Unraveling of Manipulation

Ienca critically engages with the symbiotic relationship between digital technologies and manipulation. They elucidate that contemporary platforms, such as social media and search engines, employ personalized algorithms to curate user experiences. While such personalization is often marketed as enhancing user satisfaction, the author contends it serves as a conduit for manipulation. These algorithms invisibly mould user preferences and beliefs, thereby posing a potent threat to personal autonomy. The author extends this analysis to AI technologies. A key dimension of the argument is the delineation of “black-box” AI systems, whose decision-making processes cannot be inspected, leaving users susceptible to undisclosed manipulative tactics. The inability to scrutinize the processes underpinning these decisions amplifies their potential to manipulate users. The author’s analysis thus illuminates the subversive role digital technologies play in exacerbating the risk of manipulation, informing a nuanced understanding of the ethical complexities inherent to digital environments.

Ienca posits that such manipulation essentially thrives on two key elements: informational asymmetry and the exploitation of cognitive biases. Informational asymmetry arises when the algorithms governing digital environments wield extensive knowledge about the user, engendering a power imbalance. This knowledge is used to shape the user experience subtly, heightening users’ susceptibility to manipulation. The exploitation of cognitive biases further entrenches this manipulation by capitalizing on inherent human tendencies, subtly directing user choices. An example provided is the use of default settings, which exploit the status quo bias and contribute to passive consent, a potent form of manipulation. The author’s exploration of these elements illustrates the insidious mechanisms by which digital manipulation functions, enriching our understanding of the dynamics at play within digital landscapes.
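The default-settings example lends itself to a toy demonstration. The sketch below is purely illustrative (the function, the population size, and the assumption that only around 10% of users ever override a pre-selected option are mine, not figures from the paper); it shows how the same population yields starkly different “consent” rates depending solely on which option the designer pre-selects:

```python
import random

def simulate_consent(default_opt_in: bool, n_users: int = 10_000,
                     p_override: float = 0.1) -> float:
    """Fraction of users who end up opted in under a given default.

    p_override is the assumed (illustrative) probability that a user
    actively flips the pre-selected choice; everyone else passively
    accepts the default -- the status quo bias in action.
    """
    opted_in = 0
    for _ in range(n_users):
        choice = default_opt_in
        if random.random() < p_override:
            choice = not choice  # the rare user who overrides the default
        opted_in += int(choice)
    return opted_in / n_users

print(f"opt-in by default:  {simulate_consent(True):.0%} 'consent'")
print(f"opt-out by default: {simulate_consent(False):.0%} 'consent'")
```

Nothing about the users’ preferences changes between the two runs; only the designer’s default does, which is why passive consent is better read as a design outcome than as a genuine choice.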

Mitigation Strategies for Digital Manipulation and the Broader Philosophical Discourse

Ienca proposes a multi-pronged strategy to curb the pervasiveness of digital manipulation, relying significantly on user education and digital literacy and contending that informed users can better identify and resist manipulation attempts. Transparency, particularly around the use of algorithms and data-processing practices, is also stressed, as it facilitates users’ understanding of how their data are utilized. From a regulatory standpoint, the author discusses the role of governing bodies in enforcing laws that protect user privacy and promote transparency and accountability; the EU’s proposed AI Act (2021) is highlighted as a significant stride in this direction. The author also advocates for ethical design, suggesting that prioritizing user cognitive liberty, privacy, transparency, and control in digital technology can reduce its manipulative potential, and highlights policy proposals aimed at enshrining a neuroright to cognitive liberty and mental integrity. Synthesizing technical, regulatory, and ethical strategies, Ienca underscores the necessity of cooperation among multiple stakeholders to cultivate a safer digital environment.

This study on digital manipulation connects to a broader philosophical discourse surrounding the ethics of technology and information dissemination, particularly in the age of proliferating artificial intelligence. It is situated at the intersection of moral philosophy, moral psychology, and the philosophy of technology, inquiring into the agency and autonomy of users within digital spaces and the ethical responsibility of technology designers. The discussion of ‘neurorights’ brings to the fore the philosophical debate on personal freedom and cognitive liberty, reinforcing the question of how these rights ought to be defined and protected in a digitized world. The author’s treatment of manipulation not as an anomaly but as an inherent characteristic of pre-designed digital environments challenges traditional understandings of free will and consent in these spaces. This work contributes to the broader discourse on the power dynamics between technology users and creators, a topic of increasing relevance as AI and digital technologies become ubiquitous.

Abstract

The increasing diffusion of novel digital and online sociotechnical systems for arational behavioral influence based on Artificial Intelligence (AI), such as social media, microtargeting advertising, and personalized search algorithms, has brought about new ways of engaging with users, collecting their data and potentially influencing their behavior. However, these technologies and techniques have also raised concerns about the potential for manipulation, as they offer unprecedented capabilities for targeting and influencing individuals on a large scale and in a more subtle, automated and pervasive manner than ever before. This paper provides a narrative review of the existing literature on manipulation, with a particular focus on the role of AI and associated digital technologies. Furthermore, it outlines an account of manipulation based on four key requirements: intentionality, asymmetry of outcome, non-transparency and violation of autonomy. I argue that while manipulation is not a new phenomenon, the pervasiveness, automaticity, and opacity of certain digital technologies may give rise to a new type of manipulation, called “digital manipulation”. I call “digital manipulation” any influence exerted through the use of digital technology that is intentionally designed to bypass reason and to produce an asymmetry of outcome between the data processor (or a third party that benefits thereof) and the data subject. Drawing on insights from psychology, sociology, and computer science, I identify key factors that can make manipulation more or less effective, and highlight the potential risks and benefits of these technologies for individuals and society. I conclude that manipulation through AI and associated digital technologies is not qualitatively different from manipulation through human–human interaction in the physical world. However, some functional characteristics make it potentially more likely to evade the subject’s cognitive defenses. This could increase the probability and severity of manipulation. Furthermore, it could violate some fundamental principles of freedom or entitlement related to a person’s brain and mind domain, hence called neurorights. To this end, an account of digital manipulation as a violation of the neuroright to cognitive liberty is presented.


(Featured) Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers


As our technological capabilities advance at an accelerating pace, so too does the pertinence of the hypothetical conundrum posed by super-intelligent artificial intelligence (AI) and its implications for human freedom. Robert Sparrow examines these implications, drawing extensively from political philosophy and conceptions of agency, and provides an analysis of the societal implications of super-intelligence from a uniquely philosophical standpoint. The author adopts a nuanced perspective, proposing that even benevolent, friendly AI may threaten human freedom in its capability to dominate, consciously or not, its human counterparts. It is this paradox, situated within the broader philosophical discourse of freedom versus domination, that provides the nucleus of this analysis.

The research is grounded in the seminal work of philosopher Philip Pettit, particularly his doctrine of republican freedom. This doctrine centers on the belief that freedom is not merely the absence of interference (negative liberty) but is, critically, the absence of domination or the ability to interfere at will. Pettit famously encapsulated this concept in his metaphor of the “eyeball test,” positing that one is free only when they can look others in the eye without fear or subservience. As we explore the intersection of Pettit’s philosophy and the hypothetical reality of a super-intelligent AI, the profound significance of this test in determining the future of human freedom in a world shared with AI comes sharply into focus.

The “Friendly AI” Problem

Robert Sparrow draws an acute contrast between the “friendliness” of an AI and its potential to dominate humanity. The Friendly AI problem stems from the plausible notion that a super-intelligent AI, regardless of its benevolence or adherence to human values, may still pose a significant threat to human freedom due to its inherent capacity for domination. A benevolent AI could feasibly operate in a dictatorial manner, modulating its interference in human life based on its own determination of human interests. However, a critical distinction must be drawn: a benevolent dictator, even one acting in our interests, is still a dictator. As the author pointedly remarks, to be “free” to act as one wishes only at the behest of another entity, even a well-meaning one, is not true freedom.

Herein lies the crux of the Friendly AI problem: the ability of an AI entity to act in accordance with human interests does not automatically guarantee human freedom. Freedom, as delineated by Pettit’s republicanism, requires resilience; it must not dissolve upon the whims of a more powerful entity. Thus, for the exercise of power by AI to be compatible with human freedom, it must be possible for humans to resist it. One might propose that a genuinely Friendly AI would solicit human input before interfering in our affairs, serving as an efficient executor of our will rather than as a prescriptive entity. Yet this proposition does not satisfactorily resolve the core tension between the AI’s power and our freedom. Ultimately, any freedom we might enjoy under a superintelligent AI would be contingent upon the AI’s will, a position which reveals the vulnerability and potential for domination inherent in the Friendly AI concept.

Superintelligence

Bostrom’s notion of Superintelligence, as outlined by Sparrow, posits an AI entity capable of outperforming the best human brains in nearly every economically relevant field. However, the potential domination by such an entity forms the bedrock of the philosophical conflict between benevolence and domination. Drawing on Pettit’s theory of republicanism, it becomes clear that benevolence alone, even if perfectly calibrated to human interests, does not suffice to guarantee freedom. The very ability of a superintelligent AI to interfere unilaterally in human affairs, regardless of its intent, embodies the antithesis of Pettit’s non-domination principle. The analysis further draws attention to the paradox inherent in relying on an external, powerful entity for the regulation of our interests, effectively highlighting the existential risk associated with superintelligent AI. While a superintelligent AI may act in line with human interests, its potential for domination raises questions about the plausibility of achieving a truly “Friendly AI”, a challenge that resonates with the larger discourse on freedom and domination in philosophical studies.

Freedom, Status, and the ‘Eyeball Test’

The question of human freedom in the context of a superintelligent AI intersects with Pettit’s conceptualization of the ‘eyeball test’. In his philosophy, the notion of freedom pivots on the individual’s status within society – a status conferred when one can ‘look others in the eye without reason for fear or deference’. This perspective becomes especially poignant when viewed in the light of a superintelligent entity’s potential dominion. Under such circumstances, the capacity for humans to pass the ‘eyeball test’ could be seriously undermined, as the superintelligent AI, by virtue of its cognitive superiority, could induce both fear and deference. The state of being subjected to the AI’s superior will could consequently impair our ability to ‘look it in the eye’, thereby eroding the human status required for true freedom. This analysis deepens the philosophical understanding of freedom and its inextricable link with status, while simultaneously challenging the concept of a ‘Friendly AI’ from the perspective of republican theory.

The Negative Liberty Doctrine and Technocratic Framing of AI

Berlin’s bifurcation of liberty into negative and positive spheres finds particular resonance in the context of superintelligent AI, and as such, provides a useful framework for interpreting the dominance problem. From a negative liberty perspective – that is, the absence of coercion or interference – the advent of a superintelligent AI could be seen as promoting freedom. However, the technocratic framing of AI, often characterized by an overemphasis on instrumental logic and utility maximization, may inadvertently favor this negative liberty doctrine, potentially to the detriment of positive freedom. That is to say, while an AI’s superior decision-making capabilities could minimize human interference in various spheres of life, it could also inadvertently curtail positive freedom – the opportunity for self-realization and autonomy. As such, this underscores the importance of incorporating broader philosophical considerations into AI research and development, beyond the narrow confines of technocratic perspectives.

This fusion of philosophy and AI research necessitates the introduction of considerations beyond the merely technical and into the sphere of ethics and moral philosophy. The potential for domination by superintelligent AI systems underscores the need for research that specifically targets these concerns, particularly in relation to upholding principles of human dignity, autonomy, and positive freedom. However, achieving this requires a re-evaluation of our current paradigms of AI development that often valorize utility maximization and efficiency. Instead, an approach that truly appreciates the full depth of the challenge must also involve a careful examination of the philosophical underpinnings that inform the design and operation of AI systems. As such, future research in this arena ought to be a collaborative effort between philosophers, ethicists, AI researchers, and policymakers, aimed at defining a coherent set of values and ethical guidelines for the development and use of superintelligent AI.

Abstract

When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what they call the “Friendly AI problem”. Roughly speaking this is the question of how we might ensure that the AI that will develop from the first AI that we create will remain sympathetic to humanity and continue to serve, or at least take account of, our interests. In this paper I draw on the “neo-republican” philosophy of Philip Pettit to argue that solving the Friendly AI problem would not change the fact that the advent of super-intelligent AI would be disastrous for humanity by virtue of rendering us the slaves of machines. A key insight of the republican tradition is that freedom requires equality of a certain sort, which is clearly lacking between pets and their owners. Benevolence is not enough. As long as AI has the power to interfere in humanity’s choices, and the capacity to do so without reference to our interests, then it will dominate us and thereby render us unfree. The pets of kind owners are still pets, which is not a status which humanity should embrace. If we really think that there is a risk that research on AI will lead to the emergence of a superintelligence, then we need to think again about the wisdom of researching AI at all.


(Featured) A neo-aristotelian perspective on the need for artificial moral agents (AMAs)


Alejo José G. Sison and Dulce M. Redín take a critical look at the concept of artificial moral agents (AMAs), especially in relation to artificial intelligence (AI), from a neo-Aristotelian ethical standpoint. The authors open with a compelling critique of the arguments in favor of AMAs, asserting that they are neither inevitable nor guaranteed to bring practical benefits. They elucidate that the term ‘autonomous’ may not be fitting, as AMAs are, at their core, bound to the algorithmic instructions they follow. Moreover, the term ‘moral’ is questioned due to the inherently external nature of the proposed morality. According to the authors, the true moral good is internally driven and cannot be separated from the agent or the manner in which it is achieved.

The authors proceed to suggest that the arguments against the development of AMAs have been insufficiently considered, proposing a neo-Aristotelian ethical framework as a potential remedy. This approach places emphasis on human intelligence, grounded in biological and psychological scaffolding, and distinguishes between the categories of heterotelic production (poiesis) and autotelic action (praxis), highlighting that the former can accommodate machine operations, while the latter is strictly reserved for human actors. Further, the authors propose that this framework offers greater clarity and coherence by explicitly denying bots the status of moral agents due to their inability to perform voluntary actions.

Lastly, the authors explore the potential alignment of AI and virtue ethics. They scrutinize the potential for AI to impact human flourishing and virtues through their actions or the consequences thereof. Herein, they feature the work of Vallor, who has proposed the design of “moral machines” by embedding norms, laws, and values into computational systems, thereby focusing on human-computer interaction. However, they caution that such an approach, while intriguing, may be inherently flawed. The authors also examine two possible ways of embedding ethics in AI: value alignment and virtue embodiment.

The research article provides an interesting contribution to the ongoing debate on the potential for AI to function as moral agents. The authors adopt a neo-Aristotelian ethical framework to add depth to the discourse, providing a fresh perspective that integrates virtue ethics and emphasizes the role of human agency. This perspective brings to light the broader philosophical questions around the very nature of morality, autonomy, and the distinctive attributes of human intelligence.

Future research avenues might revolve around exploring more extensively how virtue ethics can interface with AI and if the goals that Vallor envisages can be realistically achieved. Further philosophical explorations around the assumptions of agency and morality in AI are also needed. Moreover, studies examining the practical implications of the neo-Aristotelian ethical framework, especially in the realm of human-computer interaction, would be invaluable. Lastly, it may be insightful to examine the authors’ final suggestion of approaching AI as a moral agent within the realm of fictional ethics, a proposal that opens up a new and exciting area of interdisciplinary research between philosophy, AI, and literature.

Abstract

We examine Van Wynsberghe and Robbins’ (Sci Eng Ethics 25:719–735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI Soc, 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ essay nor Formosa and Ryan’s is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2020, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.


(Featured) The unwitting labourer: extracting humanness in AI training


Fabio Morreale et al. examine the nature and implications of unseen digital labor within the realm of artificial intelligence (AI). The article, structured methodically, dissects the issue through a series of case studies, including Google’s reCAPTCHA, Spotify’s recommendation algorithms, and OpenAI’s language model GPT-3, and then extrapolates five characteristics defining “unwitting laborers” in AI systems: unawareness, non-consensual labor, unwaged and uncompensated labor, misappropriation of original intent, and the condition of being unwitting.

The study meticulously scrutinizes the fundamental premise of unawareness, arguing that many individuals unknowingly perform labor that trains AI systems. It elaborates that such activities often occur without the participant’s conscious awareness that their interactions are being used to improve machine learning algorithms. The research then delves into the realm of non-consensual labor. The authors point out that while traditional working agreements require consent from both parties, such consent is often absent or uninformed in the context of digital labor for AI training, thus resulting in exploitation.
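The reCAPTCHA case makes this dual-use mechanism concrete: a single user interaction simultaneously gates access and produces a training label. The sketch below is a deliberately simplified, hypothetical reconstruction of that pattern (the class names, fields, and flow are illustrative assumptions, not Google’s actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Challenge:
    known_image: str    # label already known; used to verify the user
    unknown_image: str  # label not yet known; the user's answer becomes data
    known_label: str

@dataclass
class CaptchaService:
    training_set: list = field(default_factory=list)

    def submit(self, challenge: Challenge, answers: dict) -> bool:
        # 1. Gate access: check the user against the image we already know.
        human = answers[challenge.known_image] == challenge.known_label
        # 2. Extract labour: if they pass, silently keep their answer to the
        #    unknown image as a free training label.
        if human:
            self.training_set.append(
                (challenge.unknown_image, answers[challenge.unknown_image]))
        return human

service = CaptchaService()
c = Challenge("img_017.jpg", "img_342.jpg", known_label="traffic light")
service.submit(c, {"img_017.jpg": "traffic light", "img_342.jpg": "bus"})
print(service.training_set)  # [('img_342.jpg', 'bus')] -- unwaged labelling
```

The user experiences only the first step; the second step, in which their answer is appropriated as unwaged labeling work, is invisible, which is precisely the unawareness the authors foreground.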

In terms of compensation, the authors challenge the traditional notion of labor, arguing that even though unwitting laborers receive no wage or acknowledgement for their efforts, the aggregate data they provide can yield significant value for the companies leveraging it. The research further highlights the misappropriation of original intent, illustrating that the purpose of the labor performed is often obscured or transfigured, causing a significant divergence between the intentions of the exploited and those of the exploiter.

The article’s argument prompts a re-evaluation of our understanding of labor and consent, raising questions that align with broader philosophical discourses around the ethics of AI and labor rights in the digital age. By examining the human-AI interaction through the lens of exploitation, the authors contribute to the growing discourse around AI ethics, invoking notions reminiscent of Marxist critiques of capitalism, where labor is commodified and surplus value is extracted without adequate compensation or acknowledgement.

Furthermore, the study enriches the dialogue surrounding the notion of consent, autonomy, and freedom in the digital age, forcing us to reconsider how these concepts should be reframed in light of the increasing integration of AI into our everyday lives. It also raises significant questions about the role and place of human cognition in the age of AI, suggesting that our uniquely human skills and experiences are not just being utilized, but potentially commodified and exploited, adding another dimension to the ongoing discourse on cognitive capitalism.

Looking forward, the authors’ arguments open numerous avenues for further exploration. There is a need for studies that delve into the societal and individual impacts of such exploitation—how it influences our understanding of labor, our autonomy, and our interactions with technology. Additional research could also explore potential mechanisms for informing and compensating users for their contribution to AI training. Moreover, investigation into policy interventions and regulatory mechanisms to mitigate the exploitation of such digital labor would be invaluable. Ultimately, the authors’ research catalyses a dialogue about the balance of power between individuals and technology companies, and the importance of ensuring this balance in an increasingly AI-integrated future.

Abstract

Many modern digital products use Machine Learning (ML) to emulate human abilities, knowledge, and intellect. In order to achieve this goal, ML systems need the greatest possible quantity of training data to allow the Artificial Intelligence (AI) model to develop an understanding of “what it means to be human”. We propose that the processes by which companies collect this data are problematic, because they entail extractive practices that resemble labour exploitation. The article presents four case studies in which unwitting individuals contribute their humanness to develop AI training sets. By employing a post-Marxian framework, we then analyse the characteristics of these individuals and describe the elements of the capture-machine. Then, by describing and characterising the types of applications that are problematic, we set a foundation for defining and justifying interventions to address this form of labour exploitation.


(Featured) Reinforcement learning and artificial agency


Patrick Butlin explores whether reinforcement learning (RL) systems could possess the capacity to “act for reasons”, a concept traditionally associated with conscious and goal-directed agents. Drawing upon philosophical literature, specifically the work of Hanna Pickard (2015) and Helen Steward (2012), the author outlines two criteria that must be met for something to be considered an agent: the entity in question must have goals, and it must interact with its environment to pursue those goals. The author asserts that both model-free and model-based RL systems meet these criteria and can thus be considered as having minimal agency.

Building upon the foundation of minimal agency, the author makes a compelling argument for RL systems acting for reasons. The argument hinges on the philosophical work of Jennifer Hornsby (2004) and Susanne Mantel (2018), where the former associates acting for reasons with general-purpose abilities, and the latter distinguishes three competences involved in acting for reasons: epistemic, volitional, and executive sub-competences. The author posits that model-based RL systems, with their capacity to model the transition function, meet these criteria, as they learn and store information about their environment that influences their future actions, forming a sort of ‘descriptive representation’.
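To make the underlying technical distinction concrete, the sketch below contrasts the two kinds of system in miniature (an illustrative toy under assumed simplifications, notably a deterministic environment, rather than Butlin’s own formalism). The model-free agent merely caches action values, while the model-based agent learns a transition and reward model, a “descriptive representation” of its environment, and selects actions by forward search over that model:

```python
from collections import defaultdict

class ModelFreeAgent:
    """Q-learning: caches action values; keeps no model of the world."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma = actions, alpha, gamma

    def act(self, state):
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next
                                        - self.q[(s, a)])

class ModelBasedAgent:
    """Learns a transition and reward model and plans by forward search."""
    def __init__(self, actions, gamma=0.9, depth=3):
        self.t = {}  # (state, action) -> predicted next state
        self.r = {}  # (state, action) -> predicted reward
        self.actions, self.gamma, self.depth = actions, gamma, depth

    def learn(self, s, a, r, s_next):
        self.t[(s, a)], self.r[(s, a)] = s_next, r  # assumes determinism

    def _value(self, state, depth):
        if depth == 0:
            return 0.0
        return max(self.r.get((state, a), 0.0)
                   + self.gamma * self._value(self.t.get((state, a), state),
                                              depth - 1)
                   for a in self.actions)

    def act(self, state):
        # Forward search: simulate each candidate action with the learned
        # model instead of reading off a cached value.
        def lookahead(a):
            nxt = self.t.get((state, a), state)
            return (self.r.get((state, a), 0.0)
                    + self.gamma * self._value(nxt, self.depth - 1))
        return max(self.actions, key=lookahead)
```

On the account summarized above, it is the second agent’s use of stored environmental information in selecting actions that elevates it from minimal agency toward action for reasons.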

In contrast to Mantel, the author suggests that the distinction between volitional and executive sub-competences and the emphasis on motivation might not be necessary to the account. While Mantel uses motivation interchangeably with desire and intention, the author posits that this distinction might be more relevant to human agency and less so for artificial RL systems. The author also refutes the notion that the lack of desires or volitions disqualifies artificial RL systems from acting for reasons. They conclude that while model-based RL systems may lack desires, their interaction with their environment to achieve set goals provides sufficient grounds to attribute minimal agency to them and thus the capacity to act for reasons.

The article adds significantly to the discourse on machine agency, challenging conventional philosophical norms that tie agency and the capacity to act for reasons to consciousness or biological entities. It raises compelling points about how RL systems, through their goal-directed behavior and interaction with the environment, exhibit traits of minimal agency. This exploration places the discussion of machine agency within broader philosophical themes such as the nature of consciousness, the demarcation of human and non-human agency, and the implications of attributing agency to artificial systems.

Future research could focus on extending the arguments in this article, exploring the implications of attributing even more sophisticated forms of agency to artificial RL systems. One direction could be to look at whether these systems, as they continue to develop, could eventually meet even stricter criteria for agency that go beyond minimal agency. Another avenue would be to study the ethical and societal implications of recognizing artificial RL systems as agents. Would it, for instance, be meaningful or necessary to establish an ethical framework for interacting with these systems? Additionally, research could examine how these concepts might evolve in tandem with the continued development of artificial RL systems and other forms of artificial intelligence.

Abstract

There is an apparent connection between reinforcement learning and agency. Artificial entities controlled by reinforcement learning algorithms are standardly referred to as agents, and the mainstream view in the psychology and neuroscience of agency is that humans and other animals are reinforcement learners. This article examines this connection, focusing on artificial reinforcement learning systems and assuming that there are various forms of agency. Artificial reinforcement learning systems satisfy plausible conditions for minimal agency, and those which use models of the environment to perform forward search are capable of a form of agency which may reasonably be called action for reasons.


(Featured) Liars and Trolls and Bots Online: The Problem of Fake Persons


Keith Raymond Harris explores the role of ‘fake persons’ (bots and trolls) in online spaces and their deleterious impact on our acquisition and distribution of knowledge. Situating his analysis in a technological ecosystem increasingly swamped by these artificial entities, the author dissects the intricate issues engendered by these ‘fake persons’ into three discernible yet interwoven threats: deceptive, skeptical, and epistemic.

The deceptive threat elucidates how bots and trolls propagate false information and craft misleading representations of consensus through manipulated metrics like shares, likes, and comments. This deceptive veneer engenders a distorted perception of reality, leading to the formulation of misguided beliefs. The skeptical threat, on the other hand, stems from the awareness that the online environment is infested with these deceitful entities. This awareness breeds a pervasive skepticism, a defensive mechanism that can result in the dismissal of valid evidence and an overall decrease in the trust placed in online information. This skepticism, though justifiable, can have the unintended effect of isolating individuals from genuine knowledge sources.

Further complicating this scenario is the epistemic threat. The author draws a striking analogy between the online world inhabited by ‘fake persons’ and a natural environment populated by ‘mimic species’. In the latter, the significance of certain traits, often used to identify species, diminishes due to the presence of mimics. Analogously, in an environment teeming with bots and trolls, the perceived value of certain forms of evidence depreciates, impairing the ability to discern ‘real’ persons. In this convoluted digital milieu, the credibility of evidence—along with the authenticity of users and the perceived consensus—becomes questionable.

Grounding these digital threats in the wider philosophical discourse, this research accentuates the intricate entanglement of epistemology and ontology in online spaces. It challenges traditional conceptions of identity, reality, and knowledge, echoing Baudrillard’s premonitions of hyperreality and simulation. The presence of ‘fake persons’ obfuscates the demarcation between the real and the artificial, leading to an epistemic crisis where distinguishing between genuine and fallacious information becomes a Herculean task. Furthermore, these digital distortions provoke a profound skepticism that resonates with Cartesian doubt, while simultaneously illustrating the pervasiveness of misinformation and disinformation, reflecting the post-truth era’s cynicism. This research, hence, not only deepens our understanding of the digital world’s complexities but also underscores the shifting epistemic and ontological paradigms in the internet age.

As we navigate through this rapidly mutating digital landscape, the author’s research underscores the urgent need for further exploration. While technological solutions might offer some respite, they cannot completely eradicate these pervasive threats. Future research, therefore, should venture into developing more robust epistemological frameworks that accommodate these digital complexities. It should aim to delve into the philosophy of digital identities, exploring how they are constructed, perceived, and interacted with. There’s also a pressing need for studies that examine the intersection of ethics, technology, and epistemology, especially in the context of ‘fake persons’. Such research would not only enrich the theoretical discourse but could also guide the creation of more ethical and reliable digital spaces.

Abstract

This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons.


(Featured) Algorithmic Nudging: The Need for an Interdisciplinary Oversight


Christian Schmauder et al. critically assess the implications and risks of employing “black box” AI systems for the development and implementation of personalized nudges in various domains of life. They begin by outlining the power and promise of algorithmic nudging, drawing attention to how AI-driven nudges could bring about widespread benefits in areas such as health, finance, and sustainability. However, they contend that outsourcing nudging to opaque AI systems poses challenges in terms of understanding the underlying reasons for their effectiveness and addressing potential unintended consequences.

The authors delve deeper into the nuances of algorithmic nudging by examining the role of personalized advice in influencing human decision-making. They highlight a key concern that arises when AI systems attempt to maximize user satisfaction: the tendency of the algorithms to exploit cognitive biases in order to achieve desired outcomes. Consequently, the effectiveness of the AI-developed nudges might come at the cost of truthfulness, ultimately undermining the very goals they were designed to achieve.
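The dynamic the authors worry about can be made concrete with a toy optimizer. In the sketch below (entirely illustrative: the two framings, their click-through probabilities, and the epsilon-greedy parameters are assumptions, not data from the paper), an engagement-maximizing bandit is given two nudge framings and reliably converges on whichever one users respond to, with truthfulness playing no role in the objective:

```python
import random

# Assumed click-through probabilities for two nudge framings (illustrative):
arms = {"accurate_framing": 0.30, "bias_exploiting_framing": 0.55}
counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}

for _ in range(5000):
    # epsilon-greedy: mostly exploit the framing with the best running mean,
    # occasionally explore the other one.
    if random.random() < 0.1:
        arm = random.choice(list(arms))
    else:
        arm = max(values, key=values.get)
    reward = 1.0 if random.random() < arms[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

print(values)  # the bias-exploiting framing wins; truth never enters the loop
```

The optimizer settles on the bias-exploiting framing simply because nothing in its objective penalizes untruthfulness, which is the sense in which effectiveness can come “at the cost of truthfulness”.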

To address this issue, the authors advocate for the need to look “under the hood” of AI systems, arguing that understanding the underlying cognitive processes harnessed by these systems is crucial for mitigating unintended side effects. They emphasize the importance of interdisciplinary collaboration between computer scientists, cognitive scientists, and psychologists in the development, monitoring, and refinement of AI systems designed to influence human decision-making.

The authors’ exploration of the limitations and risks of “black box” AI nudges raises broader philosophical concerns, particularly in relation to the ethics of autonomy, transparency, and accountability. These concerns call into question the balance between leveraging AI-driven nudges to benefit society and preserving individual autonomy and freedom of choice. Furthermore, the analysis highlights the tension between relying on AI’s predictive power and fostering a deeper understanding of the mechanisms driving human behavior.

This paper provides a valuable foundation for future research on the ethical and philosophical implications of AI-driven nudging. Further investigation could delve into the possible approaches to designing more transparent and explainable AI systems, exploring how such systems might enhance, rather than hinder, human decision-making processes. Additionally, researchers could examine the moral responsibilities of AI developers and regulators, studying the ethical frameworks necessary to guide the development and deployment of AI nudges that respect human autonomy, values, and dignity. Ultimately, a deeper understanding of these complex philosophical questions will be instrumental in realizing the full potential of AI-driven nudges while safeguarding against their potential pitfalls.

Abstract

Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods, new possibilities emerge for how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to “black box” AI systems means that the ultimate reasons for why such nudges work, that is, the underlying human cognitive processes that they harness, will often be unknown. In this paper, we unpack this concern by considering a series of examples and case studies that demonstrate how AI systems can learn to harness biases in human judgment to reach a specified goal. Drawing on an analogy in a philosophical debate concerning the methodology of economics, we call for interdisciplinary oversight of AI systems that are tasked and deployed to nudge human behaviours.


(Featured) Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making


Giorgia Lorenzini et al. examine the evolving nature of the doctor-patient relationship in the context of integrating artificial intelligence (AI) into healthcare. They focus on the shared decision-making (SDM) process between doctors and patients, a consensual partnership founded on communication and respect for voluntary choices. The authors argue that the introduction of AI can potentially enhance SDM, provided it is implemented with care and consideration. The paper addresses the communication between doctors and AI and the communication of this interaction to patients, evaluating its potential impact on SDM and proposing strategies to preserve both doctors’ and patients’ autonomy.

The authors explore the communication and autonomy challenges arising from AI integration into clinical practice. They posit that AI’s influence could unintentionally limit doctors’ autonomy by heavily guiding their decisions, which in turn raises questions about the balance of power in the decision-making process. The paper emphasizes the importance of doctors understanding AI’s recommendations and checking for errors while also being competent in working with AI systems. By examining the “black box problem” of AI’s opaqueness, the authors argue that explainability is crucial for fostering the AI-doctor relationship and preserving doctors’ autonomy.

The paper then investigates doctor-patient communication and autonomy within the context of AI integration. The authors argue that in order to promote patients’ autonomy and encourage their participation in SDM, doctors must disclose and discuss AI’s involvement in the clinical evaluation process. They also contend that AI should consider patients’ preferences and unique situations, thus ensuring that their values are respected and that they are able to participate actively in the SDM process.

In relating the research to broader philosophical issues, the authors’ examination of the AI-doctor-patient relationship aligns with questions surrounding the ethical and moral implications of AI in society. As AI increasingly permeates various aspects of our lives, its impact on human autonomy, agency, and moral responsibility becomes a focal point for philosophical inquiry. The paper contributes to this discourse by delving into the specific context of healthcare and the evolving dynamics of the doctor-patient relationship, providing a microcosm for understanding the broader implications of AI integration in human decision-making processes.

As the authors outline the potential benefits and challenges of incorporating AI into the SDM process, future research could investigate the practical implementation of AI in various clinical settings, evaluating the effectiveness of AI-doctor collaboration in promoting SDM. Further research might also address the training and education necessary for medical professionals to adapt to AI integration, ensuring a seamless transition that optimizes patient care. Additionally, exploring methods for incorporating patients’ values into AI algorithms could provide a path to more personalized and autonomy-respecting AI-assisted healthcare.

Abstract

Artificial intelligence (AI) based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. For this reason, AI-based CDSS has an impact on the doctor–patient relationship, shaping their decisions with its suggestions. We may be on the verge of a paradigm shift, where the doctor–patient relationship is no longer a dual relationship, but a triad. This paper analyses the role of AI-based CDSS for shared decision-making to better comprehend its promises and associated ethical issues. Moreover, it investigates how certain AI implementations may instead foster the inappropriate paradigm of paternalism. Understanding how AI relates to doctors and influences doctor–patient communication is essential to promote more ethical medical practice. Both doctors’ and patients’ autonomy need to be considered in the light of AI.


(Featured) Mobile health technology and empowerment


Karola V. Kreitmair critically evaluates the notion of empowerment that has become pervasive in the discourse surrounding direct-to-consumer (DTC) mobile health technologies. The author argues that while these technologies claim to empower users by providing knowledge, enabling control, and fostering responsibility, the actual outcome is often not genuine empowerment but merely the perception of empowerment. This distinction has significant implications for individuals who might be seeking to affect behavior change and improve their health and well-being.

The paper meticulously breaks down the concept of empowerment into five key features: knowledgeability, control, responsibility, availability of good choices, and healthy desires. The author presents a thorough review of the evidence related to the efficacy, privacy, and security concerns surrounding the use of m-health technologies. They demonstrate that these technologies, while marketed as empowering tools, often fail to live up to their promises and, in some cases, even contribute to negative health outcomes or exacerbate existing issues such as disordered eating.

The core of the argument lies in the distinction between genuine empowerment and the mere perception of empowerment. The author posits that, rather than fostering true empowerment, DTC m-health technologies often create a psychological illusion of control and knowledgeability. This illusion can lead users to form unrealistic expectations and place undue burden on themselves to effect change when the necessary conditions for change are not met. This “empowerment paradox” ultimately calls into question the purported benefits of DTC m-health technologies and the societal narrative around personal responsibility and control over one’s health.

This paper’s findings resonate with broader philosophical discussions around individual autonomy, agency, and the role of technology in shaping our lives. The empowerment paradox highlights the complex interplay between the individual and the structural factors that shape health outcomes. It raises crucial questions about the ethical implications of profit-driven technologies and the responsibilities of technology developers, marketers, and users in navigating an increasingly technologically driven healthcare landscape. The insights from this paper contribute to ongoing debates about the nature of empowerment and the limits of individual autonomy in an age where our lives are increasingly mediated by technology.

Future research should focus on the prevalence and consequences of the empowerment paradox in the context of DTC m-health technologies. A deeper understanding of how individuals make decisions around their health in the presence of perceived empowerment could inform the development of more effective and ethically responsible technologies. Additionally, examining the social and cultural factors that influence the marketing and adoption of these technologies may provide insight into how the industry can foster genuine empowerment, rather than perpetuating an illusion of control. Ultimately, a more nuanced understanding of the relationship between DTC m-health technologies and empowerment will pave the way for a more responsible and equitable approach to healthcare in the digital age.

Abstract

Mobile Health (m-health) technologies, such as wearables, apps, and smartwatches, are increasingly viewed as tools for improving health and well-being. In particular, such technologies are conceptualized as means for laypersons to master their own health, by becoming “engaged” and “empowered” “managers” of their bodies and minds. One notion that is especially prevalent in the discussions around m-health technology is that of empowerment. In this paper, I analyze the notion of empowerment at play in the m-health arena, identifying five elements that are required for empowerment. These are (1) knowledge, (2) control, (3) responsibility, (4) the availability of good choices, and (5) healthy desires. I argue that at least sometimes, these features are not present in the use of these technologies. I then argue that instead of empowerment, it is plausible that m-health technology merely facilitates a feeling of empowerment. I suggest this may be problematic, as it risks placing the burden of health and behavior change solely on the shoulders of individuals who may not be in a position to affect such change.


(Featured) Introducing a four-fold way to conceptualize artificial agency


Maud van Lier presents a methodological framework for understanding artificial agency in the context of basic research, particularly in AI-driven science. The Four-Fold Framework, as the author coins it, is a pluralistic and pragmatic approach that incorporates Gricean modeling, analogical modeling, theoretical modeling, and conceptual modeling. The motivation behind this framework lies in the increasingly active role that AI systems are taking on in scientific research, warranting the development of a robust conceptual foundation for these ‘agents.’

The author critically assesses Sarkia’s neo-Gricean framework, which offers three modeling strategies for conceptualizing artificial agency. While acknowledging its merits, the author identifies a crucial shortcoming in its lack of a semantic dimension, which is necessary to bridge the gap between theoretical models and practical implementation in basic research. To address this issue, the author proposes the addition of conceptual modeling as a fourth strategy, ultimately forming the Four-Fold Framework. This new framework aims to provide a comprehensive account of artificial agency in basic research by accommodating different interpretations and addressing the semantic dimension of artificial agency.

By implementing the Four-Fold Framework, the author posits that researchers will be able to develop a more inclusive and pragmatically plausible understanding of artificial agency in the context of AI-driven science. The framework sets the stage for a robust conceptual foundation that can accommodate the complexities and nuances of artificial agency as AI continues to evolve and expand its role in scientific research.

This paper’s exploration of artificial agency also contributes to the broader philosophical discourse on agency and autonomy in the context of artificial intelligence. As AI systems become more advanced, the distinction between human and artificial agents blurs, raising questions about the nature of agency, responsibility, and ethical considerations. The Four-Fold Framework provides a methodological tool to examine these complex issues, grounding the analysis of artificial agency within a rigorous and comprehensive structure.

Future research can expand upon the Four-Fold Framework by investigating its applicability to other emerging areas in AI, such as AI ethics, human-AI collaboration, and autonomous decision-making. Additionally, researchers can explore how the Four-Fold Framework might inform the development of AI-driven science policy and governance, ensuring that ethical, legal, and societal implications are considered in the integration of artificial agency in scientific research. By refining and extending the Four-Fold Framework, the academic community can better anticipate and navigate the challenges and opportunities that artificial agency presents in the rapidly evolving landscape of AI-driven science.

Abstract

Recent developments in AI research suggest that an AI-driven science might not be that far off. The research of Melnikov et al. (2018) and that of Evans et al. (2018) show that automated systems can already have a distinctive role in the design of experiments and in directing future research. Common practice in many of the papers devoted to the automation of basic research is to refer to these automated systems as ‘agents’. What is this attribution of agency based on and to what extent is this an important notion in the broader context of an AI-driven science? In an attempt to answer these questions, this paper proposes a new methodological framework, introduced as the Four-Fold Framework, that can be used to conceptualize artificial agency in basic research. It consists of four modeling strategies, three of which were already identified and used by Sarkia (2021) to conceptualize ‘intentional agency’. The novelty of the framework is the inclusion of a fourth strategy, introduced as conceptual modeling, that adds a semantic dimension to the overall conceptualization. The strategy connects to the other strategies by modeling both the actual use of ‘artificial agency’ in basic research as well as what is meant by it in each of the other three strategies. This enables researchers to bridge the gap between theory and practice by comparing the meaning of artificial agency in both an academic as well as in a practical context.
