The Ethics of Terminology: Can We Use Human Terms to Describe AI?

Abstract

Despite facing significant criticism for assigning human-like characteristics to artificial intelligence, phrases like “trustworthy AI” are still commonly used in official documents and ethical guidelines. It is essential to consider why institutions continue to use these phrases even though they are controversial. This article critically evaluates various reasons for using these terms, including ontological, legal, communicative, and psychological arguments. All of these justifications share a common feature: they seek to justify the official use of terms like “trustworthy AI” by appealing to the need to reflect pre-existing facts, be they the ontological status of AI, common ways of representing it, or legal categories. The article challenges these justifications as they are observed in the fields of AI ethics and AI science communication, taking aim at two main arguments. The first is the notion that ethical discourse can move forward without philosophical clarification, bypassing existing debates. The second holds that anthropomorphic terms are acceptable because they are consistent with the common concepts of AI held by non-experts; this claim overstates the existing evidence and ignores the possibility that folk beliefs about AI are inconsistent and closer to semi-propositional beliefs. The article sounds a strong warning against the use of human-centric language when discussing AI, both on grounds of principle and because of its potential consequences, arguing that such terminology risks shaping public opinion in ways that could have negative outcomes.

The philosophical discourse on artificial intelligence (AI) often probes the boundaries of an anthropocentric worldview, pivoting on the use of human attributes to describe and assess AI. In this context, the research article by Ophelia Deroy presents a compelling inquiry into our linguistic and cognitive tendency to ascribe human characteristics, particularly “trustworthiness,” to artificial entities. To unravel the philosophical implications of this anthropomorphism, the author explores three conceptual frameworks: AI as a new ontological category, AI as an extension of the human category, and AI as the object of semi-propositional beliefs. The divergence among these perspectives underscores the complexity of the issue, highlighting how our conceptions of AI shape our interactions with and attitudes towards it.
In addition to ontological and communicative aspects, the article scrutinizes the legal dimension of AI personhood. It analyzes the merits and shortcomings of the legal argument for ascribing personhood to AI, juxtaposing it with the established notion of corporate personhood. Although this comparison offers certain pragmatic and epistemic advantages, it does not unequivocally endorse the uncritical application of human terminology to AI. Through this multi-faceted analysis, the research article integrates perspectives from philosophy, cognitive science, and law, extending the ongoing discourse about AI into uncharted territories. The examination of AI within this framework thus emerges as an indispensable part of philosophical futures studies.
Understanding Folk Concepts of AI
The exploration of folk concepts of AI is critical in understanding how people conceive and interpret artificial intelligence within their worldview. Ophelia Deroy meticulously dissects these concepts by challenging the prevalent ascription of ‘trustworthiness’ to AI. The article emphasizes the potential mismatch between our cognitive conception of trust in humans and the attributes usually associated with AI, such as reliability or predictability. The focus is not only on the logical inconsistencies of such anthropomorphic attributions but also on the potential for miscommunication they could engender, especially given the complexity and variability of the term ‘trustworthiness’ across cultures and languages.
The author employs an interesting analytical angle by exploring the notion of AI as a possible extension of the human category, or alternatively, as a distinct ontological category. The question at hand is whether people perceive AI as fundamentally different from humans or merely view them as extreme non-prototypical cases of humans. This consideration reflects the complex cognitive landscape we navigate when dealing with AI, pointing towards the potential ontological ambiguity surrounding AI. Understanding these folk concepts and the mental models they reflect not only enriches our comprehension of AI from a sociocultural perspective but also yields important insights for the development and communication strategies of AI technologies.
Human Terms, Their Implications, and the Legal Argument
The linguistic choice of using human terms such as “trustworthiness” to describe AI, arguably entrenched in anthropocentric reasoning, poses substantial problems. The author identifies three interpretations of how people categorize AI: an extension of the human category, a distinct ontological category, or a semi-propositional belief akin to religious or spiritual constructs. This last interpretation is particularly illuminating, suggesting that people might hold inconsistent beliefs about AI without considering them irrational. This offers a crucial insight into how human language shapes our understanding and discourse about AI, potentially fostering misconceptions. Yet, the author points out, there is a lack of empirical evidence supporting the appropriateness of applying such human-centric terms to AI, raising questions about the legitimacy of this linguistic practice in both scientific and broader public contexts.
In the discussion of AI’s anthropomorphic portrayal, Deroy introduces a compelling legal perspective. Drawing parallels with the legal status granted to non-human entities like corporations, the author investigates whether AI could be treated as a “legal person,” a concept that could reconcile the use of human terms in AI discourse. However, this argument presents its own set of challenges and limitations. Any text employing such terms must make clear that “trust” is used analogically, applying to legal persons rather than actual persons, a nuance that many texts overlook. Moreover, the justification for invoking such a legal fiction must weigh the potential benefits against possible costs or risks, a task best left to legal experts. Thus, despite its merits, the legal argument does not provide an unproblematic justification for humanizing AI discourse.
The Broader Philosophical Discourse and Future Directions
This study is an important contribution to the broader philosophical discourse, illuminating the intersection of linguistics, ethics, and futures studies. The argument challenges the conventional notion of language as a neutral medium, stressing the normative power of language in shaping societal perceptions of AI. This aligns with the poststructuralist argument that reality is socially constructed, extending it to a technological context. The insight that folk concepts, embedded in language, influence our collective vision of AI’s role echoes phenomenological philosophies, which underscore the role of intersubjectivity in shaping our shared reality. The ethical implications arising from the anthropomorphic portrayal of AI resonate with moral philosophy, particularly debates on moral agency and personhood. Thus, this study reinforces the growing realization that philosophical reflection is integral to our navigation of an increasingly AI-infused future.
Furthermore, the research points towards several promising avenues for future investigation. The most apparent is an extension of this study across diverse cultures and languages to explore how varying linguistic contexts may shape differing conceptions of AI, revealing cultural variations in anthropomorphizing technology. A comparative study might yield valuable insights into the societal implications of folk concepts across the globe. Additionally, an exploration into the real-world impact of anthropomorphic language in AI discourse, such as its effects on policy-making and public sentiment towards AI, would be an enlightening sequel. Lastly, this work paves the way for developing an ethical framework to guide the linguistic portrayal of AI in public discourse, a timely topic given the accelerating integration of AI into our daily lives. Thus, this research sets a fertile ground for multidisciplinary inquiries into linguistics, sociology, ethics, and futures studies.