Artificial intelligence (AI) and its interaction with human language present a challenging yet intriguing frontier in both linguistics and philosophy. AI's ability to process and generate language has advanced significantly, with tools such as GPT-4 demonstrating an impressive capacity to imitate human-like text generation. However, this research article by Jacob Hesse draws attention to an understudied dimension—AI's capabilities in dealing with metaphors. The author dissects the complexities of metaphor interpretation, positioning it as an intellectual hurdle for AI that tests the boundaries of machine language comprehension. The article calls into question whether AI, despite its technical prowess, can successfully navigate the subtleties and nuances involved in understanding, interpreting, and creating metaphors, a quintessential aspect of human communication.
The research article ventures into the philosophical implications of AI's competence with three specific types of metaphors: Twice-Apt-Metaphors, presuppositional pretence-based metaphors, and self-expressing Indirect Discourse Metaphors (IDMs). The author suggests that these metaphor types require certain faculties, such as aesthetic appreciation, a higher-order Theory of Mind, and affective experiential states, which might be absent in AI. This analysis reveals a paradox: AI, an embodiment of logical and rational computation, grapples with the emotional and experiential realm of metaphors. It thus invites us to critically reflect on the nature and limits of machine learning, providing a compelling starting point for our exploration into the philosophy of AI's language understanding.
Analysis
The research contributes a nuanced analysis of AI’s interaction with metaphors, taking into consideration linguistic, psychological, and philosophical dimensions. It focuses on three types of metaphors: Twice-Apt-Metaphors, presuppositional pretence-based metaphors, and self-expressing IDMs. The author argues that each metaphor type presents unique interpretative challenges that push the boundaries of AI’s language understanding. For instance, Twice-Apt-Metaphors require an aesthetic judgment, presuppositional pretence-based metaphors demand a higher-order Theory of Mind, and self-expressing IDMs necessitate an understanding of affective experiential states. The article posits that these metaphor types may lay bare potential limitations of AI due to the absence of these cognitive and affective faculties.
This comprehensive analysis is underpinned by a philosophical exploration of the nature of AI. The author leverages the arguments of Alan Turing and John Searle to engage in a broader debate about whether AI can possess mental states and consciousness. Turing's perspective that successful AI behavior in dealing with figurative language might suggest consciousness is juxtaposed with Searle's argument against attributing internal states to AI. This dialectic frames the discourse on the potential and limitations of AI in understanding metaphors. Consequently, the research article navigates the intricate interplay between AI's computational prowess and the nuances of human language, offering an analysis that enriches our understanding of AI's metaphor interpretation capabilities.
Theory of Mind, Affective and Experiential States, and AI
Where AI and metaphor interpretation are concerned, the research invokes the theory of mind as an essential conceptual tool. Specifically, the discussion of presuppositional pretence-based metaphors emphasizes the necessity of a higher-order theory of mind for their interpretation—a capability that current AI models lack. The author elaborates that this kind of metaphor requires the ability to simulate pretence while assuming the addressee's perspective, effectively necessitating an understanding of another's mental states—an ability attributed to conscious beings. This proposition challenges the notion that AI, as currently conceived, can adequately simulate human-like understanding of language, underscoring the fundamental gap between processing information and genuine comprehension imbued with conscious, subjective experience. The argument not only extends the discussion about AI's ability to handle complex metaphors but also ventures into the philosophical debate on whether machines could, in principle, develop consciousness or an equivalent functional attribute.
Regarding affective and experiential states, the author emphasizes their indispensable role in understanding the metaphors known as self-expressing IDMs. These metaphors, as outlined by the author, necessitate an emotional resonance and experiential comparison on the part of the listener—an attribute currently unattainable for AI models. The author argues that without internal affective and experiential states, an AI's responses to these metaphors would likely be less apt than human responses. This perspective raises profound questions about the nature of AI, pivoting the conversation toward whether machines can ever achieve the depth of understanding inherent to human cognition. The author acknowledges the controversy surrounding this assumption, illuminating the enduring philosophical debate around consciousness, internal states, and their potential existence within the realm of artificial intelligence.
Conscious Machines and Implications for Linguistics and Philosophy
Turing's philosophy of conscious machines is integral to the article's discourse, allowing it to expand into the wider intellectual milieu of AI consciousness. The research invokes Turing's counter-argument to Sir Geoffrey Jefferson's assertion, thereby stimulating a deeper conversation on AI's potential to possess mental and emotional states. Turing's contention against Jefferson's solipsistic argument holds that if we attribute consciousness to other humans despite not experiencing their internal states, we should, by parity of reasoning, be open to the idea of conscious machines. Through this engagement with Turing's thinking, the author underscores the seminal contribution of Turing's dialogue example, in which an interrogator and a machine discuss metaphoric language. This excerpt presents a pertinent, and as yet unresolved, challenge for AI: the ability to handle complex, poetic language that requires deeper, affective understanding. Thus, Turing's perspective on conscious machines emerges as a significant philosophical vantage point within the research, with implications that extend beyond linguistics to broader questions about the future of machine intelligence.
The author's research effectively brings into focus the intertwined destinies of linguistics, philosophy, and AI, stimulating a philosophical debate with practical ramifications. It poses crucial challenges to prevalent theories of metaphor interpretation that presuppose a sense of aesthetic pleasure, a higher-order theory of mind, and internal experiential or affective states. If future AI systems successfully handle twice-apt, presuppositional pretence-based, and certain IDM metaphors, then the cognitive prerequisites for understanding these metaphors could require reconsideration. This eventuality could disrupt established thinking in linguistics and philosophy, prompting scholars to rethink the very foundation of their theories about metaphors and figurative language. Yet if AI systems fail to improve their aptitude for metaphorical language, this may strengthen the author's hypothesis that metaphor interpretation requires mental capabilities that computer programs lack. The research thus serves as a launchpad for future philosophical and linguistic exploration, establishing an impetus for re-evaluating established theories and conceptions.
Abstract
Powerful transformer models based on neural networks, such as GPT-4, have enabled huge progress in natural language processing. This paper identifies three challenges for computer programs dealing with metaphors. First, the phenomenon of Twice-Apt-Metaphors shows that metaphorical interpretations do not have to be triggered by syntactic, semantic, or pragmatic tensions. The detection of these metaphors seems to involve a sense of aesthetic pleasure or a higher-order theory of mind, both of which are difficult to implement in computer programs. Second, the contexts relative to which metaphors are interpreted are not simply given but must be reconstructed based on pragmatic considerations that can involve presuppositional pretence. If computer programs cannot produce or understand such a form of pretence, they will have problems dealing with certain metaphors. Finally, adequately interpreting and reacting to some metaphors seems to require the ability to have internal, first-personal experiential and affective states. Since it is questionable whether computer programs have such mental states, it can be assumed that they will have problems with these kinds of metaphors.
Machines and metaphors: Challenges for the detection, interpretation and production of metaphors by computer programs