Reto Gubelmann articulates a “loosely Wittgensteinian” conception of linguistic understanding, particularly in the context of advanced artificial intelligence (AI) models such as BERT, GPT-3, and ChatGPT. The author posits that these transformer-based neural natural language processing (NNLP) models are closing in on the capacity to genuinely understand language, a claim buttressed by both empirical and conceptual arguments. The empirical basis rests on the remarkable performance of these models on benchmarks such as GLUE and SuperGLUE, which evaluate them on tasks that, in a human context, would require a deep understanding of language: answering questions about a text, summarizing it, and discerning logical relationships between statements. The conceptual underpinnings draw on the work of Glock, Taylor, and Wittgenstein to argue that linguistic understanding, as a form of intelligence, is marked by flexibility in handling new tasks and novel inputs, as well as by the capability to adapt to new tasks autonomously.
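To make the empirical side of this claim concrete, the sketch below shows the kind of natural language inference (NLI) probe that GLUE-style benchmarks run at scale. It is a minimal illustration, assuming the Hugging Face transformers library and the publicly available roberta-large-mnli checkpoint; neither is discussed in the article itself, and any comparable NLI model would serve.

```python
# A minimal sketch of an NLI probe, assuming the Hugging Face `transformers`
# library and the publicly available `roberta-large-mnli` checkpoint.
from transformers import pipeline

# Load a text-classification pipeline backed by an MNLI-fine-tuned model.
nli = pipeline("text-classification", model="roberta-large-mnli")

# Premise/hypothesis pairs probing entailment, contradiction, and neutrality,
# the three logical relationships that MNLI-style benchmark tasks test.
pairs = [
    {"text": "A dog is sleeping on the porch.", "text_pair": "An animal is resting."},
    {"text": "A dog is sleeping on the porch.", "text_pair": "The porch is empty."},
    {"text": "A dog is sleeping on the porch.", "text_pair": "The dog belongs to a child."},
]

for pair in pairs:
    # Returns a list with one prediction, e.g. [{'label': 'ENTAILMENT', 'score': 0.98}]
    result = nli(pair)
    print(f'{pair["text"]!r} -> {pair["text_pair"]!r}: {result[0]["label"]}')
```

Benchmark suites aggregate accuracy over thousands of such pairs; the philosophical question the article raises is what, if anything, high accuracy on them shows about understanding.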
The article then addresses philosophical objections to the idea that AI can understand language. The author counters objections raised by Searle, Bender and Koller, Davidson, and Nagel, among others, arguing that understanding language does not require any esoteric or mysterious ingredient such as qualia. Rather, it depends on the competencies of the model itself, specifically its autonomous adaptability and its performance across a wide array of linguistic tasks in diverse settings. By this standard, the author contends, current transformer-based NNLP models are inching closer to meeting the criteria for linguistic understanding.
The author also provides a succinct overview of the evolution of AI models, from the era of “Good Old-Fashioned AI” (GOFAI), which relied on explicit rules and logical processing, to the emergence of neural-network, or connectionist, models, which represent a fundamentally different approach to designing intelligent systems. The distinguishing feature of these neural-network models, including the transformer-based models under discussion, is their learning-based approach, which enables them to adapt to new tasks and exhibit flexibility in the face of novel inputs.
By embedding these discussions within broader philosophical debates, the article offers a fruitful platform for exploring the nature of intelligence, understanding, and language. Examining whether AI models can understand language opens up questions about how understanding should be defined and what conditions must be met before understanding can be ascribed to a being. This interrogation is undergirded by a Wittgensteinian perspective, which has profound implications for our understanding of language, mind, and the possibilities of AI, and it prompts us to reconsider the boundaries we draw between human and machine intelligence.
Future research should continue to explore Wittgensteinian conceptions of linguistic understanding, particularly as AI models evolve and improve. Further empirical work could test the adaptability and flexibility of AI models in novel linguistic situations, providing more robust evidence for or against their capacity to understand language. The philosophical debate over language understanding in AI should likewise be pushed forward, with deeper examination of the arguments against AI understanding and the development of new frameworks that can accommodate the rapidly advancing capabilities of these systems. As the field advances, interdisciplinary collaboration among AI researchers, linguists, and philosophers will be vital to fully grasping the implications of these transformative technologies.
A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT

Abstract

In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state-of-the-art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question of whether a given being fulfills these criteria. On the conceptual side, the article builds on Glock’s concept of intelligence, Taylor’s conception of intrinsic rightness, and Wittgenstein’s rule-following considerations. On the empirical side, it is argued that current transformer-based NNLP models, such as BERT and GPT-3, come close to fulfilling these criteria.

