(Featured) An Alternative to Cognitivism: Computational Phenomenology for Deep Learning

Research by Pierre Beckmann, Guillaume Köstner, and Inês Hipólito examines the cognitive processes at work in artificial neural networks (ANNs) through the lens of phenomenology. The authors’ novel approach, computational phenomenology (CP), veers away from the conventional paradigms of cognitivism and neuro-representationalism and instead aligns itself with the phenomenological framework of Edmund Husserl. They engage a deep learning model through this lens, disentangling the cognitive processes from their neurophysiological sources.

The authors construct a phenomenological narrative around ANNs by characterizing them as reflective entities that simulate our ‘corps propre’—subjective structures in continuous interaction with their surrounding environment. Beckmann et al. propose to adopt Husserl’s method of ‘bracketing’, which calls for setting aside assumptions about the external world so that phenomena can be examined as they present themselves. Applying this method to ANNs directs attention to the cognitive mechanisms underlying deep learning, proposing a shift from symbol-driven processes to processes orchestrated by habit, and consequently redefining the notions of cognition and AI from a phenomenological standpoint.

The Conception of Computational Phenomenology

In their work, Beckmann, Köstner, and Hipólito offer a holistic overview of Computational Phenomenology (CP), which encompasses the application of phenomenology’s theoretical constructs to the computational realm. As opposed to the reductionist notions that dominated the field previously, this new perspective promotes an understanding of cognition as a dynamic, integrated system. The authors reveal that, when viewed through the lens of phenomenology, the cognitive mechanisms driving ANNs can be conceived as direct interactions between systems and their environments, rather than static mappings of the world. This is reminiscent of Husserl’s intentionality concept – the idea that consciousness is always consciousness “of” something.

Beckmann et al. further unpack this idea, presenting the potential of ANNs as entities capable of undergoing perceptual experiences analogous to the phenomenological concept of ‘corps propre’. They hypothesize that this subjective structure interacts with the world, not through predefined symbolic representations, but via habit-driven processes. The authors elaborate on this by outlining how ANNs, like humans, can adapt to a wide range of situations, building on past experiences and altering their responses accordingly. In essence, the authors pivot away from cognitive frameworks dominated by symbolic computation and towards an innovative model where habit is central to cognitive function.
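This habit-centered picture can be loosely illustrated in code. The sketch below is a hypothetical, minimal Python example (not a model from the paper): a single linear unit whose response is gradually reshaped by repeated exposure to its environment. Nothing symbolic is stored; only dispositions to respond are adjusted.

```python
def train_habit(samples, lr=0.1, epochs=200):
    """Repeated exposure reshapes the unit's weights, loosely analogous
    to habit formation: no symbols are stored, only dispositions."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            response = w * x + b       # the current "habitual" response
            error = target - response  # mismatch with the environment
            w += lr * error * x        # experience alters the disposition
            b += lr * error
    return w, b

# After training, the unit has settled into responding y ≈ 2x + 1
samples = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
w, b = train_habit(samples)
```

The point is purely illustrative: the trained numbers `w` and `b` are not stored representations of “2” and “1”; they are the sediment of the unit’s training history.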

Conscious Representation, Language, and a New Toolkit for Deep Learning

The authors posit that, contrary to earlier assertions, ANNs do not strictly rely on symbolic representations but rather on an internal dynamic state. This parallels phenomenology’s concept of pre-reflective consciousness, underscoring how ANNs, like human consciousness, may engage with their environment without explicit symbolic mediation. This is further intertwined with language, which the authors argue is not merely a collection of pre-programmed symbols but a dynamic process: a mechanism through which habits form and unfold, and a fluid interface between the neural network and its environment. This perspective challenges the conventional linguistic model, bridging the gap between phenomenology and computational studies by depicting language not as a static symbol system but as an active constructor of reality.
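The notion of an internal dynamic state, as opposed to a static symbol store, can be sketched with a minimal recurrent update. This is a hypothetical Python illustration: the weights and the tanh nonlinearity are arbitrary choices, not taken from the paper. The same final input elicits different responses depending on what the network has already lived through.

```python
import math

def step(state, x, w_in=0.8, w_rec=0.5):
    """One recurrent update: the new state blends the current input
    with the carried-over state, so every response is history-laden."""
    return math.tanh(w_in * x + w_rec * state)

# Two runs receive the same final input, but along different histories
s = 0.0
for x in (1.0, 1.0, 1.0):
    s = step(s, x)
state_a = s

s = 0.0
for x in (-1.0, 1.0, 1.0):
    s = step(s, x)
state_b = s

# state_a differs from state_b: the response to the last input is not a
# symbol lookup but a function of the accumulated internal state
```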

ANNs, through their layered abstraction and data-processing capabilities, are taken to embody mathematical structures that mirror aspects of phenomenological structures, thereby providing an innovative toolkit for understanding cognitive processes. The authors emphasize plasticity in ANNs as a bridge between the computational and the phenomenological, providing a model for the malleability and adaptability of cognitive processes. This approach views cognition not as an isolated individual process but as a collective interaction, reflecting how the computational can encapsulate and model the phenomenological. The authors’ exploration of this dynamic interplay demonstrates how the mathematization of cognition can serve as a valuable instrument in the study of consciousness.

The Broader Philosophical Discourse

This research aligns with and further advances the phenomenological discourse initiated by thinkers such as Edmund Husserl and Maurice Merleau-Ponty. The authors’ conceptual framework illuminates the cognitive mechanisms by establishing a parallel with ANNs and their plasticity, emphasizing phenomenological tenets such as perception, consciousness, and experience. As a result, their work responds to the call for a more grounded approach to cognitive science, one that acknowledges the lived experience and its intrinsic connection to cognition.

Moreover, their approach revitalizes philosophical investigation by integrating it with advanced computational concepts. This synthesis allows for an enriched exploration into the nature of consciousness, aligning with the philosophical tradition’s quest to decipher the mysteries of human cognition. By threading the path between the phenomenological and the computational, the authors contribute to the larger dialogue surrounding the philosophy of mind. Their method offers a novel approach to the mind-body problem, refuting the Cartesian dualism and presenting a holistic view of cognition where phenomenological and computational aspects are intertwined. Thus, their work does not only provide a novel toolkit for cognitive investigation but also instigates a paradigm shift in the philosophy of mind.

Abstract

We propose a non-representationalist framework for deep learning relying on a novel method, computational phenomenology: a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. We thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. This interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities with the idea that the brain operates on symbolic representations of these entities. We proceed as follows: after offering a review of cognitivism and neuro-representationalism in the field of deep learning, we first elaborate a phenomenological critique of these positions; we then sketch out computational phenomenology and distinguish it from existing alternatives; finally, we apply this new method to deep learning models trained on specific tasks, in order to formulate a conceptual framework of deep learning that allows one to think of artificial neural networks’ mechanisms in terms of lived experience.

(Featured) Cognitive architectures for artificial intelligence ethics

The landscape of artificial intelligence (AI) is a complex and rapidly evolving field, one that increasingly intersects with ethical, philosophical, and societal considerations. The role of AI in shaping our future is now largely uncontested, with potential applications spanning an array of sectors from healthcare to education, logistics to creative industries. Of particular interest, however, is not merely the surface-level functionality of these AI systems, but the cognitive architectures underpinning them. Cognitive architectures, a theoretical blueprint for cognitive and intelligent behavior, essentially dictate how AI systems perceive, think, and act. They therefore represent a foundational aspect of AI design and hold substantial implications for how AI systems will interact with, and potentially transform, our broader societal structures.

Yet, the discourse surrounding these architectures is, to a large extent, bifurcated between two paradigms: the biological cognitive architecture and the functional cognitive architecture. The biological paradigm, primarily drawing from neuroscience and biology, emphasizes replicating the cognitive processes of the human brain. On the other hand, the functional paradigm, rooted more in computer science and engineering, is concerned with designing efficient systems capable of executing cognitive tasks, regardless of whether they emulate human cognitive processes. This fundamental divergence in design philosophy thus embodies distinct assumptions about the nature of cognition and intelligence, consequently shaping the way AI systems are created and how they might impact society. It is these paradigms, their implications, and their interplay with AI ethics principles, that form the main themes of this essay.

Frameworks for Understanding Cognitive Architectures and the Role of Mental Models in AI Design

Cognitive architectures, central to the progression of artificial intelligence, encapsulate the fundamental rules and structures that drive the operation of an intelligent agent. The research article situates its discussion within two dominant theoretical frameworks: symbolic and connectionist cognitive architectures. Symbolic cognitive architectures, rooted in the realm of logic and explicit representation, emphasize rule-based systems and algorithms. They are typified by their capacity for discrete, structured reasoning, often relating to high-level cognitive functions such as planning and problem-solving. This structured approach carries the advantage of interpretability, affording clearer insights into the decision-making processes.
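A toy example makes the symbolic side concrete. The forward-chaining sketch below is a generic Python illustration, not drawn from the article: it applies explicit if-then rules to a set of facts, and every derived conclusion can be traced back to the rule that produced it, which is exactly the interpretability advantage described above.

```python
def forward_chain(facts, rules):
    """Apply if-then rules until no new facts emerge. Each inference
    step is explicit and traceable: the hallmark of symbolic systems."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # derived by a named, inspectable rule
                changed = True
    return facts

rules = [
    ({"rains"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "icy"),
]
derived = forward_chain({"rains", "freezing"}, rules)
# "ground_wet" and "icy" are both derived, each justified by a rule
```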

On the other hand, connectionist cognitive architectures embody a divergent perspective, deriving their inspiration from biological neural networks. Connectionist models prioritize emergent behavior and learning from experience, expressed in the form of neural networks that adjust synaptic weights based on input. These architectures have exhibited exceptional performance in pattern recognition and adaptive learning scenarios. However, their opaque, ‘black-box’ nature presents challenges to understanding and predicting their behavior. The interplay between these two models, symbolizing the tension between the transparent but rigid symbolic approach and the flexible but opaque connectionist approach, forms the foundation upon which contemporary discussions of cognitive architectures in AI rest.
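The connectionist contrast can be seen in a classic perceptron sketch (again a hypothetical, minimal Python illustration, not taken from the article). The unit learns the AND pattern purely by adjusting weights from input/output pairs; it ends up classifying correctly, yet the learned numbers encode no human-readable rule, which is the ‘black-box’ worry raised above.

```python
def perceptron_train(data, lr=0.2, epochs=20):
    """Adjust connection weights from examples alone; behavior emerges
    from the weights rather than from explicit rules."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1  # strengthen or weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn the AND pattern purely from input/output pairs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

After training, the behavior is correct on all four cases, but inspecting `w` and `b` reveals only numbers, not a rule like “output 1 iff both inputs are 1”.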

The incorporation of mental models in AI design represents a nexus where philosophical interpretations of cognition intersect with computational practicalities. The use of mental models, i.e., internal representations of the world and its operational mechanisms, is a significant bridge between biological and functional cognitive architectures. This highlights the philosophical significance of mental models in the study of AI design: they reflect the complex interplay between the reality we perceive and the reality we construct. The efficacy of mental models in AI system design underscores their pivotal role in knowledge acquisition and problem-solving. In the biological cognitive framework, mental models mimic human cognition’s non-linear, associative, and adaptive nature, thereby conforming to the cognitive isomorphism principle. On the other hand, the functional cognitive framework employs mental models as pragmatic tools for efficient task execution, demonstrating a utilitarian approach to cognition. Thus, the role of mental models in AI design serves as a litmus test for the philosophical assumptions underlying distinct cognitive architectures.
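To make the notion of a mental model concrete, the sketch below (a generic, hypothetical Python illustration, not taken from the article) builds an internal (state, action) → next-state model from observed experience and then uses it to plan a route “in the head”, without acting in the world itself.

```python
from collections import deque

def learn_model(transitions):
    """Build an internal model of the world's mechanics from experience:
    a mapping (state, action) -> next_state."""
    model = {}
    for state, action, next_state in transitions:
        model[(state, action)] = next_state
    return model

def plan(model, start, goal, actions):
    """Breadth-first search over the internal model: problem-solving
    carried out on the representation, not in the world."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action in actions:
            nxt = model.get((state, action))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# Experience of a tiny world, then planning within the learned model
transitions = [
    ("hall", "left", "kitchen"),
    ("hall", "right", "office"),
    ("kitchen", "right", "hall"),
]
model = learn_model(transitions)
route = plan(model, "kitchen", "office", ["left", "right"])
```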

Philosophical Reflections and AI Ethics Principles in Relation to Cognitive Architectures

AI ethics principles, primarily those concerning autonomy, beneficence, and justice, carry substantial implications for the understanding and application of cognitive architectures. Within the biological framework, ethical questions arise concerning the autonomy and agency of AI systems: to what extent can, or should, an AI system with a human-like cognitive structure make independent decisions? The principle of beneficence—the commitment to do good and prevent harm—profoundly impacts the design of functional cognitive architectures. Here, a tension surfaces between the utilitarian goal of optimized task execution and the prevention of potential harm resulting from such single-mindedness. Meanwhile, the principle of justice—fairness in the distribution of benefits and burdens—prompts critical scrutiny of the societal consequences of both architectures. As these models become more prevalent, we must continuously ask: who benefits from these technologies, and who bears the potential harms? Consequently, the intricate intertwining of AI ethics principles with cognitive architectures brings philosophical discourse to the forefront of AI development, establishing its pivotal role in shaping the future of artificial cognition.

The philosophical discourse surrounding AI and cognitive architectures is deeply entwined with the ethical, ontological, and epistemological considerations inherent to AI design. On an ethical level, the discourse probes the societal implications of these technologies and the moral responsibilities of their developers. The questions of what AI is and what it could be—an ontological debate—become pressing as cognitive architectures increasingly mimic the complexities of the human mind. Furthermore, the epistemological dimension of this discourse explores the nature of AI’s knowledge acquisition and decision-making processes. This discourse, therefore, cannot be separated from the technological progression of AI, as the philosophical issues at play directly inform the design choices made. Thus, philosophical reflections are not merely theoretical musings but tangible influences on the future of AI and, by extension, society. As AI continues to evolve, the ongoing dialogue between philosophy and technology will be critical in guiding its development towards beneficial and ethical ends.

Future Directions for Research

Considering the rapid advancement of AI, cognitive architectures, and their deep-rooted philosophical implications, potential avenues for future research appear vast and multidimensional. It would be valuable to delve deeper into the empirical examination of cognitive architectures’ impact on decision-making processes in AI, quantitatively exploring their effect on AI reliability and behavior. A comparative study across different cognitive architecture models, analyzing their benefits and drawbacks in diverse real-world contexts, would further enrich the understanding of their practical applications. As ethical considerations take center stage, research exploring the development and implementation of ethical guidelines specific to cognitive architectures is essential. Notably, studies addressing the question of how to efficiently integrate philosophical perspectives into the technical development process could be transformative. Furthermore, in this era of advancing AI technologies, maintaining a dialogue between the technologists and the philosophers is crucial; thus, fostering interdisciplinary collaborations between AI research and philosophy should be a high priority in future research agendas.

Abstract

As artificial intelligence (AI) thrives and propagates through modern life, a key question is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics: sometimes we do not know what really goes on inside, or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their creators, beyond those commonly discussed (e.g., trolley problems and variants of them), to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and their implications for human society at large, when and if our AI make the “wrong” choice, we need to understand how they got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake (e.g., health, well-being, finance, law) or when major individual or household decisions are taken. Doing so requires opening up the “black box” of AI, especially as they act, interact, and adapt in a human world and interact with other AI in this world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI get to the solutions they do, and we should seek to do this on a deeper level, in terms of the machine-equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society (and avoid the negatives), we need to understand AI more fully in the first place, and we expect this will simultaneously contribute towards greater understanding of their human counterparts also.
