An Alternative to Cognitivism: Computational Phenomenology for Deep Learning

Research conducted by Pierre Beckmann, Guillaume Köstner, and Inês Hipólito examines the cognitive processes at work in artificial neural networks (ANNs) through the lens of phenomenology. The authors’ novel approach, computational phenomenology (CP), departs from the conventional paradigms of cognitivism and neuro-representationalism and instead aligns itself with the phenomenological framework of Edmund Husserl. They engage a deep learning model through this lens, disentangling the cognitive processes from their neurophysiological sources.

The authors construct a phenomenological narrative around ANNs by characterizing them as entities that simulate our ‘corps propre’: subjective structures in continuous interaction with their surrounding environment. Beckmann et al. propose to adopt Husserl’s method of ‘bracketing’, which suspends assumptions about the external world so that phenomena can be examined as they appear. Applied to ANNs, this method directs attention to the cognitive mechanisms underlying deep learning, proposing a shift from symbol-driven processes to processes organized by habit, and consequently redefining the notions of cognition and AI from a phenomenological standpoint.

The Conception of Computational Phenomenology

In their work, Beckmann, Köstner, and Hipólito offer a broad overview of computational phenomenology (CP), the application of phenomenology’s theoretical constructs to the computational realm. As opposed to the reductionist notions that previously dominated the field, this perspective promotes an understanding of cognition as a dynamic, integrated process. The authors argue that, when viewed through the lens of phenomenology, the cognitive mechanisms driving ANNs can be conceived as direct interactions between systems and their environments, rather than static mappings of the world. This is reminiscent of Husserl’s concept of intentionality: the idea that consciousness is always consciousness “of” something.

Beckmann et al. further unpack this idea, presenting ANNs as entities capable of undergoing something analogous to the perceptual experience of the phenomenological ‘corps propre’. They hypothesize that this subjective structure interacts with the world not through predefined symbolic representations but via habit-driven processes. The authors elaborate on this by outlining how ANNs, like humans, can adapt to a wide range of situations, building on past experience and altering their responses accordingly. In essence, the authors pivot away from cognitive frameworks dominated by symbolic computation and towards a model in which habit is central to cognitive function.
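Read computationally, this habit framing can be pictured with a toy training loop. The sketch below is purely illustrative and not drawn from the paper: the data, the single logistic unit, and every parameter are assumptions chosen for brevity. What it shows is that repeated exposure sediments into adjusted weights rather than into stored symbols.

```python
# Minimal sketch (illustrative, not the authors' model): repeated "experience"
# reshapes a tiny network's weights via gradient descent, so the acquired
# disposition lives in the parameters rather than in an explicit symbol table.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: 4-dimensional situations with a binary outcome.
X = rng.normal(size=(32, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)

w = np.zeros(4)   # no "habits" yet
b = 0.0
lr = 0.1

for _ in range(200):                          # repeated exposure
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # current response tendency
    grad_w = X.T @ (p - y) / len(y)           # how experience pushes back
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                          # dispositions shift gradually
    b -= lr * grad_b

print("acquired dispositions:", np.round(w, 2), round(b, 2))
```

Nothing in this loop writes down what was learned; the “habit” is simply the trajectory of w and b under repeated exposure.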

Conscious Representation, Language, and a New Toolkit for Deep Learning

The authors posit that, contrary to cognitivist accounts, ANNs do not strictly rely on symbolic representations but rather on an internal dynamic state. This parallels phenomenology’s concept of pre-reflective consciousness, underscoring how ANNs, like human consciousness, may engage with their environment without explicit symbolic mediation. This is further intertwined with language, which the authors argue is not merely a collection of pre-programmed symbols but a dynamic process. Language is presented as a mechanism through which habits form and unfold, a fluid interface between the neural network and its environment. This perspective challenges the conventional linguistic model, bridging the gap between phenomenology and computational studies by depicting language not as a static symbol system but as an active constructor of reality.
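One way to picture an “internal dynamic state” without symbolic mediation is a recurrent update, sketched below under purely illustrative assumptions (random weights, arbitrary sizes, no connection to the authors’ models): the network’s running grasp of an input stream is a continuous vector that evolves with each stimulus, and at no step is a discrete symbol written down.

```python
# Illustrative sketch only: a recurrent state update in which the network's
# take on an unfolding input stream is a continuous vector, not a symbol.
import numpy as np

rng = np.random.default_rng(1)

hidden_size, input_size = 8, 3
W_rec = rng.normal(scale=0.5, size=(hidden_size, hidden_size))
W_in = rng.normal(scale=0.5, size=(hidden_size, input_size))

h = np.zeros(hidden_size)                  # initial state, prior to any input

stream = rng.normal(size=(5, input_size))  # a short stream of stimuli
for x in stream:
    # Each stimulus is folded into the ongoing state; nothing here labels
    # the input with an explicit symbol before the state responds to it.
    h = np.tanh(W_rec @ h + W_in @ x)

print("internal state after the stream:", np.round(h, 2))
```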

ANNs, through their layered abstractions and data-processing capabilities, are taken to embody mathematical structures that mirror aspects of phenomenological structures, thereby providing an innovative toolkit for understanding cognitive processes. The authors emphasize plasticity in ANNs, analogous to neuroplasticity, as a bridge between the computational and the phenomenological, providing a model for the malleability and adaptability of cognitive processes. This approach views cognition not as an isolated individual process but as an interaction between system and environment, reflecting how the computational can encapsulate and model the phenomenological. The authors’ exploration of this dynamic interplay shows how the mathematization of cognition can serve as a valuable instrument in the study of consciousness.
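The talk of layered abstraction and plasticity can likewise be made concrete with a small forward pass. The sketch below is an assumption-laden illustration (randomly initialized weights, arbitrary layer sizes) rather than anything from the paper: each layer re-expresses the stimulus in a new space, and a small change to the weights, the rough analogue of plasticity, alters how the very same stimulus is handled.

```python
# Illustrative sketch only: successive layers re-describe a stimulus, and a
# small weight change (the analogue of plasticity) shifts the response.
import numpy as np

rng = np.random.default_rng(2)

layer_sizes = [6, 16, 8, 2]
weights = [rng.normal(scale=0.5, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    for W in weights:
        x = np.tanh(W @ x)   # each layer yields a new description of the input
    return x

x = rng.normal(size=layer_sizes[0])          # one stimulus
before = forward(x, weights)

# Nudge the first layer slightly: the "same" network now responds differently.
weights[0] = weights[0] + 0.05 * rng.normal(size=weights[0].shape)
after = forward(x, weights)

print("response before:", np.round(before, 3))
print("response after: ", np.round(after, 3))
```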

The Broader Philosophical Discourse

This research aligns with and advances the phenomenological discourse initiated by thinkers such as Edmund Husserl and Maurice Merleau-Ponty. The authors’ conceptual framework illuminates cognitive mechanisms by drawing a parallel between phenomenological accounts of perception, consciousness, and experience and the plasticity of ANNs. As a result, their work responds to the call for a more grounded approach to cognitive science, one that acknowledges lived experience and its intrinsic connection to cognition.

Moreover, their approach revitalizes philosophical investigation by integrating it with contemporary computational concepts. This synthesis allows for an enriched exploration of the nature of consciousness, in keeping with the philosophical tradition’s effort to decipher the mysteries of human cognition. By threading a path between the phenomenological and the computational, the authors contribute to the larger dialogue surrounding the philosophy of mind. Their method offers a novel approach to the mind-body problem, rejecting Cartesian dualism and presenting a holistic view of cognition in which phenomenological and computational aspects are intertwined. Thus, their work not only provides a novel toolkit for cognitive investigation but also instigates a paradigm shift in the philosophy of mind.

Abstract

We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology, a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. We thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. This interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities and the idea that the brain operates on symbolic representations of these entities. We proceed as follows: after offering a review of cognitivism and neuro-representationalism in the field of deep learning, we first elaborate a phenomenological critique of these positions; we then sketch out computational phenomenology and distinguish it from existing alternatives; finally, we apply this new method to deep learning models trained on specific tasks, in order to formulate a conceptual framework of deep learning that allows one to think of artificial neural networks’ mechanisms in terms of lived experience.
