Pondering the mind manifold
June 11th, 2025
Lately I’ve been enjoying imagining the space of my mind as a latent space of a neural network - a high-dimensional manifold, folded so that every concept I hold can be unfurled to reveal further hidden associations, and two ideas might appear close along some axes but distant along others. Thinking this way helps me conceptualize experience not as a flat sequence of thoughts and feelings, but as trajectories through a richly structured geometry.
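To make “close along some axes but distant along others” a little more concrete, here is a throwaway toy with invented 4-dimensional “concept embeddings”; the vectors and the meanings I assign to their dimensions are entirely made up for illustration.

```python
import numpy as np

# Invented 4-d "concept embeddings", purely for illustration.
# Pretend dims 0-1 encode emotional tone and dims 2-3 encode subject matter.
grief     = np.array([0.9, 0.8, 0.2, 0.1])
nostalgia = np.array([0.8, 0.7, 0.9, 0.8])

def distance(a, b, dims):
    """Euclidean distance restricted to a subset of latent dimensions."""
    return np.linalg.norm(a[dims] - b[dims])

print(distance(grief, nostalgia, dims=[0, 1]))  # close along the "tone" axes
print(distance(grief, nostalgia, dims=[2, 3]))  # far apart along the "subject" axes
```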
What intrigues me is that this mind-manifold isn’t just filled with concepts; it also seems to hold my ways of forming concepts: my habits, intuitions, tendencies, and methods of meaning-making. Every encounter in life activates not just the region associated with the thing itself, but also regions tied to how I am experiencing it, forming meaning, and embodying myself in the world.
In this view, thought is less like flipping light switches for individual ideas, and more like a continuous, interconnected choreography: activating one region inevitably excites a constellation of others in a never-ending, cosmic brain-dance.
Given the mind as a manifold, one might naturally start wondering: How is this space structured? How does it change as I gain new experiences? What makes one explanation feel coherent, useful, and satisfying—while another falls flat?
The question of what makes a “good explanation” connects with a paper I read recently and really enjoyed: Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis by Kumar, Clune, Lehman, and Stanley. The paper contrasts two possibilities for learned representations. In a Unified Factored Representation (UFR), the internal geometry of a model is clean, compositional, and coherent: related things cluster together and knowledge generalizes smoothly. In a Fractured Entangled Representation (FER), representations are messy, fragmented, and inconsistent: related concepts are scattered across the space, far apart when they should be near, which degrades the system’s capacity to generalize, learn continually, or be creative.
Their point is sobering: just because a massive model can generate fluid language or solve tasks doesn’t mean its internal representations are unified - the space might be fractured in ways that limit its deeper capacities.
Human minds, by contrast, seem unusually good at maintaining unified representations—at least around core anchors like selfhood and experience. Most of us rarely doubt whether we are a self or whether we are alive. It takes profound disruption (neurological damage, psychedelic states, or intense meditation) to shake that unity. So what is the mechanism that preserves this consistency? Why does the manifold of my mind remain relatively coherent, even as it absorbs contradictory inputs?
When I peer inward at my own experience, the process feels like constant self-explanation. I receive information, and then I try to weave it into a coherent story about reality—ideally one that reduces surprise. This mirrors what philosophers of mind and computational neuroscientists call free-energy minimization: our cognitive apparatus strives to explain the world in ways that make incoming data less surprising.
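A heavily simplified sketch of that surprise-reduction idea (my own toy, not a faithful free-energy model): a Gaussian belief about the world gets nudged after each observation so that similar observations become less surprising, where surprise is just negative log-likelihood.

```python
import numpy as np

mu, sigma = 0.0, 1.0          # current belief about the world: N(mu, sigma^2)
learning_rate = 0.1

def surprise(x, mu, sigma):
    # Negative log-likelihood of x under the current belief.
    return 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

for x in [2.0, 2.1, 1.9, 2.2]:             # a stream of observations
    grad = -(x - mu) / sigma ** 2          # d(surprise)/d(mu)
    mu -= learning_rate * grad             # update the belief to reduce surprise
    print(f"observed {x:.1f}  surprise {surprise(x, mu, sigma):.3f}  belief mu={mu:.3f}")
```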
Narrative, in this sense, isn’t just decoration; it’s a compression strategy. By embedding events into a storyline, I can forget specific details yet still retain the scaffolding from which they arose. The story acts like a compressed file: small enough to carry forward, rich enough to reconstruct.
This brings to mind systems like DreamCoder and other program synthesis models, which don’t just minimize prediction error but actively search for the simplest programs that generate accurate predictions. Perhaps minds, too, optimize not just for accuracy but for explanation quality—for compressive, story-like structure.
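A toy minimum-description-length comparison captures the intuition (this is not DreamCoder itself, just the idea it builds on): score an explanation by the length of the program plus the cost of whatever the program fails to predict, and prefer the shortest total.

```python
import zlib

data = bytes(range(0, 200, 2))             # observations: the even numbers 0..198

# Explanation 1: no program, just memorize the raw observations (compressed).
memorize_cost = 0 + len(zlib.compress(data))

# Explanation 2: a tiny generative rule that reproduces the data exactly,
# so its residual cost is zero.
rule = b"bytes(range(0, 200, 2))"
assert eval(rule.decode()) == data
rule_cost = len(rule) + 0

print(memorize_cost, rule_cost)            # the rule wins: shorter total description
```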
If so, the next question becomes: how do we formalize “good explanation” computationally? Standard training techniques like gradient clipping or regularization encourage stability and simplicity, but they don’t explicitly enforce narrative coherence.
One promising direction might be compression-based architectures: agents that must not only maximize reward but also compress their experiences into explanatory structures. To succeed, they’d need to identify which events matter most, encode causal relationships efficiently, and reconstruct predictions from compressed representations. The pressure to compress could naturally yield narrative-like hierarchies—simple at the top, elaborated only when new data demands it.
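Here is a hypothetical sketch of what such a combined objective might look like; everything in it (the episode data, the PCA-style “story” code, the beta weight) is invented for illustration and not a proposal from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)
episode = rng.normal(size=(50, 16))        # 50 timesteps of 16-d "experience"

def compress(episode, k=4):
    """Project experience onto its top-k principal directions (the 'story' code)."""
    centered = episode - episode.mean(0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T, vt[:k], episode.mean(0)

def reconstruct(code, basis, mean):
    """Rebuild the full experience from the compressed story code."""
    return code @ basis + mean

code, basis, mean = compress(episode)
recon_error = np.mean((episode - reconstruct(code, basis, mean)) ** 2)

task_reward = 1.0                          # placeholder for the agent's reward
beta = 0.1                                 # how much we value compression quality
objective = task_reward - beta * recon_error
print(f"reconstruction error {recon_error:.3f}, combined objective {objective:.3f}")
```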
This line of thought resonates with another paper: Sources of Richness and Ineffability for Phenomenally Conscious States. It frames consciousness as simultaneously rich (holding immense informational detail) and ineffable (resisting full description). Minds live in this tension: the need to compress overwhelming richness into coherent, communicable structures.
Perhaps the drive toward narrative consistency in minds, whether biological or artificial, emerges from this fundamental tension between the richness of experience and the compression necessary for coherent representation. The realm of feeling, in this view, might be where high-dimensional experiences get compressed into navigable, explanatory structures that allow us to act coherently in the world.