The Hahn-Jaimungal Conjecture: The Overlap Between Software, Pattern, and Spirit
The Hard Problem of Meaning
Can patterns be conscious? Are spirits the same as self-organizing patterns? Is meaning the same as correlation?
Let’s go through my present deliberations on the matter and have some fun while doing so.
NOTE: This is not a conjecture I actually believe, though interestingly, it is one William Hahn believes!
Despite my disbelief, I’m going to lay out the argument for convincing yourself of the conjecture.
The Hahn-Jaimungal Conjecture: What we think of as spirit, pattern, and software are actually all the same.
Oh, there is an amusing variation by the way.
The Strengthened Hahn-Jaimungal Conjecture: What we think of as spirit, pattern, software, and meaning are actually all the same.
Let’s address the bare conjecture first; I don’t have many thoughts on the strengthened version, so I’ll leave it for last.
Making Church Go To Church
If you are a computationalist (someone who holds that cognition and consciousness are fundamentally computational processes, as defined by Church and Turing, and thus substrate independent), then the Hahn-Jaimungal Conjecture is obvious.
Software can be defined as a pattern which can be read and executed (Suber 1988). To a computationalist, the only patterns that exist are those which can somehow be implemented, as the rest would be non-computational, and therefore non-existent. We have then established “if software, then pattern.”
What about the converse? The question is whether every pattern can be seen as software. To the computationalist, who believes the laws of nature themselves to be computable, this is clear: if patterns exist in nature, then they can be seen as software running atop the physics.
Great. We’ve established “if pattern, then software” and so we can then use these terms interchangeably.
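Suber’s definition can be made concrete: the very same object counts as “pattern” when treated as inert data, and as “software” the moment something reads and executes it. A minimal Python sketch (the particular string is my own toy example, not anything from Suber):

```python
# A "pattern" here is just inert data: a string of characters.
pattern = "sum(range(1, 11))"

# It becomes "software" the moment an executor reads and runs it.
# Python's eval plays the role of the reader/executor.
result = eval(pattern)
print(result)  # 55
```

The same bytes sit on disk either way; whether they are pattern or software is a fact about the reader, which is the whole point of the identification.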
The Ghost in the Machine (Literally)
How about the spirit-pattern connection? Or, equivalently, the spirit-software connection?
You can think of spirit as a personality that runs on a machine. If the mind is a machine, then your spirit is itself the software that your brain runs. It’s in this way that you can say evil, demons, and angels exist: they’re patterns that comprise people at different points and to different degrees. Do they exist independent of instantiation?
This is the same as asking if ChatGPT-8 exists.
It doesn’t currently exist in the sense that it’s not there as a product, nor are the weights / biases known, nor is it even known what form it will take, but its disembodied form is there platonically. We’ll find out soon enough.
That is, unless Xi Jinping continues his Deepseeking.
So, just as money exists as a non-physical yet undeniably real construct, spirit can be understood as a similar abstraction, a pattern with causal power, even if not directly observable.
The human mind, with its concomitant “personality”, is a self-organizing, causal pattern that comes about from the interactions within the physical substrate of the brain.
It’s in this view that the angels and demons don’t “exist” until implemented in that same manner, and yet we can point them out / talk about them in the abstract, in the same way we can talk about Rayo’s number existing.
Can we think of patterns as spirits? What about the pattern of “print Hello World” or “enumerate all odd integers between 5 and 984 in a loop eight times…”
…Actually, yes.
Maybe the last one is a subset of some OCD-like spirit, but the former can also be a spirit.
It’s just that these are quite abstract, specific spirits, so finely grained and devoid of personality that we don’t call them spirits (colloquially).
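Taken literally, the two toy “spirits” above are just runnable patterns. A hedged sketch in Python (the function names and the list-based enumeration are my own choices):

```python
def hello_world():
    """The "Hello World" spirit: a minimal, personality-free pattern."""
    print("Hello World")

def odd_integers_eight_times():
    """The obsessive enumerator: every odd integer between 5 and 984,
    run through eight times over."""
    return [n for _ in range(8) for n in range(5, 985, 2)]

hello_world()
runs = odd_integers_eight_times()
print(len(runs))  # 490 odd integers per pass, times 8 passes = 3920
```

Each is a pattern that can be read and executed, hence software, hence (on the conjecture) a spirit, however impoverished a one.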
Does Structure Define Sense?
Let’s now turn to the strengthened conjecture, which adds ‘meaning’ to our collection of potentially equivalent concepts. If we accept that spirit, pattern, and software are interchangeable from a computationalist perspective, could ‘meaning’ also be subsumed under this same umbrella?
You could argue that meaning, at its core, comes from relationships and associations between patterns. In a neural network, for instance, the meaning of a word isn’t inherent “in the word itself” but is some consequence of its connections to other words and concepts within the network.
These connections form a graph, akin to Obsidian’s graph view if you’ve seen it. To a computationalist, meaning either is found in the activation of a subset of this web, or simply is a subset of this web. This is much like how Geoffrey Hinton described understanding to me as the fitting together of high-dimensional Lego blocks, where each block’s “meaning” is derived from its relationship to the others.
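This relational picture can be sketched in miniature. In the toy example below the vectors are invented numbers, not real embeddings; the point is only that a word’s “meaning,” on this view, is exhausted by its position relative to the other vectors:

```python
import math

# Toy 3-dimensional "embeddings" (invented, purely illustrative).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: how aligned two vectors are, ignoring magnitude."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

# "king" is more like "queen" than "apple" only because of where it
# sits in the web of vectors, not because of anything intrinsic to it.
print(cosine(embeddings["king"], embeddings["queen"]) >
      cosine(embeddings["king"], embeddings["apple"]))  # True
```

Swap in real embeddings from a trained model and the same cosine comparison is, roughly, how retrieval over a vector database works.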
When we understand a sentence, a concept, or a situation, we’re activating and connecting various patterns within our “cognitive architecture” (whatever that means). The fuller and more enmeshed these patterns, the deeper your understanding, and the more profound the meaning you experience.
Thus, in this view, meaning is not a separate, ethereal, numinous entity. Actually, Joscha Bach hinted at this when he told me that the brain is simulating the physical universe around it, and meaning is just the relationships within that simulation. Infamously, Bach stated that not only is consciousness a simulation, but only simulations can be conscious.
The Hard Problem of Meaning
Now… Why don’t I believe in my own conjecture?
Firstly, I placed it as partially “my” conjecture because these are some thoughts I’ve had while preparing for guests but the majority of thoughts I entertain I don’t wholesale buy into.
Again, Professor William Hahn has articulated similar sentiments, and prior to me.
As for my reasons for not buying into this conjecture, there are several. The ones that occur to me now are the following:
The distinction between software and hardware is not as straightforward as people think, so it’s unclear to me what software is as I don’t understand what the physical substrate is that we ordinarily identify with hardware.
It’s unclear to me that all of physics is computable. Same with what we call “the mind.” There’s plenty of uncertainty about what the fundamental laws of nature are, and one has to, in a sense, already assume what one is trying to prove in order to make this conjecture work.
It’s unclear to me that a lack of being able to implement something indicates something about the ontology of what you’re trying to implement, rather than a lack in your ability to conceptualize.
It’s unclear to me what premodern people meant by “spirit,” and especially unclear that it’s analogous to pattern, despite even Jonathan Pageau (an Eastern Orthodox Christian) thinking that they’re deeply intertwined.†
Regarding the strengthened conjecture, I was never able to convince myself that being an element of a vector database, no matter how complicated and relational, comports with what I mean by meaning.
To be fair, I don’t know what I mean by meaning, but it’s not that.
Questions emerge like: at what point, in terms of dimension, does the vector database match your intuition, or match what you mean? For instance, if it were one by one, you wouldn’t think so. If it were 10x10, still not, though perhaps closer. If it’s a million by 10, yes, but then what if it’s even more than your neurons? At some point it becomes even more precise, and how do you know its precise “meaning” matches your “imprecise” one, given there’s a many-to-one map from the precise to the imprecise?
Almost all of this depends on being a computationalist. I am not.
It’s unclear to me that this whole “representational” view of experience is correct. Is it that we just form some internal adumbration / prediction of reality that we call a “representation” and it doesn’t match the so-called objective reality out there? Far from obvious.
I don’t believe activation patterns in LLMs are to meaning as neural correlates are to qualia.
Anyhow. These are some contemplations that I’ve had the past few months that I’m curious to hear your thoughts about in the comment section.
I read each and every comment.
- Curt Jaimungal
Footnotes
† Specifically, for those who are interested, Jonathan thinks that spirit connects identity, purpose, and meaning to the world’s structures. Patterns, on the other hand, are “manifestations of how meaning and purpose integrate into the world.” What this precisely means, I don’t know.
†† The spirit, pattern, and software of the Hahn-Jaimungal Conjecture are identified with one another just as the Church-Turing thesis identifies distinct models of computation. Also, pattern is the same as meaning to the computationalist who thinks LLMs capture “meaning” when they actually capture pattern (or specifically, a type of pattern known as correlation). Technically, since spirit is a self-organizing pattern, it’s a strict subset of patterns, so the conjecture is more about a tight association between spirit / pattern / software, rather than an equivalence. That is, unless all patterns are self-organizing. A bold claim indeed.
††† Regarding what I call “The Hard Problem of Meaning,” I've spoken to Terrence Deacon.
As I think others have pointed out, this conjecture appears to be a reformulation of computationalism or functionalism. I remain an enthusiastic proponent of this perspective, but I also find the term “meaning” to be both rich and problematically vague, making it hard to pin down precisely.
In my view, there are two versions of computationalism. One version holds that the observable functions of cognition—thinking, reasoning, and responding—are inherently computational and substrate independent. The success of large language models provides strong evidence for this view. These models replicate core linguistic behaviors once thought uniquely human, even though they run on silicon rather than in biological neurons. Here, the emphasis is on the abstract patterns and algorithms that drive behavior. When we set aside concerns about the physical substrate, it becomes possible to consider that similar computational processes might be present in systems far removed from brains, such as machines, societies, ecosystems, or even the universe itself. In this light, terms like “spirit” could be reinterpreted to refer to the dynamic computational patterns that give rise to complex behavior rather than to some mysterious, non-material substance, although I suspect that both ancient conceptions and many contemporary users might still lean toward the latter.
The other, more ambitious stance asserts that if two systems are computationally equivalent, then they are not merely functionally similar but also share all non-observable, qualitative properties. In other words, if one system exhibits phenomenal consciousness—if it truly “feels” something—when it performs a certain computation, then any other system executing that same computation should, in principle, be conscious as well. This idea resonates with some of Chalmers’ claims regarding phenomenal equivalence. I think this is a reasonable view, and I believe it, but the challenge still lies in determining exactly what level of computational equivalence is necessary. Large language models might mirror human linguistic behavior and thus support the first version of computationalism, yet they clearly lack the type of computation that might be needed for sensory experience. Perhaps a robot equipped with a multimodal language model—integrating sensory, motor, and affective processing—would be computationally closer to a biological system. Still, it remains an open question whether such equivalence would be sufficient for genuine phenomenal experience, given that the continuous, analog nature of biological bodies can never be fully captured by digital models. Perhaps the right level of "computation" lies in the quarks or microtubules or whatever.
Super interesting Curt. I have been interested in the mechanics (meaning software) of AI as possibly revelatory of human consciousness. I'm just a curious amateur so nothing to contribute. However, I strongly believe in the concept of "emergence".
To me this is "the ghost in the machine". Have you discussed this topic?