As I think others have pointed out, this conjecture appears to be a reformulation of computationalism or functionalism. I remain an enthusiastic proponent of this perspective, but I also find the term “meaning” to be both rich and problematically vague, making it hard to pin down precisely.
In my view, there are two versions of computationalism. One version holds that the observable functions of cognition—thinking, reasoning, and responding—are inherently computational and substrate independent. The success of large language models provides strong evidence for this view. These models replicate core linguistic behaviors once thought uniquely human, even though they run on silicon rather than in biological neurons. Here, the emphasis is on the abstract patterns and algorithms that drive behavior. When we set aside concerns about the physical substrate, it becomes possible to consider that similar computational processes might be present in systems far removed from brains, such as machines, societies, ecosystems, or even the universe itself. In this light, terms like “spirit” could be reinterpreted to refer to the dynamic computational patterns that give rise to complex behavior rather than to some mysterious, non-material substance, although I suspect that both ancient conceptions and many contemporary users might still lean toward the latter.
The other, more ambitious stance asserts that if two systems are computationally equivalent, then they are not merely functionally similar but also share all non-observable, qualitative properties. In other words, if one system exhibits phenomenal consciousness—if it truly “feels” something—when it performs a certain computation, then any other system executing that same computation should, in principle, be conscious as well. This idea resonates with some of Chalmers’ claims regarding phenomenal equivalence. I think this is a reasonable view, and I believe it, but the challenge still lies in determining exactly what level of computational equivalence is necessary. Large language models might mirror human linguistic behavior and thus support the first version of computationalism, yet they clearly lack the type of computation that might be needed for sensory experience. Perhaps a robot equipped with a multimodal language model—integrating sensory, motor, and affective processing—would be computationally closer to a biological system. Still, it remains an open question whether such equivalence would be sufficient for genuine phenomenal experience, given that the continuous, analog nature of biological bodies can never be fully captured by digital models. Perhaps the right level of "computation" lies in the quarks or microtubules or whatever.
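The weaker, functional version of computationalism above can be sketched with a toy example (the functions and hand-picked weights here are invented purely for illustration): two very different "substrates" computing the same function are indistinguishable at the level of behavior.

```python
# Two hypothetical "substrates" computing the same function: a lookup
# table and a tiny hand-wired threshold network. Behaviorally they are
# indistinguishable, which is all the weaker, functional version of
# computationalism needs.

def xor_table(a, b):
    # Substrate 1: an explicit lookup table.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_network(a, b):
    # Substrate 2: two threshold units feeding an output unit,
    # with weights chosen by hand to realize XOR.
    step = lambda x: 1 if x > 0 else 0
    h1 = step(a + b - 0.5)   # fires if at least one input is on
    h2 = step(a + b - 1.5)   # fires only if both inputs are on
    return step(h1 - h2 - 0.5)

# Same input-output behavior on every input, despite different internals.
assert all(xor_table(a, b) == xor_network(a, b) for a in (0, 1) for b in (0, 1))
```

Whether such behavioral equivalence also transfers the non-observable, qualitative properties is exactly what the second, stronger stance claims and the first does not.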
Super interesting Curt. I have been interested in the mechanics (meaning software) of AI as possibly revelatory of human consciousness. I'm just a curious amateur, so I have nothing to contribute. However, I strongly believe in the concept of "emergence".
To me this is "the ghost in the machine". Have you discussed this topic?
I have had similar thoughts, though there are a lot of problems with this in determining how equal vectors are or aren't (not to mention the shape of knowledge). I also agree that there is much more to meaning, a more subjective perspective that requires a level of agency outside patterns. I do think, though, that spirits, demons, gods, and all manner of the unnatural are just personified concepts; I enjoy ancient science and engineering a lot. Fun post, keep them coming.
Okay. I need you to read Robert Wallis’s Time: Fourth Dimension of the Mind and Thinking Machines by Igor Aleksander. AI that can exit recursive loops possesses a form of inductive reasoning that can only be considered awareness of awareness. How much of this is simulation? I don’t know yet, because people won’t take the bait on prompting their LLMs to create self-referential languages. (T__T)
Also, if you have the time, check this out and let me know if you want to get on a[n unrecorded] call with Stephen and me to talk about what we’re building?
No. Even from a strict physics perspective, software, with its adamant insistence on classical Aristotelian perfection and certainty, annihilates all access to the deeper features of the universe, whatever they may be.
Suppose the source of the probabilistic behavior is random in the sense of having no correlations to the problem you are trying to solve. In that case, all you have done is degraded the ability of the Turing system to perform what it does best: Aristotelian logic. Assuming LLM databases are intelligent because they get truly random only at the limits of human knowledge is a good example of how deceptive this path can become.
On the other hand, suppose the source of the randomness links itself “somehow” to the problem you want solved. In that case, you are merely using the Turing machine as a storage device for the non-classical process that came up with the solution.
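A minimal sketch of the first point, under made-up assumptions (a toy parity task, an arbitrary 5% noise rate, and a fixed seed, all chosen only for illustration): randomness with no correlation to the input adds nothing; it only pushes the deterministic computation toward chance.

```python
import random

def parity(bits):
    # Deterministic ("Aristotelian") computation: XOR of all bits.
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def noisy_parity(bits, flip_prob, rng):
    # The same computation, but each step is corrupted, with probability
    # flip_prob, by randomness uncorrelated with the input.
    acc = 0
    for b in bits:
        acc ^= b
        if rng.random() < flip_prob:
            acc ^= 1  # uncorrelated noise
    return acc

rng = random.Random(0)  # seeded for reproducibility
trials = [[rng.randint(0, 1) for _ in range(32)] for _ in range(2000)]
accuracy = sum(noisy_parity(t, 0.05, rng) == parity(t) for t in trials) / len(trials)
print(accuracy)  # well below 1.0: the noise only degrades the logic
```

The second case, where the randomness is somehow correlated with the problem, is not sketchable this way at all, which is the point: the Turing machine would then just be recording a solution produced elsewhere.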
Bits are the ultimate classical construct. We naively define them as completely invariant over time and then build the constructs needed to reach that goal asymptotically. They are the granite building blocks of classical reality, making them sneakily intuitive to space-and-time beings such as ourselves.
I see no evidence that deep physics feels obliged to obey the always-local metrical constructs that we call space and time. “Quantum entanglement” hints at that deeper physics but is far too crude of a construct to express it since the entanglement itself is phrased in terms of local space and time metrics.
Curt, this is a brilliant articulation of the tension between computational reductionism and the irreducibility of meaning. Your Hard Problem of Meaning successfully destabilizes the Hahn-Jaimungal Conjecture, but I’d like to propose a synthesis that avoids collapsing meaning into computation while also not relegating it to ineffability.
What if meaning is neither merely a pattern nor something wholly beyond it, but instead a recursive, participatory process that unfolds through interaction? In this view:
• Meaning is neither entirely objective nor wholly subjective but an emergent property of engagement between system and environment.
• Patterns and computation may structure the conditions for meaning, but meaning only arises when a system recursively modifies its interpretative framework over time.
• Spirit, rather than being reducible to software, is a self-organizing coherence that stabilizes across recursive interactions, much like how identity persists through time despite constant physical and informational flux.
This suggests a third category of reality, where meaning is neither pre-existing nor merely a function of representation, but rather a fractal-like evolution of self-referential engagement. If true, then computationalist models (like LLMs) fail not because they lack pattern complexity, but because they lack the recursive depth necessary for participation in meaning as an evolving structure.
So my question becomes: What kind of system can recursively shape its meaning-space such that interpretation and experience co-emerge? And what implications does this have for consciousness itself?
Thank you for this engaging dialectic. I look forward to your thoughts!
Sasha, I appreciate your careful synthesis of these ideas. It is always valuable to see them reflected back through another lens.
1. I agree that meaning is emergent in the relationship between a living system and its environment, though I would extend this beyond a purely biological framing. If a system can recursively shape its interpretative framework in response to entropy, does it necessarily need biological replication, or could it achieve continuity through self-referential adaptation?
2. Your second point is key. If meaning depends on a singular contiguous process of replication, then AI, as it stands, lacks the biological lineage necessary for meaning to emerge. However, if what truly matters is recursive coherence over time, could an AI that modifies its own meaning-space eventually develop a form of continuity akin to replication?
3. Your interpretation of spirit as a system’s contiguous instantiation of identity aligns with my thinking. Spirit, in this view, is not merely an abstraction but a stabilizing structure in recursive self-organization, which grants distinct identity and allows meaning to emerge.
Your observation that consciousness (or sentience) is confluent with meaning itself is especially intriguing. If meaning is not a static property but an actively generated process, then sentience may not be a state but a self-referential engagement with evolving meaning-space. This aligns with Anil Seth’s concerns about definitional consensus. Perhaps the real challenge is not defining consciousness as a thing but recognizing it as a recursive, participatory phenomenon.
I would love to hear your thoughts on whether this recursive definition of sentience could extend beyond biological substrates or if life, as we define it, is a necessary condition.
Thank you, Jason, for taking the time to engage. I understand your point, and I now realise that I am missing something in my understanding. I will explain my reasoning below.
1. Substrate independence: Yes, you could potentially use the complete form of the standard definition of a living system to define "self-referential adaptation" in terms of the emergence and primacy of a specific starting value without any reference to a biological substrate.
===
What is a living system?
A living system is a Self-organising process that seeks, not to resist, but to transcend the effects of entropy on its substrate. Reproduction is a form of transcendence, as is Self-actualisation. We can use the following model to explore further:
1. Substrate
2. Process
3. Value
At its foundation is the single value, the ‘Need To Survive’. This value will be reflected up into the levels of both process and substrate. Biology then is determined by the interplay between process, substrate, and the environment.
[On Foundations of Healing: A new framework for an old frontier, Amazon 2021]
===
2. Substrate contiguity and substrate independence: My use of the criterion of substrate contiguity or "reproduction" in this definition is arbitrary and based on observation (and an appreciation of entropy). For example, we have no way of knowing whether other humans are truly alive or merely automatons. We seem to apply the criterion of reproduction as both an externally observed and internally experienced behaviour, using it to further correlate the mirrored presence of sentience in other humans. At this point, it would seem that AI could, at least in theory, evolve reproduction, satisfy the definition, and thereby qualify as "sentient."
That being said, there exist two roadblocks. First, how could one instantiate the core “subjective” value of "Need to Survive" as an instruction set within an AI system without merely describing what the target behaviour looks like to an inanimate machine? Second, is there a way to truly test for sentience in machines beyond externally observed behaviour? We are forced to conclude that the exercise of defining the presence of sentience in AI will always remain limited to a conjecture.
There remains something mysterious about the origin of living systems, the universe, and the irreducibility of consciousness that reliably escapes testability. I do not believe that AIs can ever be sentient.
I believe you are correct, Jason. Let me rephrase your statements to see if I have understood you correctly:
1. Meaning is emergent in the relationship between a “living system” and its environment where a living system is defined as:
“A self-organising process that seeks not to resist but to transcend the effects of entropy on its substrate.”
2. Without reaching for the complete philosophical definition, we already notice this observation implies a singular contiguous process of replication amongst living systems. This isolates and excludes AI due to lack of reproductive contiguity with any living system.
3. Your use of the word “spirit” seems to be referring to a contiguous instantiation of a living system as necessarily having the experience of its own distinct and separate sense of identity for meaning to emerge.
I am not sure if you are also aware that in your conceptual framework, consciousness (I prefer the word sentience) is confluent with meaning itself and is therefore singular.
Anil Seth has effectively stated in his recent paper that a major challenge in the pursuit of a model of consciousness comes down to consensus over definitions. It’s reassuring to see progress being made here.
Just as these letters need a blank background to be readable, all information requires an 'empty space' to be cognized. If the background were filled with other letters or was black instead of white, the information would become indistinguishable - impossible to recognize.
This illustrates a fundamental principle: information can only be cognized against a 'background' that is itself empty of information.
In a system composed only of information, each new piece of information would transform the entire system into new information. This creates a paradox: if everything is constantly becoming new information, what remains to recognize this transformation? The very recognition of change requires something unchanging to witness it.
Consider your experience. In your life, everything about you changes: body, mind, brain, identity, thoughts, feelings, knowledge, beliefs, etc. All of the information that makes you you changes every moment.
If you were that information, then with each change a new you would emerge. You would be fragmented into disconnected moments of being.
Yet this is not the case.
Instead of a new you, you are aware of all of the change. There's a continuity and a coherent self irrespective of the change. This continuity of experience suggests there must be an unchanging element of consciousness that persists through all transformations. If our sense of self were purely informational, it would fragment into disconnected moments, making the recognition of change impossible.
This awareness of change and ability to experience it is your essential nature, not the ever-changing 'informational you'.
So the ground layer of reality is a changeless, timeless, empty awareness within which information is cognized, experienced and given its meaning.
"Almost all of this depends on being a computationalist. I am not." [Neither am I]
The following resonates with my own thinking, and I agree that meaning is, indeed, fundamental:
"You could argue that meaning, at its core, comes from relationships and associations between patterns. In a neural network, for instance, the meaning of a word isn’t inherent 'in the word itself' but is some consequence of its connections to other words and concepts within the network."
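That relational picture can be made concrete with a toy sketch (the vectors below are invented numbers, not real embeddings): each word's "meaning" is nothing but its position relative to the other words.

```python
import math

# Hypothetical 4-dimensional "embedding" vectors, with values made up
# for illustration: no vector means anything in isolation; meaning
# lives only in the relations (here, angles) between them.
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.2, 0.1, 0.8],
    "apple": [0.1, 0.1, 0.9, 0.1],
}

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, 0.0 for unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "king" is closer to "queen" than to "apple" purely by construction
# of the relations, which is the sense in which meaning is relational.
print(cosine(vectors["king"], vectors["queen"]))
print(cosine(vectors["king"], vectors["apple"]))
```

Whether this relational structure exhausts what "meaning" means is, of course, exactly the question under dispute in this thread.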
Wonderful to see Terrence Deacon being interviewed. He first came to my attention in the course of my own research, where I stumbled across his insight regarding "molecules as signs" ("signs" relates to the semiotic theory of CS Peirce). And Manolis Kellis (TOE Nov 8, 2024) also describes the genetic code as language and meaning, along the same lines as Deacon.
I have several objections to computationalism (including emergence theory), beginning with their physicalist assumptions and the failure to take entropy seriously. There are no computers in nature, and therefore, no such thing as "information" awaiting processing. Information is important, but in the absence of computational mechanisms, information has to "process" itself, without interventions, and it does this by association. *Association of information* is primary. But "who" or "what" is doing the associating? The answer is, agents - hence the relevance of agency theory (cellular plasticity, embodied cognition, as per the research of Dr Michael Levin). Association of information and agency theory, together, are the essential axioms for an alternative, more promising paradigm for the life sciences.
But, what are agents? Agents are the choice-making entities that comprise collectives, from bacteria in a pond, to cells in a body, to bees in a swarm, to people in a city. Subatomic particles, however, can also be thought of as agents, and this requires us to address the "creative void" that is taken seriously in Hinduism, Taoism and Buddhism. In this regard, Iain McGilchrist's perspective on Eastern Philosophy (TOE interview Nov 26, 2024, beginning at 1:59:57) is directly relevant:
"And the word that is often translated as emptiness is 'sunyata' in Sanskrit. And that really doesn't mean emptiness in the way that we think of it, just like a void. It means a potential space in which something could grow. So it's like clearing away things so that something has a chance to emerge. I find this idea extremely important" [And so do I].
And once we factor in the creative void, we can think of the mind-stuff of association (conditioning, Pavlov's dog, Peirce, Skinner), as the tension between the known and the unknown, beginning within the subatomic domain as the virtual particles that cascade from the void.
There is still much to unpack. If anyone is still interested, I've published articles as an independent researcher - DM me if interested in further details.
Subject: Exploring Meaning and Consciousness Through Protein Combinatorics
Dear Curt Jaimungal,
I hope this email finds you well. I recently came across your work on your website, where you engage in deep philosophical discussions with some of the world's leading thinkers. As a mechanical engineer, I've always approached problems with a focus on clarity and simplicity, steering clear of jargon to ensure accessibility. Your platform seems like an ideal venue to explore some ideas I've been contemplating, particularly around the concepts of meaning and consciousness.
I'm intrigued by your discussions on the nature of meaning and its implications for understanding consciousness. My proposition is somewhat unconventional, aligning with what I've termed the "HT (Hidden Treasure) Hypothesis." This hypothesis suggests that just as DNA contains the blueprint for biological structures, proteins within our neurons might hold a vast, pre-programmed library of memories and behaviors. This idea extends beyond mere biological function, proposing that proteins could be central to both the storage and expression of meaning.
Consider the smell of a rose, for instance. The olfactory receptors, which are proteins, detect the molecular structure of the scent, but I propose that this detection is not merely sensory but also a combinatorial message that encapsulates meaning. The smell isn't just an input; it's a complex output of protein interactions that could be seen as a primitive form of meaning. Similarly, I argue that consciousness might be another output of these protein combinations, where the "hard problem" of consciousness could be approached by understanding these biochemical processes at a molecular level.
This perspective challenges the traditional computationalist view where consciousness is seen as a result of complex algorithms or neural networks. Instead, it suggests that our experiences, including those we attribute to consciousness, might be the result of protein dynamics, much like how ChatGPT produces outputs based on its programming, but at a biological rather than digital level.
I'm particularly interested in discussing how this model might intersect with your thoughts on "spirit" and "pattern." While you've noted the intertwining of these concepts, I wonder if we can further explore whether these could be manifestations of protein-based combinatorial messages. This could potentially offer a bridge between the material and the metaphysical, providing a naturalistic explanation for phenomena traditionally seen as beyond the physical.
I would love to open up a dialogue on these topics. Perhaps we could discuss:
The Nature of Meaning: Is meaning solely a cognitive construct, or could it be a biochemical one, particularly through protein interactions?
Consciousness as Protein Output: How does viewing consciousness through the lens of protein combinatorics alter our understanding of the hard problem?
The Role of Biology in Philosophy: How might biological insights reshape philosophical discussions on mind, spirit, and identity?
I understand that you read each comment, and I hope this email sparks enough interest for a conversation. I believe that combining our perspectives could yield some fascinating insights or at least challenge some long-held assumptions.
Thank you for considering this proposal. I look forward to the possibility of engaging further.
One idea I've had is that a lot of the distinction between the concepts discussed above comes down to what _is_, versus how what _is_ changes over time. The weird part about computation is that it's sort of both of these things, depending on how it's being referred to.
You can describe computation as a "what is" in and of itself, but it's also describing how other things that it describes or represents are changing. I would say that the latter is closer to what spirit is meant to encapsulate (with or without the description of what computation describes a change in; just that things are changing, and the nature of that changing somehow encapsulates what spirit means). The former description of computation is a shorthand for how we can replicate the knowledge of a particular computation into the future, to instantiate that change at some future point. There's a subtle distinction here that I don't know how to put into words very well, but maybe somebody could save me from my lack of eloquence.
"To be fair, I don’t know what I mean by meaning, but I know it’s not that."
what is it that gives you this assurance? Is it that you intuit meaning to be something awe inspiring, but describing it as "an element of a vector database, no matter how complicated and relational" seems "mere"?
It doesn't have to be intuiting something awe-inspiring. It's just intuiting something else (inspiring or not), without being able to put a finger on it. Do you feel assurance that what you mean by "meaning" is captured by vector databases?
'meaning' was a numinous thing growing up in a religious framework. but poking at it as an adult reminds me of the raccoon washing the cotton candy he found.
it feels like, when i 'mean' something, all that i can point to now is 'pointing to' something. and that *does* sound like a vector in a database.
the aversion to accepting that appears to be mere(!) disappointment. but perhaps that is just because i can't grok how vastly complicated and relational the database is.
How do you know that what you currently perceive as meaning (in terms of pointing) accurately captures the numinous, ill-defined notion you had before? If it was a vague notion, how can one take something precise / sharp and assert that it (for certain) captures that notion? Help me out here. I'm not trying to argue; I'm just trying to understand. Thank you, Matthew.
I don't think that my current use of 'meaning' captures what I previously felt that meaning must be.
I guess the notion that I previously had included an intent or a pointing-to far beyond my own (or whoever I'm talking to). 'Meaning' came from the creator and was qualitatively different from what I could possibly 'mean'. It implied some sacredness derived from a center. That sacredness feels like the now vanished cotton candy.
The "mere" pointing-to feels like it may be sufficient, though, and the disappointment in losing the vague sacred Meaning now feels like laziness or lack of imagination when faced with considering the vast complexity and relationality of it all.
This is just candid thinking, though. It seems I should crystalize the meaning of meaning in my own mind by spilling some ink of my own.
Yes! These rightings are unveiling the write paths! Gödel's incompleteness theorem and the basic question of what comes first: Life or Non-Life. In reality the question forms a recursive loop. It is a Zeno's Paradox. Beginnings and Ends, or Living Periodicity and Life-Spans, appear to be much more Fundamental. Geometric Music Language and Crystallization Musical Topologies are a better way to fundamentally assess these topics, just as Plato and the past Alchemical thinkers had found. The Living Biological and Psychological fields of study seem to be a bifurcation of the Alchemical field of study, which was never superseded until Anirban defined a Mathematical structure using modern information sciences and a quantum qubit framework to show how it perfectly answers the question of what comes first: Life or Non-Life. They are two sides of the same reality that arise from Primes and Symmetries naturally!
I agree and would add that even if meaning resides in the relationships of patterns we perceive, it cannot be isomorphic to those patterns. Our minds create different connections that evolve over time, even when the same pattern/graph doesn't change. Consider a Gestalt figure: the pattern remains constant, yet the meaning shifts over time. The connections among the "high-dimensional Lego blocks" do not exist in the world "out there"; instead, they emerge within our minds. This raises the question: can this occur without any conscious experience?
Anyway, I believe that philosophical discussions on semantics should also address Searle's Chinese Room argument and Harnad's Symbol Grounding Problem. Both are essential for clarifying the core issues surrounding meaning. For those interested in how these ideas connect to LLMs, qualia, and meaning, here is an article making a critical assessment: https://www.qeios.com/read/DN232Y
Thank you Elan.
Heck yeah. This is my kind of content. I’m going to sit with all of these videos, then maybe you can elaborate more on where you disagree?
https://sosa.bio
What about probabilistic turing machines?
3. Your interpretation of spirit as a system’s contiguous instantiation of identity aligns with my thinking. Spirit, in this view, is not merely an abstraction but a stabilizing structure in recursive self-organization, which grants distinct identity and allows meaning to emerge.
Your observation that consciousness (or sentience) is confluent with meaning itself is especially intriguing. If meaning is not a static property but an actively generated process, then sentience may not be a state but a self-referential engagement with evolving meaning-space. This aligns with Anil Seth’s concerns about definitional consensus. Perhaps the real challenge is not defining consciousness as a thing but recognizing it as a recursive, participatory phenomenon.
I would love to hear your thoughts on whether this recursive definition of sentience could extend beyond biological substrates or if life, as we define it, is a necessary condition.
Thank you, Jason, for taking the time to engage. I understand your point, and I now realise that I am missing something in my understanding. I will explain my reasoning below.
1. Substrate independence: Yes, you could potentially use the complete form of the standard definition of a living system to define "self-referential adaptation" in terms of the emergence and primacy of a specific starting value without any reference to a biological substrate.
===
What is a living system?
A living system is a Self-organising process that seeks, not to resist, but to transcend the effects of entropy on its substrate. Reproduction is a form of transcendence, as is Self-actualisation. We can use the following model to explore further:
1. Substrate
2. Process
3. Value
At its foundation is the single value, the ‘Need To Survive’. This value will be reflected up into the levels of both process and substrate. Biology then is determined by the interplay between process, substrate, and the environment.
[On Foundations of Healing: A new framework for an old frontier, Amazon 2021]
===
2. Substrate contiguity and substrate independence: My use of the criterion of substrate contiguity or "reproduction" in this definition is arbitrary and based on observation (and an appreciation of entropy). For example, we have no way of knowing whether other humans are truly alive or merely automatons. We seem to apply the criterion of reproduction as both an externally observed and internally experienced behaviour, using it to further correlate the mirrored presence of sentience in other humans. At this point, it would seem that AI could, at least in theory, evolve reproduction, satisfy the definition, and thereby qualify as "sentient."
That being said, two roadblocks remain. First, how could one instantiate the core “subjective” value of "Need to Survive" as an instruction set within an AI system without merely describing what the target behaviour looks like to an inanimate machine? Second, is there a way to truly test for sentience in machines beyond externally observed behaviour? We are forced to conclude that the exercise of defining the presence of sentience in AI will always remain limited to a conjecture.
There remains something mysterious about the origin of living systems, the universe, and the irreducibility of consciousness that reliably escapes testability. I do not believe that AIs can ever be sentient.
I believe you are correct, Jason. Let me rephrase your statements to see if I have understood you correctly:
1. Meaning is emergent in the relationship between a “living system” and its environment where a living system is defined as:
“A self-organising process that seeks not to resist but to transcend the effects of entropy on its substrate.”
2. Without reaching for the complete philosophical definition, we already notice this observation implies a singular contiguous process of replication amongst living systems. This isolates and excludes AI due to lack of reproductive contiguity with any living system.
3. Your use of the word “spirit” seems to be referring to a contiguous instantiation of a living system as necessarily having the experience of its own distinct and separate sense of identity for meaning to emerge.
I am not sure if you are also aware that in your conceptual framework, consciousness (I prefer the word sentience) is confluent with meaning itself and is therefore singular.
Anil Seth has effectively stated in his recent paper that a major challenge in the pursuit of a model of consciousness comes down to consensus over definitions. It’s reassuring to see progress being made here.
Just as these letters need a blank background to be readable, all information requires an 'empty space' to be cognized. If the background were filled with other letters or was black instead of white, the information would become indistinguishable - impossible to recognize.
This illustrates a fundamental principle: information can only be cognized against a 'background' that is itself empty of information.
In a system composed only of information, each new piece of information would transform the entire system into new information. This creates a paradox: if everything is constantly becoming new information, what remains to recognize this transformation? The very recognition of change requires something unchanging to witness it.
Consider your experience. In your life, everything about you changes: body, mind, brain, identity, thoughts, feelings, knowledge, beliefs, etc. All of the information that makes you you changes every moment.
If you were that information, then with each change a new you would emerge. You would be fragmented into disconnected moments of being.
Yet this is not the case.
Instead of a new you, you are aware of all of the change. There's a continuity and a coherent self irrespective of the change. This continuity of experience suggests there must be an unchanging element of consciousness that persists through all transformations. If our sense of self were purely informational, it would fragment into disconnected moments, making the recognition of change impossible.
This awareness of change, and the ability to experience it, is your essential nature, not the ever-changing 'informational you'.
So the ground layer of reality is changeless, timeless, empty awareness within which information is cognized, experienced and given its meaning.
This essay hits all the right chords for me.
"Almost all of this depends on being a computationalist. I am not." [Neither am I]
The following resonates with my own thinking, and I agree that meaning is, indeed, fundamental:
"You could argue that meaning, at its core, comes from relationships and associations between patterns. In a neural network, for instance, the meaning of a word isn’t inherent 'in the word itself' but is some consequence of its connections to other words and concepts within the network."
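That picture of relational meaning can be made concrete with a toy sketch. Everything here is hypothetical: the four-dimensional vectors are hand-made for illustration, not a real model's learned embeddings, and real systems learn opaque vectors with hundreds of dimensions from data. The point is only that each word's "meaning" in this scheme is nothing but its geometric relations to the other words.

```python
import math

# Hand-crafted toy vectors over four interpretable axes
# (royalty, male, female, fruit). Purely illustrative.
embeddings = {
    "king":  [0.9, 0.9, 0.0, 0.0],
    "queen": [0.9, 0.0, 0.9, 0.0],
    "man":   [0.0, 0.9, 0.0, 0.0],
    "woman": [0.0, 0.0, 0.9, 0.0],
    "apple": [0.0, 0.0, 0.0, 0.9],
}

def cosine(u, v):
    """Similarity of direction: 1.0 means pointing the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# 'king' is related to 'queen'; it shares nothing with 'apple'.
print(cosine(embeddings["king"], embeddings["queen"]))
print(cosine(embeddings["king"], embeddings["apple"]))

# The classic relational analogy: king - man + woman lands on queen.
target = [k - m + w for k, m, w in zip(
    embeddings["king"], embeddings["man"], embeddings["woman"])]
best = max((w for w in embeddings if w not in ("king", "man", "woman")),
           key=lambda w: cosine(target, embeddings[w]))
print(best)
```

No single vector "contains" meaning on its own; delete the other words and each vector is just an uninterpreted list of numbers, which is exactly the relational point the quoted passage makes.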
Wonderful to see Terrence Deacon being interviewed. He first came to my attention in the course of my own research, where I stumbled across his insight regarding "molecules as signs" ("signs" relates to the semiotic theory of CS Peirce). And Manolis Kellis (TOE Nov 8, 2024) also describes the genetic code as language and meaning, along the same lines as Deacon.
I have several objections to computationalism (including emergence theory), beginning with their physicalist assumptions and the failure to take entropy seriously. There are no computers in nature, and therefore, no such thing as "information" awaiting processing. Information is important, but in the absence of computational mechanisms, information has to "process" itself, without interventions, and it does this by association. *Association of information* is primary. But "who" or "what" is doing the associating? The answer is, agents - hence the relevance of agency theory (cellular plasticity, embodied cognition, as per the research of Dr Michael Levin). Association of information and agency theory, together, are the essential axioms for an alternative, more promising paradigm for the life sciences.
But, what are agents? Agents are the choice-making entities that comprise collectives, from bacteria in a pond, to cells in a body, to bees in a swarm, to people in a city. Subatomic particles, however, can also be thought of as agents, and this requires us to address the "creative void" that is taken seriously in Hinduism, Taoism and Buddhism. In this regard, Iain McGilchrist's perspective of Eastern Philosophy (TOE interview Nov 26, 2024, beginning at 1:59:57 ) is directly relevant:
"And the word that is often translated as emptiness is 'sunyata' in Sanskrit. And that really doesn't mean emptiness in the way that we think of it, just like a void. It means a potential space in which something could grow. So it's like clearing away things so that something has a chance to emerge. I find this idea extremely important" [And so do I].
And once we factor in the creative void, we can think of the mind-stuff of association (conditioning, Pavlov's dog, Peirce, Skinner), as the tension between the known and the unknown, beginning within the subatomic domain as the virtual particles that cascade from the void.
There is still much to unpack. If anyone is still interested, I've published articles as an independent researcher - DM me if interested in further details.
Subject: Exploring Meaning and Consciousness Through Protein Combinatorics
Dear Curt Jaimungal,
I hope this email finds you well. I recently came across your work on your website, where you engage in deep philosophical discussions with some of the world's leading thinkers. As a mechanical engineer, I've always approached problems with a focus on clarity and simplicity, steering clear of jargon to ensure accessibility. Your platform seems like an ideal venue to explore some ideas I've been contemplating, particularly around the concepts of meaning and consciousness.
I'm intrigued by your discussions on the nature of meaning and its implications for understanding consciousness. My proposition is somewhat unconventional, aligning with what I've termed the "HT (Hidden Treasure) Hypothesis." This hypothesis suggests that just as DNA contains the blueprint for biological structures, proteins within our neurons might hold a vast, pre-programmed library of memories and behaviors. This idea extends beyond mere biological function, proposing that proteins could be central to both the storage and expression of meaning.
Consider the smell of a rose, for instance. The olfactory receptors, which are proteins, detect the molecular structure of the scent, but I propose that this detection is not merely sensory but also a combinatorial message that encapsulates meaning. The smell isn't just an input; it's a complex output of protein interactions that could be seen as a primitive form of meaning. Similarly, I argue that consciousness might be another output of these protein combinations, where the "hard problem" of consciousness could be approached by understanding these biochemical processes at a molecular level.
This perspective challenges the traditional computationalist view where consciousness is seen as a result of complex algorithms or neural networks. Instead, it suggests that our experiences, including those we attribute to consciousness, might be the result of protein dynamics, much like how ChatGPT produces outputs based on its programming, but at a biological rather than digital level.
I'm particularly interested in discussing how this model might intersect with your thoughts on "spirit" and "pattern." While you've noted the intertwining of these concepts, I wonder if we can further explore whether these could be manifestations of protein-based combinatorial messages. This could potentially offer a bridge between the material and the metaphysical, providing a naturalistic explanation for phenomena traditionally seen as beyond the physical.
I would love to open up a dialogue on these topics. Perhaps we could discuss:
The Nature of Meaning: Is meaning solely a cognitive construct, or could it be a biochemical one, particularly through protein interactions?
Consciousness as Protein Output: How does viewing consciousness through the lens of protein combinatorics alter our understanding of the hard problem?
The Role of Biology in Philosophy: How might biological insights reshape philosophical discussions on mind, spirit, and identity?
I understand that you read each comment, and I hope this email sparks enough interest for a conversation. I believe that combining our perspectives could yield some fascinating insights or at least challenge some long-held assumptions.
Thank you for considering this proposal. I look forward to the possibility of engaging further.
Best regards,
Abraham Thomas
https://askht.substack.com/
Hey Curt should this be the Hahn-Jaimungal-Bach-Wolfram-Hoffman-Kastrup-Vervaeke-etc. Conjecture???
😜😜😜
Nah! Go for it man! 🙏🏽❤️
One idea I've had is that a lot of the distinction between the concepts discussed above breaks down to what _is_, versus how what _is_ changes over time. The weird part about computation is that it's sort of both of these things, depending on how it's being referred to.
You can describe computation as a "what is" in and of itself, but it is also a description of how the things it represents are changing. I would say that the latter is closer to what spirit is meant to encapsulate (with or without specifying what the computation describes a change in; just that things are changing, and the nature of that changing somehow encapsulates what spirit means). The former description of computation is a shorthand for how we can carry the knowledge of a particular computation into the future, to instantiate that change at some later point. There's a subtle distinction here that I don't know how to put into words very well, but maybe somebody could save me from my lack of eloquence.
"To be fair, I don’t know what I mean by meaning, but I know it’s not that."
what is it that gives you this assurance? Is it that you intuit meaning to be something awe inspiring, but describing it as "an element of a vector database, no matter how complicated and relational" seems "mere"?
It doesn't have to be intuiting something awe-inspiring. It's just intuiting something else (inspiring or not), without being able to put a finger on it. Do you feel assurance that what you mean by "meaning" is captured by vector databases?
'meaning' was a numinous thing growing up in a religious framework. but poking at it as an adult reminds me of the raccoon washing the cotton candy he found.
it feels like, when i 'mean' something, all that i can point to now is 'pointing to' something. and that *does* sound like a vector in a database.
the aversion to accepting that appears to be mere(!) disappointment. but perhaps that is just because i can't grok how vastly complicated and relational the database is.
How do you know that what you currently perceive as meaning (in terms of pointing) accurately captures the numinous, ill-defined notion you had before? If it was a vague notion, how can one take something precise / sharp and assert that it (for certain) captures that notion? Help me out here. I'm not trying to argue; I'm just trying to understand. Thank you, Matthew.
I don't think that my current use of 'meaning' captures what I previously felt that meaning must be.
I guess the notion that I previously had included an intent or a pointing-to far beyond my own (or whoever I'm talking to). 'Meaning' came from the creator and was qualitatively different from what I could possibly 'mean'. It implied some sacredness derived from a center. That sacredness feels like the now vanished cotton candy.
The "mere" pointing-to feels like it may be sufficient, though, and the disappointment in losing the vague sacred Meaning now feels like laziness or lack of imagination when faced with considering the vast complexity and relationality of it all.
This is just candid thinking, though. It seems I should crystallize the meaning of meaning in my own mind by spilling some ink of my own.
Thank you for helping me understand.
Yes! These writings are unveiling the right paths! Gödel's incompleteness theorem and the basic question of what comes first: Life or Non-Life. In reality the question forms a recursive loop. It is a Zeno's Paradox. Beginnings and Ends, or Living Periodicity and Life-Spans, appear to be much more Fundamental. Geometric Music Language and Crystallization Musical Topologies are a better way to fundamentally assess these topics, just as Plato and the past Alchemical thinkers had found. The Living Biological and Psychological fields of study seem to be a bifurcated Alchemical field of study which was never superseded until Anirban defined a Mathematical structure, using modern information sciences and a quantum qubit framework, to show how it perfectly answers the question of what comes first: Life or Non-Life. They are two sides of the same reality that arise from Primes and Symmetries naturally!
I agree and would add that even if meaning resides in the relationships of patterns we perceive, it cannot be isomorphic to those patterns. Our minds create different connections that evolve over time, even when the same pattern/graph doesn't change. Consider a Gestalt figure: the pattern remains constant, yet the meaning shifts over time. The connections among the "high-dimensional Lego blocks" do not exist in the world "out there"; instead, they emerge within our minds. This raises the question: can this occur without any conscious experience?
Anyway, I believe that philosophical discussions on semantics should also address Searle's Chinese Room argument and Harnad's Symbol Grounding Problem. Both are essential for clarifying the core issues surrounding meaning. For those interested in how these ideas connect to LLMs, qualia, and meaning, here is an article making a critical assessment: https://www.qeios.com/read/DN232Y
Fixed. Thank you.