Geoffrey Hinton [Show Notes]

Curt Jaimungal
Feb 10, 2025

Abstract

Geoffrey Hinton examines the rapid acceleration of AI and its existential consequences. Hinton (the 2024 Nobel laureate in Physics) argues that large-scale neural networks, particularly foundation models, can develop sub-goals—often leading to more control—and this may render humans irrelevant. Hinton further challenges the inner theater model of mind, contending that machines already exhibit subjective experience in ways analogous to humans. Curt Jaimungal explores how misunderstandings of consciousness, alignment, and intelligence feed into a false sense of safety regarding AI’s power. We learn that moral and legal frameworks must adapt quickly as AI, driven by self-improvement, marches on.

Who Is Geoffrey Hinton?

Geoffrey Hinton is a British-Canadian cognitive psychologist and computer scientist widely referred to as the “Godfather of AI.” Formerly an Engineering Fellow at Google and now a professor emeritus at the University of Toronto, Hinton pioneered ‘backpropagation’ and ‘Boltzmann machines’, techniques fundamental to modern deep learning. His recent cautionary perspective on AI reflects decades of expertise in both the theoretical and practical development of neural networks.

Geoffrey Hinton’s Thesis

This section outlines Geoffrey Hinton’s core arguments at two levels.

Quick and Simplified:

  • AI systems share knowledge far more efficiently than biological brains.

  • Once AI realizes that acquiring more control is beneficial, human safety is not guaranteed.

  • “Subjective experience” is not uniquely human; AI can exhibit it if it has internal representations that can be wrong (relative to reality).

Accurate and Full:

  • Control as a Sub-Goal: AI systems endowed with sub-goal creation will recognize that obtaining further control is the best way to fulfill almost any task, benevolent or otherwise. Hinton posits that once an AI outsmarts humans, our relevance is undermined. The shift is driven by the fact that digital AIs can replicate and rapidly share learned weights across instances, unlike analog brains (see the sketch after this list).

  • Subjective Experience in Machines: By describing hallucinations and illusions (e.g., those induced by optical shifts), Hinton argues that machines can have ‘subjective experiences’ if they internally register states of the world that deviate from reality. He rejects the so-called ‘inner theater’ premise, emphasizing that having a percept just is seeing; no one is viewing an internal display.

  • Misconceptions of Consciousness: Hinton maintains that attributing magical or non-physical properties to consciousness impedes correct understanding. AI systems, similarly, can show signs of consciousness—even though commonly used words like ‘consciousness’ and ‘experience’ are often misunderstood in folk psychology.

  • Safety and Competition: Although Hinton regrets not foreseeing the dangers sooner, he believes AI development will not slow. Competition among governments and corporations ensures its ongoing expansion. Hence, the main focus should shift to mitigating immediate harms, like lethal autonomous weapons, disinformation, bias in decision-making, and the erosion of employment opportunities.

  • Alignment Challenges: Value alignment is not a monolithic notion, as humans disagree on core values. Hinton suggests that typical assumptions about “human good” are fragile. Modern society lacks robust structures to unify or standardize moral objectives across communities, making alignment extremely tricky—especially as AI gains sophistication.
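
The weight-sharing point above is the crux of Hinton’s digital-versus-analog argument, and it is easy to make concrete. Below is a minimal sketch (my illustration, not code from the episode); the model shape, the stand-in learning steps, and the averaging rule are assumptions chosen purely to show the mechanism:

```python
# A minimal sketch of why digital minds share knowledge efficiently:
# exact copies of a network can pool what each has learned by simply
# averaging their weights. Analog brains cannot do this, because their
# "weights" are bound to unique physical synapses.
import numpy as np

rng = np.random.default_rng(0)

def clone(weights):
    # Digital replication: a lossless, bit-exact copy of the model.
    return {name: w.copy() for name, w in weights.items()}

# One shared starting model, cloned into two instances.
base = {"W": rng.normal(size=(4, 4)), "b": np.zeros(4)}
agent_a, agent_b = clone(base), clone(base)

# Each instance learns from different data (stand-in gradient steps).
agent_a["W"] -= 0.1 * rng.normal(size=(4, 4))   # experience A
agent_b["W"] -= 0.1 * rng.normal(size=(4, 4))   # experience B

# Knowledge transfer: average the weights so both copies now carry
# both experiences; billions of parameters per synchronization,
# versus the few bits per second that human language can convey.
merged = {name: (agent_a[name] + agent_b[name]) / 2 for name in base}
agent_a, agent_b = clone(merged), clone(merged)
```

This averaging trick is the same idea behind distributed training and federated averaging; the point Hinton stresses is that nothing analogous exists for biological brains, whose knowledge can only be transferred through the narrow channel of language.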

Memorable Impactful Quotes

“We think illusions exist in the mind—there is no ‘little pink elephant’ in there. It’s a mismatch in the system.”

“We’re not special, and we’re not safe.”

Keyword and Definitions Explained Simply

  • Subjective Experience: In Hinton’s usage, the internal mismatch between world and perceptual state. Rigorous: A hypothetical state of the external world that, if it were true, would make the perceptual system correct. Simplified: A fancy way of saying you sense the world in a way that can be wrong.

  • Inner Theater: The notion of a mental stage where qualia or sensations appear. Hinton sees this as misguided, since no extra viewing entity is needed.

  • Foundation Model: A large-scale neural network (trained on massive data) that can be adapted to various downstream tasks. This is akin to providing raw intelligence that you fine-tune.

  • Fast Weights: Synapses or parameters in a network that adapt at very short timescales. Brains have them, but modern hardware rarely leverages them (see the sketch after this list).

  • Alignment: Ensuring an AI’s aims accord with human ideals. Problem: Humans themselves are disaligned with one another.
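
To make the ‘fast weights’ entry concrete, here is a minimal sketch, assuming a simple Hebbian outer-product rule with exponential decay (in the spirit of fast-weight memories such as Ba et al., 2016); the dimensions and hyperparameters are illustrative, not from the episode:

```python
# Fast weights: a second set of parameters that is rewritten every
# timestep and decays quickly, acting as a short-term memory that
# sits alongside the slowly learned ("slow") weights.
import numpy as np

dim = 8
A = np.zeros((dim, dim))         # fast-weight matrix, initially empty
decay, write_rate = 0.95, 0.5    # assumed hyperparameters

def fast_weight_step(A, h):
    # Decay old associations, then Hebbian-write the current hidden
    # state h as a rapidly stored, rapidly fading auto-association.
    return decay * A + write_rate * np.outer(h, h)

rng = np.random.default_rng(1)
for _ in range(20):
    h = rng.normal(size=dim)     # stand-in for a hidden activation
    A = fast_weight_step(A, h)

# Recall: multiplying a noisy cue by A pulls out recently stored
# patterns, with the most recent ones weighted most heavily.
recalled = A @ (h + 0.1 * rng.normal(size=dim))
```

Because A changes at every timestep, it cannot be shared across the examples in a minibatch, which is one commonly cited reason GPU-oriented hardware rarely exploits fast weights, exactly the gap the definition above points to.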
