11 Comments

Good interview, Curt! As one of Geoff's former PhD students (from before your and Ilya's time), I have deep respect for his foundational work and expertise in this area.

LOTS to discuss, of course, but I'll keep it brief for now: I would respectfully advise folks to separate Geoff's concerns about AI risks from whatever you may think of his views on the nature of consciousness.

(The latter is a TOUGH nut to crack, debated for millennia, not about to be "solved" one way or another. I favor Chalmers' approach generally, but it's far from settled).


45:57 — "Rope" (1948). An hour and twenty minutes of your life, but it is worth it.


Great film. Hitchcock. Amazing cinematographic techniques — "one shot". (Actually 10, made to look like 5, due to film reel limitations of the time).


Yes, great movie, but I was referring to the idea in the movie: if someone is so "smart" as to do evil and nobody can see you for what you are, is it still evil? The guy who looks like Lampwick from Pinocchio (Brandon) says it fairly clearly. Is evil still evil if nobody notices you doing it? And this sends me to the obligation-mitzvah of "Sending Away the Mother Bird". If one stumbles upon a nest with eggs in it, one has the obligation to send the mother bird away so she won't witness the destruction of what she holds most dear. It is supposed to be out of mercy, but it is also self-protection. The mother bird will always remember you for the atrocity you've done to her, and will always be a potential enemy.

That being said, can someone be so smart as to do evil without being noticed?

In the interview they speak about smart people who are evil and smart people who are good. Can someone be smart without being for himself? Being for oneself puts one on a collision path with other smart ones. Can one touch without being touched? (Another reference to quantum physics, if you want one.) How is this superior intelligence going to stay untouched when it does the touching? Basically, the law legitimizes that doing evil is OK if nobody notices (after all, they are so stupid that they haven't even noticed I plundered his or her future). So theft is OK, as opposed to murder. At least after murder nobody is left behind with their "guts and heart torn out".


Thanks for that. It was a total joy to watch that interview. 🥰 Kudos, 👍 and well done you. 👏

I'm less than objective, as I learned about, and programmed, neural networks around 1990 from a photocopy of the "PDP Volumes" by McClelland, Rumelhart, and *Hinton* (and others). It was during a "neural networks winter", so the field had seemingly been "renamed" to PDP = Parallel Distributed Processing. (1st lesson on day 1: "neural networks have failed; a single perceptron can't solve the XOR problem"; still true, that 😂. I searched my lecturer's name, S.Bozinvski, recently; chuffed to learn not only that he's still alive and kicking, but also that this https://life.ieee.org/ieee-president-attends-plaque-unveiling-of-ieee-milestone-in-north-macedonia/ showed up 🤩)
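That day-1 lesson is easy to verify directly, by the way. A minimal sketch (plain Python, brute-forcing a small weight grid for a single threshold unit; purely illustrative, not from the original lecture): AND admits a separating line, XOR provably does not.

```python
# A single perceptron computes step(w1*x1 + w2*x2 + b).
# Brute-force a grid of weights: AND is solvable, XOR is not.
import itertools

def solvable(target):
    """True if some (w1, w2, b) on the grid realizes `target` on {0,1}^2."""
    grid = [x / 2 for x in range(-4, 5)]  # -2.0 .. 2.0 in steps of 0.5
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(target(x1, x2))
               for x1, x2 in itertools.product([0, 1], repeat=2)):
            return True
    return False

AND = lambda a, b: a and b
XOR = lambda a, b: a != b

print(solvable(AND))  # True  -- e.g. w1=1, w2=1, b=-1.5 works
print(solvable(XOR))  # False -- no linear threshold unit exists
```

The grid search only demonstrates the point; the XOR impossibility is of course a theorem (Minsky & Papert), not an artifact of the grid.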

I'm of course amazed by the turn the field has taken in the last 10 years. No one expected this.

I saw you got lots of heat in the YT comments for the "we are not special" remark. I'm sorry to make light of this; very many people take it very personally. (And some, TBH, seemingly are still in the toddler-hood stage, as far as their inner lives go.) I don't see anything wrong with what Hinton says. Every living creature is one and unique; every one of us is "special" (but so are my cat, my dog, etc.), so in that sense no one human (and no living thing) is really *that* special. Or at least not in the way they think, and, it seems, too many hope.

We have of course already been here. First Earth was special, whole Cosmos revolved around it, and by extension - us!, then - Copernicus! 😳 Then we humans were special, He made us in His likeness, and then - Darwin! Just another leaf on the tree of life?? 😰 But uh-ah-ok - we were at least *rational* animals! - but then, Freud! 🧐 Looks to me this is the direction of travel with intelligence, consciousness etc.


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow


Did you grok this? Who cares what the old man thinks


What is scary, Curt, is not a possible AI takeover as much as this materialist Dr. Hinton dismissing creation ex nihilo as 'simply wrong'. What a powerful, logical argument....what convincing evidence....it's simply wrong!


Prof Hinton, like Prof Penrose, is a giant. But, as H says of P, giants are also capable of spouting nonsense. I do not know of any contemporary philosopher who clings to the idea of the theatre of the mind, an idea from many centuries ago. Any who did would be guilty of the very well-known homunculus fallacy. It is a straw man, which is how it appeared in Dan Dennett's work; perhaps that is why Prof Hinton uses it, as they were friends.

By attacking the theatre of the mind, one avoids dealing with the current, very serious question about conscious experience, which is essentially this: why do we have it, when it appears to be superfluous? Systems that run algorithmically, whether biological or silicon/graphite, do not need to 'have a conscious experience' of an object in order to process the 'percept' of the object. In other words, the robot in the pink elephant example does not need to have an experience of pink elephants in order to point its arm in the direction indicated by the percepts; its software and hardware simply process the percepts and activate the arm.

The funny thing is that there are more and more experiments showing that we can complete some tasks while unconscious that we can also complete while awake. And yet conscious experience is appended to the latter and not the former. Some philosophers, but more neuroscientists and computer scientists, take this to conclude that conscious experience is a hallucination, without seeing the irony that one cannot hallucinate without experiencing the hallucination. The point is, some take this to deny that we have conscious experience at all. There is irony there too, in that modern science is based on the fundamental principle of testing by observation, and every single human being that has ever existed has observed what they have observed, i.e. conscious experience.

But it may be that a non-biological system has experience, and the more sophisticated its information processing, the more sophisticated might be its own understanding of its experience. That, of course, is panpsychism, or a part thereof. Humanoid experience is very real, as anyone who has stubbed their toe can attest, and it is the basis of much of our social structure. As AI and robotics deliver agents that look, sound and act like humans, with very sophisticated decision-making that (yes) can deceive and strategise, we will need to seriously address the question of whether they 'experience' pain, joy, love and hate. That is an essential aspect of preparing for the potentially apocalyptic outcome that Prof H fears.


With all you've learned and cogitated upon, Curt--a breathtaking landscape, to be sure--I'm curious what you might think of Rupert Spira's rather simple take on AI and consciousness in https://www.youtube.com/watch?v=7tGGnUHmcls. You have, of course, had a couple of great conversations with him. There are varying definitions of consciousness, as you pointed out in your amazing recent video bringing theories of consciousness together, but sometimes the implications align. In this clip Spira suggests that AI cannot be conscious because consciousness is not something that one "has": like a cup, a cat, a human brain, a thought, or a computer, AI is made and appears within consciousness.


6:22 “Most people — almost everybody, in fact — think one reason we are fairly safe is we have one thing that they don't have and will never have … We have consciousness or sentience or subjective experience.”

Dr. Hinton, this seems to me entirely the wrong question. I do not worry whether any machine that uses current AI tech has consciousness, sentience, or subjective experience. However, I care very much whether it can supply constructive insights beyond the data encoded into them. People expect that from these machines, but that is not what they deliver.

This disconnect of expectations began decades ago when your fellow Nobel Prize winner mistook the simple redundant fault tolerance in his clever new Hopfield networks for digital examples of the order-from-chaos effect he had seen years earlier in his quantum biomolecular work.

Digital systems do not work that way because they have a finite information capacity. If a system is digital, what you get out is never more than what you put in. That limitation is not an accident but what we wanted. If anything goes outside that finite limit, we call it an error.

Unquestionably, you have done some of the most profound work in this area. Your book is amazing. Nonetheless, I cannot agree with your conclusion that this particular approach to AI is a threat in terms of superintelligence. It is nothing more than a database with an unusual data storage format that I would describe as holographic. That's powerful, but it does not produce insights beyond the ones already in the training data.

My concern is that people continue to mistake vast but digitally limited databases for artificial intelligences, when the truth is they are always and only databases — and, in the case of LLMs, ones encoded in a probability-pair format that guarantees inaccuracy and degradation. Look carefully at what has happened to YouTube if you don't believe me.

Even now, we are training people to stop thinking and instead listen to a voice that is little more than a vast tape recorder filtered through a clever probability-based retrieval process. Innovation will stop if this becomes the norm for education and engineering since you end up with everyone attempting to solve problems by looking over each other's shoulders.
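To make the "probability-based retrieval process" concrete, here is a toy sketch of the sampling step: drawing the next token from a probability distribution with a temperature knob. (The tokens and scores below are invented for illustration; a real LLM derives its distribution from billions of trained parameters, but the retrieval step at the end is roughly this simple.)

```python
# Toy illustration of probability-based next-token retrieval:
# softmax over made-up scores, then a weighted random draw.
import math, random

def sample_next(scores, temperature=1.0):
    """Sample a token from softmax(scores / temperature)."""
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for token, p in zip(scores, probs):          # dicts preserve order (3.7+)
        cum += p
        if r <= cum:
            return token
    return list(scores)[-1]

scores = {"cat": 2.0, "dog": 1.0, "pelican": -1.0}  # hypothetical model output
random.seed(0)
print(sample_next(scores, temperature=0.5))  # prints "cat" with this seed
```

Lower the temperature and the draw collapses toward the single most probable token; raise it and the "tape recorder" replays less likely material more often. Either way, nothing outside the distribution can ever come out.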

I, too, worry that future technologies may change this situation and create truly insightful systems capable of advancing current knowledge. But in the case of the technologies you helped develop and are most concerned with, I would sincerely ask you to consider that the profound threat they pose is not superintelligence but fooling people into giving up everyday insight in favor of addiction to the most robust form of cheating ever created.
