|
WEBVTT |
|
|
|
00:00.000 --> 00:06.000 |
|
As part of the MIT course 6.S099 on artificial general intelligence, I got a chance to sit down with
|
|
|
00:06.000 --> 00:13.680 |
|
Christof Koch, who is one of the seminal figures in neurobiology, neuroscience, and generally in
|
|
|
00:13.680 --> 00:20.320 |
|
the study of consciousness. He is the president and chief scientific officer of the Allen Institute
|
|
|
00:20.320 --> 00:27.520 |
|
for Brain Science in Seattle. From 1986 to 2013, he was a professor at Caltech. Before that,
|
|
|
00:27.520 --> 00:34.400 |
|
he was at MIT. He is extremely well cited, over a hundred thousand citations. His research, |
|
|
|
00:34.400 --> 00:40.080 |
|
his writing, and his ideas have had a big impact on the scientific community and the general public
|
|
|
00:40.080 --> 00:44.560 |
|
in the way we think about consciousness, in the way we see ourselves as human beings. |
|
|
|
00:44.560 --> 00:49.120 |
|
He's the author of several books: The Quest for Consciousness: A Neurobiological Approach,
|
|
|
00:49.120 --> 00:54.320 |
|
and a more recent book, Consciousness: Confessions of a Romantic Reductionist.
|
|
|
00:54.320 --> 01:00.000 |
|
If you enjoy this conversation and this course, subscribe and click the little bell icon to make
|
|
|
01:00.000 --> 01:05.360 |
|
sure you never miss a video. And in the comments, leave suggestions for any people you'd like to |
|
|
|
01:05.360 --> 01:10.480 |
|
see be part of the course or any ideas that you would like us to explore. Thanks very much, |
|
|
|
01:10.480 --> 01:16.320 |
|
and I hope you enjoy. Okay, before we delve into the beautiful mysteries of consciousness, |
|
|
|
01:16.320 --> 01:22.240 |
|
let's zoom out a little bit. And let me ask, do you think there's intelligent life out there in |
|
|
|
01:22.240 --> 01:27.920 |
|
the universe? Yes, I do believe so. We have no evidence of it, but I think the probabilities |
|
|
|
01:27.920 --> 01:33.440 |
|
are overwhelmingly in favor of it. Give me a universe where we have 10 to the 11 galaxies,
|
|
|
01:33.440 --> 01:39.280 |
|
and each galaxy has between 10 to the 11 and 10 to the 12 stars, and we know most stars have one or
|
|
|
01:39.280 --> 01:47.360 |
|
more planets. So how does that make you feel? It still makes me feel special, because I have |
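NOTE The arithmetic behind this estimate, written out:
$$10^{11}\ \text{galaxies} \times 10^{11}\text{--}10^{12}\ \text{stars per galaxy} \approx 10^{22}\text{--}10^{23}\ \text{stars},$$
so even if only a vanishing fraction of planetary systems ever produce intelligent life, the expected number of instances across the universe is still vast.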
|
|
|
01:47.360 --> 01:54.480 |
|
experiences. I feel the world, I experience the world. And independent of whether there are other
|
|
|
01:54.480 --> 02:00.480 |
|
creatures out there, I still feel the world and I have access to this world in this very strange, |
|
|
|
02:00.480 --> 02:07.840 |
|
compelling way. And that's the core of human existence. Now you said human, do you think |
|
|
|
02:07.840 --> 02:13.360 |
|
if those intelligent creatures are out there, do you think they experience their world? |
|
|
|
02:13.360 --> 02:18.080 |
|
Yes. If they are a product of natural evolution, as they would have to be,
|
|
|
02:18.080 --> 02:22.400 |
|
they will also experience their own world. So consciousness isn't just human, you're right,
|
|
|
02:22.400 --> 02:28.320 |
|
it's much wider. It may be spread across all of biology. The only thing
|
|
|
02:28.320 --> 02:33.040 |
|
we have that's special is that we can talk about it. Of course, not all people can talk about it.
|
|
|
02:33.040 --> 02:38.320 |
|
Babies and little children can't talk about it. Patients who have a stroke in the left
|
|
|
02:38.320 --> 02:43.040 |
|
inferior frontal gyrus can't talk about it. But most normal adult people can talk about it.
|
|
|
02:43.040 --> 02:48.000 |
|
And so we think that makes us special compared to little monkeys or dogs or cats or mice or all |
|
|
|
02:48.000 --> 02:52.400 |
|
the other creatures that we share the planet with. But all the evidence seems to suggest |
|
|
|
02:52.400 --> 02:56.960 |
|
that they too experience the world. And so it's overwhelmingly likely that aliens
|
|
|
02:56.960 --> 03:00.640 |
|
would also experience their world. Of course differently, because they have a
|
|
|
03:00.640 --> 03:04.000 |
|
different sensorium, they have different sensors, they have a very different environment. |
|
|
|
03:04.640 --> 03:11.360 |
|
But I would strongly suppose that they also have experiences. They feel pain and
|
|
|
03:11.360 --> 03:17.600 |
|
pleasure and see in some sort of spectrum and hear and have all the other senses.
|
|
|
03:17.600 --> 03:21.040 |
|
Of course, their language, if they have one, would be different. So we might not be able to
|
|
|
03:21.040 --> 03:24.320 |
|
understand their poetry about the experiences that they have. |
|
|
|
03:24.320 --> 03:32.240 |
|
That's correct. Right. So in a talk, in a video, I've heard you mention Siputzo, a dachshund that
|
|
|
03:32.240 --> 03:37.360 |
|
you grew up with, that was part of your family when you were young. First of all,
|
|
|
03:37.360 --> 03:46.320 |
|
you're technically a Midwestern boy. Technically. But after that, you traveled around a bit,
|
|
|
03:46.320 --> 03:50.800 |
|
hence a little bit of the accent. You talked about Siputzo, the dachshund, having
|
|
|
03:51.680 --> 03:57.440 |
|
these elements of humanness, of consciousness that you discovered. So I just wanted to ask, |
|
|
|
03:58.000 --> 04:03.520 |
|
can you look back in your childhood and remember when was the first time you realized you yourself, |
|
|
|
04:03.520 --> 04:09.600 |
|
sort of from a third-person perspective, are a conscious being, this idea of,
|
|
|
04:11.280 --> 04:16.320 |
|
you know, stepping outside yourself and seeing there's something special going on here in my brain? |
|
|
|
04:17.840 --> 04:22.480 |
|
I can't really actually. It's a good question. I'm not sure I recall a discrete moment. I mean, |
|
|
|
04:22.480 --> 04:27.280 |
|
you take it for granted because that's the only world you know, right? The only world I know, |
|
|
|
04:27.280 --> 04:33.520 |
|
you know, is the world of seeing and hearing voices and touching and all the other things. |
|
|
|
04:33.520 --> 04:40.000 |
|
So it was only much later, early in my undergraduate days, when I enrolled in physics
|
|
|
04:40.000 --> 04:43.520 |
|
and in philosophy that I really thought about it and thought, well, this is really fundamentally |
|
|
|
04:43.520 --> 04:48.800 |
|
very, very mysterious. And there's nothing really in physics right now that explains this transition |
|
|
|
04:48.800 --> 04:54.560 |
|
from the physics of the brain to feelings. Where do the feelings come in? So you can look at the |
|
|
|
04:54.560 --> 04:58.800 |
|
foundational equations of quantum mechanics, of general relativity, you can look at the periodic table of
|
|
|
04:58.800 --> 05:05.360 |
|
the elements, you can look at the endless ATGC chatter in our genes, and nowhere is consciousness.
|
|
|
05:05.360 --> 05:11.120 |
|
Yet I wake up every morning to a world where I have experiences. And so that's the heart of the |
|
|
|
05:11.120 --> 05:19.040 |
|
ancient mind body problem. How do experiences get into the world? So what is consciousness? |
|
|
|
05:19.040 --> 05:25.680 |
|
Experience. Consciousness is any experience. Some people call it subjective
|
|
|
05:25.680 --> 05:30.560 |
|
feelings, some people call it phenomenology, some people call it qualia
|
|
|
05:30.560 --> 05:34.480 |
|
in philosophy, but they all denote the same thing. It feels like something: in the
|
|
|
05:34.480 --> 05:40.640 |
|
famous words of the philosopher Thomas Nagel, it feels like something to be a bat, or to be, you know,
|
|
|
05:40.640 --> 05:47.440 |
|
an American or to be angry or to be sad or to be in love or to have pain. |
|
|
|
05:49.040 --> 05:54.480 |
|
And that is what experience is, any possible experience. It could be as mundane as just sitting
|
|
|
05:54.480 --> 05:59.760 |
|
here in a chair, or as exalted as, you know, having a mystical moment, you know,
|
|
|
05:59.760 --> 06:03.200 |
|
in deep meditation, those are just different forms of experiences. |
|
|
|
06:03.200 --> 06:10.080 |
|
Experience. So if you were to sit down with, maybe skip a couple of generations of,
|
|
|
06:10.080 --> 06:16.480 |
|
IBM Watson, something that won Jeopardy, what is the gap? I guess the question is, between Watson,
|
|
|
06:18.000 --> 06:26.000 |
|
which might be much smarter than you, than us, than any human alive, but may not have experience.
|
|
|
06:26.000 --> 06:31.920 |
|
What is the gap? Well, so that's a big, big question that's occupied people for the last, |
|
|
|
06:32.640 --> 06:38.800 |
|
certainly the last 50 years, since the advent, the birth, of computers.
|
|
|
06:38.800 --> 06:42.720 |
|
That's a question Alan Turing tried to answer. And of course, he did it in this indirect way
|
|
|
06:42.720 --> 06:48.480 |
|
by proposing a test, an operational test. But that's not really it, you know,
|
|
|
06:48.480 --> 06:52.480 |
|
he tried to get at: what does it mean for a person to think? And then he had this test,
|
|
|
06:52.480 --> 06:56.400 |
|
right? You lock them away, and then you have a communication with them. And then you try
|
|
|
06:56.400 --> 07:00.400 |
|
to guess after a while whether that is a person or whether it's a computer system. |
|
|
|
07:00.400 --> 07:05.680 |
|
There's no question that now or very soon, you know, Alexa or Siri or, you know, Google Now
|
|
|
07:05.680 --> 07:10.720 |
|
will pass this test, right? And you can game it. But, you know, ultimately, certainly in your |
|
|
|
07:10.720 --> 07:15.440 |
|
generation, there will be machines that will speak with complete poise, that will remember
|
|
|
07:15.440 --> 07:20.320 |
|
everything you ever said. They'll remember every email you ever had, like Samantha, remember,
|
|
|
07:20.320 --> 07:25.520 |
|
in the movie Her? No question it's going to happen. But of course, the key question is,
|
|
|
07:25.520 --> 07:31.120 |
|
does it feel like anything to be Samantha in the movie? Does it feel like anything to be Watson? |
|
|
|
07:31.120 --> 07:38.240 |
|
And there one has to think very, very carefully: there are two different concepts here that we
|
|
|
07:38.240 --> 07:44.400 |
|
commingle. There is a concept of intelligence, natural or artificial, and there is a concept of
|
|
|
07:44.400 --> 07:48.880 |
|
consciousness, of experience, natural or artificial. Those are very, very different things.
|
|
|
07:49.520 --> 07:55.520 |
|
Now, historically, we associate consciousness with intelligence. Why? Because we live in a world |
|
|
|
07:55.520 --> 08:01.520 |
|
leaving aside computers, of natural selection, where we're surrounded by creatures, either our own kin
|
|
|
08:01.520 --> 08:06.880 |
|
that are less or more intelligent, or we go across species, some are more adapted to
|
|
|
08:06.880 --> 08:12.000 |
|
a particular environment, others are less adapted, whether it's a whale or a dog, or you talk about
|
|
|
08:12.000 --> 08:17.680 |
|
a paramecium or a little worm, all right. And we see the complexity of the nervous system goes from
|
|
|
08:17.680 --> 08:24.400 |
|
one cell, to specialized cells, to a worm where 30% of its cells are nerve
|
|
|
08:24.400 --> 08:29.920 |
|
cells, to creatures like us, or like a blue whale, that have 100 billion or even more nerve cells.
|
|
|
08:29.920 --> 08:35.280 |
|
And so based on behavioral evidence and based on the underlying neuroscience, we believe that |
|
|
|
08:35.840 --> 08:41.440 |
|
as these creatures become more complex, they are better adapted to their particular ecological
|
|
|
08:41.440 --> 08:46.960 |
|
niche. And they become more conscious, partly because their brain grows. And we believe |
|
|
|
08:46.960 --> 08:52.080 |
|
consciousness is in the brain, unlike what ancient people thought. Almost every culture thought that
|
|
|
08:52.080 --> 08:56.800 |
|
consciousness and intelligence have to do with your heart. And you still see that today:
|
|
|
08:56.800 --> 09:00.800 |
|
you say, honey, I love you with all my heart. Yes. But what you should actually say is, you
|
|
|
09:00.800 --> 09:05.520 |
|
know, honey, I love you with all my lateral hypothalamus. And for Valentine's Day, you should
|
|
|
09:05.520 --> 09:11.040 |
|
give your sweetheart, you know, a hypothalamus-shaped piece of chocolate, not a heart-shaped chocolate,
|
|
|
09:11.040 --> 09:15.280 |
|
right. And so we still have this language, but now we believe it's the brain. And so we see brains
|
|
|
09:15.280 --> 09:19.680 |
|
of different complexity. And we think, well, they have different levels of consciousness, |
|
|
|
09:19.680 --> 09:28.160 |
|
they're capable of different experiences. But now we confront a world where we're
|
|
|
09:28.160 --> 09:34.960 |
|
beginning to engineer intelligence. And it's radically unclear whether the intelligence we're
|
|
|
09:34.960 --> 09:40.160 |
|
engineering has anything to do with consciousness and whether it can experience anything. Because |
|
|
|
09:40.160 --> 09:45.520 |
|
fundamentally, what's the difference? Intelligence is about function. Intelligence, no matter exactly |
|
|
|
09:45.520 --> 09:50.400 |
|
how you define it, is sort of adaptation to new environments, being able to learn and quickly
|
|
|
09:50.400 --> 09:54.400 |
|
understand, you know, the setup of this and what's going on and who are the actors and what's |
|
|
|
09:54.400 --> 10:00.800 |
|
going to happen next. That's all about function. Consciousness is not about function. Consciousness |
|
|
|
10:00.800 --> 10:08.080 |
|
is about being. It's in some sense much more fundamental. You can see this in several
|
|
|
10:08.080 --> 10:13.600 |
|
cases. You can see it, for instance, in the clinic, when you're dealing with patients
|
|
|
10:13.600 --> 10:19.200 |
|
who, let's say, had a stroke or were in a traffic accident, etc. They're pretty much
|
|
|
10:19.200 --> 10:24.960 |
|
immobile. Terri Schiavo, you may have heard of her, she was a person here in the
|
|
|
10:24.960 --> 10:29.680 |
|
90s in Florida. Her heart stopped. She was resuscitated. Then for the next 14 years,
|
|
|
10:29.680 --> 10:33.600 |
|
she was in what's called a vegetative state. There are thousands of people in the vegetative
|
|
|
10:33.600 --> 10:37.840 |
|
state. So they're, you know, like this. Occasionally, they open their
|
|
|
10:37.840 --> 10:42.160 |
|
eyes for two, three, four, five, six, eight hours and then close their eyes. They have sleep-wake cycles.
|
|
|
10:42.160 --> 10:48.720 |
|
Occasionally, they have behaviors, but there's no way that you can
|
|
|
10:48.720 --> 10:53.200 |
|
establish a lawful relationship between what you say, or the doctor says, or the mom says, |
|
|
|
10:53.200 --> 11:00.000 |
|
and what the patient does. Right. So there isn't any behavior, yet in some of
|
|
|
11:00.000 --> 11:06.480 |
|
these people, there is still experience. You can, you can design and build brain machine interfaces |
|
|
|
11:06.480 --> 11:10.400 |
|
where you can see they still experience something. And of course, there are these cases
|
|
|
11:10.400 --> 11:15.040 |
|
of the locked-in state. There's this famous book called The Diving Bell and the Butterfly,
|
|
|
11:15.040 --> 11:20.160 |
|
where you had an editor, a French editor, he had a stroke in the, in the brainstem, unable to move |
|
|
|
11:20.160 --> 11:25.920 |
|
except for vertical eye movement. He could just move his eyes up and down. He dictated
|
|
|
11:25.920 --> 11:31.760 |
|
an entire book this way. And some people even lose this at the end. And all the evidence seems to suggest
|
|
|
11:31.760 --> 11:37.120 |
|
that they're still in there. In this case, you have no behavior. You have consciousness. |
|
|
|
11:37.120 --> 11:42.080 |
|
The second case: tonight, like all of us, you're going to go to sleep, close your eyes, go to
|
|
|
11:42.080 --> 11:46.720 |
|
sleep. You will wake up inside your sleeping body and you will have conscious experiences. |
|
|
|
11:47.440 --> 11:52.000 |
|
They're different from everyday experience. You might fly. You might not be surprised that you're |
|
|
|
11:52.000 --> 11:57.600 |
|
flying. You might meet a long dead pet, childhood dog, and you're not surprised that you're meeting |
|
|
|
11:57.600 --> 12:01.280 |
|
them, you know, but you have conscious experience of love, of hate, you know, they can be very |
|
|
|
12:01.280 --> 12:07.680 |
|
emotional. Your body during this state, typically the REM state, sends an active signal to your
|
|
|
12:07.680 --> 12:13.120 |
|
motor neurons to paralyze you. It's called atonia, right? Because if you don't have that, like some |
|
|
|
12:13.120 --> 12:17.200 |
|
patients, what do you do? You act out your dreams; you get, for example, REM behavior disorder,
|
|
|
12:17.200 --> 12:23.440 |
|
which is bad juju to get. Okay. The third case is pure experience. So I recently had
|
|
|
12:23.440 --> 12:29.600 |
|
this, what some people call a mystical experience, I went to Singapore and went into a flotation |
|
|
|
12:29.600 --> 12:36.960 |
|
tank. Yeah. All right. So this is a big tub filled with water that's body temperature, with Epsom
|
|
|
12:36.960 --> 12:42.400 |
|
salt. You strip completely naked, you lie inside of it, you close the lid. Darkness, complete
|
|
|
12:42.400 --> 12:48.480 |
|
darkness, soundproof. So very quickly, you become bodyless because you're floating and you're naked. |
|
|
|
12:48.480 --> 12:53.600 |
|
You have no rings, no watch, no nothing. You don't feel your body anymore. There's no sound:
|
|
|
12:53.600 --> 13:00.320 |
|
soundless. There's no photon: sightless. Timeless, because after a while, early on,
|
|
|
13:00.320 --> 13:05.360 |
|
you actually hear your heart, but then you sort of adapt to that, and then the
|
|
|
13:05.360 --> 13:11.360 |
|
passage of time ceases. And if you train yourself, like in meditation, not to think. Early
|
|
|
13:11.360 --> 13:15.200 |
|
on, you think a lot, it's a little bit spooky. You feel somewhat uncomfortable or you think, |
|
|
|
13:15.200 --> 13:20.320 |
|
well, I'm going to get bored. But if you try not to think actively, you become mindless.
|
|
|
13:20.320 --> 13:27.520 |
|
There you are, bodyless, timeless, soundless, sightless, mindless, but you're in a conscious |
|
|
|
13:27.520 --> 13:32.560 |
|
experience. You're not asleep. You are
|
|
|
13:32.560 --> 13:36.400 |
|
pure being. There isn't any function. You aren't doing any computation. You're not
|
|
|
13:36.400 --> 13:40.400 |
|
remembering, you're not projecting, you're not planning, yet you are fully conscious. |
|
|
|
13:40.400 --> 13:44.400 |
|
You're fully conscious. There's something going on there. It could be just a side effect. So what |
|
|
|
13:44.400 --> 13:52.480 |
|
is the, um... You mean epiphenomenal? So what's the selective advantage, meaning why,
|
|
|
13:52.480 --> 13:59.200 |
|
what is the function of you being able to lie in this, uh, sensory deprivation tank
|
|
|
13:59.200 --> 14:04.640 |
|
and still have a conscious experience, evolutionarily? Obviously we didn't evolve with flotation
|
|
|
14:04.640 --> 14:10.480 |
|
tanks in our environment. I mean, biology is notoriously bad at answering why questions,
|
|
|
14:10.480 --> 14:14.560 |
|
teleonomic questions. Why do we have two eyes? Why don't we have four eyes like some creatures, or three
|
|
|
14:14.560 --> 14:19.360 |
|
eyes, or something? Well, there probably is a function to that, but we're not
|
|
|
14:19.360 --> 14:23.200 |
|
very good at answering those questions. We can speculate. Science is very
|
|
|
14:23.200 --> 14:26.800 |
|
good at mechanistic questions. But why is there charge in the universe,
|
|
|
14:26.800 --> 14:31.040 |
|
right? We find a certain universe where there are positive and negative charges. Why? Why does |
|
|
|
14:31.040 --> 14:36.320 |
|
quantum mechanics hold? You know, why doesn't some other theory hold? Quantum mechanics
|
|
|
14:36.320 --> 14:41.040 |
|
holds in our universe; it's very unclear why. So teleonomic questions, why questions, are difficult
|
|
|
14:41.040 --> 14:46.480 |
|
to answer. Clearly there's some relationship between complexity, brain processing power and |
|
|
|
14:46.480 --> 14:53.280 |
|
consciousness. However, in these cases, in the three examples I gave, one is an everyday
|
|
|
14:53.280 --> 14:57.920 |
|
experience at night, the other one is a trauma, and the third one, in principle, everybody
|
|
|
14:57.920 --> 15:04.080 |
|
can have, these sorts of mystical experiences. You have a dissociation of function, of
|
|
|
15:04.080 --> 15:12.320 |
|
intelligence, from consciousness. You caught me asking a why question. Let me
|
|
|
15:12.320 --> 15:17.120 |
|
ask a question that's not a why question. You're giving a talk later today on the Turing test
|
|
|
15:17.760 --> 15:23.200 |
|
for intelligence and consciousness, drawing lines between the two. So is there a scientific way
|
|
|
15:23.200 --> 15:30.560 |
|
to say there's consciousness present in this entity or not? And to anticipate your answer, |
|
|
|
15:30.560 --> 15:35.840 |
|
because, you know, there's a neurobiological answer. So we can test a human brain. But if you
|
|
|
15:35.840 --> 15:42.240 |
|
take a machine brain that you don't have tests for yet, how would you even begin to approach
|
|
|
15:42.960 --> 15:47.760 |
|
a test if there's consciousness present in this thing? Okay, that's a really good question. So |
|
|
|
15:47.760 --> 15:54.000 |
|
let me answer it in two steps. So as you point out, for humans, let's just stick with humans,
|
|
|
15:54.000 --> 15:58.640 |
|
there's now a test called zap and zip. It's a procedure where you ping the brain using
|
|
|
15:58.640 --> 16:04.160 |
|
transcranial magnetic stimulation. You look at the electrical reverberations, essentially, using EEG,
|
|
|
16:04.960 --> 16:08.800 |
|
and then you can measure the complexity of this brain response. And you can do this in awake |
|
|
|
16:08.800 --> 16:13.680 |
|
people, in sleeping normal people, you can do it in awake people whom you then anesthetize. You
|
|
|
16:13.680 --> 16:20.240 |
|
can do it in patients, and it has 100% accuracy: in all those cases where it's clear that
|
|
|
16:20.240 --> 16:24.080 |
|
the patient or the person is either conscious or unconscious, the complexity is either high or |
|
|
|
16:24.080 --> 16:28.320 |
|
low. And then you can adapt these techniques to similar creatures like monkeys and dogs
|
|
|
16:28.320 --> 16:34.400 |
|
and mice that have very similar brains. Now, of course, you point out that may not help you |
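NOTE The "zip" in zap and zip refers to compressing the evoked EEG response: a rich, differentiated echo compresses poorly (high complexity, conscious), a stereotyped one compresses well (low complexity, anesthesia or deep sleep). Below is a minimal Python sketch of that compression step only, assuming median-threshold binarization and a simple normalization; the published perturbational complexity index pipeline is considerably more involved.
import numpy as np
def lz76_phrases(bits: str) -> int:
    # Count Lempel-Ziv (LZ76) phrases: each phrase is the shortest
    # substring not already seen earlier in the sequence.
    i, phrases, n = 0, 0, len(bits)
    while i < n:
        length = 1
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases
def toy_complexity(response: np.ndarray) -> float:
    # Binarize each channel at its median, flatten channels x time into
    # one bit string, and normalize so that random noise scores near 1.
    binary = (response > np.median(response, axis=1, keepdims=True)).astype(int)
    bits = "".join(map(str, binary.ravel()))
    return lz76_phrases(bits) * np.log2(len(bits)) / len(bits)
# Toy demo: a varied "awake" response versus a stereotyped "asleep" echo.
rng = np.random.default_rng(0)
awake = rng.normal(size=(8, 250))
asleep = np.tile(np.sin(np.linspace(0, 6, 250)), (8, 1))
print(toy_complexity(awake), ">", toy_complexity(asleep))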
|
|
|
16:34.400 --> 16:39.040 |
|
because a machine doesn't have a cortex, you know, and if I send a magnetic pulse into my iPhone or my
|
|
|
16:39.040 --> 16:43.600 |
|
computer, it's probably going to break something. So we don't have that. So what we need ultimately, |
|
|
|
16:45.120 --> 16:50.080 |
|
we need a theory of consciousness. We can't just rely on our intuition. Our intuition is, well, |
|
|
|
16:50.080 --> 16:55.040 |
|
yeah, if somebody talks, they're conscious. However, then there are all these cases. Children,
|
|
|
16:55.040 --> 17:00.560 |
|
babies don't talk, right? But we believe that babies also have conscious experiences,
|
|
|
17:00.560 --> 17:05.360 |
|
right? And then there are all these patients I mentioned. And you don't talk when you dream;
|
|
|
17:05.360 --> 17:10.000 |
|
you can't talk because you're paralyzed. So what we ultimately need, since we can't just rely
|
|
|
17:10.000 --> 17:15.280 |
|
on our intuition, is a theory of consciousness that tells us what it is about a piece of matter,
|
|
|
17:15.280 --> 17:19.600 |
|
what it is about a piece of highly excitable matter, like the brain or like a computer, that
|
|
|
17:19.600 --> 17:24.160 |
|
gives rise to conscious experience. None of us believes anymore in the old story
|
|
|
17:24.160 --> 17:28.160 |
|
that it's a soul, right? That used to be the most common explanation, one most people accepted, and
|
|
|
17:28.160 --> 17:33.440 |
|
a lot of people today still believe, well, God endowed only us with a special
|
|
|
17:33.440 --> 17:38.240 |
|
thing that animals don't have, René Descartes famously said, a dog, if you hit it with your |
|
|
|
17:38.240 --> 17:42.800 |
|
carriage, may yell, may cry, but it doesn't have this special thing. It doesn't have the magic, |
|
|
|
17:42.800 --> 17:47.920 |
|
the magic sauce. It doesn't have res cogitans, the soul. Now we believe that isn't
|
|
|
17:47.920 --> 17:54.000 |
|
the case anymore. So what is the difference between brains and these guys, silicon?
|
|
|
17:55.200 --> 18:01.120 |
|
And in particular, once their behavior matches. So if you have Siri or Alexa in 20 years from now,
|
|
|
18:01.120 --> 18:06.400 |
|
such that she can talk just as well as any possible human, what grounds do you have to say she's not
|
|
|
18:06.400 --> 18:11.280 |
|
conscious? In particular, if she says, as of course she will, well, of course I'm conscious.
|
|
|
18:11.280 --> 18:15.600 |
|
You ask, how are you doing? And she'll say, well, you know, she'll generate some answer.
|
|
|
18:15.600 --> 18:21.840 |
|
Yeah, exactly. She'll behave like a person. Now there are several differences. One is,
|
|
|
18:23.280 --> 18:29.120 |
|
this relates to the very heart of the problem. Why is consciousness the hard problem? It's because
|
|
|
18:29.120 --> 18:35.280 |
|
it's subjective, right? Only I have it. Only I know; I have direct experience of my own
|
|
|
18:35.280 --> 18:40.240 |
|
consciousness. I don't have experience of your consciousness. Now I assume, as a sort of
|
|
|
18:40.240 --> 18:43.680 |
|
Bayesian person who believes in probability theory and all of that, you know, I can do, |
|
|
|
18:43.680 --> 18:48.160 |
|
I can do an abduction to the, to the best available facts. I deduce your brain is very |
|
|
|
18:48.160 --> 18:51.840 |
|
similar to mine. If I put you in a scanner, your brain is roughly going to behave the same as
|
|
|
18:51.840 --> 18:56.240 |
|
mine does. If, you know, I give you something and ask you, how does it taste?
|
|
|
18:56.240 --> 19:00.240 |
|
You tell me things that, you know, I would also say, more or less, right?
|
|
|
19:00.240 --> 19:03.920 |
|
So I infer based on all of that, that you're conscious. Now with Siri, I can't do that. So |
|
|
|
19:03.920 --> 19:09.680 |
|
there I really need a theory that tells me what it is about any system, this or this, that
|
|
|
19:09.680 --> 19:14.880 |
|
makes it conscious. Do we have such a theory? Yes, the integrated information theory.
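NOTE A rough gloss of integrated information theory, not the full formalism: the theory quantifies consciousness by a system's integrated information, usually written $\Phi$, the degree to which the system's cause-effect power upon itself is irreducible to that of its parts, evaluated over the partition that disrupts it least. On this view, a system whose $\Phi$ is zero under some partition reduces to independent pieces and does not exist as a conscious whole.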
|
|
|
19:15.520 --> 19:19.360 |
|
But let me first, maybe as an introduction for people who are not familiar:
|
|
|
19:20.640 --> 19:28.400 |
|
You talk a lot about panpsychism. Can you describe what physicalism versus dualism
|
|
|
19:29.040 --> 19:35.280 |
|
is? You mentioned the soul. What is the history of that idea? What is the idea of panpsychism?
|
|
|
19:35.280 --> 19:44.560 |
|
Well, the debate really out of which panpsychism can emerge is dualism versus
|
|
|
19:45.200 --> 19:49.200 |
|
physicalism. Or do you not see panpsychism as fitting into that? |
|
|
|
19:49.760 --> 19:53.840 |
|
No, you can argue there's some, well, okay, so let's step back. So panpsychism is a very |
|
|
|
19:53.840 --> 19:58.560 |
|
ancient belief that's been around. I mean, Plato and Aristotle talk about it.
|
|
|
19:59.200 --> 20:05.040 |
|
Modern philosophers talk about it. Of course, in Buddhism, the idea is very prevalent that |
|
|
|
20:05.040 --> 20:08.960 |
|
I mean, there are different versions of it. One version says everything is ensouled:
|
|
|
20:08.960 --> 20:13.680 |
|
rocks and stones and dogs and people and forests and iPhones all have a soul.
|
|
|
20:14.240 --> 20:20.240 |
|
All matter is ensouled. That's sort of one version. Another version is that all biology,
|
|
|
20:20.240 --> 20:26.080 |
|
all creatures, small or large, from a single cell to a giant sequoia tree, feel like something. That's
|
|
|
20:26.080 --> 20:30.800 |
|
the one I think is somewhat more realistic. So, in these different versions, what do you mean by feel
|
|
|
20:30.800 --> 20:36.240 |
|
like something? Well, have feelings, have some kind of experience. It may well be possible
|
|
|
20:36.240 --> 20:41.120 |
|
that it feels like something to be a paramecium. I think it's pretty likely it feels like something
|
|
|
20:41.120 --> 20:48.320 |
|
to be a bee or a mouse or a dog. Sure. So, okay. So, so that you can say that's also, |
|
|
|
20:48.320 --> 20:53.920 |
|
so panpsychism is very broad, right? And you can, so some people, for example, |
|
|
|
20:53.920 --> 21:00.720 |
|
Bertrand Russell, tried to advocate this idea, called Russellian monism: that
|
|
|
21:00.720 --> 21:06.640 |
|
panpsychism is really physics viewed from the inside. So the idea is that physics is very |
|
|
|
21:06.640 --> 21:13.120 |
|
good at describing relationships among objects, like charges or like gravity, right? You know,
|
|
|
21:13.120 --> 21:17.360 |
|
it describes the relationship between curvature and mass distribution. Okay. That's the relationship
|
|
|
21:17.360 --> 21:21.440 |
|
among things. Physics doesn't really describe the ultimate reality itself. It's just |
|
|
|
21:21.440 --> 21:27.280 |
|
relationships among, you know, quarks and all this other stuff, from like a third-person observer.
|
|
|
21:27.280 --> 21:33.440 |
|
Yes. And consciousness is what physics feels like from the inside. My conscious experience
|
|
|
21:33.440 --> 21:37.760 |
|
is the way the physics of my brain, particularly my cortex, feels from the inside.
|
|
|
21:38.400 --> 21:42.400 |
|
And so if you are a paramecium... you've got to remember, you say paramecium, well, that's a
|
|
|
21:42.400 --> 21:48.960 |
|
pretty dumb creature. It is, but it has already a billion different molecules, probably, you know, |
|
|
|
21:48.960 --> 21:54.160 |
|
5,000 different proteins assembled in a highly, highly complex system that no single person, |
|
|
|
21:54.160 --> 21:58.880 |
|
no computer system so far on this planet has ever managed to accurately simulate. |
|
|
|
21:58.880 --> 22:04.240 |
|
Its complexity vastly escapes us. Yes. And it may well be that that little thing feels like a
|
|
|
22:04.240 --> 22:08.160 |
|
tiny bit. Now, it doesn't have a voice in the head like I do. It doesn't have expectations.
|
|
|
22:08.160 --> 22:12.400 |
|
You know, it doesn't have all that complex things, but it may well feel like something. |
|
|
|
22:12.400 --> 22:17.520 |
|
Yeah. So this is really interesting. Can we draw some lines and maybe try to understand |
|
|
|
22:17.520 --> 22:24.560 |
|
the difference between life, intelligence and consciousness? How do you see all of those? |
|
|
|
22:25.280 --> 22:31.120 |
|
If you have to define what is a living thing, what is a conscious thing and what is an intelligent |
|
|
|
22:31.120 --> 22:35.920 |
|
thing? Do those intermix for you or are they totally separate? Okay. So A, that's a question |
|
|
|
22:35.920 --> 22:40.000 |
|
to which we don't have a full answer. Right. A lot of the stuff we're talking about today
|
|
|
22:40.000 --> 22:44.240 |
|
is full of mysteries and fascinating ones, right? Well, you can go to Aristotle, |
|
|
|
22:44.240 --> 22:48.480 |
|
who's probably the most important scientist and philosopher who's ever lived, certainly
|
|
|
22:48.480 --> 22:53.280 |
|
in Western culture. He had this idea. It's called hylomorphism. It's quite popular these days |
|
|
|
22:53.280 --> 22:58.080 |
|
that there are different forms of soul. The soul is really the form of something. He says, |
|
|
|
22:58.080 --> 23:02.480 |
|
all biological creatures have a vegetative soul. That's the life principle. Today we think we
|
|
|
23:02.480 --> 23:07.680 |
|
understand something more about this: biochemistry and nonlinear thermodynamics. Right. Then he says
|
|
|
23:07.680 --> 23:13.840 |
|
they have a sensitive soul. Only animals and humans also have a sensitive soul, or an
|
|
|
23:13.840 --> 23:18.960 |
|
appetitive soul. They can see, they can smell, and they have drives. They want to reproduce,
|
|
|
23:18.960 --> 23:24.080 |
|
they want to eat, etc. Then only humans have what he called a rational soul. |
|
|
|
23:25.040 --> 23:29.840 |
|
Okay. Right. And that idea made it into Christendom, and the rational soul is the one
|
|
|
23:29.840 --> 23:34.240 |
|
that lives forever. He was very unclear. He wasn't really... I mean, different readings of Aristotle |
|
|
|
23:34.240 --> 23:39.040 |
|
give different answers. Did he believe the rational soul was immortal or not? I think he probably
|
|
|
23:39.040 --> 23:43.040 |
|
didn't. But then, of course, that made it through Plato into Christianity, and then this soul
|
|
|
23:43.040 --> 23:49.840 |
|
became immortal and then became the connection to God. Now, so you asked me, essentially, |
|
|
|
23:49.840 --> 23:56.240 |
|
what is our modern conception of these three... Aristotle would have called them different forms. |
|
|
|
23:56.240 --> 24:00.160 |
|
Life, we think we know something about it, at least life on this planet. Right. Although, |
|
|
|
24:00.160 --> 24:05.120 |
|
we don't understand how it originated, and it's been difficult to rigorously pin down.
|
|
|
24:05.120 --> 24:10.320 |
|
You see this in modern definitions of death. In fact, right now there's a conference
|
|
|
24:10.320 --> 24:16.000 |
|
ongoing, again, that tries to define legally and medically what is death. It used to be very |
|
|
|
24:16.000 --> 24:19.920 |
|
simple. Death is, you stop breathing, your heart stops beating, you're dead. Right? |
|
|
|
24:19.920 --> 24:24.160 |
|
Totally uncontroversial. If you're unsure, you wait another 10 minutes. If the patient doesn't
|
|
|
24:24.160 --> 24:28.640 |
|
breathe, you know, he's dead. Well, now we have ventilators, we have pacemakers. So, |
|
|
|
24:28.640 --> 24:33.360 |
|
it's much more difficult to define what death is. Typically, death is defined as the end of life |
|
|
|
24:33.360 --> 24:38.160 |
|
and life is defined as what comes before death. Thank you for that. Okay. So, we don't really have very good
|
|
|
24:38.160 --> 24:42.640 |
|
definitions. Intelligence, we don't have a rigorous definition for. We know something about how to
|
|
|
24:42.640 --> 24:49.920 |
|
measure it. It's called IQ, or the g factor. Right. And we're beginning to build it in a narrow sense.
|
|
|
24:49.920 --> 24:56.560 |
|
Right. Like AlphaGo and Watson and, you know, Google cars and Uber cars and all of that.
|
|
|
24:56.560 --> 25:00.960 |
|
That's still narrow AI. And some people are thinking about artificial general intelligence. |
|
|
|
25:00.960 --> 25:05.120 |
|
But roughly, as we said before, it's something to do with the ability to learn and to adapt |
|
|
|
25:05.120 --> 25:10.160 |
|
to new environments. But that is, as I said, also radically different from experience.
|
|
|
25:10.800 --> 25:16.480 |
|
And if you build a machine that has AGI, it's
|
|
|
25:16.480 --> 25:20.560 |
|
not at all clear that this machine will have consciousness. It may or may not. |
|
|
|
25:20.560 --> 25:25.280 |
|
So, let's ask it the other way. Do you think if you were to try to build an artificial general |
|
|
|
25:25.280 --> 25:31.040 |
|
intelligence system, do you think figuring out how to build artificial consciousness |
|
|
|
25:31.040 --> 25:39.360 |
|
would help you get to an AGI? Or, put another way, do you think intelligence requires consciousness?
|
|
|
25:40.320 --> 25:46.240 |
|
In humans, it goes hand in hand. In humans, or I think in biology, consciousness and intelligence
|
|
|
25:46.240 --> 25:52.640 |
|
go hand in hand. Why? Because the brain evolved to be highly complex, and complexity,
|
|
|
25:52.640 --> 25:58.160 |
|
via the theory, integrated information theory, is ultimately what is closely tied to
|
|
|
25:58.160 --> 26:04.000 |
|
consciousness. Ultimately, it's causal power upon itself. And so, in evolved systems, they go |
|
|
|
26:04.000 --> 26:09.120 |
|
together. In artificial systems, particularly in digital machines, they do not go together. |
|
|
|
26:09.120 --> 26:16.800 |
|
And if you ask me point blank, is Alexa 20.0 in the year 2040, once she can easily pass every |
|
|
|
26:16.800 --> 26:21.600 |
|
Turing test, is she conscious? No. Even if she claims she's conscious. In fact, you could even
|
|
|
26:21.600 --> 26:25.840 |
|
do a more radical version of this thought experiment. We can build a computer simulation |
|
|
|
26:25.840 --> 26:30.320 |
|
of the human brain. You know, what Henry Markram in the Blue Brain Project or the Human Brain
|
|
|
26:30.320 --> 26:34.240 |
|
Project in Switzerland is trying to do. Let's grant them all the success. So in 10 years,
|
|
|
26:34.240 --> 26:38.560 |
|
we have this perfect simulation of the human brain, every neuron is simulated. And it has |
|
|
|
26:38.560 --> 26:43.520 |
|
a thalamus, and it has motor neurons, it has a Broca's area. And of course, it'll talk and
|
|
|
26:43.520 --> 26:48.480 |
|
it'll say, hi, I've just woken up, I feel great. Okay, even that computer simulation that can in
|
|
|
26:48.480 --> 26:53.840 |
|
principle map onto your brain will not be conscious. Why? Because it simulates; there's a difference
|
|
|
26:53.840 --> 26:59.280 |
|
between the simulated and the real. So it simulates the behavior associated with consciousness. It |
|
|
|
26:59.280 --> 27:03.920 |
|
will, if it's done properly, have all the intelligence that that particular
|
|
|
27:03.920 --> 27:09.760 |
|
person it's simulating has. But simulating intelligence is not the same as having conscious
|
|
|
27:09.760 --> 27:14.240 |
|
experiences. And I'll give you a really nice metaphor that engineers and physicists typically get.
|
|
|
27:15.120 --> 27:20.000 |
|
I can write down Einstein's field equations, the 10 equations that describe the link in general
|
|
|
27:20.000 --> 27:27.600 |
|
relativity between curvature and mass. I can do that. I can run this on my laptop to predict that |
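NOTE The equations meant here are the Einstein field equations of general relativity,
$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},$$
a single symmetric 4x4 tensor equation, i.e. ten independent component equations, linking spacetime curvature on the left to the distribution of mass-energy on the right.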
|
|
|
27:27.600 --> 27:34.960 |
|
the black hole at the center of our galaxy will be so massive that it will twist space
|
|
|
27:34.960 --> 27:40.240 |
|
time around it so no light can escape. It's a black hole, right? But funnily, have you ever wondered
|
|
|
27:40.240 --> 27:47.040 |
|
why doesn't this computer simulation suck me in? Right? It simulates gravity, but it doesn't have |
|
|
|
27:47.040 --> 27:52.720 |
|
the causal power of gravity. That's a huge difference. So it's a difference between the real |
|
|
|
27:52.720 --> 27:57.360 |
|
and the simulated, just like it doesn't get wet inside a computer when the computer runs code |
|
|
|
27:57.360 --> 28:03.360 |
|
that simulates a weather storm. And so in order to have artificial consciousness, you have
|
|
|
28:03.360 --> 28:08.720 |
|
to give it the same causal power as the human brain. You have to build a so-called neuromorphic
|
|
|
28:08.720 --> 28:15.040 |
|
machine that has hardware that is very similar to the human brain, not a digital, clocked von
|
|
|
28:15.040 --> 28:22.480 |
|
Neumann computer. So just to clarify, though, you think that consciousness is not required
|
|
|
28:22.480 --> 28:29.360 |
|
to create human-level intelligence. It seems to accompany it in the human brain, but for machines,
|
|
|
28:29.360 --> 28:37.360 |
|
not necessarily. That's correct. So maybe, because this is an AGI course, let's dig in a little bit on what we
|
|
|
28:37.360 --> 28:44.000 |
|
mean by intelligence. So one thing is the g factor of these kind of IQ tests of intelligence. |
|
|
|
28:44.000 --> 28:51.520 |
|
But maybe another way to say it: so in 2040, 2050, people will have a Siri that is
|
|
|
28:52.320 --> 28:57.600 |
|
just really impressive. Do you think people will say Siri is intelligent? |
|
|
|
28:57.600 --> 29:04.080 |
|
Yes. Intelligence is this amorphous thing. So to be intelligent, it seems like you have to have |
|
|
|
29:04.080 --> 29:09.520 |
|
some kind of connection with other human beings, in the sense that you have to impress them with
|
|
|
29:09.520 --> 29:17.200 |
|
your intelligence. And it feels like you have to somehow operate in this world full of humans.
|
|
|
29:17.200 --> 29:22.240 |
|
And for that, it feels like there has to be something like consciousness. So you think you
|
|
|
29:22.240 --> 29:28.080 |
|
can have just the world's best NLP system, natural language understanding and generation,
|
|
|
29:28.080 --> 29:33.840 |
|
and that will get us happy enough to say, you know what, we've created an AGI.
|
|
|
29:33.840 --> 29:41.120 |
|
I don't know about happy. Yes, I do believe we can get what we call high-level functional intelligence,
|
|
|
29:41.120 --> 29:47.200 |
|
particularly sort of the g, you know, this fluid intelligence that we cherish,
|
|
|
29:47.200 --> 29:52.640 |
|
particularly at places like MIT, right? In machines, I see a priori no reason why not,
|
|
|
29:52.640 --> 29:58.160 |
|
and I see a lot of reason to believe it's going to happen over the next 50 years or 30 years.
|
|
|
29:58.160 --> 30:04.160 |
|
So for beneficial AI, for creating an AI system that's, so you mentioned ethics, |
|
|
|
30:04.880 --> 30:10.880 |
|
that is exceptionally intelligent, but also, you know, aligns its values
|
|
|
30:10.880 --> 30:14.800 |
|
with the values of humanity. Do you think it then needs consciousness?
|
|
|
30:14.800 --> 30:19.920 |
|
Yes, I think there is a very good argument that if we're concerned about AI and the threat
|
|
|
30:19.920 --> 30:26.160 |
|
of AI, like Nick Bostrom's existential threat, I think having an intelligence that has empathy,
|
|
|
30:26.160 --> 30:32.480 |
|
right? Why do most of us find abusing a dog, or abusing any animal, abhorrent?
|
|
|
30:32.480 --> 30:37.600 |
|
Right? Why do we find that abhorrent? Because we have this thing called empathy, which if you |
|
|
|
30:37.600 --> 30:42.960 |
|
look at the Greek, really means feeling with. Em-pathos, empathy: I have feeling with you.
|
|
|
30:42.960 --> 30:48.400 |
|
I see somebody else suffer that isn't even my conspecific. It's not a person. It's not a loved one,
|
|
|
30:48.400 --> 30:53.280 |
|
it's not my wife or my kids. It's a dog. But I feel, naturally, most of us, not all of us,
|
|
|
30:53.280 --> 31:00.560 |
|
most of us will feel empathic. And so it may well be in the long-term interest of the survival
|
|
|
31:00.560 --> 31:05.840 |
|
of Homo sapiens sapiens that, if we do build AGI and it really becomes very powerful,
|
|
|
31:05.840 --> 31:10.880 |
|
it has an empathic response and doesn't just exterminate humanity.
|
|
|
31:11.760 --> 31:17.920 |
|
So as part of the full conscious experience to create a consciousness, artificial, or in our |
|
|
|
31:17.920 --> 31:24.320 |
|
human consciousness, do you think fear, maybe we're going to get into your earlier days with |
|
|
|
31:24.320 --> 31:30.560 |
|
Nietzsche and so on, but do you think fear and suffering are essential to have consciousness? |
|
|
|
31:30.560 --> 31:34.800 |
|
Do you have to have the full range of experience to have a system that has experience? |
|
|
|
31:36.480 --> 31:41.600 |
|
Or can you have a system that only has very particular kinds of very positive experiences? |
|
|
|
31:41.600 --> 31:46.880 |
|
Look, you can have, in principle, people have done this in a rat where you implant an electrode |
|
|
|
31:46.880 --> 31:51.680 |
|
in the hypothalamus, the pleasure center of the rat, and the rat stimulates itself above and |
|
|
|
31:51.680 --> 31:57.200 |
|
beyond anything else. It doesn't care about food or natural sex or drink anymore, it just stimulates |
|
|
|
31:57.200 --> 32:02.960 |
|
itself because it's such a pleasurable feeling. I guess it's like an orgasm, just you have all |
|
|
|
32:02.960 --> 32:11.040 |
|
day long. And so, a priori, I see no reason why you need a great variety.
|
|
|
32:11.040 --> 32:16.800 |
|
Now, clearly, to survive, that wouldn't work. But if I engineer it artificially, I don't think
|
|
|
32:18.720 --> 32:24.960 |
|
you need a great variety of conscious experience. You could have just pleasure or just fear. |
|
|
|
32:25.680 --> 32:30.240 |
|
It might be a terrible existence, but I think that's possible, at least on conceptual, logical
|
|
|
32:30.240 --> 32:34.640 |
|
grounds. Because for any real creature, whether evolved or engineered, you want to give
|
|
|
32:34.640 --> 32:40.080 |
|
it fear, the fear of extinction that we all have. And you also want to give it positive,
|
|
|
32:40.080 --> 32:45.600 |
|
appetitive states, states that you want to encourage the machine toward because they give
|
|
|
32:45.600 --> 32:52.000 |
|
the machine positive feedback. So, you mentioned panpsychism, to jump back a little bit:
|
|
|
32:53.440 --> 33:00.400 |
|
Everything having some kind of mental property. How do you go from there to something like human |
|
|
|
33:01.120 --> 33:05.760 |
|
consciousness? So everything having some elements of consciousness. Is there something |
|
|
|
33:05.760 --> 33:12.320 |
|
special about human consciousness? Well, so just it's not everything like a spoon. There's no |
|
|
|
33:13.600 --> 33:18.160 |
|
The form of panpsychism I think about doesn't ascribe consciousness to things like this
|
|
|
33:18.160 --> 33:25.440 |
|
spoon, or my liver. However, the theory, integrated information theory, does say that
|
|
|
33:25.440 --> 33:29.680 |
|
systems, even ones that look relatively simple from the outside, at least if they have this
|
|
|
33:29.680 --> 33:36.800 |
|
internal causal power, do feel like something. The theory doesn't say anything about
|
|
|
33:36.800 --> 33:41.920 |
|
what's special about humans. Biologically, we know the one thing that's special about
|
|
|
33:41.920 --> 33:48.800 |
|
humans is that we speak. And we have an overblown sense of our own importance. Right. We believe we're
|
|
|
33:48.800 --> 33:54.800 |
|
exceptional, that we're just God's gift to the universe. But behaviorally,
|
|
|
33:54.800 --> 33:58.720 |
|
the main thing that we have is that we can plan over the long term, and we have language,
|
|
|
33:58.720 --> 34:04.080 |
|
and that gives us an enormous amount of power. And that's why we are the dominant species
|
|
|
34:04.080 --> 34:11.440 |
|
on the planet. So you mentioned God. You grew up in a devout, you know, Roman Catholic
|
|
|
34:11.440 --> 34:19.360 |
|
family. So, you know, with consciousness, you're sort of exploring some really deeply fundamental |
|
|
|
34:19.360 --> 34:24.320 |
|
human things that religion also touches on. So where does, where does religion fit into your |
|
|
|
34:24.320 --> 34:29.760 |
|
thinking about consciousness? And you've you've grown throughout your life and changed your views |
|
|
|
34:29.760 --> 34:35.520 |
|
on religion, as far as I understand. Yeah. So I'm not a Roman
|
|
|
34:35.520 --> 34:40.960 |
|
Catholic anymore. I don't believe there's sort of this God, the God I was educated to
|
|
|
34:40.960 --> 34:46.560 |
|
believe in, you know, sitting somewhere, and in the fullness of time I'll be united in some sort of everlasting
|
|
|
34:46.560 --> 34:52.160 |
|
bliss. I just don't see any evidence for that. Look, the world, the night is large and full of |
|
|
|
34:52.160 --> 34:57.360 |
|
wonders, right? There are many things that I don't understand, many things that we as a
|
|
|
34:57.360 --> 35:01.520 |
|
culture don't understand. Look, we don't even understand more than 4% of the universe, right? Dark
|
|
|
35:01.520 --> 35:05.760 |
|
matter, dark energy. We have no idea what it is. Maybe it's lost socks. What do I know? So, |
|
|
|
35:06.480 --> 35:12.960 |
|
so all I can tell you is that my current religious or spiritual sentiment is much closer
|
|
|
35:12.960 --> 35:19.600 |
|
to some form of Buddhism. Just without the reincarnation? Unfortunately, there's no evidence
|
|
|
35:19.600 --> 35:25.280 |
|
for reincarnation. So, can you describe the way Buddhism sees the world a little bit? |
|
|
|
35:25.280 --> 35:31.920 |
|
Well, so, you know... I spent several meetings with the Dalai Lama, and what
|
|
|
35:31.920 --> 35:36.720 |
|
always impressed me about him: unlike, for example, let's say the Pope or some cardinal,
|
|
|
35:36.720 --> 35:42.560 |
|
he always emphasized minimizing the suffering of all creatures. So, they have this from the very
|
|
|
35:42.560 --> 35:47.520 |
|
beginning: they look at suffering in all creatures, not just in people, but in everybody. It's
|
|
|
35:47.520 --> 35:52.880 |
|
universal. And of course, by degrees, right? An animal in general is less capable
|
|
|
35:52.880 --> 35:59.520 |
|
of suffering than a normally developed human. And they think consciousness
|
|
|
35:59.520 --> 36:05.440 |
|
pervades this universe. And they have these techniques, you know, you can think of them
|
|
|
36:05.440 --> 36:10.880 |
|
as mindfulness, etc., in meditation, that try to access what they claim is this more
|
|
|
36:10.880 --> 36:15.840 |
|
fundamental aspect of reality. I'm not sure it's more fundamental. The way I think about it, there's
|
|
|
36:15.840 --> 36:20.240 |
|
the physical, and then there's this inside view, consciousness. And those are the two aspects,
|
|
|
36:20.240 --> 36:24.880 |
|
the only things I have access to in my life. And you've got to remember, my conscious
|
|
|
36:24.880 --> 36:28.640 |
|
experience and your conscious experience comes prior to anything you know about physics, |
|
|
|
36:28.640 --> 36:33.680 |
|
comes prior to knowledge about the universe and atoms and super strings and molecules and all of |
|
|
|
36:33.680 --> 36:39.040 |
|
that. The only thing you directly are acquainted with is this world that's populated with things |
|
|
|
36:39.040 --> 36:42.720 |
|
and images and sounds in your head and touches and all of that. |
|
|
|
36:42.720 --> 36:49.120 |
|
I actually have a question. So it sounds like you kind of have a rich life. You talk about |
|
|
|
36:49.120 --> 36:55.040 |
|
rock climbing and it seems like you really love literature and consciousness is all about |
|
|
|
36:55.040 --> 37:00.160 |
|
experiencing things. So do you think that has helped your research on this topic? |
|
|
|
37:00.640 --> 37:05.440 |
|
Yes, particularly if you think about the various states. I used to do rock
|
|
|
37:05.440 --> 37:11.600 |
|
climbing, and now I row, crew rowing, and I bike every day. You can get into this thing called
|
|
|
37:11.600 --> 37:16.320 |
|
the zone. And I've always wondered about it, particularly with respect to consciousness,
|
|
|
37:16.320 --> 37:21.120 |
|
because it's a strangely addictive state. I mean, once people
|
|
|
37:21.120 --> 37:25.360 |
|
have it once, they want to keep going back to it. And you wonder why, what is so addicting
|
|
|
37:25.360 --> 37:31.840 |
|
about it. And I think it's the experience of something close to pure experience. Because in this
|
|
|
37:31.840 --> 37:35.840 |
|
zone, you're not conscious of your inner voice anymore. There's always an inner voice
|
|
|
37:35.840 --> 37:38.960 |
|
nagging you, right? You have to do this, you have to do that, you have to pay your taxes, |
|
|
|
37:38.960 --> 37:42.960 |
|
you had this fight with your ex and all of those things are always there. But when you're in the |
|
|
|
37:42.960 --> 37:47.280 |
|
zone, all of that is gone. And you're just in this wonderful state where you're fully out in
|
|
|
37:47.280 --> 37:53.440 |
|
the world, right? You're climbing or you're rowing or biking or doing soccer, whatever you're doing. |
|
|
|
37:53.440 --> 37:59.280 |
|
And consciousness is just this: you're all action. Or, in this case of pure experience,
|
|
|
37:59.280 --> 38:06.160 |
|
you're not acting at all. But in both cases, you experience some aspect of, you touch, some basic
|
|
|
38:06.160 --> 38:13.600 |
|
part of conscious existence that is so basic and so deeply satisfying. I think you touch the
|
|
|
38:13.600 --> 38:18.320 |
|
root of being. That's really what you're touching there, you're getting close to the root of being. |
|
|
|
38:18.320 --> 38:24.400 |
|
And that's very different from intelligence. So what do you think about the simulation hypothesis, |
|
|
|
38:24.400 --> 38:28.320 |
|
simulation theory, the idea that we all live in a computer simulation? Have you ever...?
|
|
|
38:28.320 --> 38:35.760 |
|
Rapture for nerds. Rapture for nerds. I think it's as likely as the hypothesis that |
|
|
|
38:35.760 --> 38:41.440 |
|
engaged hundreds of scholars for many centuries, are we all just existing in the mind of God? |
|
|
|
38:42.000 --> 38:46.320 |
|
Right. And this is just a modern version of it. It's equally plausible.
|
|
|
38:47.280 --> 38:51.280 |
|
People love talking about these sorts of things. I know there are books written about the simulation
|
|
|
38:51.280 --> 38:56.480 |
|
hypothesis. If that's what people want to do, that's fine. It seems rather esoteric. It's never |
|
|
|
38:56.480 --> 39:02.240 |
|
testable. But it's not useful for you to think of in those terms. So maybe connecting to the |
|
|
|
39:02.240 --> 39:07.200 |
|
questions of free will, which you've talked about. I think I vaguely remember you saying |
|
|
|
39:07.200 --> 39:11.760 |
|
that the idea that there's no free will, it makes you very uncomfortable. |
|
|
|
39:13.280 --> 39:17.840 |
|
So what do you think about free will? From a physics perspective,
|
|
|
39:17.840 --> 39:22.400 |
|
from a consciousness perspective, where does it all fit? Okay, so from the physics perspective,
|
|
|
39:22.400 --> 39:26.960 |
|
leaving aside quantum mechanics, we believe we live in a fully deterministic world, right? |
|
|
|
39:26.960 --> 39:30.880 |
|
But then comes, of course, quantum mechanics. So now we know that certain things are in principle |
|
|
|
39:30.880 --> 39:37.040 |
|
not predictable, which, as you said, I prefer, because the idea that the initial condition
|
|
|
39:37.040 --> 39:40.800 |
|
of the universe fixed everything else, that we're just acting out the initial condition of the
|
|
|
39:40.800 --> 39:47.120 |
|
universe, that's not a romantic notion. Certainly not. Right. Now,
|
|
|
39:47.120 --> 39:52.400 |
|
when it comes to consciousness, I think we do have certain freedom. We are much more constrained by |
|
|
|
39:52.400 --> 39:57.040 |
|
physics, of course, and by our past and by our own conscious desires and what our parents told us |
|
|
|
39:57.040 --> 40:01.200 |
|
and what our environment tells us, we all know that, right? There's hundreds of experiments |
|
|
|
40:01.200 --> 40:06.480 |
|
that show how we can be influenced. But in the final analysis, when you make a
|
|
|
40:06.480 --> 40:10.560 |
|
life decision, and I'm talking really about critical decisions, where you really think: should I marry,
|
|
|
40:10.560 --> 40:14.800 |
|
should I go to this school or that school, should I take this job or that job,
|
|
|
40:14.800 --> 40:20.240 |
|
should I cheat on my taxes or not? These are things where you really deliberate.
|
|
|
40:20.240 --> 40:25.200 |
|
And I think under those conditions, you are as free as you can be. When you bring your
|
|
|
40:25.200 --> 40:33.360 |
|
entire being, your entire conscious being, to that question and try to analyze it under all
|
|
|
40:33.360 --> 40:37.920 |
|
the various conditions, then you make a decision, and you are as free as you can ever be.
|
|
|
40:38.560 --> 40:44.160 |
|
That, I think, is what free will is. It's not a will that's totally free to do anything it wants.
|
|
|
40:44.160 --> 40:50.880 |
|
That's not possible. Right. So as Jack mentioned, you actually write a blog about the books you've
|
|
|
40:50.880 --> 41:01.680 |
|
read: amazing books, from the Russian Bulgakov to Neil Gaiman, Carl Sagan, Murakami. So what
|
|
|
41:01.680 --> 41:07.360 |
|
is a book that early in your life transformed the way you saw the world, something that changed your |
|
|
|
41:07.360 --> 41:14.160 |
|
life? Nietzsche, I guess, did. Thus Spoke Zarathustra, because he talks about some of these problems.
|
|
|
41:14.160 --> 41:18.720 |
|
You know, he was one of the first discoverers of the unconscious. This was, you know, a little bit
|
|
|
41:18.720 --> 41:26.240 |
|
before Freud, when it was in the air. And you know, he makes all these claims that people,
|
|
|
41:26.240 --> 41:33.360 |
|
under the guise, under the mask of charity, are actually very uncharitable. So he is sort
|
|
|
41:33.360 --> 41:40.480 |
|
of really the first discoverer of the great land of the unconscious. And that really
|
|
|
41:40.480 --> 41:44.000 |
|
struck me. And what do you think about the unconscious? What do you think
|
|
|
41:44.000 --> 41:49.760 |
|
about Freud? What do you think about these ideas? Just like with dark matter in the universe,
|
|
|
41:49.760 --> 41:54.560 |
|
what's over there in that unconscious? A lot. I mean, much more than we think, right? This is what
|
|
|
41:54.560 --> 42:00.080 |
|
the last 100 years of research has shown. So I think he was a genius, misguided towards the
|
|
|
42:00.080 --> 42:04.800 |
|
end. But he started out as a neuroscientist, right? He did these studies
|
|
|
42:06.000 --> 42:11.600 |
|
on the lamprey. He contributed himself to the neuron hypothesis, the idea that there are discrete units
|
|
|
42:11.600 --> 42:17.920 |
|
that we now call nerve cells. And then he wrote, you know, about the unconscious.
|
|
|
42:17.920 --> 42:22.720 |
|
And I think it's true. There's lots of stuff happening. You feel this particularly when you're
|
|
|
42:22.720 --> 42:27.520 |
|
in a relationship and it breaks asunder, right? And then you have this terrible state; you can have
|
|
|
42:27.520 --> 42:33.200 |
|
love and hate and lust and anger and all of it is mixed in. And when you try to analyze yourself, |
|
|
|
42:33.200 --> 42:40.080 |
|
why am I so upset? It's very, very difficult to penetrate to those basements, those caverns in |
|
|
|
42:40.080 --> 42:45.440 |
|
your mind, because the prying eyes of consciousness don't have access to those. But they're there
|
|
|
42:45.440 --> 42:50.240 |
|
in the amygdala or, you know, in lots of other places, they make you upset or angry or sad or |
|
|
|
42:50.240 --> 42:55.360 |
|
depressed. And it's very difficult to actually uncover the reason. You can go to a shrink,
|
|
|
42:55.360 --> 42:59.760 |
|
you can talk with your friends endlessly, and finally you construct a story of why this happened,
|
|
|
42:59.760 --> 43:03.760 |
|
why you love her or don't love her or whatever. But you don't really know whether that's actually |
|
|
|
43:04.640 --> 43:08.400 |
|
what happened, because you simply don't have access to those parts of the brain. And
|
|
|
43:08.400 --> 43:13.120 |
|
they're very powerful. Do you think that's a feature or a bug of our brain? The fact that we |
|
|
|
43:13.120 --> 43:19.120 |
|
have this deep, difficult-to-dive-into subconscious? I think it's a feature, because otherwise,
|
|
|
43:19.120 --> 43:28.480 |
|
look, we are, like any other brain or nervous system or computer, severely bandwidth-limited.
|
|
|
43:28.480 --> 43:34.640 |
|
If everything I do, every emotion I feel, every eye movement I make, if all of that had
|
|
|
43:34.640 --> 43:41.520 |
|
to be under the control of consciousness, I wouldn't be here. Right.
|
|
|
43:41.520 --> 43:46.880 |
|
So what you do early on: you have to be conscious when you learn things like typing
|
|
|
43:46.880 --> 43:52.640 |
|
or riding a bike. But then what you do, you train up routes, I think that involves the
|
|
|
43:52.640 --> 43:57.760 |
|
basal ganglia and striatum; you train up different parts of your brain. And then once you do it
|
|
|
43:57.760 --> 44:01.600 |
|
automatically, like typing, you can show you do it much faster, without even thinking about it,
|
|
|
44:01.600 --> 44:05.280 |
|
because you've got these highly specialized, what Francis Crick and I called, zombie agents
|
|
|
44:06.160 --> 44:09.680 |
|
that are taking care of that, while your consciousness can sort of worry
|
|
|
44:09.680 --> 44:14.640 |
|
about the abstract sense of the text you want to write. I think that's true for many, many things. |
|
|
|
44:14.640 --> 44:20.400 |
|
But for things like all the fights you had with an ex-girlfriend, things that
|
|
|
44:20.960 --> 44:25.120 |
|
you would think are not useful to still linger somewhere in the subconscious. |
|
|
|
44:25.120 --> 44:29.600 |
|
So it seems like a bug that they would stick there. You would think it would be better if you could
|
|
|
44:29.600 --> 44:34.320 |
|
analyze them and get them out of the system, or just forget they ever happened. You know,
|
|
|
44:34.320 --> 44:39.520 |
|
that seems like a very buggy kind of thing. Well, yeah, in general, we don't have, and that's
|
|
|
44:39.520 --> 44:43.840 |
|
probably functional, that ability. Unless it's extreme, those outlier cases of clinical
|
|
|
44:43.840 --> 44:48.880 |
|
dissociation, right? When people are heavily abused, they sometimes completely repress
|
|
|
44:48.880 --> 44:53.280 |
|
the memory. But that doesn't happen in, you know, normal people;
|
|
|
44:53.280 --> 44:58.800 |
|
we don't have an ability to remove traumatic memories. And of course, we suffer from that.
|
|
|
44:58.800 --> 45:03.680 |
|
On the other hand, if you had the ability to constantly wipe your memory,
|
|
|
45:03.680 --> 45:10.160 |
|
you'd probably do it to an extent that isn't useful to you. So yeah, it's a good question.
|
|
|
45:10.160 --> 45:15.760 |
|
It's a balance. So on the topic of books, as Jack mentioned, correct me if I'm wrong, but
|
|
|
45:16.640 --> 45:21.280 |
|
broadly speaking, in academia and the scientific disciplines, certainly in engineering,
|
|
|
45:21.920 --> 45:27.680 |
|
reading literature seems to be a rare pursuit. Perhaps I'm wrong in this, but that's in my |
|
|
|
45:27.680 --> 45:34.880 |
|
experience, most people read much more technical texts and do not sort of escape or seek truth |
|
|
|
45:34.880 --> 45:40.400 |
|
in literature. It seems like you do. So what do you think is the value? What do you think |
|
|
|
45:40.400 --> 45:46.000 |
|
literature adds to the pursuit of scientific truth? Do you think it's useful because it
|
|
|
45:47.120 --> 45:51.520 |
|
gives you access to a much wider array of human experiences?
|
|
|
45:52.320 --> 45:54.000 |
|
How valuable do you think it is? |
|
|
|
45:54.000 --> 45:58.720 |
|
Well, if you want to understand human nature and nature in general, then I think you have to |
|
|
|
45:58.720 --> 46:04.800 |
|
better understand a wide variety of experiences, not just sit in a lab staring at a screen,
|
|
|
46:04.800 --> 46:09.200 |
|
have a face flashed at you for 100 milliseconds, and push a button. That's what I used to do.
|
|
|
46:09.200 --> 46:13.280 |
|
That's what most psychologists do. There's nothing wrong with that, but you need to consider |
|
|
|
46:13.280 --> 46:18.800 |
|
lots of other strange states. And literature is a shortcut for this?
|
|
|
46:18.800 --> 46:22.880 |
|
Well, yeah, that's what literature is all about. All sorts of interesting
|
|
|
46:22.880 --> 46:28.640 |
|
experiences that people have, the contingency of it, the fact that women experience the world
|
|
|
46:28.640 --> 46:33.840 |
|
differently, black people experience the world differently. One way to experience that is by reading
|
|
|
46:33.840 --> 46:38.320 |
|
all this different literature and trying to find out. You see, everything is so relative. You read
|
|
|
46:38.320 --> 46:42.320 |
|
books from 100 years ago; they thought about certain problems very, very differently than
|
|
|
46:42.320 --> 46:47.120 |
|
we do today. We today, like any culture, think we know it all. That's common to every culture.
|
|
|
46:47.120 --> 46:50.960 |
|
Every culture believes that in its heyday it knows it all. And then you realize, well, there are
|
|
|
46:50.960 --> 46:55.360 |
|
other ways of viewing the universe. And some of them may have lots of things in their favor. |
|
|
|
46:56.560 --> 47:03.200 |
|
So this is a question I wanted to ask about timescale or scale in general. When you, with |
|
|
|
47:03.200 --> 47:07.520 |
|
IIT or in general, try to think about consciousness, try to think about these ideas, |
|
|
|
47:08.880 --> 47:17.520 |
|
you kind of naturally think on human timescales, and about entities that are sized
|
|
|
47:17.520 --> 47:21.680 |
|
close to humans. Do you think of things that are much larger or much smaller as containing
|
|
|
47:21.680 --> 47:31.200 |
|
consciousness? And do you think of things that take, you know, ages, eons to operate
|
|
|
47:31.200 --> 47:36.560 |
|
in their conscious cause-effect? It's a very good question. So I think a lot about small
|
|
|
47:36.560 --> 47:42.400 |
|
creatures, because experimentally a lot of people work on flies and bees. Most people
|
|
|
47:42.400 --> 47:46.320 |
|
just think they're automata. They're just bugs, for heaven's sake. But if you look at their behavior, |
|
|
|
47:46.320 --> 47:50.080 |
|
like bees, they can recognize individual humans. They have this very complicated |
|
|
|
47:50.960 --> 47:54.880 |
|
way to communicate. If you've ever been involved, or, you know, your parents, in buying a
|
|
|
47:54.880 --> 47:59.760 |
|
house, you know what an agonizing decision that is. And bees have to do that once a year, right?
|
|
|
47:59.760 --> 48:03.200 |
|
When they swarm in the spring, they have this very elaborate way: they have these
|
|
|
48:03.200 --> 48:07.360 |
|
nest scouts; they go to the individual sites, they come back, and they have this dance,
|
|
|
48:07.360 --> 48:11.520 |
|
literally, where they dance for several days to try to recruit other scouts. It's a very complicated
|
|
|
48:11.520 --> 48:16.480 |
|
decision-making process. When they finally want to make a decision, the scouts warm up
|
|
|
48:16.480 --> 48:20.320 |
|
the entire swarm, and they all go to one location; they don't go to 50 locations, they go to the one location
|
|
|
48:20.320 --> 48:24.800 |
|
that the scouts have agreed upon among themselves. That's awesome. And if you look at the circuit complexity,
|
|
|
48:24.800 --> 48:28.400 |
|
it's 10 times denser than anything we have in our brain. Now, they only have a million
|
|
|
48:28.400 --> 48:32.640 |
|
neurons, but the neurons are amazingly complex: complex behavior, very complicated circuitry.
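
To make the bees' procedure concrete, here is a minimal toy simulation of scout-based site selection in the spirit of what Koch describes. The site names, quality values, and recruitment rule are purely hypothetical illustration, not a model taken from the conversation or from the biology literature.

import random

# Toy sketch of honeybee nest-site selection (hypothetical parameters):
# scouts advertise candidate sites by dancing; better sites keep their
# dancers longer, so they recruit more scouts, and the swarm converges
# on a single site without any central decider.

SITE_QUALITY = {"hollow_oak": 0.9, "rock_crevice": 0.6, "old_shed": 0.3}

def choose_site(n_scouts=100, rounds=50, seed=1):
    rng = random.Random(seed)
    sites = list(SITE_QUALITY)
    # Each scout initially commits to a random site.
    commitment = [rng.choice(sites) for _ in range(n_scouts)]
    for _ in range(rounds):
        for i in range(n_scouts):
            # A scout abandons a poor site with probability 1 - quality...
            if rng.random() > SITE_QUALITY[commitment[i]]:
                # ...and is recruited by the dance of a random other scout.
                commitment[i] = commitment[rng.randrange(n_scouts)]
    # Quorum: report the site most scouts have converged on.
    return max(sites, key=commitment.count)

print(choose_site())  # almost always 'hollow_oak', the best site

The positive feedback, where good sites hold their dancers longer and therefore recruit faster, is what lets a distributed swarm settle on one answer.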
|
|
|
48:32.640 --> 48:37.120 |
|
So there's no question they experience something. Their life is very different; they're tiny,
|
|
|
48:37.120 --> 48:44.240 |
|
they only live for, well, workers live maybe two months. So I think, and IIT tells you this:
|
|
|
48:44.240 --> 48:49.680 |
|
in principle, the substrate of consciousness is the substrate that maximizes cause-effect
|
|
|
48:49.680 --> 48:54.640 |
|
power over all possible spatiotemporal grains. So when I think about, for example, do you know
|
|
|
48:54.640 --> 48:59.840 |
|
the science fiction story The Black Cloud? Okay, it's a classic by Fred Hoyle, the astronomer.
|
|
|
48:59.840 --> 49:07.120 |
|
He has this cloud interposing itself between the earth and the sun, leading to some sort of global
|
|
|
49:07.120 --> 49:13.200 |
|
cooling; it was written in the 50s. It turns out that, using a radio dish, they can communicate
|
|
|
49:13.200 --> 49:18.240 |
|
with it; it's actually an intelligent entity. And they sort of
|
|
|
49:18.240 --> 49:23.600 |
|
convince it to move away. So here you have a radically different entity. And in principle,
|
|
|
49:23.600 --> 49:28.640 |
|
IIT says, well, you can measure its integrated information, in principle at least. And yes,
|
|
|
49:28.640 --> 49:34.640 |
|
if the maximum of that occurs at a timescale of a month rather than a fraction
|
|
|
49:34.640 --> 49:40.720 |
|
of a second, then yes, it would experience life where each moment is a month rather than a
|
|
|
49:40.720 --> 49:47.600 |
|
microsecond or a fraction of a second, as in the human case.
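
As an illustration only: IIT's actual phi is defined over a system's full cause-effect structure and is notoriously hard to compute. The sketch below substitutes a much cruder proxy, the mutual information between past and future of a toy noisy process, just to show how a measure of temporal cause-effect power can peak at a coarse timescale rather than at the fastest one. Every number here is made up for the illustration.

import numpy as np

# Toy proxy (NOT IIT's phi) for "cause-effect power across temporal
# grains": mutual information between a binary process's consecutive
# samples when the process is coarse-grained at different timescales.

def mutual_info(x, y):
    # Mutual information of two aligned binary sequences, in bits.
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(1), joint.sum(0)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log2(joint[a, b] / (px[a] * py[b]))
    return mi

rng = np.random.default_rng(0)
# Slow hidden state flips rarely; observations are noisy copies of it.
state = np.cumsum(rng.random(100_000) < 0.001) % 2
obs = np.where(rng.random(state.size) < 0.3, 1 - state, state)

for grain in (1, 10, 100, 1000):
    # Coarse-grain by majority vote within windows of length `grain`.
    coarse = (obs[: obs.size // grain * grain]
              .reshape(-1, grain).mean(1) > 0.5).astype(int)
    print(grain, round(mutual_info(coarse[:-1], coarse[1:]), 3))

For this hypothetical process, past-future predictability is highest at an intermediate, slow grain: fine grains are drowned in noise, and very coarse grains average the dynamics away.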
|
|
|
49:47.600 --> 49:51.760 |
|
And so there may be forms of consciousness that we simply don't recognize for what they are, because they are so
|
|
|
49:51.760 --> 49:56.960 |
|
radically different from anything you and I are used to. Again, that's why it's good to read
|
|
|
49:56.960 --> 50:03.120 |
|
or watch science fiction movies, or to think about this. Do you know Stanislaw
|
|
|
50:03.120 --> 50:07.520 |
|
Lem, the Polish science fiction writer? He wrote Solaris; it was turned into a Hollywood movie.
|
|
|
50:07.520 --> 50:13.280 |
|
Yes. His best novels were written in the 60s; a very, very ingenious man, with an ingenious background. His most
|
|
|
50:13.280 --> 50:19.680 |
|
interesting novel is called The Invincible, where humans have a mission to
|
|
|
50:19.680 --> 50:25.280 |
|
this planet where everything has been destroyed, and they discover machines; the humans got killed, and then
|
|
|
50:25.280 --> 50:30.480 |
|
these machines took over and there was a machine evolution, a Darwinian evolution, he talks about |
|
|
|
50:30.480 --> 50:37.200 |
|
this very vividly. And finally, the dominant machine organisms that
|
|
|
50:37.200 --> 50:42.960 |
|
survived are gigantic clouds of little hexagonal universal cellular automata. This was written in the
|
|
|
50:42.960 --> 50:48.320 |
|
60s. Typically they're all lying on the ground individually, by themselves, but in times of crisis
|
|
|
50:48.320 --> 50:54.080 |
|
they can communicate, they assemble into gigantic nets, into clouds of trillions of these particles |
|
|
|
50:54.080 --> 50:59.760 |
|
and then they become hyperintelligent, and they can beat anything that humans throw at them.
|
|
|
50:59.760 --> 51:05.040 |
|
In a very beautiful and compelling way, you have an intelligence where finally the humans
|
|
|
51:05.040 --> 51:09.600 |
|
leave the planet; they're simply unable to understand and comprehend this creature. They say,
|
|
|
51:09.600 --> 51:14.000 |
|
well, either we can nuke the entire planet and destroy it or we just have to leave because |
|
|
|
51:14.000 --> 51:19.520 |
|
fundamentally it's so alien, so different from us and our ideas, that we cannot communicate with it.
|
|
|
51:19.520 --> 51:25.440 |
|
Yeah, actually, in a conversation about cellular automata, Stephen Wolfram
|
|
|
51:25.440 --> 51:31.760 |
|
brought up the idea that there could already be these artificial general
|
|
|
51:31.760 --> 51:36.000 |
|
intelligences, super smart or maybe conscious beings, in the cellular automata; we just don't
|
|
|
51:36.000 --> 51:40.320 |
|
know how to talk to them. The missing link is the communication; you don't know what to do with it.
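
For readers who haven't met them: the cellular automata in question are grids of cells updated by a fixed local rule. A minimal sketch of one of Wolfram's elementary automata follows; Rule 110, used here, is proven Turing-universal (Matthew Cook's result), which is why arbitrarily complex structures inside such grids are at least conceivable. The grid size and step count are arbitrary choices for display.

# Elementary cellular automaton, Rule 110.
RULE = 110

def step(cells):
    n = len(cells)
    # Each cell's next state depends on its left neighbor, itself, and
    # its right neighbor (wrapping around), via the rule's lookup table:
    # bit k of RULE gives the next state for neighborhood pattern k.
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 79 + [1]  # single live cell on the right edge
for _ in range(40):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)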
|
|
|
51:40.320 --> 51:47.920 |
|
So that's one sort of view: consciousness is only something you can measure; it's not
|
|
|
51:47.920 --> 51:52.880 |
|
conscious if you can't measure it. But there you're making an ontological and an epistemic statement.
|
|
|
51:52.880 --> 51:58.640 |
|
One is, they're there; it's just like saying there are multiverses. That might be true, but I can't
|
|
|
51:58.640 --> 52:03.600 |
|
communicate with them. I don't have any knowledge of them. That's an epistemic argument. Those are |
|
|
|
52:03.600 --> 52:07.520 |
|
two different things. So it may well be possible. Look, another case that's happening right now, |
|
|
|
52:07.520 --> 52:12.240 |
|
people are building these mini-organoids. Do you know about this? You can take stem cells from
|
|
|
52:12.240 --> 52:15.920 |
|
under your arm, put them in a dish, add four transcription factors, and then you can induce
|
|
|
52:15.920 --> 52:20.800 |
|
them to grow into, well, large for a dish, a few millimeters, like half a million
|
|
|
52:20.800 --> 52:25.840 |
|
neurons that look like nerve cells in a dish, called mini-organoids. At Harvard, at Stanford,
|
|
|
52:25.840 --> 52:29.760 |
|
everywhere, they're building them. It may well be possible that they're beginning to feel like
|
|
|
52:29.760 --> 52:34.560 |
|
something. But we can't really communicate with them right now. So people are beginning to think |
|
|
|
52:34.560 --> 52:40.800 |
|
about the ethics of this. So yes, he may be perfectly right. But it's one question whether
|
|
|
52:40.800 --> 52:44.480 |
|
they're conscious or not; it's a totally separate question how I would know. Those are two different
|
|
|
52:44.480 --> 52:52.480 |
|
things. Right. If you could give advice to a young researcher sort of dreaming of understanding or |
|
|
|
52:52.480 --> 52:58.560 |
|
creating human-level intelligence or consciousness, what would you say?
|
|
|
52:59.840 --> 53:07.680 |
|
Follow your dreams. Read widely. No, I mean, I suppose, which discipline? What is the pursuit
|
|
|
53:07.680 --> 53:11.760 |
|
that they should take on? Is it neuroscience? Is it computational cognitive science? Is it
|
|
|
53:11.760 --> 53:19.600 |
|
philosophy? Is it computer science or robotics? Well, in a sense, okay, so the only known
|
|
|
53:20.960 --> 53:25.520 |
|
systems that have high-level intelligence are Homo sapiens. So if you want to build it,
|
|
|
53:25.520 --> 53:30.080 |
|
it's probably good to continue to study closely what humans do. So cognitive neuroscience, |
|
|
|
53:30.080 --> 53:34.400 |
|
you know, somewhere between cognitive neuroscience on the one hand, and some philosophy of mind, |
|
|
|
53:34.400 --> 53:40.160 |
|
and then AI and computer science. Look at all the original ideas: neural networks,
|
|
|
53:40.160 --> 53:45.440 |
|
they all came from neuroscience, right? And reinforcement learning, whether it's Minsky building
|
|
|
53:45.440 --> 53:49.040 |
|
his SNARC machine, or whether it's, you know, the early Hubel and Wiesel experiments at Harvard that
|
|
|
53:49.040 --> 53:54.880 |
|
then gave rise to networks and then multilayer networks. So it may well be possible. In fact,
|
|
|
53:54.880 --> 53:59.760 |
|
some people argue that to make the next big step in AI, once we realize the limits of deep
|
|
|
53:59.760 --> 54:03.680 |
|
convolutional networks, they can do certain things, but they can't really understand. |
|
|
|
54:03.680 --> 54:09.440 |
|
They can't really learn from a single image. I can show you a
|
|
|
54:09.440 --> 54:15.600 |
|
single image of a pickpocket stealing a wallet from a purse, and you immediately know that's
|
|
|
54:15.600 --> 54:20.800 |
|
a pickpocket. A computer system would just say, well, it's a man, it's a woman, it's a purse, right?
|
|
|
54:20.800 --> 54:25.200 |
|
Unless you train this machine by showing it 100,000 pickpockets, right? So it doesn't
|
|
|
54:25.200 --> 54:31.040 |
|
have this easy understanding that you have, right? So some people make the argument that
|
|
|
54:31.040 --> 54:34.640 |
|
in order to go to the next step, where you really want to build machines that understand the way
|
|
|
54:34.640 --> 54:39.280 |
|
you and I do, we have to go to psychology. We need to understand how we do it and how our brains
|
|
|
54:39.280 --> 54:44.480 |
|
enable us to do it. And so, being at that cusp, it's also so exciting to try to understand
|
|
|
54:44.480 --> 54:49.040 |
|
our own nature better and then to take some of those insights and build them in. So I think the
|
|
|
54:49.040 --> 54:53.680 |
|
most exciting thing is somewhere at the interface between cognitive science, neuroscience, AI,
|
|
|
54:53.680 --> 54:55.520 |
|
computer science and philosophy of mind. |
|
|
|
54:55.520 --> 55:00.160 |
|
Beautiful. Yeah, I'd say, from the machine learning, from the computer science,
|
|
|
55:00.160 --> 55:05.760 |
|
computer vision perspective, much of the research kind of ignores the way the human brain works.
|
|
|
55:05.760 --> 55:12.160 |
|
It ignores even psychology, or literature, or studying the brain. I would hope, Josh Tenenbaum talks
|
|
|
55:12.160 --> 55:18.640 |
|
about bringing that in more and more. So you've worked on some amazing
|
|
|
55:18.640 --> 55:25.520 |
|
stuff throughout your life. What's the thing that you're really excited about? What's the mystery |
|
|
|
55:25.520 --> 55:31.440 |
|
that you would love to uncover in the near term, beyond all the mysteries that you're already
|
|
|
55:31.440 --> 55:37.200 |
|
surrounded by? Well, so there's this structure called the claustrum. Okay, this is a structure. |
|
|
|
55:37.200 --> 55:42.480 |
|
It's underneath our cortex. You have one on the left and one on the right, underneath the
|
|
|
55:42.480 --> 55:46.720 |
|
insula. It's very thin, like one millimeter. It's embedded in wiring,
|
|
|
55:46.720 --> 55:53.760 |
|
in white matter, so it's very difficult to image. And it has connections to every cortical
|
|
|
55:53.760 --> 55:58.640 |
|
region. And Francis Crick, the last paper he ever wrote, he dictated corrections the day he died |
|
|
|
55:58.640 --> 56:05.360 |
|
in the hospital on this paper. We hypothesized, well, because it has this unique anatomy, it gets
|
|
|
56:05.360 --> 56:11.280 |
|
input from every cortical area and projects back to every cortical area, that the function
|
|
|
56:11.280 --> 56:18.560 |
|
of this structure is similar, and this is just a metaphor, to the role of a conductor in a symphony orchestra.
|
|
|
56:18.560 --> 56:22.720 |
|
You have all the different cortical players. You have some that do motion, some that do theory of |
|
|
|
56:22.720 --> 56:26.880 |
|
mind, some that infer social interactions, and color and hearing, and all the different modules in
|
|
|
56:26.880 --> 56:31.280 |
|
cortex. But of course, consciousness puts it all together into one
|
|
|
56:31.280 --> 56:35.840 |
|
package, right? The binding problem, all of that. And this may really be its function, because it has
|
|
|
56:35.840 --> 56:41.280 |
|
relatively few neurons compared to cortex, but it sort of receives input from all of
|
|
|
56:41.280 --> 56:45.520 |
|
them, and it projects back to all of them. And so we're testing that right now. We've got this |
|
|
|
56:45.520 --> 56:51.120 |
|
beautiful neuronal reconstruction in the mouse of so-called crown-of-thorns neurons that
|
|
|
56:51.120 --> 56:56.160 |
|
are in the claustrum and have the most widespread connections of any neuron I've ever seen.
|
|
|
56:56.160 --> 57:00.320 |
|
They're very deep. You have individual neurons that sit in the claustrum, which is tiny, but they have
|
|
|
57:00.320 --> 57:06.000 |
|
this huge axonal tree that covers both ipsi- and contralateral cortex,
|
|
|
57:06.880 --> 57:11.280 |
|
and we're trying, using, you know, fancy tools like optogenetics, to turn those
|
|
|
57:11.280 --> 57:14.720 |
|
neurons on or off and study what happens in the mouse.
|
|
|
57:14.720 --> 57:18.720 |
|
So this thing is perhaps where the parts become the whole. |
|
|
|
57:19.920 --> 57:25.440 |
|
Perhaps it's one of the structures. That's a very good way of putting it: where the individual
|
|
|
57:25.440 --> 57:31.760 |
|
parts turn into the whole of the conscious experience. Well, with that,
|
|
|
57:32.640 --> 57:36.000 |
|
thank you very much for being here today. Thank you very much. |
|
|
|
57:36.000 --> 57:56.560 |
|
I'll be back in a minute. Thanks Jack. Thank you very much. |
|
|
|
|