|
WEBVTT |
|
|
|
00:00.000 --> 00:02.960 |
|
The following is a conversation with Tomaso Poggio. |
|
|
|
00:02.960 --> 00:06.200 |
|
He's a professor at MIT and is a director of the Center |
|
|
|
00:06.200 --> 00:08.360 |
|
for Brains, Minds, and Machines. |
|
|
|
00:08.360 --> 00:11.640 |
|
Cited over 100,000 times, his work |
|
|
|
00:11.640 --> 00:14.560 |
|
has had a profound impact on our understanding |
|
|
|
00:14.560 --> 00:17.680 |
|
of the nature of intelligence in both biological |
|
|
|
00:17.680 --> 00:19.880 |
|
and artificial neural networks. |
|
|
|
00:19.880 --> 00:23.840 |
|
He has been an advisor to many highly impactful researchers |
|
|
|
00:23.840 --> 00:26.120 |
|
and entrepreneurs in AI, including |
|
|
|
00:26.120 --> 00:28.000 |
|
Demis Hassabis of DeepMind,
|
|
|
00:28.000 --> 00:31.200 |
|
Amnon Shashua of Mobileye, and Christof Koch
|
|
|
00:31.200 --> 00:34.120 |
|
of the Allen Institute for Brain Science. |
|
|
|
00:34.120 --> 00:36.400 |
|
This conversation is part of the MIT course |
|
|
|
00:36.400 --> 00:38.120 |
|
on artificial general intelligence |
|
|
|
00:38.120 --> 00:40.240 |
|
and the artificial intelligence podcast. |
|
|
|
00:40.240 --> 00:42.760 |
|
If you enjoy it, subscribe on YouTube, iTunes, |
|
|
|
00:42.760 --> 00:44.600 |
|
or simply connect with me on Twitter |
|
|
|
00:44.600 --> 00:47.960 |
|
at Lex Fridman, spelled F R I D.
|
|
|
00:47.960 --> 00:52.480 |
|
And now, here's my conversation with Tomaso Poggio. |
|
|
|
00:52.480 --> 00:54.520 |
|
You've mentioned that in your childhood, |
|
|
|
00:54.520 --> 00:56.960 |
|
you've developed a fascination with physics, |
|
|
|
00:56.960 --> 00:59.720 |
|
especially the theory of relativity, |
|
|
|
00:59.720 --> 01:03.600 |
|
and that Einstein was also a childhood hero to you. |
|
|
|
01:04.520 --> 01:09.040 |
|
What aspect of Einstein's genius, the nature of his genius, |
|
|
|
01:09.040 --> 01:10.200 |
|
do you think was essential |
|
|
|
01:10.200 --> 01:12.960 |
|
for discovering the theory of relativity? |
|
|
|
01:12.960 --> 01:15.960 |
|
You know, Einstein was a hero to me, |
|
|
|
01:15.960 --> 01:17.200 |
|
and I'm sure to many people, |
|
|
|
01:17.200 --> 01:21.680 |
|
because he was able to make, of course, |
|
|
|
01:21.680 --> 01:25.200 |
|
a major, major contribution to physics |
|
|
|
01:25.200 --> 01:28.520 |
|
with simplifying a bit, |
|
|
|
01:28.520 --> 01:33.520 |
|
just a gedanken experiment, a thought experiment. |
|
|
|
01:35.200 --> 01:38.880 |
|
You know, imagining communication with lights |
|
|
|
01:38.880 --> 01:43.240 |
|
between a stationary observer and somebody on a train. |
|
|
|
01:43.240 --> 01:48.240 |
|
And I thought, you know, the fact that just |
|
|
|
01:48.560 --> 01:52.720 |
|
with the force of his thought, of his thinking, of his mind, |
|
|
|
01:52.720 --> 01:55.640 |
|
it could get to something so deep |
|
|
|
01:55.640 --> 01:57.520 |
|
in terms of physical reality, |
|
|
|
01:57.520 --> 02:01.320 |
|
how time depends on space and speed. |
|
|
|
02:01.320 --> 02:04.120 |
|
It was something absolutely fascinating. |
|
|
|
02:04.120 --> 02:06.720 |
|
It was the power of intelligence, |
|
|
|
02:06.720 --> 02:08.440 |
|
the power of the mind. |
|
|
|
02:08.440 --> 02:11.120 |
|
Do you think the ability to imagine, |
|
|
|
02:11.120 --> 02:15.200 |
|
to visualize as he did, as a lot of great physicists do, |
|
|
|
02:15.200 --> 02:18.640 |
|
do you think that's in all of us human beings, |
|
|
|
02:18.640 --> 02:20.600 |
|
or is there something special |
|
|
|
02:20.600 --> 02:22.880 |
|
to that one particular human being? |
|
|
|
02:22.880 --> 02:27.160 |
|
I think, you know, all of us can learn |
|
|
|
02:27.160 --> 02:32.160 |
|
and have, in principle, similar breakthroughs. |
|
|
|
02:33.240 --> 02:37.200 |
|
There is a lesson to be learned from Einstein.
|
|
|
02:37.200 --> 02:42.200 |
|
He was one of five PhD students at ETH,
|
|
|
02:42.600 --> 02:47.600 |
|
the Eidgenössische Technische Hochschule in Zurich, in physics.
|
|
|
02:47.600 --> 02:49.840 |
|
And he was the worst of the five. |
|
|
|
02:49.840 --> 02:53.600 |
|
The only one who did not get an academic position |
|
|
|
02:53.600 --> 02:57.040 |
|
when he graduated, when he finished his PhD, |
|
|
|
02:57.040 --> 03:00.000 |
|
and he went to work, as everybody knows, |
|
|
|
03:00.000 --> 03:01.720 |
|
for the patent office. |
|
|
|
03:01.720 --> 03:05.000 |
|
So it's not so much that he worked for the patent office, |
|
|
|
03:05.000 --> 03:07.880 |
|
but the fact that obviously he was smart, |
|
|
|
03:07.880 --> 03:10.240 |
|
but he was not the top student, |
|
|
|
03:10.240 --> 03:12.640 |
|
obviously he was the anti-conformist.
|
|
|
03:12.640 --> 03:15.720 |
|
He was not thinking in the traditional way |
|
|
|
03:15.720 --> 03:18.760 |
|
that probably teachers and the other students were doing. |
|
|
|
03:18.760 --> 03:23.760 |
|
So there is a lot to be said about trying to do the opposite |
|
|
|
03:25.960 --> 03:29.800 |
|
or something quite different from what other people are doing. |
|
|
|
03:29.800 --> 03:31.840 |
|
That's certainly true for the stock market. |
|
|
|
03:31.840 --> 03:34.800 |
|
Never buy if everybody's buying it. |
|
|
|
03:35.800 --> 03:37.440 |
|
And also true for science. |
|
|
|
03:37.440 --> 03:38.440 |
|
Yes. |
|
|
|
03:38.440 --> 03:42.440 |
|
So you've also mentioned staying on the theme of physics |
|
|
|
03:42.440 --> 03:46.440 |
|
that you were excited at a young age |
|
|
|
03:46.440 --> 03:50.440 |
|
by the mysteries of the universe that physics could uncover. |
|
|
|
03:50.440 --> 03:54.440 |
|
Such as, I saw mentioned, the possibility of time travel.
|
|
|
03:56.440 --> 03:59.440 |
|
So out of the box question I think I'll get to ask today, |
|
|
|
03:59.440 --> 04:01.440 |
|
do you think time travel is possible? |
|
|
|
04:02.440 --> 04:05.440 |
|
Well, it would be nice if it were possible right now. |
|
|
|
04:05.440 --> 04:11.440 |
|
In science you never say no. |
|
|
|
04:11.440 --> 04:14.440 |
|
But your understanding of the nature of time. |
|
|
|
04:14.440 --> 04:15.440 |
|
Yeah. |
|
|
|
04:15.440 --> 04:20.440 |
|
It's very likely that it's not possible to travel in time. |
|
|
|
04:20.440 --> 04:24.440 |
|
We may be able to travel forward in time. |
|
|
|
04:24.440 --> 04:28.440 |
|
If we can, for instance, freeze ourselves |
|
|
|
04:28.440 --> 04:34.440 |
|
or go on some spacecraft traveling close to the speed of light, |
|
|
|
04:34.440 --> 04:39.440 |
|
but in terms of actively traveling, for instance, back in time, |
|
|
|
04:39.440 --> 04:43.440 |
|
I find it probably very unlikely.
|
|
|
04:43.440 --> 04:49.440 |
|
So do you still hold the underlying dream of the engineering intelligence |
|
|
|
04:49.440 --> 04:54.440 |
|
that will build systems that are able to do such huge leaps |
|
|
|
04:54.440 --> 04:58.440 |
|
like discovering the kind of mechanism |
|
|
|
04:58.440 --> 05:00.440 |
|
that would be required to travel through time? |
|
|
|
05:00.440 --> 05:02.440 |
|
Do you still hold that dream? |
|
|
|
05:02.440 --> 05:05.440 |
|
Or echoes of it from your childhood? |
|
|
|
05:05.440 --> 05:06.440 |
|
Yeah. |
|
|
|
05:06.440 --> 05:10.440 |
|
I don't think... There are certain problems
|
|
|
05:10.440 --> 05:13.440 |
|
that probably cannot be solved, |
|
|
|
05:13.440 --> 05:17.440 |
|
depending on what you believe about the physical reality. |
|
|
|
05:17.440 --> 05:23.440 |
|
Maybe it's totally impossible to create energy from nothing |
|
|
|
05:23.440 --> 05:26.440 |
|
or to travel back in time. |
|
|
|
05:26.440 --> 05:35.440 |
|
But about making machines that can think as well as we do or better, |
|
|
|
05:35.440 --> 05:39.440 |
|
or more likely, especially in the short and mid term, |
|
|
|
05:39.440 --> 05:41.440 |
|
help us think better, |
|
|
|
05:41.440 --> 05:45.440 |
|
which in a sense is happening already with the computers we have, |
|
|
|
05:45.440 --> 05:47.440 |
|
and it will happen more and more. |
|
|
|
05:47.440 --> 05:49.440 |
|
But that I certainly believe, |
|
|
|
05:49.440 --> 05:53.440 |
|
and I don't see in principle why computers at some point |
|
|
|
05:53.440 --> 05:59.440 |
|
could not become more intelligent than we are, |
|
|
|
05:59.440 --> 06:03.440 |
|
although the word intelligence is a tricky one, |
|
|
|
06:03.440 --> 06:07.440 |
|
and one we should discuss what I mean by that.
|
|
|
06:07.440 --> 06:12.440 |
|
Intelligence, consciousness, words like love, |
|
|
|
06:12.440 --> 06:16.440 |
|
all these need to be disentangled. |
|
|
|
06:16.440 --> 06:20.440 |
|
So you've mentioned also that you believe the problem of intelligence |
|
|
|
06:20.440 --> 06:23.440 |
|
is the greatest problem in science, |
|
|
|
06:23.440 --> 06:26.440 |
|
greater than the origin of life and the origin of the universe. |
|
|
|
06:26.440 --> 06:29.440 |
|
You've also, in the talk, |
|
|
|
06:29.440 --> 06:34.440 |
|
I've said that you're open to arguments against you. |
|
|
|
06:34.440 --> 06:40.440 |
|
So what do you think is the most captivating aspect |
|
|
|
06:40.440 --> 06:43.440 |
|
of this problem of understanding the nature of intelligence? |
|
|
|
06:43.440 --> 06:46.440 |
|
Why does it captivate you as it does? |
|
|
|
06:46.440 --> 06:54.440 |
|
Well, originally, I think one of the motivations that I had as a teenager, |
|
|
|
06:54.440 --> 06:58.440 |
|
when I was infatuated with the theory of relativity, |
|
|
|
06:58.440 --> 07:05.440 |
|
was really that I found that there was the problem of time and space |
|
|
|
07:05.440 --> 07:07.440 |
|
and general relativity, |
|
|
|
07:07.440 --> 07:12.440 |
|
but there were so many other problems of the same level of difficulty |
|
|
|
07:12.440 --> 07:16.440 |
|
and importance that I could, even if I were Einstein, |
|
|
|
07:16.440 --> 07:19.440 |
|
it was difficult to hope to solve all of them. |
|
|
|
07:19.440 --> 07:26.440 |
|
So what about solving a problem whose solution allowed me to solve all the problems? |
|
|
|
07:26.440 --> 07:32.440 |
|
And this was what if we could find the key to an intelligence |
|
|
|
07:32.440 --> 07:36.440 |
|
ten times better or faster than Einstein? |
|
|
|
07:36.440 --> 07:39.440 |
|
So that's sort of seeing artificial intelligence |
|
|
|
07:39.440 --> 07:42.440 |
|
as a tool to expand our capabilities. |
|
|
|
07:42.440 --> 07:47.440 |
|
But is there just an inherent curiosity in you |
|
|
|
07:47.440 --> 07:53.440 |
|
and just understanding what it is in here that makes it all work? |
|
|
|
07:53.440 --> 07:55.440 |
|
Yes, absolutely. You're right. |
|
|
|
07:55.440 --> 08:00.440 |
|
So I started saying this was the motivation when I was a teenager, |
|
|
|
08:00.440 --> 08:06.440 |
|
but soon after, I think the problem of human intelligence |
|
|
|
08:06.440 --> 08:14.440 |
|
became a real focus of my science and my research, |
|
|
|
08:14.440 --> 08:27.440 |
|
because I think for me the most interesting problem is really asking who we are. |
|
|
|
08:27.440 --> 08:31.440 |
|
It is asking not only a question about science, |
|
|
|
08:31.440 --> 08:37.440 |
|
but even about the very tool we are using to do science, which is our brain. |
|
|
|
08:37.440 --> 08:39.440 |
|
How does our brain work? |
|
|
|
08:39.440 --> 08:41.440 |
|
From where does it come from? |
|
|
|
08:41.440 --> 08:43.440 |
|
What are its limitations? |
|
|
|
08:43.440 --> 08:45.440 |
|
Can we make it better? |
|
|
|
08:45.440 --> 08:49.440 |
|
And that in many ways is the ultimate question |
|
|
|
08:49.440 --> 08:53.440 |
|
that underlies this whole effort of science. |
|
|
|
08:53.440 --> 08:58.440 |
|
So you've made significant contributions in both the science of intelligence |
|
|
|
08:58.440 --> 09:01.440 |
|
and the engineering of intelligence. |
|
|
|
09:01.440 --> 09:04.440 |
|
In a hypothetical way, let me ask, |
|
|
|
09:04.440 --> 09:08.440 |
|
how far do you think we can get in creating intelligence systems |
|
|
|
09:08.440 --> 09:11.440 |
|
without understanding the biological, |
|
|
|
09:11.440 --> 09:15.440 |
|
the understanding how the human brain creates intelligence? |
|
|
|
09:15.440 --> 09:18.440 |
|
Put another way, do you think we can build a strong AI system
|
|
|
09:18.440 --> 09:24.440 |
|
without really getting at the core, understanding the functional nature of the brain? |
|
|
|
09:24.440 --> 09:28.440 |
|
Well, this is a real difficult question. |
|
|
|
09:28.440 --> 09:34.440 |
|
We did solve problems like flying |
|
|
|
09:34.440 --> 09:43.440 |
|
without really using too much of our knowledge about how birds fly.
|
|
|
09:43.440 --> 09:51.440 |
|
It was important, I guess, to know that you could have things heavier than air |
|
|
|
09:51.440 --> 09:55.440 |
|
being able to fly like birds. |
|
|
|
09:55.440 --> 10:00.440 |
|
But beyond that, probably we did not learn very much. |
|
|
|
10:00.440 --> 10:08.440 |
|
The Wright brothers did learn a lot from observation of birds
|
|
|
10:08.440 --> 10:12.440 |
|
in designing their aircraft,
|
|
|
10:12.440 --> 10:17.440 |
|
but you can argue we did not use much of biology in that particular case. |
|
|
|
10:17.440 --> 10:28.440 |
|
Now, in the case of intelligence, I think that it's a bit of a bet right now. |
|
|
|
10:28.440 --> 10:36.440 |
|
If you ask, okay, we all agree we'll get at some point, maybe soon, |
|
|
|
10:36.440 --> 10:42.440 |
|
maybe later, to a machine that is indistinguishable from my secretary |
|
|
|
10:42.440 --> 10:47.440 |
|
in terms of what I can ask the machine to do. |
|
|
|
10:47.440 --> 10:50.440 |
|
I think we'll get there and now the question is, |
|
|
|
10:50.440 --> 10:56.440 |
|
you can ask people, do you think we'll get there without any knowledge about the human brain |
|
|
|
10:56.440 --> 11:02.440 |
|
or the best way to get there is to understand better the human brain? |
|
|
|
11:02.440 --> 11:08.440 |
|
This is, I think, an educated bet that different people with different backgrounds |
|
|
|
11:08.440 --> 11:11.440 |
|
will decide in different ways. |
|
|
|
11:11.440 --> 11:17.440 |
|
The recent history of the progress in AI in the last, I would say, five years |
|
|
|
11:17.440 --> 11:26.440 |
|
or ten years has been that the main breakthroughs, the main recent breakthroughs, |
|
|
|
11:26.440 --> 11:31.440 |
|
really start from neuroscience. |
|
|
|
11:31.440 --> 11:35.440 |
|
I can mention reinforcement learning as one, |
|
|
|
11:35.440 --> 11:41.440 |
|
is one of the algorithms at the core of AlphaGo, |
|
|
|
11:41.440 --> 11:46.440 |
|
which is the system that beat the kind of an official world champion of Go, |
|
|
|
11:46.440 --> 11:52.440 |
|
Lee Sedol, two, three years ago in Seoul.
|
|
|
11:52.440 --> 12:00.440 |
|
That's one, and that started really with the work of Pavlov in 1900, |
|
|
|
12:00.440 --> 12:07.440 |
|
Marvin Minsky in the 60s and many other neuroscientists later on.
|
|
|
12:07.440 --> 12:13.440 |
|
And deep learning started, which is the core again of AlphaGo |
|
|
|
12:13.440 --> 12:19.440 |
|
and systems like autonomous driving systems for cars, |
|
|
|
12:19.440 --> 12:25.440 |
|
like the systems of Mobileye, which is a company started by one of my ex-students,
|
|
|
12:25.440 --> 12:30.440 |
|
Amnon Shashua. So that is the core of those things.
|
|
|
12:30.440 --> 12:35.440 |
|
And deep learning, really the initial ideas in terms of the architecture |
|
|
|
12:35.440 --> 12:42.440 |
|
of these layered hierarchical networks started with the work of Torsten Wiesel
|
|
|
12:42.440 --> 12:47.440 |
|
and David Hubel at Harvard up the river in the 60s. |
|
|
|
12:47.440 --> 12:54.440 |
|
So recent history suggests that neuroscience played a big role in these breakthroughs. |
|
|
|
12:54.440 --> 12:59.440 |
|
My personal bet is that there is a good chance they continue to play a big role, |
|
|
|
12:59.440 --> 13:03.440 |
|
maybe not in all the future breakthroughs, but in some of them. |
|
|
|
13:03.440 --> 13:05.440 |
|
At least in inspiration. |
|
|
|
13:05.440 --> 13:07.440 |
|
At least in inspiration, absolutely, yes. |
|
|
|
13:07.440 --> 13:12.440 |
|
So you studied both artificial and biological neural networks, |
|
|
|
13:12.440 --> 13:19.440 |
|
you said these mechanisms that underlie deep learning and reinforcement learning, |
|
|
|
13:19.440 --> 13:25.440 |
|
but there is nevertheless significant differences between biological and artificial neural networks |
|
|
|
13:25.440 --> 13:27.440 |
|
as they stand now. |
|
|
|
13:27.440 --> 13:32.440 |
|
So between the two, what do you find is the most interesting, mysterious, |
|
|
|
13:32.440 --> 13:37.440 |
|
maybe even beautiful difference as it currently stands in our understanding? |
|
|
|
13:37.440 --> 13:44.440 |
|
I must confess that until recently I found that the artificial networks |
|
|
|
13:44.440 --> 13:49.440 |
|
were too simplistic relative to real neural networks. |
|
|
|
13:49.440 --> 13:54.440 |
|
But, you know, recently I've started to think that, yes,
|
|
|
13:54.440 --> 13:59.440 |
|
they are a very big simplification of what you find in the brain.
|
|
|
13:59.440 --> 14:07.440 |
|
But on the other hand, they are much closer in terms of the architecture to the brain |
|
|
|
14:07.440 --> 14:13.440 |
|
than other models that we had, that computer science used as model of thinking, |
|
|
|
14:13.440 --> 14:19.440 |
|
or mathematical logic, you know, Lisp, Prolog, and those kinds of things.
|
|
|
14:19.440 --> 14:23.440 |
|
So in comparison to those, they're much closer to the brain. |
|
|
|
14:23.440 --> 14:28.440 |
|
You have networks of neurons, which is what the brain is about. |
|
|
|
14:28.440 --> 14:35.440 |
|
The artificial neurons in the models are, as I said, caricatures of the biological neurons,
|
|
|
14:35.440 --> 14:39.440 |
|
but they're still neurons, single units communicating with other units, |
|
|
|
14:39.440 --> 14:50.440 |
|
something that is absent in the traditional computer type models of mathematics, reasoning, and so on. |
|
|
|
14:50.440 --> 14:56.440 |
|
So what aspect would you like to see in artificial neural networks added over time |
|
|
|
14:56.440 --> 14:59.440 |
|
as we try to figure out ways to improve them? |
|
|
|
14:59.440 --> 15:10.440 |
|
So one of the main differences and, you know, problems between deep learning today,
|
|
|
15:10.440 --> 15:17.440 |
|
and it's not only deep learning, and the brain, is the need for deep learning techniques
|
|
|
15:17.440 --> 15:22.440 |
|
to have a lot of labeled examples. |
|
|
|
15:22.440 --> 15:31.440 |
|
For instance, for ImageNet, you have a training set which is one million images, each one labeled by some human |
|
|
|
15:31.440 --> 15:34.440 |
|
in terms of which object is there. |
|
|
|
15:34.440 --> 15:46.440 |
|
And it's clear that in biology, a baby may be able to see a million images in the first years of life, |
|
|
|
15:46.440 --> 15:56.440 |
|
but will not have a million of labels given to him or her by parents or caretakers. |
|
|
|
15:56.440 --> 15:59.440 |
|
So how do you solve that? |
|
|
|
15:59.440 --> 16:07.440 |
|
You know, I think there is this interesting challenge that today, deep learning and related techniques |
|
|
|
16:07.440 --> 16:18.440 |
|
are all about big data, big data meaning a lot of examples labeled by humans, |
|
|
|
16:18.440 --> 16:22.440 |
|
whereas in nature you have... |
|
|
|
16:22.440 --> 16:29.440 |
|
So this big data is n going to infinity, that's the best, you know, n meaning labeled data. |
|
|
|
16:29.440 --> 16:34.440 |
|
But I think the biological world is more n going to 1. |
|
|
|
16:34.440 --> 16:42.440 |
|
A child can learn from a very small number of labeled examples. |
|
|
|
16:42.440 --> 16:49.440 |
|
Like you tell a child, this is a car, you don't need to say like in ImageNet, you know, this is a car, this is a car, |
|
|
|
16:49.440 --> 16:53.440 |
|
this is not a car, this is not a car, one million times. |
|
|
|
16:53.440 --> 17:05.440 |
|
And of course with AlphaGo and AlphaZero variants, because the world of Go is so simplistic that you can actually learn by yourself |
|
|
|
17:05.440 --> 17:08.440 |
|
through self play, the system can play against itself.
|
|
|
17:08.440 --> 17:15.440 |
|
And the real world, the visual system that you've studied extensively is a lot more complicated than the game of Go. |
|
|
|
17:15.440 --> 17:22.440 |
|
On the comment about children, which are fascinatingly good at learning new stuff, |
|
|
|
17:22.440 --> 17:26.440 |
|
how much of it do you think is hardware and how much of it is software? |
|
|
|
17:26.440 --> 17:32.440 |
|
Yeah, that's a good and deep question, in a sense is the old question of nurture and nature, |
|
|
|
17:32.440 --> 17:40.440 |
|
how much is in the gene and how much is in the experience of an individual. |
|
|
|
17:40.440 --> 17:55.440 |
|
Obviously, it's both that play a role, and I believe that evolution puts in prior information, so to speak, hardwired,
|
|
|
17:55.440 --> 18:02.440 |
|
it's not really hardwired, but that's essentially a hypothesis.
|
|
|
18:02.440 --> 18:14.440 |
|
I think what's going on is that evolution is almost necessarily, if you believe in Darwin, it's very opportunistic. |
|
|
|
18:14.440 --> 18:23.440 |
|
And think about our DNA and the DNA of Drosophila. |
|
|
|
18:23.440 --> 18:28.440 |
|
Our DNA does not have many more genes than Drosophila. |
|
|
|
18:28.440 --> 18:32.440 |
|
The fly, the fruit fly. |
|
|
|
18:32.440 --> 18:39.440 |
|
Now, we know that the fruit fly does not learn very much during its individual existence. |
|
|
|
18:39.440 --> 18:51.440 |
|
It looks like one of these machineries that is really mostly, not 100%, but 95%, hardcoded by the genes.
|
|
|
18:51.440 --> 19:02.440 |
|
But since we don't have many more genes than Drosophila, evolution could encode in us a kind of general learning machinery |
|
|
|
19:02.440 --> 19:09.440 |
|
and then had to give it very weak priors.
|
|
|
19:09.440 --> 19:20.440 |
|
Like, for instance, let me give a specific example, which is recent work by a member of our Center for Brains, Minds and Machines.
|
|
|
19:20.440 --> 19:30.440 |
|
We know because of work of other people in our group and other groups that there are cells in a part of our brain, neurons, that are tuned to faces. |
|
|
|
19:30.440 --> 19:33.440 |
|
They seem to be involved in face recognition. |
|
|
|
19:33.440 --> 19:43.440 |
|
Now, this face area seems to be present in young children and adults. |
|
|
|
19:43.440 --> 19:54.440 |
|
And one question is: is it there from the beginning, hardwired by evolution, or is it somehow learned very quickly?
|
|
|
19:54.440 --> 20:00.440 |
|
So what's your, by the way, a lot of the questions I'm asking, the answer is we don't really know, |
|
|
|
20:00.440 --> 20:08.440 |
|
but as a person who has contributed some profound ideas in these fields, you're a good person to guess at some of these. |
|
|
|
20:08.440 --> 20:14.440 |
|
So, of course, there's a caveat before a lot of the stuff we talk about, but what is your hunch? |
|
|
|
20:14.440 --> 20:21.440 |
|
Is the face, the part of the brain that seems to be concentrated on face recognition, are you born with that? |
|
|
|
20:21.440 --> 20:26.440 |
|
Or are you just designed to learn that quickly, like the face of the mother and so on?
|
|
|
20:26.440 --> 20:42.440 |
|
My hunch, my bias, was the second one, learned very quickly. And it turns out that Marge Livingstone at Harvard has done some amazing experiments in which she raised baby monkeys,
|
|
|
20:42.440 --> 20:47.440 |
|
depriving them of faces during the first weeks of life. |
|
|
|
20:47.440 --> 20:52.440 |
|
So they see technicians, but the technicians have a mask. |
|
|
|
20:52.440 --> 20:54.440 |
|
Yes. |
|
|
|
20:54.440 --> 21:10.440 |
|
And so when they looked at the area in the brain of these monkeys where you usually find faces, they found no face preference.
|
|
|
21:10.440 --> 21:26.440 |
|
So my guess is that what evolution does in this case is there is an area which is plastic, which is kind of predetermined to be imprinted very easily.
|
|
|
21:26.440 --> 21:31.440 |
|
But the command from the gene is not a detailed circuitry for a face template. |
|
|
|
21:31.440 --> 21:33.440 |
|
Could be. |
|
|
|
21:33.440 --> 21:35.440 |
|
But this will require probably a lot of bits. |
|
|
|
21:35.440 --> 21:39.440 |
|
You would have to specify a lot of connections of a lot of neurons.
|
|
|
21:39.440 --> 21:53.440 |
|
Instead, the command from the gene is something like imprint, memorize what you see most often in the first two weeks of life, especially in connection with food and maybe nipples. |
|
|
|
21:53.440 --> 21:54.440 |
|
I don't know. |
|
|
|
21:54.440 --> 21:55.440 |
|
Right. |
|
|
|
21:55.440 --> 21:56.440 |
|
Well, source of food. |
|
|
|
21:56.440 --> 22:00.440 |
|
And so that area is very plastic at first, and then it solidifies.
|
|
|
22:00.440 --> 22:10.440 |
|
It'd be interesting if a variant of that experiment would show a different kind of pattern associated with food than a face pattern, whether that could stick. |
|
|
|
22:10.440 --> 22:25.440 |
|
There are indications that during that experiment, what the monkeys saw quite often were the blue gloves of the technicians that were giving the milk to the baby monkeys.
|
|
|
22:25.440 --> 22:33.440 |
|
And some of the cells instead of being face sensitive in that area are hand sensitive. |
|
|
|
22:33.440 --> 22:35.440 |
|
That's fascinating. |
|
|
|
22:35.440 --> 22:45.440 |
|
Can you talk about what are the different parts of the brain and in your view sort of loosely and how do they contribute to intelligence? |
|
|
|
22:45.440 --> 23:04.440 |
|
Do you see the brain as a bunch of different modules and they together come in the human brain to create intelligence or is it all one mush of the same kind of fundamental architecture? |
|
|
|
23:04.440 --> 23:21.440 |
|
Yeah, that's an important question and there was a phase in neuroscience back in the 1950s or so in which it was believed for a while that the brain was equipotential. |
|
|
|
23:21.440 --> 23:22.440 |
|
This was the term. |
|
|
|
23:22.440 --> 23:31.440 |
|
You could cut out a piece and nothing special happened, apart from a little bit less performance.
|
|
|
23:31.440 --> 23:50.440 |
|
There was a surgeon, Lashley, who did a lot of experiments of this type with mice and rats and concluded that every part of the brain was essentially equivalent to any other one. |
|
|
|
23:50.440 --> 24:12.440 |
|
It turns out that that's really not true. There are very specific modules in the brain, as you said, and people may lose the ability to speak if you have a stroke in a certain region or may lose control of their legs in another region. |
|
|
|
24:12.440 --> 24:33.440 |
|
So they're very specific. The brain is also quite flexible and redundant so often it can correct things and take over functions from one part of the brain to the other, but really there are specific modules. |
|
|
|
24:33.440 --> 25:02.440 |
|
So the answer that we know from this old work, which was basically based on lesions, either on animals or very often there was a mine of very interesting data coming from the war, from different types of injuries that soldiers had in the brain. |
|
|
|
25:02.440 --> 25:23.440 |
|
And more recently, functional MRI, which allows you to check which parts of the brain are active when you're doing different tasks, has replaced some of this.
|
|
|
25:23.440 --> 25:32.440 |
|
You can see that certain parts of the brain are involved, are active in certain tasks. |
|
|
|
25:32.440 --> 26:01.440 |
|
But sort of taking a step back to that part of the brain that discovers that specializes in the face and how that might be learned, what's your intuition behind, you know, is it possible that the sort of from a physicist's perspective when you get lower and lower, that it's all the same stuff and it just, when you're born, it's plastic and it quickly figures out this part is going to be about vision, this is going to be about language, this is about common sense reasoning. |
|
|
|
26:01.440 --> 26:09.440 |
|
Do you have an intuition that that kind of learning is going on really quickly or is it really kind of solidified in hardware? |
|
|
|
26:09.440 --> 26:10.440 |
|
That's a great question. |
|
|
|
26:10.440 --> 26:21.440 |
|
So there are parts of the brain like the cerebellum or the hippocampus that are quite different from each other. |
|
|
|
26:21.440 --> 26:25.440 |
|
They clearly have different anatomy, different connectivity. |
|
|
|
26:25.440 --> 26:35.440 |
|
Then there is the cortex, which is the most developed part of the brain in humans. |
|
|
|
26:35.440 --> 26:47.440 |
|
And in the cortex, you have different regions of the cortex that are responsible for vision, for audition, for motor control, for language. |
|
|
|
26:47.440 --> 27:07.440 |
|
Now, one of the big puzzles of this is that in the cortex, it looks like it is the same in terms of hardware, in terms of type of neurons and connectivity across these different modalities. |
|
|
|
27:07.440 --> 27:17.440 |
|
So for the cortex, leaving aside these other parts of the brain like the spinal cord, hippocampus, cerebellum and so on.
|
|
|
27:17.440 --> 27:28.440 |
|
For the cortex, I think your question about hardware and software and learning and so on, I think is rather open. |
|
|
|
27:28.440 --> 27:40.440 |
|
And I find it very interesting for us to think about an architecture, computer architecture that is good for vision and at the same time is good for language. |
|
|
|
27:40.440 --> 27:48.440 |
|
They seem to be such different problem areas that you have to solve.
|
|
|
27:48.440 --> 27:54.440 |
|
But the underlying mechanism might be the same and that's really instructive for artificial neural networks. |
|
|
|
27:54.440 --> 28:00.440 |
|
So we've done a lot of great work in vision and human vision, computer vision. |
|
|
|
28:00.440 --> 28:07.440 |
|
And you mentioned the problem of human vision is really as difficult as the problem of general intelligence. |
|
|
|
28:07.440 --> 28:10.440 |
|
And maybe that connects to the cortex discussion. |
|
|
|
28:10.440 --> 28:21.440 |
|
Can you describe the human visual cortex and how the humans begin to understand the world through the raw sensory information? |
|
|
|
28:21.440 --> 28:36.440 |
|
So, for folks who are not familiar, especially on the computer vision side, we don't often actually take a step back, except saying with a sentence or two that one is inspired by the other.
|
|
|
28:36.440 --> 28:39.440 |
|
What is it that we know about the human visual cortex? |
|
|
|
28:39.440 --> 28:40.440 |
|
That's interesting. |
|
|
|
28:40.440 --> 28:53.440 |
|
So we know quite a bit; at the same time, we don't know a lot. But the bit we know, in a sense, we know a lot of the details, and many we don't know.
|
|
|
28:53.440 --> 29:05.440 |
|
And we know a lot at the top level, the answer to the top level question, but we don't know some basic ones, even in terms of general neuroscience, forgetting vision.
|
|
|
29:05.440 --> 29:11.440 |
|
You know, why do we sleep? It's such a basic question. |
|
|
|
29:11.440 --> 29:14.440 |
|
And we really don't have an answer to that. |
|
|
|
29:14.440 --> 29:18.440 |
|
So taking a step back on that. So sleep, for example, is fascinating. |
|
|
|
29:18.440 --> 29:21.440 |
|
Do you think that's a neuroscience question? |
|
|
|
29:21.440 --> 29:30.440 |
|
Or if we talk about abstractions, what do you think is an interesting way to study intelligence or most effective on the levels of abstraction? |
|
|
|
29:30.440 --> 29:37.440 |
|
Is it chemical, is it biological, is it electrophysical, mathematical as you've done a lot of excellent work on that side? |
|
|
|
29:37.440 --> 29:42.440 |
|
Which psychology, sort of like at which level of abstraction do you think? |
|
|
|
29:42.440 --> 29:48.440 |
|
Well, in terms of levels of abstraction, I think we need all of them. |
|
|
|
29:48.440 --> 29:56.440 |
|
It's one, you know, it's like if you ask me, what does it mean to understand a computer? |
|
|
|
29:56.440 --> 30:04.440 |
|
That's much simpler. But in a computer, I could say, well, understand how to use PowerPoint. |
|
|
|
30:04.440 --> 30:13.440 |
|
That's my level of understanding a computer. It's reasonable, you know, it gives me some power to produce slides, and beautiful slides.
|
|
|
30:13.440 --> 30:28.440 |
|
And now somebody else says, well, I know how the transistors that are inside the computer work, I can write the equations for, you know, transistors and diodes and circuits, logical circuits.
|
|
|
30:28.440 --> 30:33.440 |
|
And I can ask this guy, do you know how to operate PowerPoint? No idea. |
|
|
|
30:33.440 --> 30:49.440 |
|
So, if we discovered computers walking amongst us, full of these transistors, that are also operating under Windows and have PowerPoint, digging in a little bit more:
|
|
|
30:49.440 --> 31:00.440 |
|
How useful is it to understand the transistor in order to be able to understand PowerPoint in these higher level intelligence processes? |
|
|
|
31:00.440 --> 31:12.440 |
|
So I think in the case of computers, because they were made by engineers, by us, these different levels of understanding are rather separate on purpose.
|
|
|
31:12.440 --> 31:23.440 |
|
You know, they are separate modules so that the engineer that designed the circuit for the chips does not need to know what is inside PowerPoint. |
|
|
|
31:23.440 --> 31:30.440 |
|
And somebody can write the software translating from one to the other. |
|
|
|
31:30.440 --> 31:40.440 |
|
So in that case, I don't think understanding the transistor helps you understand PowerPoint, or only very little.
|
|
|
31:40.440 --> 31:51.440 |
|
If you want to understand the computer, this question, you know, I would say you have to understand it at different levels if you really want to build one.
|
|
|
31:51.440 --> 32:09.440 |
|
But for the brain, I think these levels of understanding, so the algorithms, which kind of computation, you know, the equivalent of PowerPoint, and the circuits, you know, the transistors, I think they are much more intertwined with each other.
|
|
|
32:09.440 --> 32:15.440 |
|
There is not, you know, a neat level of the software separate from the hardware.
|
|
|
32:15.440 --> 32:29.440 |
|
And so that's why I think in the case of the brain, the problem is more difficult and, more than for computers, requires the interaction, the collaboration, between different types of expertise.
|
|
|
32:29.440 --> 32:35.440 |
|
So the brain is a big hierarchical mess that you can't just disentangle levels. |
|
|
|
32:35.440 --> 32:41.440 |
|
I think you can, but it's much more difficult and it's not completely obvious. |
|
|
|
32:41.440 --> 32:47.440 |
|
And as I said, I think it is the greatest problem in science.
|
|
|
32:47.440 --> 32:52.440 |
|
So, you know, I think it's fair that it's difficult. |
|
|
|
32:52.440 --> 32:53.440 |
|
That's a difficult one. |
|
|
|
32:53.440 --> 32:58.440 |
|
That said, you do talk about compositionality and why it might be useful. |
|
|
|
32:58.440 --> 33:07.440 |
|
And when you discuss why these neural networks in artificial or biological sense learn anything, you talk about compositionality. |
|
|
|
33:07.440 --> 33:22.440 |
|
See, there's a sense that nature can be disentangled or well, all aspects of our cognition could be disentangled a little to some degree. |
|
|
|
33:22.440 --> 33:31.440 |
|
So why do you think what, first of all, how do you see compositionality and why do you think it exists at all in nature? |
|
|
|
33:31.440 --> 33:39.440 |
|
I spoke about, I use the term compositionality. |
|
|
|
33:39.440 --> 33:54.440 |
|
When we looked at deep neural networks, multiple layers, and tried to understand when and why they are more powerful than more classical one layer networks,
|
|
|
33:54.440 --> 34:01.440 |
|
like linear classifiers, so-called kernel machines.
|
|
|
34:01.440 --> 34:12.440 |
|
And what we found is that in terms of approximating or learning or representing a function, a mapping from an input to an output, |
|
|
|
34:12.440 --> 34:20.440 |
|
like from an image to the label in the image, if this function has a particular structure, |
|
|
|
34:20.440 --> 34:28.440 |
|
then deep networks are much more powerful than shallow networks to approximate the underlying function. |
|
|
|
34:28.440 --> 34:33.440 |
|
And the particular structure is a structure of compositionality. |
|
|
|
34:33.440 --> 34:45.440 |
|
If the function is made up of functions of functions, so that when you are interpreting an image,
|
|
|
34:45.440 --> 34:56.440 |
|
classifying an image, you don't need to look at all pixels at once, but you can compute something from small groups of pixels, |
|
|
|
34:56.440 --> 35:04.440 |
|
and then you can compute something on the output of this local computation and so on. |
|
|
|
35:04.440 --> 35:10.440 |
|
It is similar to what you do when you read a sentence, you don't need to read the first and the last letter, |
|
|
|
35:10.440 --> 35:17.440 |
|
but you can read syllables, combine them in words, combine the words in sentences. |
|
|
|
35:17.440 --> 35:20.440 |
|
So this is this kind of structure. |
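To make that compositional structure concrete, here is a minimal sketch, assuming an 8-variable function built as a binary tree of two-input constituent functions (the particular `pair` function is a made-up placeholder):

```python
# A compositional function of 8 variables: functions of functions,
# evaluated locally, like syllables -> words -> sentence.
def pair(a, b):
    # arbitrary placeholder for a local two-input constituent function
    return a * b + a + b

def compositional_f(x):
    # x is a list of 8 numbers; each step only ever looks at 2 values
    layer1 = [pair(x[0], x[1]), pair(x[2], x[3]),
              pair(x[4], x[5]), pair(x[6], x[7])]
    layer2 = [pair(layer1[0], layer1[1]), pair(layer1[2], layer1[3])]
    return pair(layer2[0], layer2[1])

print(compositional_f([1, 2, 3, 4, 5, 6, 7, 8]))
```

A deep network with local connectivity can mirror this tree layer by layer, while a shallow network has to approximate the whole 8-variable map in one shot.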
|
|
|
35:20.440 --> 35:27.440 |
|
So that's part of a discussion of why deep neural networks may be more effective than the shallow methods.
|
|
|
35:27.440 --> 35:35.440 |
|
And is your sense for most things we can use neural networks for, |
|
|
|
35:35.440 --> 35:43.440 |
|
those problems are going to be compositional in nature, like language, like vision. |
|
|
|
35:43.440 --> 35:47.440 |
|
How far can we get in this kind of way? |
|
|
|
35:47.440 --> 35:51.440 |
|
So here is almost philosophy. |
|
|
|
35:51.440 --> 35:53.440 |
|
Well, let's go there. |
|
|
|
35:53.440 --> 35:55.440 |
|
Yeah, let's go there. |
|
|
|
35:55.440 --> 36:00.440 |
|
So a friend of mine, Max Tegmark, who is a physicist at MIT.
|
|
|
36:00.440 --> 36:02.440 |
|
I've talked to him on this thing. |
|
|
|
36:02.440 --> 36:04.440 |
|
Yeah, and he disagrees with you, right? |
|
|
|
36:04.440 --> 36:09.440 |
|
We agree on most, but the conclusion is a bit different. |
|
|
|
36:09.440 --> 36:14.440 |
|
His conclusion is that for images, for instance, |
|
|
|
36:14.440 --> 36:23.440 |
|
the compositional structure of this function that we have to learn or to solve these problems |
|
|
|
36:23.440 --> 36:35.440 |
|
comes from physics, comes from the fact that you have local interactions in physics between atoms and other atoms, |
|
|
|
36:35.440 --> 36:42.440 |
|
between particle of matter and other particles, between planets and other planets, |
|
|
|
36:42.440 --> 36:44.440 |
|
between stars and others. |
|
|
|
36:44.440 --> 36:48.440 |
|
It's all local. |
|
|
|
36:48.440 --> 36:55.440 |
|
And that's true, but you could push this argument a bit further. |
|
|
|
36:55.440 --> 36:57.440 |
|
Not this argument, actually. |
|
|
|
36:57.440 --> 37:02.440 |
|
You could argue that, you know, maybe that's part of the truth,
|
|
|
37:02.440 --> 37:06.440 |
|
but maybe what happens is kind of the opposite, |
|
|
|
37:06.440 --> 37:11.440 |
|
is that our brain is wired up as a deep network. |
|
|
|
37:11.440 --> 37:22.440 |
|
So it can learn, understand, solve problems that have this compositional structure. |
|
|
|
37:22.440 --> 37:29.440 |
|
And it cannot solve problems that don't have this compositional structure. |
|
|
|
37:29.440 --> 37:37.440 |
|
So the problems we are accustomed to, we think about, we test our algorithms on, |
|
|
|
37:37.440 --> 37:42.440 |
|
have this compositional structure because of the way our brain is made up.
|
|
|
37:42.440 --> 37:46.440 |
|
And that's, in a sense, an evolutionary perspective that we've... |
|
|
|
37:46.440 --> 37:54.440 |
|
So the ones that weren't dealing with the compositional nature of reality died off? |
|
|
|
37:54.440 --> 38:05.440 |
|
Yes, but also could be, maybe the reason why we have this local connectivity in the brain, |
|
|
|
38:05.440 --> 38:10.440 |
|
like simple cells in cortex looking only at the small part of the image, |
|
|
|
38:10.440 --> 38:16.440 |
|
each one of them, and then other cells looking at the small number of the simple cells and so on. |
|
|
|
38:16.440 --> 38:24.440 |
|
The reason for this may be purely that it was difficult to grow long range connectivity. |
|
|
|
38:24.440 --> 38:33.440 |
|
So suppose it's, you know, for biology, it's possible to grow short range connectivity, |
|
|
|
38:33.440 --> 38:39.440 |
|
but not long range, also because there is a limited number of long range connections.
|
|
|
38:39.440 --> 38:44.440 |
|
And so you have this limitation from the biology. |
|
|
|
38:44.440 --> 38:49.440 |
|
And this means you build a deep convolutional network. |
|
|
|
38:49.440 --> 38:53.440 |
|
This would be something like a deep convolutional network. |
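As a rough illustration of why short-range wiring is so much cheaper, a back-of-the-envelope parameter count with made-up layer sizes (a 200 by 200 single-channel input, 64 feature maps, 3 by 3 receptive fields):

```python
# Locally connected with shared weights (convolutional) vs fully connected,
# for one layer on a 200x200 single-channel image producing 64 feature maps.
h, w = 200, 200
in_ch, out_ch, k = 1, 64, 3                        # 3x3 local receptive fields

conv_params = out_ch * in_ch * k * k + out_ch      # weights shared across positions
dense_params = (h * w * in_ch) * (h * w * out_ch)  # every output sees every pixel

print(f"convolutional: ~{conv_params:,} parameters")
print(f"fully connected: ~{dense_params:,} parameters")
```

Local connectivity is orders of magnitude cheaper, which is consistent with the biological constraint being discussed here.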
|
|
|
38:53.440 --> 38:57.440 |
|
And this is great for solving certain class of problems. |
|
|
|
38:57.440 --> 39:02.440 |
|
These are the ones we find easy and important for our life. |
|
|
|
39:02.440 --> 39:06.440 |
|
And yes, they were enough for us to survive. |
|
|
|
39:06.440 --> 39:13.440 |
|
And you can start a successful business on solving those problems, with Mobileye.
|
|
|
39:13.440 --> 39:16.440 |
|
Driving is a compositional problem. |
|
|
|
39:16.440 --> 39:25.440 |
|
So on the learning task, we don't know much about how the brain learns in terms of optimization. |
|
|
|
39:25.440 --> 39:31.440 |
|
So the thing is that stochastic gradient descent is what artificial neural networks
|
|
|
39:31.440 --> 39:38.440 |
|
use for the most part to adjust the parameters in such a way that,
|
|
|
39:38.440 --> 39:42.440 |
|
based on the labeled data, it's able to solve the problem. |
|
|
|
39:42.440 --> 39:49.440 |
|
So what's your intuition about why it works at all? |
|
|
|
39:49.440 --> 39:55.440 |
|
How hard of a problem it is to optimize a neural network, artificial neural network? |
|
|
|
39:55.440 --> 39:57.440 |
|
Is there other alternatives? |
|
|
|
39:57.440 --> 40:03.440 |
|
Just in general, what's your intuition behind this very simplistic algorithm
|
|
|
40:03.440 --> 40:05.440 |
|
that seems to do pretty well, surprisingly.
|
|
|
40:05.440 --> 40:07.440 |
|
Yes, yes. |
|
|
|
40:07.440 --> 40:16.440 |
|
So I find, in neuroscience, the architecture of the cortex is really similar to the architecture of deep networks.
|
|
|
40:16.440 --> 40:26.440 |
|
So there is a nice correspondence there between the biology and this kind of local connectivity hierarchical |
|
|
|
40:26.440 --> 40:28.440 |
|
architecture. |
|
|
|
40:28.440 --> 40:35.440 |
|
The stochastic gradient descent, as you said, is a very simple technique. |
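For reference, the very simple technique being referred to, in a minimal toy form (one weight, a made-up linear regression problem, arbitrary step size):

```python
import random

# Toy data: y = 3*x + noise; fit y ~ w*x with stochastic gradient descent.
data = [(i / 100, 3.0 * (i / 100) + random.gauss(0, 0.1)) for i in range(100)]

w, lr = 0.0, 0.1
for epoch in range(50):
    random.shuffle(data)
    for x, y in data:                  # one randomly picked example at a time
        grad = 2 * (w * x - y) * x     # gradient of this example's squared error
        w -= lr * grad                 # small step against the gradient
print(w)                               # ends up close to 3
```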
|
|
|
40:35.440 --> 40:49.440 |
|
It seems pretty unlikely that biology could do that from what we know right now about cortex and neurons and synapses. |
|
|
|
40:49.440 --> 40:58.440 |
|
So it's a big open question whether there are other optimization learning algorithms
|
|
|
40:58.440 --> 41:02.440 |
|
that can replace stochastic gradient descent. |
|
|
|
41:02.440 --> 41:11.440 |
|
And my guess is yes, but nobody has found yet a real answer. |
|
|
|
41:11.440 --> 41:17.440 |
|
I mean, people are trying, still trying, and there are some interesting ideas. |
|
|
|
41:17.440 --> 41:27.440 |
|
The fact that stochastic gradient descent is so successful, this has become clear, is not so mysterious.
|
|
|
41:27.440 --> 41:39.440 |
|
And the reason, it's an interesting fact, you know, is a change, in a sense, in how people think about statistics.
|
|
|
41:39.440 --> 41:51.440 |
|
And it is the following: typically, when you had data and you had, say, a model with parameters,
|
|
|
41:51.440 --> 41:55.440 |
|
you are trying to fit the model to the data, you know, to fit the parameter. |
|
|
|
41:55.440 --> 42:12.440 |
|
And typically the kind of crowd wisdom type idea was that you should have at least, you know, twice as many data points as parameters.
|
|
|
42:12.440 --> 42:15.440 |
|
Maybe 10 times is better. |
|
|
|
42:15.440 --> 42:24.440 |
|
Now, the way you train neural networks these days is that they have 10 or 100 times more parameters than data.
|
|
|
42:24.440 --> 42:26.440 |
|
Exactly the opposite. |
|
|
|
42:26.440 --> 42:34.440 |
|
Which, you know, has been one of the puzzles about neural networks.
|
|
|
42:34.440 --> 42:40.440 |
|
How can you get something that really works when you have so much freedom? |
|
|
|
42:40.440 --> 42:43.440 |
|
From that little data you can generalize somehow. |
|
|
|
42:43.440 --> 42:44.440 |
|
Right, exactly. |
|
|
|
42:44.440 --> 42:48.440 |
|
Do you think the stochastic nature of it is essential, the randomness? |
|
|
|
42:48.440 --> 43:00.440 |
|
I think we have some initial understanding of why this happens, but one nice side effect of having this over parameterization, more parameters than data,
|
|
|
43:00.440 --> 43:07.440 |
|
is that when you look for the minima of a loss function, like stochastic gradient descent is doing,
|
|
|
43:07.440 --> 43:19.440 |
|
you find, I made some calculations based on some old basic theorem of algebra called Bézout's theorem,
|
|
|
43:19.440 --> 43:25.440 |
|
And that gives you an estimate of the number of solutions of a system of polynomial equations.
|
|
|
43:25.440 --> 43:38.440 |
|
Anyway, the bottom line is that there are probably more minima for a typical deep network than atoms in the universe.
|
|
|
43:38.440 --> 43:43.440 |
|
Just to say there are a lot because of the over parameterization. |
|
|
|
43:43.440 --> 43:44.440 |
|
Yes. |
|
|
|
43:44.440 --> 43:48.440 |
|
More global minima, zero minima, good minima?
|
|
|
43:48.440 --> 43:51.440 |
|
More global minima.
|
|
|
43:51.440 --> 44:00.440 |
|
Yes, a lot of them, so you have a lot of solutions, so it's not so surprising that you can find them relatively easily. |
|
|
|
44:00.440 --> 44:04.440 |
|
This is because of the over parameterization. |
|
|
|
44:04.440 --> 44:09.440 |
|
The over parameterization sprinkles that entire space with solutions that are pretty good. |
|
|
|
44:09.440 --> 44:11.440 |
|
It's not so surprising, right? |
|
|
|
44:11.440 --> 44:17.440 |
|
It's like if you have a system of linear equation and you have more unknowns than equations, |
|
|
|
44:17.440 --> 44:24.440 |
|
then we know you have an infinite number of solutions and the question is to pick one. |
|
|
|
44:24.440 --> 44:27.440 |
|
That's another story, but you have an infinite number of solutions, |
|
|
|
44:27.440 --> 44:32.440 |
|
so there are a lot of values of your unknowns that satisfy the equations.
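A tiny numerical version of that analogy, with a made-up system of 2 equations in 4 unknowns, assuming NumPy is available:

```python
import numpy as np

# Underdetermined system: 2 equations, 4 unknowns, so a whole family of solutions.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0, 1.0]])
b = np.array([4.0, 5.0])

x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution

# Any direction in the null space of A can be added without breaking A @ x = b.
_, _, vt = np.linalg.svd(A)
null_direction = vt[-1]

for t in (0.0, 1.0, -2.5):
    x = x_min_norm + t * null_direction
    print(np.allclose(A @ x, b))        # True for every t: many exact solutions
```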
|
|
|
44:32.440 --> 44:37.440 |
|
But it's possible that there's a lot of those solutions that aren't very good. |
|
|
|
44:37.440 --> 44:38.440 |
|
What's surprising is that they're pretty good. |
|
|
|
44:38.440 --> 44:39.440 |
|
So that's a separate question. |
|
|
|
44:39.440 --> 44:43.440 |
|
Why can you pick one that generalizes well?
|
|
|
44:43.440 --> 44:46.440 |
|
That's a separate question with separate answers. |
|
|
|
44:46.440 --> 44:53.440 |
|
One theorem that people like to talk about that inspires imagination of the power of neural networks |
|
|
|
44:53.440 --> 45:00.440 |
|
is the universal approximation theorem that you can approximate any computable function |
|
|
|
45:00.440 --> 45:04.440 |
|
with just a finite number of neurons and a single hidden layer. |
|
|
|
45:04.440 --> 45:07.440 |
|
Do you find this theorem one surprising? |
|
|
|
45:07.440 --> 45:12.440 |
|
Do you find it useful, interesting, inspiring? |
|
|
|
45:12.440 --> 45:16.440 |
|
No, this one, I never found it very surprising. |
|
|
|
45:16.440 --> 45:22.440 |
|
It was known since the 80s, since I entered the field, |
|
|
|
45:22.440 --> 45:27.440 |
|
because it's basically the same as Weierstrass's theorem,
|
|
|
45:27.440 --> 45:34.440 |
|
which says that I can approximate any continuous function with a polynomial,
|
|
|
45:34.440 --> 45:37.440 |
|
with a sufficient number of terms, monomials. |
|
|
|
45:37.440 --> 45:41.440 |
|
It's basically the same, and the proofs are very similar. |
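A quick numerical illustration of that Weierstrass-style statement, using an arbitrary continuous target function and a plain least-squares polynomial fit:

```python
import numpy as np

# Approximate a continuous function on [0, 1] with polynomials of growing degree;
# the worst-case error on the interval shrinks as terms (monomials) are added.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)

for degree in (1, 3, 5, 9):
    coeffs = np.polyfit(x, y, degree)              # least-squares fit
    max_err = np.max(np.abs(np.polyval(coeffs, x) - y))
    print(f"degree {degree}: max error ~ {max_err:.3f}")
```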
|
|
|
45:41.440 --> 45:48.440 |
|
So your intuition was there was never any doubt that neural networks in theory could be very strong approximations. |
|
|
|
45:48.440 --> 45:58.440 |
|
The interesting question is that if this theorem says you can approximate fine, |
|
|
|
45:58.440 --> 46:06.440 |
|
but when you ask how many neurons, for instance, or, in the other case, how many monomials,
|
|
|
46:06.440 --> 46:11.440 |
|
I need to get a good approximation. |
|
|
|
46:11.440 --> 46:20.440 |
|
Then it turns out that that depends on the dimensionality of your function, how many variables you have. |
|
|
|
46:20.440 --> 46:25.440 |
|
But it depends on the dimensionality of your function in a bad way. |
|
|
|
46:25.440 --> 46:35.440 |
|
For instance, suppose you want an error which is no worse than 10% in your approximation. |
|
|
|
46:35.440 --> 46:40.440 |
|
If you want to approximate your function within 10%, |
|
|
|
46:40.440 --> 46:48.440 |
|
then it turns out that the number of units you need is on the order of 10 to the dimensionality, d.
|
|
|
46:48.440 --> 46:50.440 |
|
How many variables? |
|
|
|
46:50.440 --> 46:57.440 |
|
So if you have two variables, d is 2, and you need 100 units, and OK.
|
|
|
46:57.440 --> 47:02.440 |
|
But if you have, say, 200 by 200 pixel images, |
|
|
|
47:02.440 --> 47:06.440 |
|
now this is 40,000, whatever. |
|
|
|
47:06.440 --> 47:09.440 |
|
We again go to the size of the universe pretty quickly. |
|
|
|
47:09.440 --> 47:13.440 |
|
Exactly, 10 to the 40,000 or something. |
|
|
|
47:13.440 --> 47:21.440 |
|
And so this is called the curse of dimensionality, not quite appropriately. |
|
|
|
47:21.440 --> 47:27.440 |
|
And the hope is with the extra layers you can remove the curse. |
|
|
|
47:27.440 --> 47:34.440 |
|
What we proved is that if you have deep layers or hierarchical architecture |
|
|
|
47:34.440 --> 47:39.440 |
|
with the local connectivity of the type of convolutional deep learning, |
|
|
|
47:39.440 --> 47:46.440 |
|
and if you're dealing with a function that has this kind of hierarchical architecture, |
|
|
|
47:46.440 --> 47:50.440 |
|
then you avoid completely the curse. |
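Putting rough numbers on this, assuming the shallow count scales like (1/epsilon)^d as stated above, and assuming (as in the related results on compositional functions) that the deep, hierarchical count scales roughly like d times (1/epsilon)^2, with all constants ignored:

```python
import math

# Back-of-the-envelope unit counts for roughly 10% accuracy (eps = 0.1).
# shallow ~ (1/eps)**d       (the 10**d figure mentioned above)
# deep    ~ d * (1/eps)**2   (assumed scaling for compositional functions)
eps = 0.1
for d in (2, 100, 40_000):                     # 40,000 ~ a 200x200 pixel image
    shallow_log10 = d * math.log10(1.0 / eps)  # report as a power of ten
    deep = d * (1.0 / eps) ** 2
    print(f"d = {d}: shallow ~ 10^{shallow_log10:.0f} units, deep ~ {deep:,.0f} units")
```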
|
|
|
47:50.440 --> 47:53.440 |
|
You've spoken a lot about supervised deep learning. |
|
|
|
47:53.440 --> 47:58.440 |
|
What are your thoughts, hopes, views on the challenges of unsupervised learning |
|
|
|
47:58.440 --> 48:04.440 |
|
with GANs, with generative adversarial networks? |
|
|
|
48:04.440 --> 48:08.440 |
|
Do you see those as distinct, the power of GANs, |
|
|
|
48:08.440 --> 48:12.440 |
|
do you see those as distinct from supervised methods in neural networks, |
|
|
|
48:12.440 --> 48:16.440 |
|
or are they really all in the same representation ballpark? |
|
|
|
48:16.440 --> 48:24.440 |
|
GANs are one way to get an estimation of probability densities,
|
|
|
48:24.440 --> 48:29.440 |
|
which is a somewhat new way that people have not done before. |
|
|
|
48:29.440 --> 48:38.440 |
|
I don't know whether this will really play an important role in intelligence, |
|
|
|
48:38.440 --> 48:47.440 |
|
or not. It's interesting, but I'm less enthusiastic about it than many people in the field.
|
|
|
48:47.440 --> 48:53.440 |
|
I have the feeling that many people in the field are really impressed by the ability |
|
|
|
48:53.440 --> 49:00.440 |
|
of producing realistic looking images in this generative way. |
|
|
|
49:00.440 --> 49:02.440 |
|
Which explains the popularity of the methods,
|
|
|
49:02.440 --> 49:10.440 |
|
but you're saying that while that's exciting and cool to look at, it may not be the tool that's useful for it. |
|
|
|
49:10.440 --> 49:12.440 |
|
So you describe it kind of beautifully. |
|
|
|
49:12.440 --> 49:17.440 |
|
Current supervised methods go N to infinity in terms of the number of labeled points, |
|
|
|
49:17.440 --> 49:20.440 |
|
and we really have to figure out how to go to N to 1. |
|
|
|
49:20.440 --> 49:24.440 |
|
And you're thinking GANs might help, but they might not be the right... |
|
|
|
49:24.440 --> 49:28.440 |
|
I don't think for that problem, which I really think is important. |
|
|
|
49:28.440 --> 49:35.440 |
|
I think they certainly have applications, for instance, in computer graphics. |
|
|
|
49:35.440 --> 49:43.440 |
|
I did work long ago, which was a little bit similar in terms of, |
|
|
|
49:43.440 --> 49:49.440 |
|
saying I have a network and I present images, |
|
|
|
49:49.440 --> 49:59.440 |
|
so the input is images and the output is, for instance, the pose of the image, a face, how much it is smiling,
|
|
|
49:59.440 --> 50:02.440 |
|
is rotated 45 degrees or not. |
|
|
|
50:02.440 --> 50:08.440 |
|
What about having a network that I train with the same data set, |
|
|
|
50:08.440 --> 50:10.440 |
|
but now I invert input and output. |
|
|
|
50:10.440 --> 50:16.440 |
|
Now the input is the pose or the expression, a number, certain numbers, |
|
|
|
50:16.440 --> 50:19.440 |
|
and the output is the image and I train it. |
|
|
|
50:19.440 --> 50:27.440 |
|
And we got pretty good, interesting results in terms of producing very realistic looking images.
|
|
|
50:27.440 --> 50:35.440 |
|
It was a less sophisticated mechanism than GANs,
|
|
|
50:35.440 --> 50:38.440 |
|
but the output was pretty much of the same quality. |
|
|
|
50:38.440 --> 50:43.440 |
|
So I think for computer graphics type application, |
|
|
|
50:43.440 --> 50:48.440 |
|
definitely GANs can be quite useful and not only for that, |
|
|
|
50:48.440 --> 51:01.440 |
|
but for helping, for instance, on this unsupervised problem of reducing the number of labelled examples,
|
|
|
51:01.440 --> 51:10.440 |
|
I think people, it's like they think they can get out more than they put in. |
|
|
|
51:10.440 --> 51:13.440 |
|
There's no free lunch, as you said. |
|
|
|
51:13.440 --> 51:16.440 |
|
What's your intuition? |
|
|
|
51:16.440 --> 51:24.440 |
|
How can we slow the growth of N to infinity in supervised learning? |
|
|
|
51:24.440 --> 51:29.440 |
|
So, for example, Mobileye has very successfully, |
|
|
|
51:29.440 --> 51:34.440 |
|
I mean essentially annotated large amounts of data to be able to drive a car. |
|
|
|
51:34.440 --> 51:40.440 |
|
Now, one thought is, so we're trying to teach machines, the school of AI, |
|
|
|
51:40.440 --> 51:45.440 |
|
and we're trying to, so how can we become better teachers, maybe? |
|
|
|
51:45.440 --> 51:47.440 |
|
That's one way. |
|
|
|
51:47.440 --> 51:58.440 |
|
I like that because, again, one caricature of the history of computer science, |
|
|
|
51:58.440 --> 52:09.440 |
|
it begins with programmers, expensive, continues with labellers, cheap,
|
|
|
52:09.440 --> 52:16.440 |
|
and the future would be schools, like we have for kids. |
|
|
|
52:16.440 --> 52:26.440 |
|
Currently, the labelling methods, we're not selective about which examples we teach networks with. |
|
|
|
52:26.440 --> 52:33.440 |
|
I think the focus of making networks that learn much faster is often on the architecture side, |
|
|
|
52:33.440 --> 52:37.440 |
|
but how can we pick better examples with which to learn? |
|
|
|
52:37.440 --> 52:39.440 |
|
Do you have intuitions about that? |
|
|
|
52:39.440 --> 52:50.440 |
|
Well, that's part of the problem, but the other one is, if we look at biology, |
|
|
|
52:50.440 --> 52:58.440 |
|
the reasonable assumption, I think, is in the same spirit as I said, |
|
|
|
52:58.440 --> 53:03.440 |
|
evolution is opportunistic and has weak priors. |
|
|
|
53:03.440 --> 53:10.440 |
|
The way I think the intelligence of a child, a baby may develop, |
|
|
|
53:10.440 --> 53:17.440 |
|
is by bootstrapping weak priors from evolution. |
|
|
|
53:17.440 --> 53:26.440 |
|
For instance, you can assume that most organisms,
|
|
|
53:26.440 --> 53:37.440 |
|
including human babies, have built in some basic machinery to detect motion and relative motion.
|
|
|
53:37.440 --> 53:46.440 |
|
In fact, we know all insects, from fruit flies to other animals, they have this. |
|
|
|
53:46.440 --> 53:55.440 |
|
Even in the retinas, in the very peripheral part, it's very conserved across species, |
|
|
|
53:55.440 --> 53:58.440 |
|
something that evolution discovered early. |
|
|
|
53:58.440 --> 54:05.440 |
|
It may be the reason why babies tend to look, in the first few days, at moving objects, |
|
|
|
54:05.440 --> 54:07.440 |
|
and not at non-moving objects. |
|
|
|
54:07.440 --> 54:11.440 |
|
Now, moving objects means, okay, they're attracted by motion, |
|
|
|
54:11.440 --> 54:19.440 |
|
but motion also means that motion gives automatic segmentation from the background. |
|
|
|
54:19.440 --> 54:26.440 |
|
So because of motion boundaries, either the object is moving, |
|
|
|
54:26.440 --> 54:32.440 |
|
or the eye of the baby is tracking the moving object, and the background is moving. |
|
|
|
54:32.440 --> 54:37.440 |
|
Yeah, so just purely on the visual characteristics of the scene, that seems to be the most useful. |
|
|
|
54:37.440 --> 54:43.440 |
|
Right, so it's like looking at an object without background. |
|
|
|
54:43.440 --> 54:49.440 |
|
It's ideal for learning the object, otherwise it's really difficult, because you have so much stuff. |
|
|
|
54:49.440 --> 54:54.440 |
|
So suppose you do this at the beginning, first weeks, |
|
|
|
54:54.440 --> 55:01.440 |
|
then after that you can recognize the object, now they are imprinted, the number one, |
|
|
|
55:01.440 --> 55:05.440 |
|
even in the background, even without motion. |
|
|
|
55:05.440 --> 55:10.440 |
|
So that's the, by the way, I just want to ask on the object recognition problem, |
|
|
|
55:10.440 --> 55:16.440 |
|
so there is this being responsive to movement and doing edge detection, essentially. |
|
|
|
55:16.440 --> 55:20.440 |
|
What's the gap between effectively |
|
|
|
55:20.440 --> 55:27.440 |
|
visually recognizing stuff, detecting where it is, and understanding the scene? |
|
|
|
55:27.440 --> 55:32.440 |
|
Is this a huge gap in many layers, or is it close? |
|
|
|
55:32.440 --> 55:35.440 |
|
No, I think that's a huge gap. |
|
|
|
55:35.440 --> 55:44.440 |
|
I think present algorithms, with all the success that we have, and the fact that they are very useful, |
|
|
|
55:44.440 --> 55:51.440 |
|
I think we are in a golden age for applications of low level vision, |
|
|
|
55:51.440 --> 55:56.440 |
|
and low level speech recognition, and so on, you know, Alexa, and so on. |
|
|
|
55:56.440 --> 56:01.440 |
|
There are many more things of similar level to be done, including medical diagnosis and so on, |
|
|
|
56:01.440 --> 56:11.440 |
|
but we are far from what we call understanding of a scene, of language, of actions, of people. |
|
|
|
56:11.440 --> 56:17.440 |
|
That is, despite the claims, that's, I think, very far. |
|
|
|
56:17.440 --> 56:19.440 |
|
We're a little bit off. |
|
|
|
56:19.440 --> 56:24.440 |
|
So in popular culture, and among many researchers, some of which I've spoken with, |
|
|
|
56:24.440 --> 56:34.440 |
|
like Stuart Russell and Elon Musk, in and out of the AI field, there's a concern about the existential threat of AI. |
|
|
|
56:34.440 --> 56:44.440 |
|
And how do you think about this concern, and is it valuable to think about large scale, |
|
|
|
56:44.440 --> 56:51.440 |
|
long term, unintended consequences of intelligent systems we try to build? |
|
|
|
56:51.440 --> 56:58.440 |
|
I always think it's better to worry first, you know, early rather than late. |
|
|
|
56:58.440 --> 56:59.440 |
|
So worry is good. |
|
|
|
56:59.440 --> 57:02.440 |
|
Yeah, I'm not against worrying at all. |
|
|
|
57:02.440 --> 57:15.440 |
|
Personally, I think that, you know, it will take a long time before there is real reason to be worried. |
|
|
|
57:15.440 --> 57:23.440 |
|
But as I said, I think it's good to put in place and think about possible safety measures. |
|
|
|
57:23.440 --> 57:35.440 |
|
What I find a bit misleading are things that have been said by people I know, like Elon Musk and Bostrom in particular, |
|
|
|
57:35.440 --> 57:39.440 |
|
and what is his first name, Nick Bostrom, right? |
|
|
|
57:39.440 --> 57:46.440 |
|
And, you know, a couple of other people saying that, for instance, AI is more dangerous than nuclear weapons. |
|
|
|
57:46.440 --> 57:50.440 |
|
I think that's really wrong. |
|
|
|
57:50.440 --> 57:59.440 |
|
That can be misleading, because in terms of priority, we should still be more worried about nuclear weapons |
|
|
|
57:59.440 --> 58:05.440 |
|
and what people are doing about it and so on than AI. |
|
|
|
58:05.440 --> 58:15.440 |
|
And you've spoken about Demis Hassabis and yourself saying that you think it'll be about 100 years out |
|
|
|
58:15.440 --> 58:20.440 |
|
before we have a general intelligence system that's on par with a human being. |
|
|
|
58:20.440 --> 58:22.440 |
|
Do you have any updates for those predictions? |
|
|
|
58:22.440 --> 58:23.440 |
|
Well, I think he said... |
|
|
|
58:23.440 --> 58:25.440 |
|
He said 20, I think. |
|
|
|
58:25.440 --> 58:26.440 |
|
He said 20, right. |
|
|
|
58:26.440 --> 58:27.440 |
|
This was a couple of years ago. |
|
|
|
58:27.440 --> 58:31.440 |
|
I have not asked him again, so I should have. |
|
|
|
58:31.440 --> 58:38.440 |
|
Your own prediction, what's your prediction about when you'll be truly surprised |
|
|
|
58:38.440 --> 58:42.440 |
|
and what's the confidence interval on that? |
|
|
|
58:42.440 --> 58:46.440 |
|
You know, it's so difficult to predict the future and even the present. |
|
|
|
58:46.440 --> 58:48.440 |
|
It's pretty hard to predict. |
|
|
|
58:48.440 --> 58:50.440 |
|
Right, but I would be... |
|
|
|
58:50.440 --> 58:52.440 |
|
As I said, this is completely... |
|
|
|
58:52.440 --> 58:56.440 |
|
I would be more like Rod Brooks. |
|
|
|
58:56.440 --> 58:59.440 |
|
I think he said about 200 years. |
|
|
|
58:59.440 --> 59:01.440 |
|
200 years. |
|
|
|
59:01.440 --> 59:06.440 |
|
When we have this kind of AGI system, artificial intelligence system, |
|
|
|
59:06.440 --> 59:12.440 |
|
you're sitting in a room with her, him, it, |
|
|
|
59:12.440 --> 59:17.440 |
|
do you think the underlying design of such a system |
|
|
|
59:17.440 --> 59:19.440 |
|
will be something we'll be able to understand? |
|
|
|
59:19.440 --> 59:20.440 |
|
Will it be simple? |
|
|
|
59:20.440 --> 59:25.440 |
|
Do you think it will be explainable? |
|
|
|
59:25.440 --> 59:27.440 |
|
Understandable by us? |
|
|
|
59:27.440 --> 59:31.440 |
|
Your intuition, again, we're in the realm of philosophy a little bit. |
|
|
|
59:31.440 --> 59:35.440 |
|
Well, probably no. |
|
|
|
59:35.440 --> 59:42.440 |
|
But again, it depends what you really mean for understanding. |
|
|
|
59:42.440 --> 59:53.440 |
|
I think we don't understand how deep networks work. |
|
|
|
59:53.440 --> 59:56.440 |
|
I think we're beginning to have a theory now. |
|
|
|
59:56.440 --> 59:59.440 |
|
But in the case of deep networks, |
|
|
|
59:59.440 --> 1:00:06.440 |
|
or even in the case of the simpler kernel machines or linear classifiers, |
|
|
|
1:00:06.440 --> 1:00:12.440 |
|
we really don't understand the individual units or so. |
|
|
|
1:00:12.440 --> 1:00:20.440 |
|
But we understand what the computation and the limitations and the properties of it are. |
|
|
|
1:00:20.440 --> 1:00:24.440 |
|
It's similar to many things. |
|
|
|
1:00:24.440 --> 1:00:29.440 |
|
What does it mean to understand how a fusion bomb works? |
|
|
|
1:00:29.440 --> 1:00:35.440 |
|
How many of us? You know, many of us understand the basic principles, |
|
|
|
1:00:35.440 --> 1:00:40.440 |
|
and some of us may understand deeper details. |
|
|
|
1:00:40.440 --> 1:00:44.440 |
|
In that sense, understanding is, as a community, as a civilization, |
|
|
|
1:00:44.440 --> 1:00:46.440 |
|
can we build another copy of it? |
|
|
|
1:00:46.440 --> 1:00:47.440 |
|
Okay. |
|
|
|
1:00:47.440 --> 1:00:50.440 |
|
And in that sense, do you think there'll be, |
|
|
|
1:00:50.440 --> 1:00:56.440 |
|
there'll need to be some evolutionary component where it runs away from our understanding? |
|
|
|
1:00:56.440 --> 1:00:59.440 |
|
Or do you think it could be engineered from the ground up? |
|
|
|
1:00:59.440 --> 1:01:02.440 |
|
The same way you go from the transistor to PowerPoint? |
|
|
|
1:01:02.440 --> 1:01:03.440 |
|
Right. |
|
|
|
1:01:03.440 --> 1:01:09.440 |
|
So many years ago, this was actually 40, 41 years ago, |
|
|
|
1:01:09.440 --> 1:01:13.440 |
|
I wrote a paper with David Marr, |
|
|
|
1:01:13.440 --> 1:01:19.440 |
|
who was one of the founding fathers of computer vision, computational vision. |
|
|
|
1:01:19.440 --> 1:01:23.440 |
|
I wrote a paper about levels of understanding, |
|
|
|
1:01:23.440 --> 1:01:28.440 |
|
which is related to the question we discussed earlier about understanding PowerPoint, |
|
|
|
1:01:28.440 --> 1:01:31.440 |
|
understanding transistors and so on. |
|
|
|
1:01:31.440 --> 1:01:38.440 |
|
And, you know, in that kind of framework, we had a level of the hardware |
|
|
|
1:01:38.440 --> 1:01:41.440 |
|
and the top level of the algorithms. |
|
|
|
1:01:41.440 --> 1:01:44.440 |
|
We did not have learning. |
|
|
|
1:01:44.440 --> 1:01:54.440 |
|
Recently, I updated it, adding levels, and one level I added to those three was learning. |
|
|
|
1:01:54.440 --> 1:01:59.440 |
|
So, and you can imagine, you could have a good understanding |
|
|
|
1:01:59.440 --> 1:02:04.440 |
|
of how you construct a learning machine, like we do. |
|
|
|
1:02:04.440 --> 1:02:13.440 |
|
But be unable to describe in detail what the learning machines will discover, right? |
|
|
|
1:02:13.440 --> 1:02:19.440 |
|
Now, that would be still a powerful understanding if I can build a learning machine, |
|
|
|
1:02:19.440 --> 1:02:25.440 |
|
even if I don't understand in detail every time it learns something. |
|
|
|
1:02:25.440 --> 1:02:31.440 |
|
Just like our children, if they start listening to a certain type of music, |
|
|
|
1:02:31.440 --> 1:02:33.440 |
|
I don't know, Miley Cyrus or something, |
|
|
|
1:02:33.440 --> 1:02:37.440 |
|
you don't understand why they came to that particular preference, |
|
|
|
1:02:37.440 --> 1:02:39.440 |
|
but you understand the learning process. |
|
|
|
1:02:39.440 --> 1:02:41.440 |
|
That's very interesting. |
|
|
|
1:02:41.440 --> 1:02:50.440 |
|
So, on learning for systems to be part of our world, |
|
|
|
1:02:50.440 --> 1:02:56.440 |
|
one of the challenging things that you've spoken about is learning ethics, |
|
|
|
1:02:56.440 --> 1:02:59.440 |
|
learning morals. |
|
|
|
1:02:59.440 --> 1:03:06.440 |
|
And how hard do you think the problem is of, first of all, humans understanding our ethics? |
|
|
|
1:03:06.440 --> 1:03:10.440 |
|
What is the origin of ethics at the neural, low level? |
|
|
|
1:03:10.440 --> 1:03:12.440 |
|
What is it at the higher level? |
|
|
|
1:03:12.440 --> 1:03:17.440 |
|
Is it something that's learnable from machines in your intuition? |
|
|
|
1:03:17.440 --> 1:03:23.440 |
|
I think, yeah, ethics is learnable, very likely. |
|
|
|
1:03:23.440 --> 1:03:36.440 |
|
I think it's one of these problems where understanding the neuroscience of ethics matters. |
|
|
|
1:03:36.440 --> 1:03:42.440 |
|
You know, people discuss that there is an ethics of neuroscience, |
|
|
|
1:03:42.440 --> 1:03:46.440 |
|
how a neuroscientist should or should not behave; |
|
|
|
1:03:46.440 --> 1:03:53.440 |
|
you can think of a neurosurgeon and the ethics that he or she has to follow. |
|
|
|
1:03:53.440 --> 1:03:57.440 |
|
But I'm more interested in the neuroscience of ethics. |
|
|
|
1:03:57.440 --> 1:04:01.440 |
|
You're blowing my mind right now, the neuroscience of ethics, it's very meta. |
|
|
|
1:04:01.440 --> 1:04:09.440 |
|
And I think that would be important to understand also for being able to design machines |
|
|
|
1:04:09.440 --> 1:04:14.440 |
|
that are ethical machines in our sense of ethics. |
|
|
|
1:04:14.440 --> 1:04:20.440 |
|
And you think there is something in neuroscience, there's patterns, |
|
|
|
1:04:20.440 --> 1:04:25.440 |
|
tools in neuroscience that could help us shed some light on ethics |
|
|
|
1:04:25.440 --> 1:04:29.440 |
|
or is it more on the psychology and sociology side, at a much higher level? |
|
|
|
1:04:29.440 --> 1:04:33.440 |
|
No, there is psychology, but there is also, in the meantime, |
|
|
|
1:04:33.440 --> 1:04:41.440 |
|
there is evidence, fMRI, of specific areas of the brain |
|
|
|
1:04:41.440 --> 1:04:44.440 |
|
that are involved in certain ethical judgment. |
|
|
|
1:04:44.440 --> 1:04:49.440 |
|
And not only this, you can stimulate those areas with magnetic fields |
|
|
|
1:04:49.440 --> 1:04:54.440 |
|
and change the ethical decisions. |
|
|
|
1:04:54.440 --> 1:05:00.440 |
|
So that's work by a colleague of mine, Rebecca Saxe, |
|
|
|
1:05:00.440 --> 1:05:04.440 |
|
and there are other researchers doing similar work. |
|
|
|
1:05:04.440 --> 1:05:11.440 |
|
And I think this is the beginning, but ideally at some point |
|
|
|
1:05:11.440 --> 1:05:17.440 |
|
we'll have an understanding of how this works and why it evolved, right? |
|
|
|
1:05:17.440 --> 1:05:21.440 |
|
The big why question, yeah, it must have some purpose. |
|
|
|
1:05:21.440 --> 1:05:29.440 |
|
Yeah, obviously it has some social purposes, probably. |
|
|
|
1:05:29.440 --> 1:05:34.440 |
|
If neuroscience holds the key to at least illuminate some aspects of ethics, |
|
|
|
1:05:34.440 --> 1:05:36.440 |
|
that means it could be a learnable problem. |
|
|
|
1:05:36.440 --> 1:05:38.440 |
|
Yeah, exactly. |
|
|
|
1:05:38.440 --> 1:05:41.440 |
|
And as we're getting into harder and harder questions, |
|
|
|
1:05:41.440 --> 1:05:44.440 |
|
let's go to the hard problem of consciousness. |
|
|
|
1:05:44.440 --> 1:05:51.440 |
|
Is this an important problem for us to think about and solve on the engineering |
|
|
|
1:05:51.440 --> 1:05:55.440 |
|
of intelligence side of your work, of our dream? |
|
|
|
1:05:55.440 --> 1:05:57.440 |
|
You know, it's unclear. |
|
|
|
1:05:57.440 --> 1:06:04.440 |
|
So, again, this is a deep problem, partly because it's very difficult |
|
|
|
1:06:04.440 --> 1:06:16.440 |
|
to define consciousness and there is a debate among neuroscientists |
|
|
|
1:06:16.440 --> 1:06:22.440 |
|
and philosophers, of course, about whether |
|
|
|
1:06:22.440 --> 1:06:30.440 |
|
consciousness is something that requires flesh and blood, so to speak, |
|
|
|
1:06:30.440 --> 1:06:40.440 |
|
or whether, you know, we could have silicon devices that are conscious, |
|
|
|
1:06:40.440 --> 1:06:45.440 |
|
or up to a statement like everything has some degree of consciousness |
|
|
|
1:06:45.440 --> 1:06:48.440 |
|
and some more than others. |
|
|
|
1:06:48.440 --> 1:06:53.440 |
|
This is like Giulio Tononi and phi. |
|
|
|
1:06:53.440 --> 1:06:56.440 |
|
We just recently talked to Christof Koch. |
|
|
|
1:06:56.440 --> 1:07:00.440 |
|
Christof was my first graduate student. |
|
|
|
1:07:00.440 --> 1:07:06.440 |
|
Do you think it's important to illuminate aspects of consciousness |
|
|
|
1:07:06.440 --> 1:07:10.440 |
|
in order to engineer intelligence systems? |
|
|
|
1:07:10.440 --> 1:07:14.440 |
|
Do you think an intelligence system would ultimately have consciousness? |
|
|
|
1:07:14.440 --> 1:07:18.440 |
|
Are they interlinked? |
|
|
|
1:07:18.440 --> 1:07:23.440 |
|
You know, most of the people working in artificial intelligence, I think, |
|
|
|
1:07:23.440 --> 1:07:29.440 |
|
they answer, we don't strictly need consciousness to have an intelligent system. |
|
|
|
1:07:29.440 --> 1:07:35.440 |
|
That's sort of the easier question, because it's a very engineering answer to the question. |
|
|
|
1:07:35.440 --> 1:07:38.440 |
|
If it passes the Turing test, we don't need consciousness. |
|
|
|
1:07:38.440 --> 1:07:47.440 |
|
But if you were to go, do you think it's possible that we need to have that kind of self awareness? |
|
|
|
1:07:47.440 --> 1:07:49.440 |
|
We may, yes. |
|
|
|
1:07:49.440 --> 1:08:00.440 |
|
So, for instance, I personally think that when we test a machine or a person in a Turing test, |
|
|
|
1:08:00.440 --> 1:08:10.440 |
|
in an extended Turing test, I think consciousness is part of what we require in that test, |
|
|
|
1:08:10.440 --> 1:08:14.440 |
|
you know, implicitly to say that this is intelligent. |
|
|
|
1:08:14.440 --> 1:08:17.440 |
|
Christof disagrees. |
|
|
|
1:08:17.440 --> 1:08:19.440 |
|
Yes, he does. |
|
|
|
1:08:19.440 --> 1:08:24.440 |
|
Despite many other romantic notions he holds, he disagrees with that one. |
|
|
|
1:08:24.440 --> 1:08:26.440 |
|
Yes, that's right. |
|
|
|
1:08:26.440 --> 1:08:29.440 |
|
So, you know, we will see. |
|
|
|
1:08:29.440 --> 1:08:37.440 |
|
Do you think, as a quick question, Ernest Becker's fear of death, |
|
|
|
1:08:37.440 --> 1:08:48.440 |
|
do you think mortality and those kinds of things are important for consciousness and for intelligence, |
|
|
|
1:08:48.440 --> 1:08:53.440 |
|
the finiteness of life, finiteness of existence, |
|
|
|
1:08:53.440 --> 1:09:00.440 |
|
or is that just an evolutionary side effect that's useful for natural selection? |
|
|
|
1:09:00.440 --> 1:09:05.440 |
|
Do you think this kind of thing that this interview is going to run out of time soon, |
|
|
|
1:09:05.440 --> 1:09:08.440 |
|
our life will run out of time soon? |
|
|
|
1:09:08.440 --> 1:09:12.440 |
|
Do you think that's needed to make this conversation good and life good? |
|
|
|
1:09:12.440 --> 1:09:14.440 |
|
You know, I never thought about it. |
|
|
|
1:09:14.440 --> 1:09:16.440 |
|
It's a very interesting question. |
|
|
|
1:09:16.440 --> 1:09:25.440 |
|
I think Steve Jobs in his commencement speech at Stanford argued that, you know, |
|
|
|
1:09:25.440 --> 1:09:30.440 |
|
having a finite life was important for stimulating achievements. |
|
|
|
1:09:30.440 --> 1:09:32.440 |
|
It was a different. |
|
|
|
1:09:32.440 --> 1:09:34.440 |
|
You live every day like it's your last, right? |
|
|
|
1:09:34.440 --> 1:09:35.440 |
|
Yeah. |
|
|
|
1:09:35.440 --> 1:09:45.440 |
|
So, rationally, I don't think strictly you need mortality for consciousness, but... |
|
|
|
1:09:45.440 --> 1:09:46.440 |
|
Who knows? |
|
|
|
1:09:46.440 --> 1:09:49.440 |
|
They seem to go together in our biological system, right? |
|
|
|
1:09:49.440 --> 1:09:51.440 |
|
Yeah. |
|
|
|
1:09:51.440 --> 1:09:57.440 |
|
You've mentioned before, and the students you're associated with... |
|
|
|
1:09:57.440 --> 1:10:01.440 |
|
AlphaGo and Mobileye are the big recent success stories in AI. |
|
|
|
1:10:01.440 --> 1:10:05.440 |
|
I think they've captivated the entire world with what AI can do. |
|
|
|
1:10:05.440 --> 1:10:10.440 |
|
So, what do you think will be the next breakthrough? |
|
|
|
1:10:10.440 --> 1:10:13.440 |
|
What's your intuition about the next breakthrough? |
|
|
|
1:10:13.440 --> 1:10:16.440 |
|
Of course, I don't know where the next breakthrough is. |
|
|
|
1:10:16.440 --> 1:10:22.440 |
|
I think that there is a good chance, as I said before, that the next breakthrough |
|
|
|
1:10:22.440 --> 1:10:27.440 |
|
would also be inspired by, you know, neuroscience. |
|
|
|
1:10:27.440 --> 1:10:31.440 |
|
But which one? |
|
|
|
1:10:31.440 --> 1:10:32.440 |
|
I don't know. |
|
|
|
1:10:32.440 --> 1:10:33.440 |
|
And there's... |
|
|
|
1:10:33.440 --> 1:10:35.440 |
|
So, MIT has this quest for intelligence. |
|
|
|
1:10:35.440 --> 1:10:36.440 |
|
Yeah. |
|
|
|
1:10:36.440 --> 1:10:41.440 |
|
And there's a few moonshots which, in that spirit, which ones are you excited about? |
|
|
|
1:10:41.440 --> 1:10:42.440 |
|
What... |
|
|
|
1:10:42.440 --> 1:10:44.440 |
|
Which projects kind of... |
|
|
|
1:10:44.440 --> 1:10:48.440 |
|
Well, of course, I'm excited about one of the moonshots with... |
|
|
|
1:10:48.440 --> 1:10:52.440 |
|
Which is our Center for Brains, Minds, and Machines. |
|
|
|
1:10:52.440 --> 1:10:57.440 |
|
The one which is fully funded by NSF. |
|
|
|
1:10:57.440 --> 1:10:59.440 |
|
And it's a... |
|
|
|
1:10:59.440 --> 1:11:02.440 |
|
It is about visual intelligence. |
|
|
|
1:11:02.440 --> 1:11:05.440 |
|
And that one is particularly about understanding. |
|
|
|
1:11:05.440 --> 1:11:07.440 |
|
Visual intelligence. |
|
|
|
1:11:07.440 --> 1:11:16.440 |
|
Visual cortex and visual intelligence in the sense of how we look around ourselves |
|
|
|
1:11:16.440 --> 1:11:25.440 |
|
and understand the world around ourselves, you know, meaning what is going on, |
|
|
|
1:11:25.440 --> 1:11:31.440 |
|
how we could go from here to there without hitting obstacles. |
|
|
|
1:11:31.440 --> 1:11:36.440 |
|
You know, whether there are other agents, people in the environment. |
|
|
|
1:11:36.440 --> 1:11:41.440 |
|
These are all things that we perceive very quickly. |
|
|
|
1:11:41.440 --> 1:11:47.440 |
|
And it's something actually quite close to being conscious, not quite. |
|
|
|
1:11:47.440 --> 1:11:53.440 |
|
But there is this interesting experiment that was run at Google X, |
|
|
|
1:11:53.440 --> 1:11:58.440 |
|
which, in a sense, is just a virtual reality experiment, |
|
|
|
1:11:58.440 --> 1:12:09.440 |
|
but in which they had a subject sitting, say, in a chair with goggles, like an Oculus, and so on. |
|
|
|
1:12:09.440 --> 1:12:11.440 |
|
Earphones. |
|
|
|
1:12:11.440 --> 1:12:20.440 |
|
And they were seeing through the eyes of a robot nearby: two cameras, microphones for receiving. |
|
|
|
1:12:20.440 --> 1:12:23.440 |
|
So their sensory system was there. |
|
|
|
1:12:23.440 --> 1:12:30.440 |
|
And the impression of all the subjects, very strong, they could not shake it off, |
|
|
|
1:12:30.440 --> 1:12:35.440 |
|
was that they were where the robot was. |
|
|
|
1:12:35.440 --> 1:12:42.440 |
|
They could look at themselves from the robot and still feel they were where the robot is. |
|
|
|
1:12:42.440 --> 1:12:45.440 |
|
They were looking at their body. |
|
|
|
1:12:45.440 --> 1:12:48.440 |
|
Their self had moved. |
|
|
|
1:12:48.440 --> 1:12:54.440 |
|
So some aspect of scene understanding has to include the ability to place yourself, |
|
|
|
1:12:54.440 --> 1:12:59.440 |
|
to have self-awareness about your position in the world and what the world is. |
|
|
|
1:12:59.440 --> 1:13:04.440 |
|
So we may have to solve the hard problem of consciousness to solve it. |
|
|
|
1:13:04.440 --> 1:13:05.440 |
|
On their way, yes. |
|
|
|
1:13:05.440 --> 1:13:07.440 |
|
It's quite a moonshot. |
|
|
|
1:13:07.440 --> 1:13:14.440 |
|
So you've been an advisor to some incredible minds, including Demis Hassabis, Christof Koch, |
|
|
|
1:13:14.440 --> 1:13:21.440 |
|
Amnon Shashua, who, like you said, all went on to become seminal figures in their respective fields. |
|
|
|
1:13:21.440 --> 1:13:28.440 |
|
From your own success as a researcher and from your perspective as a mentor of these researchers, |
|
|
|
1:13:28.440 --> 1:13:33.440 |
|
having guided them in the way of advice, |
|
|
|
1:13:33.440 --> 1:13:39.440 |
|
what does it take to be successful in science and engineering careers? |
|
|
|
1:13:39.440 --> 1:13:47.440 |
|
Whether you're talking to somebody in their teens, 20s and 30s, what does that path look like? |
|
|
|
1:13:47.440 --> 1:13:52.440 |
|
It's curiosity and having fun. |
|
|
|
1:13:52.440 --> 1:14:01.440 |
|
And I think it's important also having fun with other curious minds. |
|
|
|
1:14:01.440 --> 1:14:06.440 |
|
It's the people you surround yourself with, to have fun and curiosity. |
|
|
|
1:14:06.440 --> 1:14:09.440 |
|
You mentioned Steve Jobs. |
|
|
|
1:14:09.440 --> 1:14:14.440 |
|
Is there also an underlying ambition that's unique that you saw, |
|
|
|
1:14:14.440 --> 1:14:18.440 |
|
or does it really boil down to insatiable curiosity and fun? |
|
|
|
1:14:18.440 --> 1:14:20.440 |
|
Well, of course. |
|
|
|
1:14:20.440 --> 1:14:29.440 |
|
It's being curious in an active and ambitious way, yes, definitely. |
|
|
|
1:14:29.440 --> 1:14:38.440 |
|
But I think sometimes in science, there are friends of mine who are like this. |
|
|
|
1:14:38.440 --> 1:14:44.440 |
|
You know, there are some of the scientists who like to work by themselves |
|
|
|
1:14:44.440 --> 1:14:54.440 |
|
and kind of communicate only when they complete their work or discover something. |
|
|
|
1:14:54.440 --> 1:15:02.440 |
|
I think I always found the actual process of discovering something |
|
|
|
1:15:02.440 --> 1:15:09.440 |
|
is more fun if it's together with other intelligent and curious and fun people. |
|
|
|
1:15:09.440 --> 1:15:13.440 |
|
So if you see the fun in that process, the side effect of that process |
|
|
|
1:15:13.440 --> 1:15:16.440 |
|
would be that you'll actually end up discovering something. |
|
|
|
1:15:16.440 --> 1:15:25.440 |
|
So as you've led many incredible efforts here, what's the secret to being a good advisor, |
|
|
|
1:15:25.440 --> 1:15:28.440 |
|
mentor, leader in a research setting? |
|
|
|
1:15:28.440 --> 1:15:35.440 |
|
Is it a similar spirit or what advice could you give to people, young faculty and so on? |
|
|
|
1:15:35.440 --> 1:15:42.440 |
|
It's partly repeating what I said about an environment that should be friendly and fun |
|
|
|
1:15:42.440 --> 1:15:52.440 |
|
and ambitious and, you know, I think I learned a lot from some of my advisors and friends |
|
|
|
1:15:52.440 --> 1:16:02.440 |
|
and some were physicists and there was, for instance, this behavior that was encouraged |
|
|
|
1:16:02.440 --> 1:16:08.440 |
|
of when somebody comes with a new idea in the group, unless it's really stupid |
|
|
|
1:16:08.440 --> 1:16:11.440 |
|
you are always enthusiastic. |
|
|
|
1:16:11.440 --> 1:16:14.440 |
|
And then you're enthusiastic for a few minutes, for a few hours. |
|
|
|
1:16:14.440 --> 1:16:22.440 |
|
Then you start, you know, asking critically a few questions, testing this. |
|
|
|
1:16:22.440 --> 1:16:28.440 |
|
But, you know, this is a process that is, I think it's very good. |
|
|
|
1:16:28.440 --> 1:16:30.440 |
|
You have to be enthusiastic. |
|
|
|
1:16:30.440 --> 1:16:33.440 |
|
Sometimes people are very critical from the beginning. |
|
|
|
1:16:33.440 --> 1:16:35.440 |
|
That's not... |
|
|
|
1:16:35.440 --> 1:16:37.440 |
|
Yes, you have to give it a chance. |
|
|
|
1:16:37.440 --> 1:16:38.440 |
|
Yes. |
|
|
|
1:16:38.440 --> 1:16:39.440 |
|
For that seed to grow. |
|
|
|
1:16:39.440 --> 1:16:44.440 |
|
That said, with some of your ideas, which are quite revolutionary, as I've witnessed, |
|
|
|
1:16:44.440 --> 1:16:49.440 |
|
especially in the human vision side and neuroscience side, there could be some pretty heated arguments. |
|
|
|
1:16:49.440 --> 1:16:51.440 |
|
Do you enjoy these? |
|
|
|
1:16:51.440 --> 1:16:55.440 |
|
Is that a part of science and academic pursuits that you enjoy? |
|
|
|
1:16:55.440 --> 1:16:56.440 |
|
Yeah. |
|
|
|
1:16:56.440 --> 1:17:00.440 |
|
Is that something that happens in your group as well? |
|
|
|
1:17:00.440 --> 1:17:02.440 |
|
Yeah, absolutely. |
|
|
|
1:17:02.440 --> 1:17:14.440 |
|
I also spent some time in Germany; again, there is this tradition in which people are more forthright, less kind than here. |
|
|
|
1:17:14.440 --> 1:17:23.440 |
|
So, you know, in the US, when you write a bad letter, you still say, this guy is nice, you know. |
|
|
|
1:17:23.440 --> 1:17:25.440 |
|
Yes, yes. |
|
|
|
1:17:25.440 --> 1:17:26.440 |
|
So... |
|
|
|
1:17:26.440 --> 1:17:28.440 |
|
Yeah, here in America it's degrees of nice. |
|
|
|
1:17:28.440 --> 1:17:29.440 |
|
Yes. |
|
|
|
1:17:29.440 --> 1:17:31.440 |
|
It's all just degrees of nice, yeah. |
|
|
|
1:17:31.440 --> 1:17:44.440 |
|
Right, so as long as this does not become personal and it's really like, you know, a football game with its rules, that's great. |
|
|
|
1:17:44.440 --> 1:17:46.440 |
|
It's fun. |
|
|
|
1:17:46.440 --> 1:17:58.440 |
|
So, if you somehow find yourself in a position to ask one question of an oracle, like a genie, maybe a god, and you're guaranteed to get a clear answer, |
|
|
|
1:17:58.440 --> 1:18:00.440 |
|
what kind of question would you ask? |
|
|
|
1:18:00.440 --> 1:18:03.440 |
|
What would be the question you would ask? |
|
|
|
1:18:03.440 --> 1:18:09.440 |
|
In the spirit of our discussion, it could be, how could I become ten times more intelligent? |
|
|
|
1:18:09.440 --> 1:18:15.440 |
|
And so, but see, you only get a clear short answer. |
|
|
|
1:18:15.440 --> 1:18:18.440 |
|
So, do you think there's a clear short answer to that? |
|
|
|
1:18:18.440 --> 1:18:19.440 |
|
No. |
|
|
|
1:18:19.440 --> 1:18:22.440 |
|
And that's the answer you'll get. |
|
|
|
1:18:22.440 --> 1:18:23.440 |
|
Okay. |
|
|
|
1:18:23.440 --> 1:18:26.440 |
|
So, you've mentioned Flowers for Algernon. |
|
|
|
1:18:26.440 --> 1:18:27.440 |
|
Oh, yeah. |
|
|
|
1:18:27.440 --> 1:18:32.440 |
|
There's a story that inspired you in your childhood. |
|
|
|
1:18:32.440 --> 1:18:48.440 |
|
It's this story of a mouse and a human achieving genius-level intelligence, and then understanding what was happening while slowly becoming not intelligent again, in this tragedy of gaining intelligence and losing intelligence. |
|
|
|
1:18:48.440 --> 1:18:59.440 |
|
Do you think in that spirit, in that story, do you think intelligence is a gift or a curse from the perspective of happiness and meaning of life? |
|
|
|
1:18:59.440 --> 1:19:10.440 |
|
You try to create an intelligent system that understands the universe, but on an individual level, the meaning of life, do you think intelligence is a gift? |
|
|
|
1:19:10.440 --> 1:19:16.440 |
|
It's a good question. |
|
|
|
1:19:16.440 --> 1:19:22.440 |
|
I don't know. |
|
|
|
1:19:22.440 --> 1:19:34.440 |
|
As one of the people considered among the smartest people in the world, in some dimension at the very least, what do you think? |
|
|
|
1:19:34.440 --> 1:19:35.440 |
|
I don't know. |
|
|
|
1:19:35.440 --> 1:19:39.440 |
|
It may be invariant to intelligence, the degree of happiness. |
|
|
|
1:19:39.440 --> 1:19:43.440 |
|
It would be nice if it were. |
|
|
|
1:19:43.440 --> 1:19:44.440 |
|
That's the hope. |
|
|
|
1:19:44.440 --> 1:19:45.440 |
|
Yeah. |
|
|
|
1:19:45.440 --> 1:19:49.440 |
|
You could be smart and happy and clueless and happy. |
|
|
|
1:19:49.440 --> 1:19:51.440 |
|
Yeah. |
|
|
|
1:19:51.440 --> 1:19:56.440 |
|
As always, the discussion of the meaning of life is probably a good place to end. |
|
|
|
1:19:56.440 --> 1:19:58.440 |
|
Tomaso, thank you so much for talking today. |
|
|
|
1:19:58.440 --> 1:19:59.440 |
|
Thank you. |
|
|
|
1:19:59.440 --> 1:20:19.440 |
|
This was great. |
|
|
|
|