WEBVTT
00:00.000 --> 00:03.040
The following is a conversation with Vladimir Vapnik.
00:03.040 --> 00:05.280
He's the co-inventor of Support Vector Machines,
00:05.280 --> 00:07.920
Support Vector Clustering, VC Theory,
00:07.920 --> 00:11.200
and many foundational ideas in statistical learning.
00:11.200 --> 00:13.640
He was born in the Soviet Union and worked
00:13.640 --> 00:16.320
at the Institute of Control Sciences in Moscow.
00:16.320 --> 00:20.640
Then in the United States, he worked at AT&T, NEC Labs,
00:20.640 --> 00:24.280
Facebook Research, and now as a professor at Columbia
00:24.280 --> 00:25.960
University.
00:25.960 --> 00:30.320
His work has been cited over 170,000 times.
00:30.320 --> 00:31.840
He has some very interesting ideas
00:31.840 --> 00:34.800
about artificial intelligence and the nature of learning,
00:34.800 --> 00:37.600
especially on the limits of our current approaches
00:37.600 --> 00:40.440
and the open problems in the field.
00:40.440 --> 00:42.520
This conversation is part of MIT course
00:42.520 --> 00:44.440
on artificial general intelligence
00:44.440 --> 00:46.840
and the Artificial Intelligence Podcast.
00:46.840 --> 00:49.600
If you enjoy it, please subscribe on YouTube
00:49.600 --> 00:53.040
or rate it on iTunes or your podcast provider of choice
00:53.040 --> 00:55.320
or simply connect with me on Twitter
00:55.320 --> 01:00.200
or other social networks at Lex Fridman, spelled F R I D.
01:00.200 --> 01:04.800
And now here's my conversation with Vladimir Vapnik.
01:04.800 --> 01:08.840
Einstein famously said that God doesn't play dice.
01:08.840 --> 01:10.000
Yeah.
01:10.000 --> 01:12.880
You have studied the world through the eyes of statistics.
01:12.880 --> 01:17.320
So let me ask you, in terms of the nature of reality,
01:17.320 --> 01:21.360
fundamental nature of reality, does God play dice?
01:21.360 --> 01:26.200
We don't know some factors, and because we
01:26.200 --> 01:30.520
don't know some factors, which could be important,
01:30.520 --> 01:38.000
it looks like God plays dice, but we should describe it.
01:38.000 --> 01:42.080
In philosophy, they distinguish between two positions,
01:42.080 --> 01:45.480
positions of instrumentalism, where
01:45.480 --> 01:48.720
you're creating theory for prediction
01:48.720 --> 01:51.400
and position of realism, where you're
01:51.400 --> 01:54.640
trying to understand God's thought.
01:54.640 --> 01:56.800
Can you describe instrumentalism and realism
01:56.800 --> 01:58.400
a little bit?
01:58.400 --> 02:06.320
For example, if you have some mechanical laws, what is that?
02:06.320 --> 02:11.480
Is it law which is true always and everywhere?
02:11.480 --> 02:14.880
Or is it a law which allows you to predict
02:14.880 --> 02:22.920
the position of a moving element? What do you believe?
02:22.920 --> 02:28.480
You believe that it is God's law, that God created the world,
02:28.480 --> 02:33.160
which obeys this physical law,
02:33.160 --> 02:36.240
or it is just law for predictions?
02:36.240 --> 02:38.400
And which one is instrumentalism?
02:38.400 --> 02:39.880
For predictions.
02:39.880 --> 02:45.400
If you believe that this is law of God, and it's always
02:45.400 --> 02:50.040
true everywhere, that means that you're a realist.
02:50.040 --> 02:55.480
So you're trying to really understand God's thought.
02:55.480 --> 03:00.040
So the way you see the world as an instrumentalist?
03:00.040 --> 03:03.240
You know, I'm working with some models,
03:03.240 --> 03:06.960
model of machine learning.
03:06.960 --> 03:12.760
So in this model, we consider a setting,
03:12.760 --> 03:16.440
and we try to solve, resolve the setting,
03:16.440 --> 03:18.240
to solve the problem.
03:18.240 --> 03:20.760
And you can do it in two different ways,
03:20.760 --> 03:23.840
from the point of view of instrumentalists.
03:23.840 --> 03:27.120
And that's what everybody does now,
03:27.120 --> 03:31.560
because they say that the goal of machine learning
03:31.560 --> 03:36.800
is to find the rule for classification.
03:36.800 --> 03:40.920
That is true, but it is an instrument for prediction.
03:40.920 --> 03:46.160
But I can say the goal of machine learning
03:46.160 --> 03:50.040
is to learn about conditional probability.
03:50.040 --> 03:54.440
So how does God play dice, and how does He play?
03:54.440 --> 03:55.960
What is probability for one?
03:55.960 --> 03:59.960
What is probability for another given situation?
03:59.960 --> 04:02.600
But for prediction, I don't need this.
04:02.600 --> 04:04.240
I need the rule.
04:04.240 --> 04:08.480
But for understanding, I need conditional probability.
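To make the two goals concrete, here is a minimal sketch in Python (my illustration, not from the conversation; dataset and model choices are assumptions): a support vector machine returns only a decision rule for prediction, while a probabilistic model estimates the conditional probability P(y | x).

```python
# A hedged sketch contrasting a decision rule with conditional probability.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Instrumentalist goal: a rule that predicts the label, nothing more.
rule = SVC(kernel="linear").fit(X, y)
print(rule.predict(X[:3]))               # hard labels only, e.g. [0 1 0]

# "Understanding" goal: an estimate of the conditional probability P(y=1 | x).
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # probabilities, e.g. [0.08 0.93 0.41]
```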
04:08.480 --> 04:11.800
So let me just step back a little bit first to talk about.
04:11.800 --> 04:13.960
You mentioned, which I read last night,
04:13.960 --> 04:21.280
the parts of the 1960 paper by Eugene Wigner,
04:21.280 --> 04:23.520
unreasonable effectiveness of mathematics
04:23.520 --> 04:24.880
in the natural sciences.
04:24.880 --> 04:29.400
Such a beautiful paper, by the way.
04:29.400 --> 04:34.480
It made me feel, to be honest, to confess, that my own work
04:34.480 --> 04:38.400
in the past few years on deep learning has been heavily applied.
04:38.400 --> 04:40.320
It made me feel that I was missing out
04:40.320 --> 04:43.960
on some of the beauty of nature in the way
04:43.960 --> 04:45.560
that math can uncover.
04:45.560 --> 04:50.360
So let me just step away from the poetry of that for a second.
04:50.360 --> 04:53.040
How do you see the role of math in your life?
04:53.040 --> 04:54.080
Is it a tool?
04:54.080 --> 04:55.840
Is it poetry?
04:55.840 --> 04:56.960
Where does it sit?
04:56.960 --> 05:01.400
And does math for you have limits of what it can describe?
05:01.400 --> 05:08.280
Some people say that math is a language which God uses.
05:08.280 --> 05:10.280
So I believe in that.
05:10.280 --> 05:12.000
Speak to God or use God.
05:12.000 --> 05:12.760
Or use God.
05:12.760 --> 05:14.080
Use God.
05:14.080 --> 05:15.560
Yeah.
05:15.560 --> 05:25.680
So I believe that this article about unreasonable
05:25.680 --> 05:29.960
effectiveness of math is that if you're
05:29.960 --> 05:33.960
looking at mathematical structures,
05:33.960 --> 05:37.720
they know something about reality.
05:37.720 --> 05:42.480
And most scientists from natural science,
05:42.480 --> 05:48.440
they're looking at equations and trying to understand reality.
05:48.440 --> 05:51.280
So the same in machine learning.
05:51.280 --> 05:57.560
If you try very carefully to look at all equations
05:57.560 --> 06:00.640
which define conditional probability,
06:00.640 --> 06:05.680
you can understand something about reality more
06:05.680 --> 06:08.160
than from your fantasy.
06:08.160 --> 06:12.480
So math can reveal the simple underlying principles
06:12.480 --> 06:13.880
of reality, perhaps.
06:13.880 --> 06:16.880
You know, what does simple mean?
06:16.880 --> 06:20.320
It is very hard to discover them.
06:20.320 --> 06:23.800
But then when you discover them and look at them,
06:23.800 --> 06:27.440
you see how beautiful they are.
06:27.440 --> 06:33.560
And it is surprising why people did not see that before.
06:33.560 --> 06:37.480
You're looking at equations and deriving from equations.
06:37.480 --> 06:43.360
For example, I talked yesterday about the least squares method.
06:43.360 --> 06:48.120
And people had a lot of fantasy how to improve the least squares method.
06:48.120 --> 06:52.360
But if you're going step by step by solving some equations,
06:52.360 --> 06:57.680
you suddenly will get some term which,
06:57.680 --> 07:01.040
after thinking, you understand that it describes
07:01.040 --> 07:04.360
the position of the observation point.
07:04.360 --> 07:08.240
In the least squares method, we throw out a lot of information.
07:08.240 --> 07:11.760
We don't look at the position of points of observation.
07:11.760 --> 07:14.600
We're looking only at residuals.
07:14.600 --> 07:19.400
But when you understood that, that's a very simple idea.
07:19.400 --> 07:22.320
But it's not too simple to understand.
07:22.320 --> 07:25.680
And you can derive this just from equations.
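A hedged sketch of that derivation's flavor (my construction, with made-up data): ordinary least squares looks only at residuals, but writing out the equations also produces the hat matrix, whose diagonal, the leverage, encodes the position of each observation point.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.uniform(0, 10, 20)])  # intercept + one feature
y = 2.0 + 0.5 * X[:, 1] + rng.normal(0, 1, 20)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares fit
residuals = y - X @ beta                      # what least squares usually looks at

H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix from the same equations
leverage = np.diag(H)                         # position of each observation point
print(leverage.round(3))  # points far from the center of the x's get high leverage
```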
07:25.680 --> 07:28.120
So some simple algebra, a few steps
07:28.120 --> 07:31.040
will take you to something surprising
07:31.040 --> 07:34.360
that when you think about, you understand.
07:34.360 --> 07:41.120
And that is proof that human intuition is not too rich
07:41.120 --> 07:42.640
and is very primitive.
07:42.640 --> 07:48.520
And it does not see very simple situations.
07:48.520 --> 07:51.760
So let me take a step back in general.
07:51.760 --> 07:54.480
Yes, right?
07:54.480 --> 07:58.840
But what about human intuition and ingenuity?
08:01.600 --> 08:02.960
Moments of brilliance.
08:02.960 --> 08:09.480
So do you have to be so hard on human intuition?
08:09.480 --> 08:11.840
Are there moments of brilliance in human intuition?
08:11.840 --> 08:17.520
They can leap ahead of math, and then the math will catch up?
08:17.520 --> 08:19.400
I don't think so.
08:19.400 --> 08:23.560
I think that the best human intuition,
08:23.560 --> 08:26.440
it is putting in axioms.
08:26.440 --> 08:28.600
And then it is technical.
08:28.600 --> 08:31.880
See where the axioms take you.
08:31.880 --> 08:34.920
But you have to take the axioms correctly,
08:34.920 --> 08:41.400
and the axioms are polished during generations of scientists.
08:41.400 --> 08:45.040
And this is integral wisdom.
08:45.040 --> 08:47.480
So that's beautifully put.
08:47.480 --> 08:54.040
But if you maybe look at when you think of Einstein
08:54.040 --> 08:58.960
and special relativity, what is the role of imagination
08:58.960 --> 09:04.480
coming first there in the moment of discovery of an idea?
09:04.480 --> 09:06.440
So there is obviously a mix of math
09:06.440 --> 09:10.800
and out of the box imagination there.
09:10.800 --> 09:12.600
That I don't know.
09:12.600 --> 09:18.080
Whatever I did, I exclude any imagination.
09:18.080 --> 09:21.080
Because whatever I saw in machine learning that
09:21.080 --> 09:26.440
come from imagination, like features, like deep learning,
09:26.440 --> 09:29.320
they are not relevant to the problem.
09:29.320 --> 09:31.960
When you're looking very carefully
09:31.960 --> 09:34.280
for mathematical equations, you're
09:34.280 --> 09:38.000
deriving very simple theory, which goes far beyond,
09:38.000 --> 09:42.040
theoretically, whatever people can imagine.
09:42.040 --> 09:44.760
Because it is not good fantasy.
09:44.760 --> 09:46.720
It is just interpretation.
09:46.720 --> 09:48.000
It is just fantasy.
09:48.000 --> 09:51.320
But it is not what you need.
09:51.320 --> 09:56.960
You don't need any imagination to derive, say,
09:56.960 --> 10:00.040
main principle of machine learning.
10:00.040 --> 10:02.760
When you think about learning and intelligence,
10:02.760 --> 10:04.560
maybe thinking about the human brain
10:04.560 --> 10:09.200
and trying to describe mathematically the process of learning
10:09.200 --> 10:13.160
that is something like what happens in the human brain,
10:13.160 --> 10:17.200
do you think we have the tools currently?
10:17.200 --> 10:19.000
Do you think we will ever have the tools
10:19.000 --> 10:22.680
to try to describe that process of learning?
10:22.680 --> 10:25.800
It is not description of what's going on.
10:25.800 --> 10:27.360
It is interpretation.
10:27.360 --> 10:29.400
It is your interpretation.
10:29.400 --> 10:32.080
Your vision can be wrong.
10:32.080 --> 10:36.160
You know, when a guy invented the microscope,
10:36.160 --> 10:40.560
Leeuwenhoek, for the first time, only he got this instrument
10:40.560 --> 10:45.440
and nobody else; he kept the microscope secret.
10:45.440 --> 10:49.080
But he wrote reports to the London Academy of Science.
10:49.080 --> 10:52.040
In his report, when he looked into the blood,
10:52.040 --> 10:54.480
he looked everywhere, on the water, on the blood,
10:54.480 --> 10:56.320
on the spin.
10:56.320 --> 11:04.040
But he described blood as a fight between queen and king.
11:04.040 --> 11:08.120
So he saw blood cells, red cells,
11:08.120 --> 11:12.400
and he imagined that it was armies fighting each other.
11:12.400 --> 11:16.960
And it was his interpretation of situation.
11:16.960 --> 11:19.760
And he sent this report to the Academy of Science.
11:19.760 --> 11:22.640
They very carefully looked because they believed
11:22.640 --> 11:25.160
that he is right, he saw something.
11:25.160 --> 11:28.240
But he gave wrong interpretation.
11:28.240 --> 11:32.280
And I believe the same can happen with brain.
11:32.280 --> 11:35.280
Because the most important part, you know,
11:35.280 --> 11:38.840
I believe in human language.
11:38.840 --> 11:43.000
In some proverb, it's so much wisdom.
11:43.000 --> 11:50.240
For example, people say that better than 1,000 days
11:50.240 --> 11:53.960
of diligent studies is one day with a great teacher.
11:53.960 --> 11:59.480
But if I will ask you what teacher does, nobody knows.
11:59.480 --> 12:01.400
And that is intelligence.
12:01.400 --> 12:07.320
And what we know from history, and now from math
12:07.320 --> 12:12.080
and machine learning, that teacher can do a lot.
12:12.080 --> 12:14.400
So what, from a mathematical point of view,
12:14.400 --> 12:16.080
is the great teacher?
12:16.080 --> 12:17.240
I don't know.
12:17.240 --> 12:18.880
That's an awful question.
12:18.880 --> 12:25.120
Now, what we can say what teacher can do,
12:25.120 --> 12:29.440
he can introduce some invariance, some predicate
12:29.440 --> 12:32.280
for creating invariance.
12:32.280 --> 12:33.520
How is he doing it?
12:33.520 --> 12:34.080
I don't know.
12:34.080 --> 12:37.560
Because the teacher knows reality and can describe
12:37.560 --> 12:41.200
from this reality a predicate, an invariant.
12:41.200 --> 12:43.480
But he knows that when you're using invariants,
12:43.480 --> 12:47.960
he can decrease the number of observations 100 times.
12:47.960 --> 12:52.960
But maybe try to pull that apart a little bit.
12:52.960 --> 12:58.120
I think you mentioned a piano teacher saying to the student,
12:58.120 --> 12:59.880
play like a butterfly.
12:59.880 --> 13:03.720
I played piano, I played guitar for a long time.
13:03.720 --> 13:09.800
Yeah, maybe it's romantic, poetic.
13:09.800 --> 13:13.160
But it feels like there's a lot of truth in that statement.
13:13.160 --> 13:15.440
There is a lot of instruction in that statement.
13:15.440 --> 13:17.320
And so can you pull that apart?
13:17.320 --> 13:19.760
What is that?
13:19.760 --> 13:22.520
The language itself may not contain this information.
13:22.520 --> 13:24.160
It's not blah, blah, blah.
13:24.160 --> 13:25.640
It does not blah, blah, blah, yeah.
13:25.640 --> 13:26.960
It affects you.
13:26.960 --> 13:27.600
It's what?
13:27.600 --> 13:28.600
It affects you.
13:28.600 --> 13:29.800
It affects your playing.
13:29.800 --> 13:30.640
Yes, it does.
13:30.640 --> 13:33.640
But it's not the language.
13:33.640 --> 13:38.000
It feels like what is the information being exchanged there?
13:38.000 --> 13:39.760
What is the nature of information?
13:39.760 --> 13:41.880
What is the representation of that information?
13:41.880 --> 13:44.000
I believe that it is sort of predicate.
13:44.000 --> 13:45.400
But I don't know.
13:45.400 --> 13:48.880
That is exactly what intelligence in machine learning
13:48.880 --> 13:50.080
should be.
13:50.080 --> 13:53.200
Because the rest is just mathematical technique.
13:53.200 --> 13:57.920
I think that what was discovered recently
13:57.920 --> 14:03.280
is that there is two mechanisms of learning.
14:03.280 --> 14:06.040
One called strong convergence mechanism
14:06.040 --> 14:08.560
and weak convergence mechanism.
14:08.560 --> 14:11.200
Before, people use only one convergence.
14:11.200 --> 14:15.840
In weak convergence mechanism, you can use predicate.
14:15.840 --> 14:19.360
That's what play like butterfly.
14:19.360 --> 14:23.640
And it will immediately affect your playing.
14:23.640 --> 14:26.360
You know, there is English proverb.
14:26.360 --> 14:27.320
Great.
14:27.320 --> 14:31.680
If it looks like a duck, swims like a duck,
14:31.680 --> 14:35.200
and quack like a duck, then it is probably duck.
14:35.200 --> 14:36.240
Yes.
14:36.240 --> 14:40.400
But this is exact about predicate.
14:40.400 --> 14:42.920
Looks like a duck, what it means.
14:42.920 --> 14:46.720
So you saw many ducks that you're training data.
14:46.720 --> 14:56.480
So you have a description of how ducks look, integrally.
14:56.480 --> 14:59.360
Yeah, the visual characteristics of a duck.
14:59.360 --> 15:00.840
Yeah, but you won't.
15:00.840 --> 15:04.200
And you have a model for the recognition of ducks.
15:04.200 --> 15:07.880
So you would like that theoretical description
15:07.880 --> 15:12.720
from the model to coincide with the empirical description, which
15:12.720 --> 15:14.520
you saw in the training data.
15:14.520 --> 15:18.440
So about looks like a duck, it is general.
15:18.440 --> 15:21.480
But what about swims like a duck?
15:21.480 --> 15:23.560
You should know that duck swims.
15:23.560 --> 15:26.960
You can say it plays chess like a duck, OK?
15:26.960 --> 15:28.880
Duck doesn't play chess.
15:28.880 --> 15:35.560
And it is completely legal predicate, but it is useless.
15:35.560 --> 15:41.040
So how can a teacher recognize a not useless predicate?
15:41.040 --> 15:44.640
So up to now, we don't use this predicate
15:44.640 --> 15:46.680
in existing machine learning.
15:46.680 --> 15:47.200
And you think that's not so useful?
15:47.200 --> 15:50.600
So why do we need billions of data?
15:50.600 --> 15:55.560
But in this English proverb, they use only three predicates.
15:55.560 --> 15:59.080
Looks like a duck, swims like a duck, and quack like a duck.
15:59.080 --> 16:02.040
So you can't deny the fact that swims like a duck
16:02.040 --> 16:08.520
and quacks like a duck has humor in it, has ambiguity.
16:08.520 --> 16:12.600
Let's talk about swim like a duck.
16:12.600 --> 16:16.520
It does not say jumps like a duck.
16:16.520 --> 16:17.680
Why?
16:17.680 --> 16:20.760
Because it's not relevant.
16:20.760 --> 16:25.880
But that means that you know ducks, you know different birds,
16:25.880 --> 16:27.600
you know animals.
16:27.600 --> 16:32.440
And you derive from this that it is relevant to say swims like a duck.
16:32.440 --> 16:36.680
So underneath, in order for us to understand swims like a duck,
16:36.680 --> 16:41.200
it feels like we need to know millions of other little pieces
16:41.200 --> 16:43.000
of information.
16:43.000 --> 16:44.280
We pick up along the way.
16:44.280 --> 16:45.120
You don't think so.
16:45.120 --> 16:48.480
There doesn't need to be this knowledge base.
16:48.480 --> 16:52.600
Those statements carry some rich information
16:52.600 --> 16:57.280
that helps us understand the essence of a duck.
16:57.280 --> 17:01.920
How far are we from integrating predicates?
17:01.920 --> 17:06.000
You know that when you consider complete theory,
17:06.000 --> 17:09.320
machine learning, so what it does,
17:09.320 --> 17:12.400
you have a lot of functions.
17:12.400 --> 17:17.480
And then you're talking, it looks like a duck.
17:17.480 --> 17:20.720
You see your training data.
17:20.720 --> 17:31.040
From training data, you recognize how the expected duck should look.
17:31.040 --> 17:37.640
Then you remove all functions which do not look like you think
17:37.640 --> 17:40.080
it should look from training data.
17:40.080 --> 17:45.800
So you decrease the amount of functions from which you pick up one.
17:45.800 --> 17:48.320
Then you give a second predicate.
17:48.320 --> 17:51.840
And then, again, decrease the set of functions.
17:51.840 --> 17:55.800
And after that, you pick up the best function you can find.
17:55.800 --> 17:58.120
It is standard machine learning.
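A toy sketch of that procedure (my construction, not Vapnik's code): each predicate shrinks the admissible set of functions, and the best remaining function is then picked by empirical error on the training data.

```python
train = [(1, 0), (2, 0), (7, 1), (9, 1)]  # made-up (x, label) pairs

# Start from a set of candidate functions: threshold rules "x > t".
functions = {t: (lambda x, t=t: int(x > t)) for t in range(10)}

def empirical_error(f):
    return sum(f(x) != y for x, y in train) / len(train)

# A crude stand-in for a predicate like "looks like a duck": the rule
# must classify the two smallest points as 0.
def predicate(f):
    return f(1) == 0 and f(2) == 0

admissible = {t: f for t, f in functions.items() if predicate(f)}  # shrink the set
best_t = min(admissible, key=lambda t: empirical_error(admissible[t]))
print(best_t, empirical_error(admissible[best_t]))  # e.g. threshold 2, zero error
```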
17:58.120 --> 18:03.280
So why do you need not too many examples?
18:03.280 --> 18:06.600
Because your predicates aren't very good, or you're not.
18:06.600 --> 18:09.200
That means that the predicates are very good.
18:09.200 --> 18:12.520
Because every predicate is invented
18:12.520 --> 18:17.720
to decrease the admissible set of functions.
18:17.720 --> 18:20.320
So you talk about admissible set of functions,
18:20.320 --> 18:22.440
and you talk about good functions.
18:22.440 --> 18:24.280
So what makes a good function?
18:24.280 --> 18:28.600
So an admissible set of functions is a set of functions
18:28.600 --> 18:32.760
which has small capacity, or small diversity,
18:32.760 --> 18:36.960
small VC dimension, for example, which contains a good function.
18:36.960 --> 18:38.760
So by the way, for people who don't know,
18:38.760 --> 18:42.440
VC, you're the V in the VC.
18:42.440 --> 18:50.440
So how would you describe to a lay person what VC theory is?
18:50.440 --> 18:51.440
How would you describe VC?
18:51.440 --> 18:56.480
So when you have a machine, so a machine
18:56.480 --> 19:00.240
capable of picking up one function from the admissible set
19:00.240 --> 19:02.520
of function.
19:02.520 --> 19:07.640
But the set of admissible functions can be big.
19:07.640 --> 19:11.600
It contains all continuous functions and it's useless.
19:11.600 --> 19:15.280
You don't have so many examples to pick up function.
19:15.280 --> 19:17.280
But it can be small.
19:17.280 --> 19:24.560
Small, we call it capacity, but maybe better called diversity.
19:24.560 --> 19:27.160
So there are not very different functions in the set.
19:27.160 --> 19:31.280
It is an infinite set of functions, but not very diverse.
19:31.280 --> 19:34.280
So it is small VC dimension.
19:34.280 --> 19:39.360
When VC dimension is small, you need small amount
19:39.360 --> 19:41.760
of training data.
19:41.760 --> 19:47.360
So the goal is to create admissible set of functions
19:47.360 --> 19:53.200
which have small VC dimension and contain good function.
19:53.200 --> 19:58.160
Then you will be able to pick up the function
19:58.160 --> 20:02.400
using small amount of observations.
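One classical form of the VC generalization bound makes this quantitative. The sketch below is illustrative (my addition; constants differ between textbook versions): the risk is bounded by the empirical risk plus a capacity term that grows with VC dimension h and shrinks with sample size n.

```python
import math

def vc_capacity_term(h, n, delta=0.05):
    """Capacity term of a VC-style bound: risk <= empirical risk + this."""
    return math.sqrt((h * (math.log(2 * n / h) + 1) + math.log(4 / delta)) / n)

for h in (10, 100, 1000):
    print(h, round(vc_capacity_term(h, n=10_000), 3))
# Small VC dimension (a not very diverse admissible set) gives a small
# capacity term, so a small amount of observations is enough.
```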
20:02.400 --> 20:06.760
So that is the task of learning.
20:06.760 --> 20:11.360
It is creating a set of admissible functions
20:11.360 --> 20:13.120
that has a small VC dimension.
20:13.120 --> 20:17.320
And then you've figured out a clever way of picking up.
20:17.320 --> 20:22.440
No, that is goal of learning, which I formulated yesterday.
20:22.440 --> 20:25.760
Statistical learning theory does not
20:25.760 --> 20:30.360
involve creating the admissible set of functions.
20:30.360 --> 20:35.520
In classical learning theory, everywhere, 100% of textbooks,
20:35.520 --> 20:39.200
the admissible set of functions is given.
20:39.200 --> 20:41.760
But this is science about nothing,
20:41.760 --> 20:44.040
because the most difficult problem
20:44.040 --> 20:50.120
to create admissible set of functions, given, say,
20:50.120 --> 20:53.080
a lot of functions, continuum set of functions,
20:53.080 --> 20:54.960
create admissible set of functions,
20:54.960 --> 20:58.760
that means that it has finite VC dimension,
20:58.760 --> 21:02.280
small VC dimension, and contain good function.
21:02.280 --> 21:05.280
So this was out of consideration.
21:05.280 --> 21:07.240
So what's the process of doing that?
21:07.240 --> 21:08.240
I mean, it's fascinating.
21:08.240 --> 21:13.200
What is the process of creating this admissible set of functions?
21:13.200 --> 21:14.920
That is invariant.
21:14.920 --> 21:15.760
That's invariance.
21:15.760 --> 21:17.280
Can you describe invariance?
21:17.280 --> 21:22.440
Yeah, you're looking at properties of the training data.
21:22.440 --> 21:30.120
And properties means that you have some function,
21:30.120 --> 21:36.520
and you just count what is the average value of function
21:36.520 --> 21:38.960
on training data.
21:38.960 --> 21:43.040
You have a model, and what is the expectation
21:43.040 --> 21:44.960
of this function on the model.
21:44.960 --> 21:46.720
And they should coincide.
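A small sketch of that invariance condition (my construction with hypothetical data; in Vapnik's later work such constraints are written for conditional probabilities): the average of a predicate function on the training data should coincide with its expectation under the model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500)                      # training inputs
y = (rng.uniform(0, 1, 500) < x).astype(float)  # labels with true P(y=1|x) = x

def psi(x):          # a predicate function, here simply "how large is x"
    return x

def model_p(x):      # some fitted model of P(y=1|x), assumed for illustration
    return np.clip(0.9 * x + 0.05, 0.0, 1.0)

empirical = np.mean(psi(x) * y)          # average of the predicate on the data
expected = np.mean(psi(x) * model_p(x))  # its expectation under the model
print(round(empirical, 3), round(expected, 3))  # an invariant asks these to coincide
```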
21:46.720 --> 21:51.800
So the problem is about how to pick up functions.
21:51.800 --> 21:53.200
It can be any function.
21:53.200 --> 21:59.280
In fact, it is true for all functions.
21:59.280 --> 22:05.000
But because when I'm talking, say,
22:05.000 --> 22:09.920
a duck does not jump, so you don't ask the question, jumps like a duck.
22:09.920 --> 22:13.360
Because it is trivial, it does not jump,
22:13.360 --> 22:15.560
it doesn't help you to recognize a duck.
22:15.560 --> 22:19.000
But you know something, which question to ask,
22:19.000 --> 22:23.840
when you're asking whether it swims like a duck.
22:23.840 --> 22:26.840
But looks like a duck, it is general situation.
22:26.840 --> 22:34.440
Looks like, say, a guy who has this illness, this disease,
22:34.440 --> 22:42.280
it is legal, so there is a general type of predicate
22:42.280 --> 22:46.440
looks like, and special type of predicate,
22:46.440 --> 22:50.040
which related to this specific problem.
22:50.040 --> 22:53.440
And that is intelligence part of all this business.
22:53.440 --> 22:55.440
And that is where teachers are involved.
22:55.440 --> 22:58.440
Incorporating those specialized predicates.
22:58.440 --> 23:04.840
What do you think about deep learning as neural networks,
23:04.840 --> 23:11.440
these arbitrary architectures as helping accomplish some of the tasks
23:11.440 --> 23:14.440
you're thinking about, their effectiveness or lack thereof,
23:14.440 --> 23:19.440
what are the weaknesses and what are the possible strengths?
23:19.440 --> 23:22.440
You know, I think that this is fantasy.
23:22.440 --> 23:28.440
Everything which is like deep learning, like features.
23:28.440 --> 23:32.440
Let me give you this example.
23:32.440 --> 23:38.440
One of the greatest books is Churchill's book about the history of the Second World War.
23:38.440 --> 23:47.440
And he starts this book describing that in old times, when a war was over,
23:47.440 --> 23:54.440
so the great kings, they gathered together,
23:54.440 --> 23:57.440
almost all of them were relatives,
23:57.440 --> 24:02.440
and they discussed what should be done, how to create peace.
24:02.440 --> 24:04.440
And they came to agreement.
24:04.440 --> 24:13.440
And when the First World War happened, the general public came to power.
24:13.440 --> 24:17.440
And they were so greedy that they robbed Germany.
24:17.440 --> 24:21.440
And it was clear for everybody that it is not peace.
24:21.440 --> 24:28.440
That the peace would last only 20 years, because they were not professionals.
24:28.440 --> 24:31.440
It's the same I see in machine learning.
24:31.440 --> 24:37.440
There are mathematicians who are looking at the problem from a very deep point of view,
24:37.440 --> 24:39.440
a mathematical point of view.
24:39.440 --> 24:45.440
And there are computer scientists who mostly do not know mathematics.
24:45.440 --> 24:48.440
They just have interpretation of that.
24:48.440 --> 24:53.440
And they invented a lot of blah, blah, blah interpretations like deep learning.
24:53.440 --> 24:55.440
Why do you need deep learning?
24:55.440 --> 24:57.440
Mathematics does not know deep learning.
24:57.440 --> 25:00.440
Mathematics does not know neurons.
25:00.440 --> 25:02.440
It is just function.
25:02.440 --> 25:06.440
If you like to say piecewise linear function, say that,
25:06.440 --> 25:10.440
and do it in class of piecewise linear function.
25:10.440 --> 25:12.440
But they invent something.
25:12.440 --> 25:20.440
And then they try to prove the advantage of that through interpretations,
25:20.440 --> 25:22.440
which are mostly wrong.
25:22.440 --> 25:25.440
And when that's not enough, they appeal to the brain,
25:25.440 --> 25:27.440
which they know nothing about.
25:27.440 --> 25:29.440
Nobody knows what's going on in the brain.
25:29.440 --> 25:34.440
So I think that it is more reliable to look at math.
25:34.440 --> 25:36.440
This is a mathematical problem.
25:36.440 --> 25:38.440
Do your best to solve this problem.
25:38.440 --> 25:43.440
Try to understand that there is not only one way of convergence,
25:43.440 --> 25:45.440
which is strong way of convergence.
25:45.440 --> 25:49.440
There is a weak way of convergence, which requires predicate.
25:49.440 --> 25:52.440
And if you will go through all this stuff,
25:52.440 --> 25:55.440
you will see that you don't need deep learning.
25:55.440 --> 26:00.440
Even more, I would say one of the theorems,
26:00.440 --> 26:02.440
which is called the representer theorem.
26:02.440 --> 26:10.440
It says that optimal solution of mathematical problem,
26:10.440 --> 26:20.440
which described learning, is on a shallow network, not on deep learning.
26:20.440 --> 26:22.440
And a shallow network, yeah.
26:22.440 --> 26:24.440
The ultimate problem is there.
26:24.440 --> 26:25.440
Absolutely.
26:25.440 --> 26:29.440
So in the end, what you're saying is exactly right.
26:29.440 --> 26:35.440
The question is, do you see no value in throwing something on the table,
26:35.440 --> 26:38.440
playing with it, that's not math?
26:38.440 --> 26:41.440
It's like a neural network, where you're throwing something in the bucket,
26:41.440 --> 26:45.440
or the biological example and looking at kings and queens
26:45.440 --> 26:47.440
or the cells or the microscope.
26:47.440 --> 26:52.440
You don't see value in imagining the cells or kings and queens
26:52.440 --> 26:56.440
and using that as inspiration and imagination
26:56.440 --> 26:59.440
for where the math will eventually lead you.
26:59.440 --> 27:06.440
You think that interpretation basically deceives you in a way that's not productive.
27:06.440 --> 27:14.440
I think that if you're trying to analyze this business of learning
27:14.440 --> 27:18.440
and especially discussion about deep learning,
27:18.440 --> 27:21.440
it is discussion about interpretation.
27:21.440 --> 27:26.440
It's discussion about things, about what you can say about things.
27:26.440 --> 27:29.440
That's right, but aren't you surprised by the beauty of it?
27:29.440 --> 27:36.440
Not mathematical beauty, but the fact that it works at all.
27:36.440 --> 27:39.440
Or are you criticizing that very beauty,
27:39.440 --> 27:45.440
our human desire to interpret,
27:45.440 --> 27:49.440
to find our silly interpretations in these constructs?
27:49.440 --> 27:51.440
Let me ask you this.
27:51.440 --> 27:55.440
Are you surprised?
27:55.440 --> 27:57.440
Does it inspire you?
27:57.440 --> 28:00.440
How do you feel about the success of a system like AlphaGo
28:00.440 --> 28:03.440
at beating the game of Go?
28:03.440 --> 28:09.440
Using neural networks to estimate the quality of a board
28:09.440 --> 28:11.440
and the quality of the moves?
28:11.440 --> 28:14.440
That is your interpretation, quality of the board.
28:14.440 --> 28:17.440
Yes.
28:17.440 --> 28:20.440
It's not our interpretation.
28:20.440 --> 28:23.440
The fact is, a neural network system doesn't matter.
28:23.440 --> 28:27.440
A learning system that we don't mathematically understand
28:27.440 --> 28:29.440
that beats the best human player.
28:29.440 --> 28:31.440
It does something that was thought impossible.
28:31.440 --> 28:35.440
That means that it's not a very difficult problem.
28:35.440 --> 28:41.440
We've empirically discovered that this is not a very difficult problem.
28:41.440 --> 28:43.440
That's true.
28:43.440 --> 28:49.440
Maybe I can't argue.
28:49.440 --> 28:52.440
Even more, I would say,
28:52.440 --> 28:54.440
that if they use deep learning,
28:54.440 --> 28:59.440
it is not the most effective way of learning theory.
28:59.440 --> 29:03.440
Usually, when people use deep learning,
29:03.440 --> 29:09.440
they're using zillions of training data.
29:09.440 --> 29:13.440
But you don't need this.
29:13.440 --> 29:15.440
I described the challenge.
29:15.440 --> 29:22.440
Can we solve some problems done with deep learning methods,
29:22.440 --> 29:27.440
with deep nets, using 100 times less training data?
29:27.440 --> 29:33.440
Even more, there are some problems deep learning cannot solve
29:33.440 --> 29:37.440
because it's not necessary.
29:37.440 --> 29:40.440
They create an admissible set of functions.
29:40.440 --> 29:45.440
Deep architecture means to create admissible set of functions.
29:45.440 --> 29:49.440
You cannot say that you're creating good admissible set of functions.
29:49.440 --> 29:52.440
It's your fantasy.
29:52.440 --> 29:54.440
It does not come from math.
29:54.440 --> 29:58.440
But it is possible to create admissible set of functions
29:58.440 --> 30:01.440
because you have your training data.
30:01.440 --> 30:08.440
Actually, for mathematicians, when you consider invariants,
30:08.440 --> 30:11.440
you need to use law of large numbers.
30:11.440 --> 30:17.440
When you're doing training in existing algorithms,
30:17.440 --> 30:20.440
you need uniform law of large numbers,
30:20.440 --> 30:22.440
which is much more difficult.
30:22.440 --> 30:24.440
VC dimension and all this stuff.
30:24.440 --> 30:32.440
Nevertheless, if you use both weak and strong way of convergence,
30:32.440 --> 30:34.440
you can decrease the amount of training data a lot.
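A hedged simulation of that distinction (my construction): for one fixed function the plain law of large numbers makes the empirical mean converge quickly, while for a whole class of functions the uniform law governs the worst deviation over the class, which is larger.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0, 1, n)

# One fixed function f(x) = 1[x < 0.5]: deviation of its empirical mean from 0.5.
single_dev = abs(np.mean(x < 0.5) - 0.5)

# A class of functions 1[x < t]: the supremum of deviations over all thresholds,
# which is what the uniform law of large numbers must control.
ts = np.linspace(0.0, 1.0, 200)
uniform_dev = max(abs(np.mean(x < t) - t) for t in ts)

print(round(single_dev, 4), round(uniform_dev, 4))  # the uniform deviation is larger
```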
30:34.440 --> 30:39.440
You could do the three: the swims like a duck and quacks like a duck.
30:39.440 --> 30:47.440
Let's step back and think about human intelligence in general.
30:47.440 --> 30:52.440
Clearly, that has evolved in a nonmathematical way.
30:52.440 --> 31:00.440
As far as we know, God, or whoever,
31:00.440 --> 31:05.440
didn't come up with a model of admissible functions to place in our brain.
31:05.440 --> 31:06.440
It kind of evolved.
31:06.440 --> 31:07.440
I don't know.
31:07.440 --> 31:08.440
Maybe you have a view on this.
31:08.440 --> 31:15.440
Alan Turing in the 50s in his paper asked and rejected the question,
31:15.440 --> 31:16.440
can machines think?
31:16.440 --> 31:18.440
It's not a very useful question.
31:18.440 --> 31:23.440
But can you briefly entertain this useless question?
31:23.440 --> 31:25.440
Can machines think?
31:25.440 --> 31:28.440
So talk about intelligence and your view of it.
31:28.440 --> 31:29.440
I don't know that.
31:29.440 --> 31:34.440
I know that Turing described imitation.
31:34.440 --> 31:41.440
If computer can imitate human being, let's call it intelligent.
31:41.440 --> 31:45.440
And he understood that it is not a thinking computer.
31:45.440 --> 31:46.440
Yes.
31:46.440 --> 31:49.440
He completely understands what he's doing.
31:49.440 --> 31:53.440
But he set up a problem of imitation.
31:53.440 --> 31:57.440
So now we understand that the problem is not in imitation.
31:57.440 --> 32:04.440
I'm not sure that intelligence is just inside of us.
32:04.440 --> 32:06.440
It may be also outside of us.
32:06.440 --> 32:09.440
I have several observations.
32:09.440 --> 32:15.440
So when I prove some theorem, it's a very difficult theorem.
32:15.440 --> 32:22.440
But in a couple of years, in several places, people proved the same theorem.
32:22.440 --> 32:26.440
Say, Sauer's lemma was done after us.
32:26.440 --> 32:29.440
Then another guy proved the same theorem.
32:29.440 --> 32:32.440
In the history of science, it's happened all the time.
32:32.440 --> 32:35.440
For example, geometry.
32:35.440 --> 32:37.440
It's happened simultaneously.
32:37.440 --> 32:43.440
First Lobachevsky did it, and then Gauss and Bolyai and other guys.
32:43.440 --> 32:48.440
It happened simultaneously within a 10 year period of time.
32:48.440 --> 32:51.440
And I saw a lot of examples like that.
32:51.440 --> 32:56.440
And many mathematicians think that when they develop something,
32:56.440 --> 33:01.440
they develop something in general which affects everybody.
33:01.440 --> 33:07.440
So maybe our model that intelligence is only inside of us is incorrect.
33:07.440 --> 33:09.440
It's our interpretation.
33:09.440 --> 33:15.440
Maybe there exists some connection with world intelligence.
33:15.440 --> 33:16.440
I don't know.
33:16.440 --> 33:19.440
You're almost like plugging in into...
33:19.440 --> 33:20.440
Yeah, exactly.
33:20.440 --> 33:22.440
...and contributing to this...
33:22.440 --> 33:23.440
Into a big network.
33:23.440 --> 33:26.440
...into a big, maybe a neural network.
33:26.440 --> 33:27.440
No, no, no.
33:27.440 --> 33:34.440
On the flip side of that, maybe you can comment on big O complexity
33:34.440 --> 33:40.440
and how you see classifying algorithms by worst case running time
33:40.440 --> 33:42.440
in relation to their input.
33:42.440 --> 33:45.440
So that way of thinking about functions.
33:45.440 --> 33:47.440
Do you think P equals NP?
33:47.440 --> 33:49.440
Do you think that's an interesting question?
33:49.440 --> 33:51.440
Yeah, it is an interesting question.
33:51.440 --> 34:01.440
But let me talk about complexity and about worst case scenario.
34:01.440 --> 34:03.440
There is a mathematical setting.
34:03.440 --> 34:07.440
When I came to the United States in 1990,
34:07.440 --> 34:09.440
people did not know this theory.
34:09.440 --> 34:12.440
They did not know statistical learning theory.
34:12.440 --> 34:17.440
So in Russia there were published two monographs, our monographs,
34:17.440 --> 34:19.440
but in America they didn't know.
34:19.440 --> 34:22.440
Then they learned.
34:22.440 --> 34:25.440
And somebody told me that if it's worst case theory,
34:25.440 --> 34:27.440
and they will create real case theory,
34:27.440 --> 34:30.440
but till now they did not.
34:30.440 --> 34:33.440
Because it is a mathematical tool.
34:33.440 --> 34:38.440
You can do only what you can do using mathematics,
34:38.440 --> 34:45.440
which has a clear understanding and clear description.
34:45.440 --> 34:52.440
And for this reason we introduced complexity.
34:52.440 --> 34:54.440
And we need this.
34:54.440 --> 35:01.440
Because actually it is diversity; I like this term more.
35:01.440 --> 35:04.440
With VC dimension you can prove some theorems.
35:04.440 --> 35:12.440
But we also created theory for the case when you know the probability measure.
35:12.440 --> 35:14.440
And that is the best case which can happen.
35:14.440 --> 35:17.440
It is entropy theory.
35:17.440 --> 35:20.440
So from a mathematical point of view,
35:20.440 --> 35:25.440
you know the best possible case and the worst possible case.
35:25.440 --> 35:28.440
You can derive different models in the middle.
35:28.440 --> 35:30.440
But it's not so interesting.
35:30.440 --> 35:33.440
You think the edges are interesting?
35:33.440 --> 35:35.440
The edges are interesting.
35:35.440 --> 35:44.440
Because it is not so easy to get a good bound, exact bound.
35:44.440 --> 35:47.440
There are not many cases where you have it.
35:47.440 --> 35:49.440
The bound is not exact.
35:49.440 --> 35:54.440
But there are interesting principles which the math discovers.
35:54.440 --> 35:57.440
Do you think it's interesting because it's challenging
35:57.440 --> 36:02.440
and reveals interesting principles that allow you to get those bounds?
36:02.440 --> 36:05.440
Or do you think it's interesting because it's actually very useful
36:05.440 --> 36:10.440
for understanding the essence of a function of an algorithm?
36:10.440 --> 36:15.440
So it's like me judging your life as a human being
36:15.440 --> 36:19.440
by the worst thing you did and the best thing you did
36:19.440 --> 36:21.440
versus all the stuff in the middle.
36:21.440 --> 36:25.440
It seems not productive.
36:25.440 --> 36:31.440
I don't think so, because you cannot describe the situation in the middle.
36:31.440 --> 36:34.440
Or it will be not general.
36:34.440 --> 36:38.440
So you can describe edge cases.
36:38.440 --> 36:41.440
And it is clear it has some model.
36:41.440 --> 36:47.440
But you cannot describe model for every new case.
36:47.440 --> 36:53.440
So you will never be accurate when you're using a model.
36:53.440 --> 36:55.440
But from a statistical point of view,
36:55.440 --> 37:00.440
the way you've studied functions and the nature of learning
37:00.440 --> 37:07.440
and the world, don't you think that the real world has a very long tail
37:07.440 --> 37:13.440
that the edge cases are very far away from the mean,
37:13.440 --> 37:19.440
the stuff in the middle, or no?
37:19.440 --> 37:21.440
I don't know that.
37:21.440 --> 37:29.440
I think that from my point of view,
37:29.440 --> 37:39.440
if you use formal statistics, the uniform law of large numbers,
37:39.440 --> 37:47.440
if you will use this invariance business,
37:47.440 --> 37:51.440
you will need just law of large numbers.
37:51.440 --> 37:55.440
And there's a huge difference between uniform law of large numbers
37:55.440 --> 37:57.440
and the law of large numbers.
37:57.440 --> 37:59.440
Can you describe that a little more?
37:59.440 --> 38:01.440
Or should we just take it to...
38:01.440 --> 38:05.440
No, for example, when I'm talking about duck,
38:05.440 --> 38:09.440
I gave three predicates and it was enough.
38:09.440 --> 38:14.440
But if you try to formally distinguish,
38:14.440 --> 38:17.440
you will need a lot of observations.
38:17.440 --> 38:19.440
I got you.
38:19.440 --> 38:24.440
And so that means that information about looks like a duck
38:24.440 --> 38:27.440
contain a lot of bits of information,
38:27.440 --> 38:29.440
formal bits of information.
38:29.440 --> 38:35.440
So we don't know how many bits of information
38:35.440 --> 38:39.440
these things from artificial intelligence contain.
38:39.440 --> 38:42.440
And that is the subject of analysis.
38:42.440 --> 38:47.440
Till now, in all this business,
38:47.440 --> 38:54.440
I don't like how people consider artificial intelligence.
38:54.440 --> 39:00.440
They consider it as some codes which imitate the activity of a human being.
39:00.440 --> 39:02.440
It is not science.
39:02.440 --> 39:04.440
It is applications.
39:04.440 --> 39:06.440
You would like to imitate God.
39:06.440 --> 39:09.440
It is very useful and it is a good problem.
39:09.440 --> 39:15.440
But you need to learn something more.
39:15.440 --> 39:23.440
How people can develop predicates,
39:23.440 --> 39:25.440
swims like a duck,
39:25.440 --> 39:28.440
or play like butterfly or something like that.
39:28.440 --> 39:33.440
But the teacher does not tell you how it came to his mind.
39:33.440 --> 39:36.440
How he chose this image.
39:36.440 --> 39:39.440
That is problem of intelligence.
39:39.440 --> 39:41.440
That is the problem of intelligence.
39:41.440 --> 39:44.440
And you see that connected to the problem of learning?
39:44.440 --> 39:45.440
Absolutely.
39:45.440 --> 39:48.440
Because you immediately give this predicate
39:48.440 --> 39:52.440
like specific predicate, swims like a duck,
39:52.440 --> 39:54.440
or quack like a duck.
39:54.440 --> 39:57.440
It was chosen somehow.
39:57.440 --> 40:00.440
So what is the line of work, would you say?
40:00.440 --> 40:05.440
If you were to formulate as a set of open problems,
40:05.440 --> 40:07.440
that will take us there.
40:07.440 --> 40:09.440
Play like a butterfly.
40:09.440 --> 40:11.440
We will get a system to be able to...
40:11.440 --> 40:13.440
Let's separate two stories.
40:13.440 --> 40:15.440
One mathematical story.
40:15.440 --> 40:19.440
That if you have predicate, you can do something.
40:19.440 --> 40:22.440
And another story is how you get the predicate.
40:22.440 --> 40:26.440
It is intelligence problem.
40:26.440 --> 40:31.440
And people even did not start understanding intelligence.
40:31.440 --> 40:34.440
Because to understand intelligence, first of all,
40:34.440 --> 40:37.440
try to understand what teachers are doing.
40:37.440 --> 40:40.440
How teachers teach.
40:40.440 --> 40:43.440
Why is one teacher better than another one?
40:43.440 --> 40:44.440
Yeah.
40:44.440 --> 40:48.440
So you think we really even haven't started on the journey
40:48.440 --> 40:50.440
of generating the predicate?
40:50.440 --> 40:51.440
No.
40:51.440 --> 40:52.440
We don't understand.
40:52.440 --> 40:56.440
We even don't understand that this problem exists.
40:56.440 --> 40:58.440
Because did you hear?
40:58.440 --> 40:59.440
You do.
40:59.440 --> 41:02.440
No, I just know the name.
41:02.440 --> 41:07.440
I want to understand why one teacher is better than another.
41:07.440 --> 41:12.440
And how the teacher affects the student.
41:12.440 --> 41:17.440
It is not because he is repeating the problem which is in the textbook.
41:17.440 --> 41:18.440
Yes.
41:18.440 --> 41:20.440
He makes some remarks.
41:20.440 --> 41:23.440
He makes some philosophy of reasoning.
41:23.440 --> 41:24.440
Yeah, that's a beautiful...
41:24.440 --> 41:31.440
So it is a formulation of a question that is the open problem.
41:31.440 --> 41:33.440
Why is one teacher better than another?
41:33.440 --> 41:34.440
Right.
41:34.440 --> 41:37.440
What he does better.
41:37.440 --> 41:38.440
Yeah.
41:38.440 --> 41:39.440
What...
41:39.440 --> 41:42.440
Why at every level?
41:42.440 --> 41:44.440
How do they get better?
41:44.440 --> 41:47.440
What does it mean to be better?
41:47.440 --> 41:49.440
The whole...
41:49.440 --> 41:50.440
Yeah.
41:50.440 --> 41:53.440
From whatever model I have.
41:53.440 --> 41:56.440
One teacher can give a very good predicate.
41:56.440 --> 42:00.440
One teacher can say swims like a dog.
42:00.440 --> 42:03.440
And another can say jump like a dog.
42:03.440 --> 42:05.440
And jump like a dog
42:05.440 --> 42:07.440
carries zero information.
42:07.440 --> 42:08.440
Yeah.
42:08.440 --> 42:13.440
So what is the most exciting problem in statistical learning you've ever worked on?
42:13.440 --> 42:16.440
Or are working on now?
42:16.440 --> 42:22.440
I just finished this invariant story.
42:22.440 --> 42:24.440
And I'm happy that...
42:24.440 --> 42:30.440
I believe that it is the ultimate learning story.
42:30.440 --> 42:37.440
At least I can show that there is no other mechanism, only two mechanisms.
42:37.440 --> 42:44.440
But it separates the statistical part from the intelligent part.
42:44.440 --> 42:48.440
And I know nothing about intelligent part.
42:48.440 --> 42:52.440
And if we will know this intelligent part,
42:52.440 --> 42:59.440
so it will help us a lot in teaching, in learning.
42:59.440 --> 43:02.440
Will we know it when we see it?
43:02.440 --> 43:06.440
So for example, in my talk, the last slide was the challenge.
43:06.440 --> 43:11.440
So you have, say, the MNIST digit recognition problem.
43:11.440 --> 43:16.440
And deep learning claims that they did it very well.
43:16.440 --> 43:21.440
Say 99.5% of correct answers.
43:21.440 --> 43:24.440
But they use 60,000 observations.
43:24.440 --> 43:26.440
Can you do the same?
43:26.440 --> 43:29.440
100 times less.
43:29.440 --> 43:31.440
But incorporating invariants.
43:31.440 --> 43:34.440
What it means: you know digits one, two, three.
43:34.440 --> 43:35.440
Yeah.
43:35.440 --> 43:37.440
Just looking at that.
43:37.440 --> 43:40.440
Explain to me which invariant I should keep.
43:40.440 --> 43:43.440
To use 100 examples.
43:43.440 --> 43:48.440
Or say 100 times less examples to do the same job.
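The challenge can be phrased as a baseline experiment, sketched below (my setup, not from the talk): train a standard classifier on 600 MNIST examples instead of 60,000 and see how far accuracy falls when no invariants are used.

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Downloads MNIST; 600 training examples is 100 times less than 60,000.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X_train, X_test, y_train, y_test = train_test_split(
    X / 255.0, y, train_size=600, test_size=10_000, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # well below 99.5%; invariants would have to close the gap
```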
43:48.440 --> 43:49.440
Yeah.
43:49.440 --> 43:55.440
That last slide, unfortunately, your talk ended quickly.
43:55.440 --> 43:59.440
The last slide was a powerful open challenge
43:59.440 --> 44:02.440
and a formulation of the essence here.
44:02.440 --> 44:06.440
That is the exact problem of intelligence.
44:06.440 --> 44:12.440
Because everybody, when machine learning started,
44:12.440 --> 44:15.440
it was developed by mathematicians,
44:15.440 --> 44:19.440
they immediately recognized that we use much more
44:19.440 --> 44:22.440
training data than humans needed.
44:22.440 --> 44:25.440
But now again, we came to the same story.
44:25.440 --> 44:27.440
We have to decrease.
44:27.440 --> 44:30.440
That is the problem of learning.
44:30.440 --> 44:32.440
It is not like in deep learning,
44:32.440 --> 44:35.440
they use zillions of training data.
44:35.440 --> 44:38.440
Because maybe zillions are not enough
44:38.440 --> 44:44.440
if you have a good invariance.
44:44.440 --> 44:49.440
Maybe you'll never collect some number of observations.
44:49.440 --> 44:53.440
But now it is a question to intelligence.
44:53.440 --> 44:55.440
How to do that.
44:55.440 --> 44:58.440
Because statistical part is ready.
44:58.440 --> 45:02.440
As soon as you supply us with predicate,
45:02.440 --> 45:06.440
we can do a good job with a small amount of observations.
45:06.440 --> 45:10.440
And the very first challenge is the well-known digit recognition.
45:10.440 --> 45:12.440
And you know digits.
45:12.440 --> 45:15.440
And please tell me the invariants.
45:15.440 --> 45:16.440
I think about that.
45:16.440 --> 45:20.440
I can say for digit 3, I would introduce
45:20.440 --> 45:24.440
concept of horizontal symmetry.
45:24.440 --> 45:29.440
So the digit 3 has horizontal symmetry
45:29.440 --> 45:33.440
more than say digit 2 or something like that.
45:33.440 --> 45:37.440
But as soon as I get the idea of horizontal symmetry,
45:37.440 --> 45:40.440
I can mathematically invent a lot of
45:40.440 --> 45:43.440
measure of horizontal symmetry
45:43.440 --> 45:46.440
on vertical symmetry or diagonal symmetry,
45:46.440 --> 45:49.440
whatever, if I have the idea of symmetry.
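One such measure can be sketched directly (my construction): compare a digit image with its mirror image about a horizontal axis, so that a 3 scores higher than a 2.

```python
import numpy as np

def horizontal_symmetry(img: np.ndarray) -> float:
    """Returns 1.0 for a perfectly top-bottom symmetric image, lower otherwise."""
    flipped = np.flipud(img)  # mirror about the horizontal axis
    total = np.abs(img).sum() + np.abs(flipped).sum()
    if total == 0:
        return 1.0
    return 1.0 - np.abs(img - flipped).sum() / total

# Vertical or diagonal variants follow by flipping differently,
# e.g. np.fliplr(img) or img.T, once you have the idea of symmetry.
```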
45:49.440 --> 45:52.440
But what else?
45:52.440 --> 46:00.440
Looking at digits, I see that there is a metapredicate,
46:00.440 --> 46:04.440
which is not shape.
46:04.440 --> 46:07.440
It is something like symmetry,
46:07.440 --> 46:12.440
like how dark is whole picture, something like that.
46:12.440 --> 46:15.440
Which can itself give rise to a predicate.
46:15.440 --> 46:18.440
You think such a predicate could arise
46:18.440 --> 46:26.440
out of something that is not general.
46:26.440 --> 46:31.440
Meaning it feels like for me to be able to
46:31.440 --> 46:34.440
understand the difference between a 2 and a 3,
46:34.440 --> 46:39.440
I would need to have had a childhood
46:39.440 --> 46:45.440
of 10 to 15 years playing with kids,
46:45.440 --> 46:50.440
going to school, being yelled by parents.
46:50.440 --> 46:55.440
All of that, walking, jumping, looking at ducks.
46:55.440 --> 46:58.440
And now then I would be able to generate
46:58.440 --> 47:01.440
the right predicate for telling the difference
47:01.440 --> 47:03.440
between 2 and a 3.
47:03.440 --> 47:06.440
Or do you think there is a more efficient way?
47:06.440 --> 47:10.440
I know for sure that you must know
47:10.440 --> 47:12.440
something more than digits.
47:12.440 --> 47:15.440
That's a powerful statement.
47:15.440 --> 47:19.440
But maybe there are several languages
47:19.440 --> 47:24.440
of description, these elements of digits.
47:24.440 --> 47:27.440
So I'm talking about symmetry,
47:27.440 --> 47:30.440
about some properties of geometry,
47:30.440 --> 47:33.440
I'm talking about something abstract.
47:33.440 --> 47:38.440
But this is a problem of intelligence.
47:38.440 --> 47:42.440
So in one of our articles, it is trivial to show
47:42.440 --> 47:46.440
that every example can carry
47:46.440 --> 47:49.440
not more than one bit of information in reality.
47:49.440 --> 47:54.440
Because when you show example
47:54.440 --> 47:59.440
and you say this is one, you can remove, say,
47:59.440 --> 48:03.440
functions which do not tell you one. Say,
48:03.440 --> 48:06.440
the best strategy, if you can do it perfectly,
48:06.440 --> 48:09.440
is to remove half of the functions.
48:09.440 --> 48:14.440
But when you use one predicate, which looks like a duck,
48:14.440 --> 48:18.440
you can remove much more functions than half.
48:18.440 --> 48:20.440
And that means that it contains
48:20.440 --> 48:25.440
a lot of bits of information from a formal point of view.
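A toy illustration of that counting argument (mine, with an artificial function class): one labeled example can at best remove half of a finite set of functions, which is one bit, while one well-chosen predicate can remove much more than half.

```python
import math

functions = [(a, b) for a in range(16) for b in range(16)]  # 256 toy "functions"

# One labeled example: at best it splits the set in half (here, by parity).
after_example = [f for f in functions if f[0] % 2 == 0]

# One strong predicate encoding structural knowledge: keep only a == b.
after_predicate = [f for f in functions if f[0] == f[1]]

print(math.log2(len(functions) / len(after_example)))    # 1.0 bit
print(math.log2(len(functions) / len(after_predicate)))  # 4.0 bits
```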
48:25.440 --> 48:31.440
But when you have a general picture
48:31.440 --> 48:33.440
of what you want to recognize,
48:33.440 --> 48:36.440
a general picture of the world,
48:36.440 --> 48:40.440
can you invent this predicate?
48:40.440 --> 48:46.440
And that predicate carries a lot of information.
48:46.440 --> 48:49.440
Beautifully put, maybe just me,
48:49.440 --> 48:53.440
but in all the math you show, in your work,
48:53.440 --> 48:57.440
which is some of the most profound mathematical work
48:57.440 --> 49:01.440
in the field of learning AI and just math in general.
49:01.440 --> 49:04.440
I hear a lot of poetry and philosophy.
49:04.440 --> 49:09.440
You really kind of talk about philosophy of science.
49:09.440 --> 49:12.440
There's a poetry and music to a lot of the work you're doing
49:12.440 --> 49:14.440
and the way you're thinking about it.
49:14.440 --> 49:16.440
So where does that come from?
49:16.440 --> 49:20.440
Do you escape to poetry? Do you escape to music?
49:20.440 --> 49:24.440
I think that there exists ground truth.
49:24.440 --> 49:26.440
There exists ground truth?
49:26.440 --> 49:30.440
Yeah, and that can be seen everywhere.
49:30.440 --> 49:32.440
The smart guy, philosopher,
49:32.440 --> 49:38.440
sometimes I am surprised how deep they see.
49:38.440 --> 49:45.440
Sometimes I see that some of them are completely out of subject.
49:45.440 --> 49:50.440
But the ground truth I see in music.
49:50.440 --> 49:52.440
Music is the ground truth?
49:52.440 --> 49:53.440
Yeah.
49:53.440 --> 50:01.440
And in poetry, many poets, they believe they take dictation.
50:01.440 --> 50:06.440
So what piece of music,
50:06.440 --> 50:08.440
as a piece of empirical evidence,
50:08.440 --> 50:14.440
gave you a sense that they are touching something in the ground truth?
50:14.440 --> 50:16.440
It is structure.
50:16.440 --> 50:18.440
The structure, the math of music.
50:18.440 --> 50:20.440
Because when you're listening to Bach,
50:20.440 --> 50:22.440
you see this structure.
50:22.440 --> 50:25.440
Very clear, very classic, very simple.
50:25.440 --> 50:31.440
And the same in math, when you have axioms in geometry,
50:31.440 --> 50:33.440
you have the same feeling.
50:33.440 --> 50:36.440
And in poetry, sometimes you see the same.
50:36.440 --> 50:40.440
And if you look back at your childhood,
50:40.440 --> 50:42.440
you grew up in Russia,
50:42.440 --> 50:46.440
you maybe were born as a researcher in Russia,
50:46.440 --> 50:48.440
you developed as a researcher in Russia,
50:48.440 --> 50:51.440
you came to the United States in a few places.
50:51.440 --> 50:53.440
If you look back,
50:53.440 --> 50:59.440
what were some of your happiest moments as a researcher?
50:59.440 --> 51:02.440
Some of the most profound moments.
51:02.440 --> 51:06.440
Not in terms of their impact on society,
51:06.440 --> 51:12.440
but in terms of their impact on how damn good you feel that day,
51:12.440 --> 51:15.440
and you remember that moment.
51:15.440 --> 51:20.440
You know, every time when you found something,
51:20.440 --> 51:22.440
it is great.
51:22.440 --> 51:24.440
It's a life.
51:24.440 --> 51:26.440
Every simple thing.
51:26.440 --> 51:32.440
But my general feeling is that most of my time I was wrong.
51:32.440 --> 51:35.440
You should go again and again and again
51:35.440 --> 51:39.440
and try to be honest in front of yourself.
51:39.440 --> 51:41.440
Not to make interpretation,
51:41.440 --> 51:46.440
but try to understand that it's related to ground truth.
51:46.440 --> 51:52.440
It is not my blah, blah, blah interpretation or something like that.
51:52.440 --> 51:57.440
But you're allowed to get excited at the possibility of discovery.
51:57.440 --> 52:00.440
You have to double check it, but...
52:00.440 --> 52:04.440
No, but how it's related to the other ground truth
52:04.440 --> 52:10.440
is it just temporary or it is forever?
52:10.440 --> 52:13.440
You know, you always have a feeling
52:13.440 --> 52:17.440
when you found something,
52:17.440 --> 52:19.440
how big is that?
52:19.440 --> 52:23.440
So, 20 years ago, when we discovered statistical learning,
52:23.440 --> 52:25.440
so nobody believed.
52:25.440 --> 52:31.440
Except for one guy, Dudley from MIT.
52:31.440 --> 52:36.440
And then in 20 years, it became fashion.
52:36.440 --> 52:39.440
And the same with support vector machines.
52:39.440 --> 52:41.440
That's kernel machines.
52:41.440 --> 52:44.440
So with support vector machines and learning theory,
52:44.440 --> 52:48.440
when you were working on it,
52:48.440 --> 52:55.440
you had a sense that you had a sense of the profundity of it,
52:55.440 --> 52:59.440
how this seems to be right.
52:59.440 --> 53:01.440
It seems to be powerful.
53:01.440 --> 53:04.440
Right, absolutely, immediately.
53:04.440 --> 53:08.440
I recognize that it will last forever.
53:08.440 --> 53:17.440
And now, when I found this invariance story,
53:17.440 --> 53:21.440
I have a feeling that it is complete learning.
53:21.440 --> 53:25.440
Because I have proved that there are no different mechanisms.
53:25.440 --> 53:30.440
Some, say, cosmetic improvements you can do,
53:30.440 --> 53:34.440
but in terms of invariance,
53:34.440 --> 53:38.440
you need both invariance and statistical learning
53:38.440 --> 53:41.440
and they should work together.
53:41.440 --> 53:47.440
But also, I'm happy that we can formulate
53:47.440 --> 53:51.440
what is intelligence from that
53:51.440 --> 53:54.440
and to separate from technical part.
53:54.440 --> 53:56.440
And that is completely different.
53:56.440 --> 53:58.440
Absolutely.
53:58.440 --> 54:00.440
Well, Vladimir, thank you so much for talking today.
54:00.440 --> 54:01.440
Thank you.
54:01.440 --> 54:28.440
Thank you very much.