Shubham Gupta committed on
Commit
a3be5d0
1 Parent(s): 0b8c7b2

Add readme and files

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. README.md +3 -0
  2. vtt/episode_001_large.vtt +0 -0
  3. vtt/episode_001_small.vtt +0 -0
  4. vtt/episode_002_small.vtt +1910 -0
  5. vtt/episode_003_small.vtt +1118 -0
  6. vtt/episode_004_small.vtt +1109 -0
  7. vtt/episode_005_small.vtt +2654 -0
  8. vtt/episode_006_large.vtt +2603 -0
  9. vtt/episode_006_small.vtt +2051 -0
  10. vtt/episode_007_small.vtt +0 -0
  11. vtt/episode_008_small.vtt +1418 -0
  12. vtt/episode_009_small.vtt +2240 -0
  13. vtt/episode_010_small.vtt +1331 -0
  14. vtt/episode_011_small.vtt +3773 -0
  15. vtt/episode_012_large.vtt +0 -0
  16. vtt/episode_012_small.vtt +1937 -0
  17. vtt/episode_013_small.vtt +2570 -0
  18. vtt/episode_014_small.vtt +4133 -0
  19. vtt/episode_015_small.vtt +1826 -0
  20. vtt/episode_016_small.vtt +0 -0
  21. vtt/episode_017_small.vtt +0 -0
  22. vtt/episode_018_small.vtt +1871 -0
  23. vtt/episode_019_small.vtt +0 -0
  24. vtt/episode_020_small.vtt +0 -0
  25. vtt/episode_021_large.vtt +0 -0
  26. vtt/episode_021_small.vtt +0 -0
  27. vtt/episode_022_small.vtt +0 -0
  28. vtt/episode_023_small.vtt +2177 -0
  29. vtt/episode_024_small.vtt +2675 -0
  30. vtt/episode_025_small.vtt +0 -0
  31. vtt/episode_026_large.vtt +2534 -0
  32. vtt/episode_026_small.vtt +2567 -0
  33. vtt/episode_027_small.vtt +0 -0
  34. vtt/episode_028_large.vtt +2705 -0
  35. vtt/episode_028_small.vtt +2630 -0
  36. vtt/episode_029_large.vtt +0 -0
  37. vtt/episode_029_small.vtt +0 -0
  38. vtt/episode_030_small.vtt +3584 -0
  39. vtt/episode_031_small.vtt +0 -0
  40. vtt/episode_032_small.vtt +3392 -0
  41. vtt/episode_033_small.vtt +0 -0
  42. vtt/episode_034_small.vtt +1529 -0
  43. vtt/episode_035_small.vtt +0 -0
  44. vtt/episode_036_small.vtt +0 -0
  45. vtt/episode_037_large.vtt +3590 -0
  46. vtt/episode_037_small.vtt +3143 -0
  47. vtt/episode_038_large.vtt +0 -0
  48. vtt/episode_038_small.vtt +0 -0
  49. vtt/episode_039_small.vtt +1994 -0
  50. vtt/episode_040_large.vtt +0 -0
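
Note on the file layout: each episode in `vtt/` can ship a `_small` and/or `_large` caption variant (presumably the Whisper model size used to generate it; that mapping is an assumption, not stated in this commit). A minimal, illustrative sketch, assuming the repository has been cloned locally, for indexing the caption files by episode number and variant:

```python
# Illustrative sketch: index the vtt/ directory by episode number and model-size variant.
# Assumes the repository has been cloned so that vtt/*.vtt exists locally.
import re
from collections import defaultdict
from pathlib import Path

PATTERN = re.compile(r"episode_(\d{3})_(small|large)\.vtt$")

def index_captions(vtt_dir: str = "vtt") -> dict:
    episodes = defaultdict(dict)
    for path in Path(vtt_dir).glob("episode_*.vtt"):
        match = PATTERN.search(path.name)
        if match:
            episode, variant = match.groups()
            episodes[int(episode)][variant] = path
    return dict(episodes)

if __name__ == "__main__":
    captions = index_captions()
    # Prefer the large-model transcript when both variants exist.
    best = {ep: files.get("large", files.get("small")) for ep, files in captions.items()}
    print(f"{len(best)} episodes indexed")
```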
README.md ADDED
@@ -0,0 +1,3 @@
1
+ # Lexicap: Lex Fridman Podcast Whisper captions
2
+
3
+ These are transcripts of Lex Fridman Podcast episodes, generated using OpenAI Whisper. This dataset was created by Dr. Andrej Karpathy.
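
The README states the captions were generated with OpenAI Whisper. As an illustrative sketch only (not part of this commit; the audio file name below is hypothetical, and the assumption that `_small`/`_large` map to Whisper model sizes is mine), files in this format could be produced with the `openai-whisper` Python package roughly as follows. The committed files write timestamps as `MM:SS.mmm`, while this sketch uses the equally valid `HH:MM:SS.mmm` WebVTT form:

```python
# Illustrative sketch: regenerate a Lexicap-style VTT caption file with openai-whisper.
# The audio file name "episode_001.mp3" is hypothetical.
import whisper

def format_timestamp(seconds: float) -> str:
    # WebVTT timestamp, HH:MM:SS.mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def transcribe_to_vtt(audio_path: str, vtt_path: str, model_size: str = "small") -> None:
    model = whisper.load_model(model_size)   # e.g. "small" or "large"
    result = model.transcribe(audio_path)
    with open(vtt_path, "w", encoding="utf-8") as f:
        f.write("WEBVTT\n\n")
        for seg in result["segments"]:        # each segment carries start, end, text
            f.write(f"{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}\n")
            f.write(seg["text"].strip() + "\n\n")

if __name__ == "__main__":
    transcribe_to_vtt("episode_001.mp3", "vtt/episode_001_small.vtt", model_size="small")
```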
vtt/episode_001_large.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_001_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_002_small.vtt ADDED
@@ -0,0 +1,1910 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:06.000
4
+ As part of MIT course 6S099 on artificial general intelligence, I got a chance to sit down with
5
+
6
+ 00:06.000 --> 00:13.680
7
+ Kristof Koch, who is one of the seminal figures in neurobiology, neuroscience, and generally in
8
+
9
+ 00:13.680 --> 00:20.320
10
+ the study of consciousness. He is the president, the chief scientific officer of the Allen Institute
11
+
12
+ 00:20.320 --> 00:27.520
13
+ for Brain Science in Seattle. From 1986 to 2013, he was the professor at Caltech. Before that,
14
+
15
+ 00:27.520 --> 00:34.400
16
+ he was at MIT. He is extremely well cited, over a hundred thousand citations. His research,
17
+
18
+ 00:34.400 --> 00:40.080
19
+ his writing, his ideas have had big impact on the scientific community and the general public
20
+
21
+ 00:40.080 --> 00:44.560
22
+ in the way we think about consciousness, in the way we see ourselves as human beings.
23
+
24
+ 00:44.560 --> 00:49.120
25
+ He's the author of several books, The Quest for Consciousness and Your Biological Approach,
26
+
27
+ 00:49.120 --> 00:54.320
28
+ and a more recent book, Consciousness, Confessions of a Romantic Reductionist.
29
+
30
+ 00:54.320 --> 01:00.000
31
+ If you enjoy this conversation, this course, subscribe, click the little bell icon to make
32
+
33
+ 01:00.000 --> 01:05.360
34
+ sure you never miss a video. And in the comments, leave suggestions for any people you'd like to
35
+
36
+ 01:05.360 --> 01:10.480
37
+ see be part of the course or any ideas that you would like us to explore. Thanks very much,
38
+
39
+ 01:10.480 --> 01:16.320
40
+ and I hope you enjoy. Okay, before we delve into the beautiful mysteries of consciousness,
41
+
42
+ 01:16.320 --> 01:22.240
43
+ let's zoom out a little bit. And let me ask, do you think there's intelligent life out there in
44
+
45
+ 01:22.240 --> 01:27.920
46
+ the universe? Yes, I do believe so. We have no evidence of it, but I think the probabilities
47
+
48
+ 01:27.920 --> 01:33.440
49
+ are overwhelming in favor of it. Give me a universe where we have 10 to the 11 galaxies,
50
+
51
+ 01:33.440 --> 01:39.280
52
+ and each galaxy has between 10 to the 11, 10 to the 12 stars, and we know more stars have one or
53
+
54
+ 01:39.280 --> 01:47.360
55
+ more planets. So how does that make you feel? It still makes me feel special, because I have
56
+
57
+ 01:47.360 --> 01:54.480
58
+ experiences. I feel the world, I experience the world. And independent of whether there are the
59
+
60
+ 01:54.480 --> 02:00.480
61
+ creatures out there, I still feel the world and I have access to this world in this very strange,
62
+
63
+ 02:00.480 --> 02:07.840
64
+ compelling way. And that's the core of human existence. Now you said human, do you think
65
+
66
+ 02:07.840 --> 02:13.360
67
+ if those intelligent creatures are out there, do you think they experience their world?
68
+
69
+ 02:13.360 --> 02:18.080
70
+ Yes, they are evolved. If they are a product of natural evolution, as they would have to be,
71
+
72
+ 02:18.080 --> 02:22.400
73
+ they will also experience their own world. So consciousness isn't just a human, you're right,
74
+
75
+ 02:22.400 --> 02:28.320
76
+ it's much wider. It's probably, it may be spread across all of biology. We have, the only thing
77
+
78
+ 02:28.320 --> 02:33.040
79
+ that we have special is we can talk about it. Of course, not all people can talk about it.
80
+
81
+ 02:33.040 --> 02:38.320
82
+ Babies and little children can talk about it. Patients who have a stroke in the left
83
+
84
+ 02:38.320 --> 02:43.040
85
+ inferior frontal gyrus can talk about it. But most normal adult people can talk about it.
86
+
87
+ 02:43.040 --> 02:48.000
88
+ And so we think that makes us special compared to little monkeys or dogs or cats or mice or all
89
+
90
+ 02:48.000 --> 02:52.400
91
+ the other creatures that we share the planet with. But all the evidence seems to suggest
92
+
93
+ 02:52.400 --> 02:56.960
94
+ that they too experience the world. And so it's overwhelmingly likely that other aliens,
95
+
96
+ 02:56.960 --> 03:00.640
97
+ that aliens would also experience their world. Of course, differently, because they have a
98
+
99
+ 03:00.640 --> 03:04.000
100
+ different sensorium, they have different sensors, they have a very different environment.
101
+
102
+ 03:04.640 --> 03:11.360
103
+ But the fact that I would strongly suppose that they also have experiences. They feel pain and
104
+
105
+ 03:11.360 --> 03:17.600
106
+ pleasure and see in some sort of spectrum and hear and have all the other sensors.
107
+
108
+ 03:17.600 --> 03:21.040
109
+ Of course, their language, if they have one would be different. So we might not be able to
110
+
111
+ 03:21.040 --> 03:24.320
112
+ understand their poetry about the experiences that they have.
113
+
114
+ 03:24.320 --> 03:32.240
115
+ That's correct. Right. So in a talk, in a video, I've heard you mention Siputzo, a Daxhound that
116
+
117
+ 03:32.240 --> 03:37.360
118
+ you came up with, that you grew up with was part of your family when you were young. First of all,
119
+
120
+ 03:37.360 --> 03:46.320
121
+ you're technically a Midwestern boy. Technically. But after that, you traveled there on a bit,
122
+
123
+ 03:46.320 --> 03:50.800
124
+ hence a little bit of the accent. You talked about Siputzo, the Daxhound, having
125
+
126
+ 03:51.680 --> 03:57.440
127
+ these elements of humanness, of consciousness that you discovered. So I just wanted to ask,
128
+
129
+ 03:58.000 --> 04:03.520
130
+ can you look back in your childhood and remember when was the first time you realized you yourself,
131
+
132
+ 04:03.520 --> 04:09.600
133
+ sort of from a third person perspective, our conscious being, this idea of,
134
+
135
+ 04:11.280 --> 04:16.320
136
+ you know, stepping outside yourself and seeing there's something special going on here in my brain?
137
+
138
+ 04:17.840 --> 04:22.480
139
+ I can't really actually. It's a good question. I'm not sure I recall a discrete moment. I mean,
140
+
141
+ 04:22.480 --> 04:27.280
142
+ you take it for granted because that's the only world you know, right? The only world I know,
143
+
144
+ 04:27.280 --> 04:33.520
145
+ you know, is the world of seeing and hearing voices and touching and all the other things.
146
+
147
+ 04:33.520 --> 04:40.000
148
+ So it's only much later at early in my undergraduate days when I became, when I enrolled in physics
149
+
150
+ 04:40.000 --> 04:43.520
151
+ and in philosophy that I really thought about it and thought, well, this is really fundamentally
152
+
153
+ 04:43.520 --> 04:48.800
154
+ very, very mysterious. And there's nothing really in physics right now that explains this transition
155
+
156
+ 04:48.800 --> 04:54.560
157
+ from the physics of the brain to feelings. Where do the feelings come in? So you can look at the
158
+
159
+ 04:54.560 --> 04:58.800
160
+ foundational equation of quantum mechanics, general relativity, you can look at the period table of
161
+
162
+ 04:58.800 --> 05:05.360
163
+ the elements, you can look at the endless ATG seed chat in our genes and no way is consciousness.
164
+
165
+ 05:05.360 --> 05:11.120
166
+ Yet I wake up every morning to a world where I have experiences. And so that's the heart of the
167
+
168
+ 05:11.120 --> 05:19.040
169
+ ancient mind body problem. How do experiences get into the world? So what is consciousness?
170
+
171
+ 05:19.040 --> 05:25.680
172
+ Experience. Consciousness is any, any, any experience. Some people call it subjective
173
+
174
+ 05:25.680 --> 05:30.560
175
+ feelings, some people call it phenomenon, phenomenology, some people call it qualia,
176
+
177
+ 05:30.560 --> 05:34.480
178
+ their philosophy, but they all denote the same thing. It feels like something in the
179
+
180
+ 05:34.480 --> 05:40.640
181
+ famous word of the philosopher Thomas Nagel, it feels like something to be a bad or to be, you know,
182
+
183
+ 05:40.640 --> 05:47.440
184
+ an American or to be angry or to be sad or to be in love or to have pain.
185
+
186
+ 05:49.040 --> 05:54.480
187
+ And that is what experience is, any possible experience could be as mundane as just sitting
188
+
189
+ 05:54.480 --> 05:59.760
190
+ here in a chair could be as exalted as, you know, having a mystical moment, you know,
191
+
192
+ 05:59.760 --> 06:03.200
193
+ in deep meditation, those are just different forms of experiences.
194
+
195
+ 06:03.200 --> 06:10.080
196
+ Experience. So if you were to sit down with maybe the next skip a couple generations of
197
+
198
+ 06:10.080 --> 06:16.480
199
+ IBM Watson, something that won jeopardy, what is the gap? I guess the question is between Watson,
200
+
201
+ 06:18.000 --> 06:26.000
202
+ that might be much smarter than you than us than all any human alive, but may not have experience.
203
+
204
+ 06:26.000 --> 06:31.920
205
+ What is the gap? Well, so that's a big, big question that's occupied people for the last,
206
+
207
+ 06:32.640 --> 06:38.800
208
+ certainly last 50 years since we, you know, since the advent of birth of, of computers.
209
+
210
+ 06:38.800 --> 06:42.720
211
+ That's a question on Turing tried to answer. And of course, he did it in this indirect way
212
+
213
+ 06:42.720 --> 06:48.480
214
+ by proposing a test, an operational test. So, but that's not really that's, you know,
215
+
216
+ 06:48.480 --> 06:52.480
217
+ he tried to get it. What does it mean for a person to think? And then he had this test,
218
+
219
+ 06:52.480 --> 06:56.400
220
+ right? You lock him away, and then you have a communication with them. And then you try to,
221
+
222
+ 06:56.400 --> 07:00.400
223
+ to guess after a while whether that is a person or whether it's a computer system.
224
+
225
+ 07:00.400 --> 07:05.680
226
+ There's no question that now or very soon, you know, Alexa or Siri or, you know, Google now
227
+
228
+ 07:05.680 --> 07:10.720
229
+ will pass this test, right? And you can game it. But, you know, ultimately, certainly in your
230
+
231
+ 07:10.720 --> 07:15.440
232
+ generation, there will be machines that will speak with complete points that will remember
233
+
234
+ 07:15.440 --> 07:20.320
235
+ everything you ever said. They'll remember every email you ever had, like, like Samantha remember
236
+
237
+ 07:20.320 --> 07:25.520
238
+ in the movie, her snow question is going to happen. But of course, the key question is,
239
+
240
+ 07:25.520 --> 07:31.120
241
+ does it feel like anything to be Samantha in the movie? Does it feel like anything to be Watson?
242
+
243
+ 07:31.120 --> 07:38.240
244
+ And there one has to very, very strongly think there are two different concepts here that we
245
+
246
+ 07:38.240 --> 07:44.400
247
+ co mingle. There is a concept of intelligence, natural or artificial, and there is a concept of
248
+
249
+ 07:44.400 --> 07:48.880
250
+ consciousness of experience, natural or artificial. Those are very, very different things.
251
+
252
+ 07:49.520 --> 07:55.520
253
+ Now, historically, we associate consciousness with intelligence. Why? Because we live in a world
254
+
255
+ 07:55.520 --> 08:01.520
256
+ living aside computers of natural selection, where we're surrounded by creatures, either our own kin
257
+
258
+ 08:01.520 --> 08:06.880
259
+ that are less or more intelligent, or we go across species, some some are more adapted to
260
+
261
+ 08:06.880 --> 08:12.000
262
+ particular environment, others are less adapted, whether it's a whale or dog, or you go talk about
263
+
264
+ 08:12.000 --> 08:17.680
265
+ a permitium or a little worm, all right. And we see the complexity of the nervous system goes from
266
+
267
+ 08:17.680 --> 08:24.400
268
+ one cell to to a specialized cells to a worm that has three net that has 30% of its cells are nerve
269
+
270
+ 08:24.400 --> 08:29.920
271
+ cells, to creature like us or like a blue whale that has 100 billion even more nerve cells.
272
+
273
+ 08:29.920 --> 08:35.280
274
+ And so based on behavioral evidence and based on the underlying neuroscience, we believe that
275
+
276
+ 08:35.840 --> 08:41.440
277
+ as these creatures become more complex, they are better adapted to to their particular ecological
278
+
279
+ 08:41.440 --> 08:46.960
280
+ niche. And they become more conscious, partly because their brain grows. And we believe
281
+
282
+ 08:46.960 --> 08:52.080
283
+ consciousness unlike the ancient ancient people thought most almost every culture thought that
284
+
285
+ 08:52.080 --> 08:56.800
286
+ consciousness with intelligence has to do with your heart. And you still to see that today,
287
+
288
+ 08:56.800 --> 09:00.800
289
+ you see honey, I love you with all my heart. Yes. But what you should actually say is they
290
+
291
+ 09:00.800 --> 09:05.520
292
+ know honey, I love you with all my lateral hypothalamus. And for Valentine's Day, you should
293
+
294
+ 09:05.520 --> 09:11.040
295
+ give your sweetheart, you know, hypothalamus in piece of chocolate, not a heart shaped chocolate,
296
+
297
+ 09:11.040 --> 09:15.280
298
+ right. And so we still have this language, but now we believe it's a brain. And so we see brains
299
+
300
+ 09:15.280 --> 09:19.680
301
+ of different complexity. And we think, well, they have different levels of consciousness,
302
+
303
+ 09:19.680 --> 09:28.160
304
+ they're capable of different experiences. But now we confront a world where we know where we're
305
+
306
+ 09:28.160 --> 09:34.960
307
+ beginning to engineer intelligence. And it's radical unclear whether the intelligence we're
308
+
309
+ 09:34.960 --> 09:40.160
310
+ engineering has anything to do with consciousness and whether it can experience anything. Because
311
+
312
+ 09:40.160 --> 09:45.520
313
+ fundamentally, what's the difference? Intelligence is about function. Intelligence, no matter exactly
314
+
315
+ 09:45.520 --> 09:50.400
316
+ how you define it sort of adaptation to new environments, being able to learn and quickly
317
+
318
+ 09:50.400 --> 09:54.400
319
+ understand, you know, the setup of this and what's going on and who are the actors and what's
320
+
321
+ 09:54.400 --> 10:00.800
322
+ going to happen next. That's all about function. Consciousness is not about function. Consciousness
323
+
324
+ 10:00.800 --> 10:08.080
325
+ is about being. It's in some sense much fundamental. You can see folks, you can see this in several
326
+
327
+ 10:08.080 --> 10:13.600
328
+ cases, you can see it, for instance, in the case of the clinic, when you're dealing with patients
329
+
330
+ 10:13.600 --> 10:19.200
331
+ who are, let's say, had a stroke or had wear and traffic accident, etc. They're pretty much
332
+
333
+ 10:19.200 --> 10:24.960
334
+ immobile. Terri Schievo, you may have heard historically, she was a person here in the
335
+
336
+ 10:24.960 --> 10:29.680
337
+ 90s in Florida. Her heart stood still. She was reanimated. Then for the next 14 years,
338
+
339
+ 10:29.680 --> 10:33.600
340
+ she was what's called in a vegetative state. So there's thousands of people in the vegetative
341
+
342
+ 10:33.600 --> 10:37.840
343
+ state. So they're, you know, they're, you know, they're like this. Occasionally, they open their
344
+
345
+ 10:37.840 --> 10:42.160
346
+ eyes for two, three, four, five, six, eight hours and then close their eyes. They have sleep wake cycle.
347
+
348
+ 10:42.160 --> 10:48.720
349
+ Occasionally, they have behaviors. They do like, you know, they, but there's no way that you can
350
+
351
+ 10:48.720 --> 10:53.200
352
+ establish a lawful relationship between what you say, or the doctor says, or the mom says,
353
+
354
+ 10:53.200 --> 11:00.000
355
+ and what the patient does. Right. So, so the, so the, there isn't any behavior yet in some of
356
+
357
+ 11:00.000 --> 11:06.480
358
+ these people, there is still experience. You can, you can design and build brain machine interfaces
359
+
360
+ 11:06.480 --> 11:10.400
361
+ where you can see there's, they still explain something. And of course, that these cases
362
+
363
+ 11:10.400 --> 11:15.040
364
+ are blocked in state. There's this famous book called the, the, the diving bell in the butterfly
365
+
366
+ 11:15.040 --> 11:20.160
367
+ where you had an editor, a French editor, he had a stroke in the, in the brainstem, unable to move
368
+
369
+ 11:20.160 --> 11:25.920
370
+ except his vertical eyes, eye movement. He could just move his eyes up and down. You need dictated
371
+
372
+ 11:25.920 --> 11:31.760
373
+ in an entire book. And some people even lose this at the end. And all the evidence seems to suggest
374
+
375
+ 11:31.760 --> 11:37.120
376
+ that they're still in there. In this case, you have no behavior. You have consciousness.
377
+
378
+ 11:37.120 --> 11:42.080
379
+ Second case is tonight, like all of us, you're going to go to sleep, close your eyes, you go to
380
+
381
+ 11:42.080 --> 11:46.720
382
+ sleep. You will wake up inside your sleeping body and you will have conscious experiences.
383
+
384
+ 11:47.440 --> 11:52.000
385
+ They're different from everyday experience. You might fly. You might not be surprised that you're
386
+
387
+ 11:52.000 --> 11:57.600
388
+ flying. You might meet a long dead pet, childhood dog, and you're not surprised that you're meeting
389
+
390
+ 11:57.600 --> 12:01.280
391
+ them, you know, but you have conscious experience of love, of hate, you know, they can be very
392
+
393
+ 12:01.280 --> 12:07.680
394
+ emotional. Your body doing this state, typically to them, state sends an active signal to your
395
+
396
+ 12:07.680 --> 12:13.120
397
+ motor neurons to paralyze you. It's called atonia, right? Because if you don't have that, like some
398
+
399
+ 12:13.120 --> 12:17.200
400
+ patients, what do you do? You act out your dreams, you get for example, rem behavioral disorder,
401
+
402
+ 12:17.200 --> 12:23.440
403
+ which is the bad, which is bad juju to get. Okay. Third case is pure experience. So I recently had
404
+
405
+ 12:23.440 --> 12:29.600
406
+ this, what some people call a mystical experience, I went to Singapore and went into a flotation
407
+
408
+ 12:29.600 --> 12:36.960
409
+ tank. Yeah. All right. So this is a big tub filled with water, that's body temperature and absent
410
+
411
+ 12:36.960 --> 12:42.400
412
+ salt. You strip completely naked, you lie inside of it, you close the, the, the darkness, complete
413
+
414
+ 12:42.400 --> 12:48.480
415
+ darkness, soundproof. So very quickly, you become bodyless because you're floating and you're naked.
416
+
417
+ 12:48.480 --> 12:53.600
418
+ You have no rings, no watch, no nothing. You don't feel your body anymore. It's no sound,
419
+
420
+ 12:53.600 --> 13:00.320
421
+ soundless. There's no photon, a sightless, timeless, because after a while, early on,
422
+
423
+ 13:00.320 --> 13:05.360
424
+ you actually hear your heart, but then that you, you sort of adapt to that and then sort of the
425
+
426
+ 13:05.360 --> 13:11.360
427
+ passage of time ceases. And if you train yourself like in a, in a meditation, not to think early
428
+
429
+ 13:11.360 --> 13:15.200
430
+ on, you think a lot, it's a little bit spooky. You feel somewhat uncomfortable or you think,
431
+
432
+ 13:15.200 --> 13:20.320
433
+ well, I'm going to get bored. But if you try to not to think actively, you become mindless.
434
+
435
+ 13:20.320 --> 13:27.520
436
+ There you are, bodyless, timeless, soundless, sightless, mindless, but you're in a conscious
437
+
438
+ 13:27.520 --> 13:32.560
439
+ experience. You're not asleep. You're not asleep. You are, you are being of pure,
440
+
441
+ 13:32.560 --> 13:36.400
442
+ you're pure being. There isn't any function. You aren't doing any computation. You're not
443
+
444
+ 13:36.400 --> 13:40.400
445
+ remembering, you're not projecting, you're not planning, yet you are fully conscious.
446
+
447
+ 13:40.400 --> 13:44.400
448
+ You're fully conscious. There's something going on there. It could be just a side effect. So what
449
+
450
+ 13:44.400 --> 13:52.480
451
+ is the, the, um, you mean epiphenomena? So what's the select, meaning why, uh, why, what, what,
452
+
453
+ 13:52.480 --> 13:59.200
454
+ what is the function of you being able to lay in this, uh, sense sensory free deprivation tank
455
+
456
+ 13:59.200 --> 14:04.640
457
+ and still have a conscious experience? Evolutionary. Obviously we didn't evolve with floatation
458
+
459
+ 14:04.640 --> 14:10.480
460
+ tanks in our, in our environment. I mean, so biology is not totally bad at asking why question to
461
+
462
+ 14:10.480 --> 14:14.560
463
+ leonormical question. Why do we have two eyes? Why don't we have four eyes like some teachers or three
464
+
465
+ 14:14.560 --> 14:19.360
466
+ eyes or something? Well, no, there's probably, there is a function to that, but it's, we're not
467
+
468
+ 14:19.360 --> 14:23.200
469
+ very good at answering those questions. We can speculate. And Leslie, where biology is very,
470
+
471
+ 14:23.200 --> 14:26.800
472
+ or science is very good about mechanistic question. Why is there charge in the universe,
473
+
474
+ 14:26.800 --> 14:31.040
475
+ right? We find a certain universe where there are positive and negative charges. Why? Why does
476
+
477
+ 14:31.040 --> 14:36.320
478
+ quantum mechanics hold? You know, why, why, why doesn't some other theory hold quantum mechanics
479
+
480
+ 14:36.320 --> 14:41.040
481
+ hold in our universe? It's very unclear why. So telenomical question, why questions are difficult
482
+
483
+ 14:41.040 --> 14:46.480
484
+ to answer. Clearly there's some relationship between complexity, brain processing power and
485
+
486
+ 14:46.480 --> 14:53.280
487
+ consciousness. But however, in these cases, in the three examples I gave, one is an everyday
488
+
489
+ 14:53.280 --> 14:57.920
490
+ experience at night. The other one is a trauma and third one is in principle, you can, everybody
491
+
492
+ 14:57.920 --> 15:04.080
493
+ can have these sort of mystical experiences. You have a dissociation of function from, of
494
+
495
+ 15:04.080 --> 15:12.320
496
+ intelligence from, from conscious consciousness. You caught me asking a white question. Let me
497
+
498
+ 15:12.320 --> 15:17.120
499
+ ask a question that's not a white question. You're giving a talk later today on the touring test
500
+
501
+ 15:17.760 --> 15:23.200
502
+ for intelligence and consciousness drawn lines between the two. So is there a scientific way
503
+
504
+ 15:23.200 --> 15:30.560
505
+ to say there's consciousness present in this entity or not? And to anticipate your answer,
506
+
507
+ 15:30.560 --> 15:35.840
508
+ because you, you will also, there's a neurobiological answer. So we can test a human brain. But if you
509
+
510
+ 15:35.840 --> 15:42.240
511
+ take a machine brain that you don't know tests for yet, how would you even begin to approach
512
+
513
+ 15:42.960 --> 15:47.760
514
+ a test if there's consciousness present in this thing? Okay, that's a really good question. So
515
+
516
+ 15:47.760 --> 15:54.000
517
+ let me take in two steps. So as you point out for, for, for humans, let's just stick with humans,
518
+
519
+ 15:54.000 --> 15:58.640
520
+ there's now a test called a zap and zip is a procedure where you ping the brain using
521
+
522
+ 15:58.640 --> 16:04.160
523
+ transcranial magnetic stimulation. You look at the electrical reverberations essentially using EG
524
+
525
+ 16:04.960 --> 16:08.800
526
+ and then you can measure the complexity of this brain response. And you can do this in awake
527
+
528
+ 16:08.800 --> 16:13.680
529
+ people in asleep, normal people, you can do it in awake people and then anesthetize them. You
530
+
531
+ 16:13.680 --> 16:20.240
532
+ can do it in patients and it has 100% accuracy that in all those cases, when you're clear,
533
+
534
+ 16:20.240 --> 16:24.080
535
+ the patient or the person is either conscious or unconscious, the complexity is either high or
536
+
537
+ 16:24.080 --> 16:28.320
538
+ low. And then you can adopt these techniques to similar creatures like monkeys and dogs and,
539
+
540
+ 16:28.320 --> 16:34.400
541
+ and mice that have very similar brains. Now, of course, you point out that may not help you
542
+
543
+ 16:34.400 --> 16:39.040
544
+ because we don't have a cortex, you know, and if I send a magnetic pulse into my iPhone or my
545
+
546
+ 16:39.040 --> 16:43.600
547
+ computer, it's probably going to break something. So we don't have that. So what we need ultimately,
548
+
549
+ 16:45.120 --> 16:50.080
550
+ we need a theory of consciousness. We can't just rely on our intuition. Our intuition is, well,
551
+
552
+ 16:50.080 --> 16:55.040
553
+ yeah, if somebody talks, they're conscious. However, then they're all these page children,
554
+
555
+ 16:55.040 --> 17:00.560
556
+ babies don't talk, right? But we believe that that the babies also have conscious experiences,
557
+
558
+ 17:00.560 --> 17:05.360
559
+ right? And then they're all these patients I mentioned. And they don't talk when you dream,
560
+
561
+ 17:05.360 --> 17:10.000
562
+ you can't talk because you're paralyzed. So, so what we ultimately need, we can't just rely
563
+
564
+ 17:10.000 --> 17:15.280
565
+ on our intuition, we need a theory of consciousness that tells us what is it about a piece of matter,
566
+
567
+ 17:15.280 --> 17:19.600
568
+ what is it about a piece of highly excitable matter like the brain or like a computer that
569
+
570
+ 17:19.600 --> 17:24.160
571
+ gives rise to conscious experience. We all believe none of us believe anymore in the old story,
572
+
573
+ 17:24.160 --> 17:28.160
574
+ it's a soul, right? That used to be the most common explanation that most people accept that
575
+
576
+ 17:28.160 --> 17:33.440
577
+ it's still a lot of people today believe, well, there's God in doubt, only us with a special
578
+
579
+ 17:33.440 --> 17:38.240
580
+ thing that animals don't have, René Descartes famously said, a dog, if you hit it with your
581
+
582
+ 17:38.240 --> 17:42.800
583
+ carriage, may yell, may cry, but it doesn't have this special thing. It doesn't have the magic,
584
+
585
+ 17:42.800 --> 17:47.920
586
+ the magic sauce. So yeah, it doesn't have red corketons, the soul. Now we believe that isn't
587
+
588
+ 17:47.920 --> 17:54.000
589
+ the case anymore. So what is the difference between brains and, and these guys, silicon.
590
+
591
+ 17:55.200 --> 18:01.120
592
+ And in particular, once their behavior matches. So if you have Siri of Alexa and 20 years from now
593
+
594
+ 18:01.120 --> 18:06.400
595
+ that she can talk just as good as any possible human, what grounds do you have to say she's not
596
+
597
+ 18:06.400 --> 18:11.280
598
+ conscious? In particular, if she says, it's of course she will. Well, of course I'm conscious.
599
+
600
+ 18:11.280 --> 18:15.600
601
+ You ask, how are you doing? And she'll say, well, you know, they'll generate some way to,
602
+
603
+ 18:15.600 --> 18:21.840
604
+ yeah, yeah, exactly. She'll behave like a, like a person. Now there's several differences. One is,
605
+
606
+ 18:23.280 --> 18:29.120
607
+ so this relates to the problem, the very heart. Why is consciousness a heart problem? It's because
608
+
609
+ 18:29.120 --> 18:35.280
610
+ it's subjective, right? Only I have it, for only I know, I have direct experience of my own
611
+
612
+ 18:35.280 --> 18:40.240
613
+ consciousness. I don't have experience, your consciousness. Now I assume as a sort of a
614
+
615
+ 18:40.240 --> 18:43.680
616
+ Bayesian person who believes in probability theory and all of that, you know, I can do,
617
+
618
+ 18:43.680 --> 18:48.160
619
+ I can do an abduction to the, to the best available facts. I deduce your brain is very
620
+
621
+ 18:48.160 --> 18:51.840
622
+ similar to mine. If I put you in a scanner, your brain is roughly going to behave the same with
623
+
624
+ 18:51.840 --> 18:56.240
625
+ I do. If, if, if, you know, if I give you this music and ask you, how does it taste?
626
+
627
+ 18:56.240 --> 19:00.240
628
+ Do you tell me things that, you know, that, that I would also say more or less, right?
629
+
630
+ 19:00.240 --> 19:03.920
631
+ So I infer based on all of that, that you're conscious. Now with Siri, I can't do that. So
632
+
633
+ 19:03.920 --> 19:09.680
634
+ there I really need a theory that tells me what is it about, about any system this or this that
635
+
636
+ 19:09.680 --> 19:14.880
637
+ makes it conscious. We have such a theory. Yes. So the, the integrated information theory.
638
+
639
+ 19:15.520 --> 19:19.360
640
+ But let me first, maybe it's introduction for people who are not familiar to car.
641
+
642
+ 19:20.640 --> 19:28.400
643
+ Can you, you talk a lot about panpsychism. Can you describe what physicalism versus dualism
644
+
645
+ 19:29.040 --> 19:35.280
646
+ this you mentioned the soul? What, what is the history of that idea? What is the idea of panpsychism?
647
+
648
+ 19:35.280 --> 19:44.560
649
+ Well, no, the debate really out of which panpsychism can emerge of, of, of dualism versus
650
+
651
+ 19:45.200 --> 19:49.200
652
+ physicalism. Or do you not see panpsychism as fitting into that?
653
+
654
+ 19:49.760 --> 19:53.840
655
+ No, you can argue there's some, well, okay, so let's step back. So panpsychism is a very
656
+
657
+ 19:53.840 --> 19:58.560
658
+ ancient belief that's been around. I mean, Plato and us totally talks about it.
659
+
660
+ 19:59.200 --> 20:05.040
661
+ Modern philosophers talk about it. Of course, in Buddhism, the idea is very prevalent that
662
+
663
+ 20:05.040 --> 20:08.960
664
+ I mean, there are different versions of it. One version says everything is in salt,
665
+
666
+ 20:08.960 --> 20:13.680
667
+ everything rocks and stones and dogs and people and forest and iPhones all have a soul.
668
+
669
+ 20:14.240 --> 20:20.240
670
+ All matter is in soul. That's sort of one version. Another version is that all biology,
671
+
672
+ 20:20.240 --> 20:26.080
673
+ all creatures, smaller, large from a single cell to a giant sequoia tree feel like something. That's
674
+
675
+ 20:26.080 --> 20:30.800
676
+ one I think is somewhat more realistic. So the different versions of what do you mean by feel
677
+
678
+ 20:30.800 --> 20:36.240
679
+ like something? Well, have, have feeling, have some kind of experience. It may well be possible
680
+
681
+ 20:36.240 --> 20:41.120
682
+ that it feels like something to be a paramedium. I think it's pretty likely it feels like something
683
+
684
+ 20:41.120 --> 20:48.320
685
+ to be a bee or a mouse or a dog. Sure. So, okay. So, so that you can say that's also,
686
+
687
+ 20:48.320 --> 20:53.920
688
+ so panpsychism is very broad, right? And you can, so some people, for example,
689
+
690
+ 20:53.920 --> 21:00.720
691
+ Bertrand Russell, try to advocate this, this idea is called Rasellian monism that, that
692
+
693
+ 21:00.720 --> 21:06.640
694
+ panpsychism is really physics viewed from the inside. So the idea is that physics is very
695
+
696
+ 21:06.640 --> 21:13.120
697
+ good at describing relationship among objects like charges or like gravity, right? You know,
698
+
699
+ 21:13.120 --> 21:17.360
700
+ describe the relationship between curvature and mass distribution. Okay. That's the relationship
701
+
702
+ 21:17.360 --> 21:21.440
703
+ among things. Physics doesn't really describe the ultimate reality itself. It's just
704
+
705
+ 21:21.440 --> 21:27.280
706
+ relationship among, you know, quarks or all these other stuff from like a third person observer.
707
+
708
+ 21:27.280 --> 21:33.440
709
+ Yes. Yes. And consciousness is what physics feels from the inside to my conscious experience.
710
+
711
+ 21:33.440 --> 21:37.760
712
+ It's the way the physics of my brain, particular my cortex feels from the inside.
713
+
714
+ 21:38.400 --> 21:42.400
715
+ And so if you are a paramedium, you got to remember, you say paramedium, well, that's a
716
+
717
+ 21:42.400 --> 21:48.960
718
+ pretty dumb creature. It is, but it has already a billion different molecules, probably, you know,
719
+
720
+ 21:48.960 --> 21:54.160
721
+ 5,000 different proteins assembled in a highly, highly complex system that no single person,
722
+
723
+ 21:54.160 --> 21:58.880
724
+ no computer system so far on this planet has ever managed to accurately simulate.
725
+
726
+ 21:58.880 --> 22:04.240
727
+ It's complexity vastly escapes us. Yes. And it may well be that that little thing feels like a
728
+
729
+ 22:04.240 --> 22:08.160
730
+ tiny bit. Now, it doesn't have a voice in the head like me. It doesn't have expectations.
731
+
732
+ 22:08.160 --> 22:12.400
733
+ You know, it doesn't have all that complex things, but it may well feel like something.
734
+
735
+ 22:12.400 --> 22:17.520
736
+ Yeah. So this is really interesting. Can we draw some lines and maybe try to understand
737
+
738
+ 22:17.520 --> 22:24.560
739
+ the difference between life, intelligence and consciousness? How do you see all of those?
740
+
741
+ 22:25.280 --> 22:31.120
742
+ If you have to define what is a living thing, what is a conscious thing and what is an intelligent
743
+
744
+ 22:31.120 --> 22:35.920
745
+ thing? Do those intermix for you or are they totally separate? Okay. So A, that's a question
746
+
747
+ 22:35.920 --> 22:40.000
748
+ that we don't have a full answer. Right. A lot of the stuff we're talking about today
749
+
750
+ 22:40.000 --> 22:44.240
751
+ is full of mysteries and fascinating ones, right? Well, you can go to Aristotle,
752
+
753
+ 22:44.240 --> 22:48.480
754
+ who's probably the most important scientist and philosopher who's ever lived in certainly
755
+
756
+ 22:48.480 --> 22:53.280
757
+ in Western culture. He had this idea. It's called hylomorphism. It's quite popular these days
758
+
759
+ 22:53.280 --> 22:58.080
760
+ that there are different forms of soul. The soul is really the form of something. He says,
761
+
762
+ 22:58.080 --> 23:02.480
763
+ all biological creatures have a vegetative soul. That's life principle. Today we think we
764
+
765
+ 23:02.480 --> 23:07.680
766
+ understand something more than this biochemistry in nonlinear thermodynamics. Right. Then he says
767
+
768
+ 23:07.680 --> 23:13.840
769
+ they have a sensitive soul. Only animals and humans have also a sensitive soul or a
770
+
771
+ 23:13.840 --> 23:18.960
772
+ petitive soul. They can see, they can smell and they have drives. They want to reproduce,
773
+
774
+ 23:18.960 --> 23:24.080
775
+ they want to eat, etc. Then only humans have what he called a rational soul.
776
+
777
+ 23:25.040 --> 23:29.840
778
+ Okay. Right. And that idea that made it into Christendom and then the rational soul is the one
779
+
780
+ 23:29.840 --> 23:34.240
781
+ that lives forever. He was very unclear. He wasn't really... I mean, different readings of Aristotle
782
+
783
+ 23:34.240 --> 23:39.040
784
+ give different... Did he believe that rational soul was immortal or not? I probably think he
785
+
786
+ 23:39.040 --> 23:43.040
787
+ didn't. But then, of course, that made it through play to into Christianity and then this soul
788
+
789
+ 23:43.040 --> 23:49.840
790
+ became immortal and then became the connection to God. Now, so you asked me, essentially,
791
+
792
+ 23:49.840 --> 23:56.240
793
+ what is our modern conception of these three... Aristotle would have called them different forms.
794
+
795
+ 23:56.240 --> 24:00.160
796
+ Life, we think we know something about it, at least life on this planet. Right. Although,
797
+
798
+ 24:00.160 --> 24:05.120
799
+ we don't understand how to originate it, but it's been difficult to rigorously pin down.
800
+
801
+ 24:05.120 --> 24:10.320
802
+ You see this in modern definitions of death. It's in fact, right now there's a conference
803
+
804
+ 24:10.320 --> 24:16.000
805
+ ongoing, again, that tries to define legally and medically what is death. It used to be very
806
+
807
+ 24:16.000 --> 24:19.920
808
+ simple. Death is, you stop breathing, your heart stops beating, you're dead. Right?
809
+
810
+ 24:19.920 --> 24:24.160
811
+ Totally unconventional. If you're unsure, you wait another 10 minutes. If the patient doesn't
812
+
813
+ 24:24.160 --> 24:28.640
814
+ breathe, you know, he's dead. Well, now we have ventilators, we have pacemakers. So,
815
+
816
+ 24:28.640 --> 24:33.360
817
+ it's much more difficult to define what death is. Typically, death is defined as the end of life
818
+
819
+ 24:33.360 --> 24:38.160
820
+ and life is defined before death. Thank you for that. Okay. So, we don't have really very good
821
+
822
+ 24:38.160 --> 24:42.640
823
+ definitions. Intelligence, we don't have a rigorous definition. We know something how to
824
+
825
+ 24:42.640 --> 24:49.920
826
+ measure. It's called IQ or G factors. Right. And we're beginning to build it in a narrow sense.
827
+
828
+ 24:49.920 --> 24:56.560
829
+ Right. Like, go AlphaGo and Watson and, you know, Google cars and Uber cars and all of that.
830
+
831
+ 24:56.560 --> 25:00.960
832
+ That's still narrow AI. And some people are thinking about artificial general intelligence.
833
+
834
+ 25:00.960 --> 25:05.120
835
+ But roughly, as we said before, it's something to do with the ability to learn and to adapt
836
+
837
+ 25:05.120 --> 25:10.160
838
+ to new environments. But that is, as I said, also its radical difference from experience.
839
+
840
+ 25:10.800 --> 25:16.480
841
+ And it's very unclear if you build a machine that has AGI, it's not at all a priori. It's
842
+
843
+ 25:16.480 --> 25:20.560
844
+ not at all clear that this machine will have consciousness. It may or may not.
845
+
846
+ 25:20.560 --> 25:25.280
847
+ So, let's ask it the other way. Do you think if you were to try to build an artificial general
848
+
849
+ 25:25.280 --> 25:31.040
850
+ intelligence system, do you think figuring out how to build artificial consciousness
851
+
852
+ 25:31.040 --> 25:39.360
853
+ would help you get to an AGI? Or put another way, do you think intelligent requires consciousness?
854
+
855
+ 25:40.320 --> 25:46.240
856
+ In human, it goes hand in hand. In human, or I think in biology, consciousness, intelligence
857
+
858
+ 25:46.240 --> 25:52.640
859
+ goes hand in hand. Quay is illusion because the brain evolved to be highly complex, complexity
860
+
861
+ 25:52.640 --> 25:58.160
862
+ via the theory integrated information theory is sort of ultimately is what is closely tied to
863
+
864
+ 25:58.160 --> 26:04.000
865
+ consciousness. Ultimately, it's causal power upon itself. And so, in evolved systems, they go
866
+
867
+ 26:04.000 --> 26:09.120
868
+ together. In artificial systems, particularly in digital machines, they do not go together.
869
+
870
+ 26:09.120 --> 26:16.800
871
+ And if you ask me point blank, is Alexa 20.0 in the year 2040, once she can easily pass every
872
+
873
+ 26:16.800 --> 26:21.600
874
+ Turing test that she conscious? No. Even if she claims she's conscious. In fact, you could even
875
+
876
+ 26:21.600 --> 26:25.840
877
+ do a more radical version of this thought experiment. We can build a computer simulation
878
+
879
+ 26:25.840 --> 26:30.320
880
+ of the human brain. You know what Henry Markham in the blue brain project or the human brain
881
+
882
+ 26:30.320 --> 26:34.240
883
+ project in Switzerland is trying to do. Let's grant them all the success. So in 10 years,
884
+
885
+ 26:34.240 --> 26:38.560
886
+ we have this perfect simulation of the human brain, every neuron is simulated. And it has
887
+
888
+ 26:38.560 --> 26:43.520
889
+ alarmics, and it has motor neurons, it has a blockers area. And of course, they'll talk and
890
+
891
+ 26:43.520 --> 26:48.480
892
+ they'll say, Hi, I just woken up. I feel great. Okay, even that computer simulation that can in
893
+
894
+ 26:48.480 --> 26:53.840
895
+ principle map onto your brain will not be conscious. Why? Because it simulates, it's a difference
896
+
897
+ 26:53.840 --> 26:59.280
898
+ between the simulated and the real. So it simulates the behavior associated with consciousness. It
899
+
900
+ 26:59.280 --> 27:03.920
901
+ might be, it will, if it's done properly, will have all the intelligence that that particular
902
+
903
+ 27:03.920 --> 27:09.760
904
+ person that simulating has. But simulating intelligence is not the same as having conscious
905
+
906
+ 27:09.760 --> 27:14.240
907
+ experiences. And I give you a really nice metaphor that engineers and physicists typically get.
908
+
909
+ 27:15.120 --> 27:20.000
910
+ I can write down Einstein's field equation nine or 10 equations that describe the link in general
911
+
912
+ 27:20.000 --> 27:27.600
913
+ relativity between curvature and mass. I can do that. I can run this on my laptop to predict that
914
+
915
+ 27:27.600 --> 27:34.960
916
+ the center, the black hole at the center of our galaxy will be so massive that it will twist space
917
+
918
+ 27:34.960 --> 27:40.240
919
+ time around it so no light can escape. It's a black hole, right? But funny, have you ever wondered
920
+
921
+ 27:40.240 --> 27:47.040
922
+ why doesn't this computer simulation suck me in? Right? It simulates gravity, but it doesn't have
923
+
924
+ 27:47.040 --> 27:52.720
925
+ the causal power of gravity. That's a huge difference. So it's a difference between the real
926
+
927
+ 27:52.720 --> 27:57.360
928
+ and the simulated, just like it doesn't get wet inside a computer when the computer runs code
929
+
930
+ 27:57.360 --> 28:03.360
931
+ that simulates a weather storm. And so in order to have to have artificial consciousness, you have
932
+
933
+ 28:03.360 --> 28:08.720
934
+ to give it the same causal power as the human brain. You have to build so called a neuromorphic
935
+
936
+ 28:08.720 --> 28:15.040
937
+ machine that has hardware that is very similar to the human brain, not a digital clock for
938
+
939
+ 28:15.040 --> 28:22.480
940
+ Neumann computer. So that's just to clarify, though, you think that consciousness is not required
941
+
942
+ 28:22.480 --> 28:29.360
943
+ to create human level intelligence. It seems to accompany in the human brain, but for machine
944
+
945
+ 28:29.360 --> 28:37.360
946
+ not. That's correct. So maybe just because this is AGI, let's dig in a little bit about what we
947
+
948
+ 28:37.360 --> 28:44.000
949
+ mean by intelligence. So one thing is the g factor of these kind of IQ tests of intelligence.
950
+
951
+ 28:44.000 --> 28:51.520
952
+ But I think if you maybe another way to say so in 2040, 2050, people will have Siri that is
953
+
954
+ 28:52.320 --> 28:57.600
955
+ just really impressive. Do you think people will say Siri is intelligent?
956
+
957
+ 28:57.600 --> 29:04.080
958
+ Yes. Intelligence is this amorphous thing. So to be intelligent, it seems like you have to have
959
+
960
+ 29:04.080 --> 29:09.520
961
+ some kind of connections with other human beings in the sense that you have to impress them with
962
+
963
+ 29:09.520 --> 29:17.200
964
+ your intelligence. And there feels you have to somehow operate in this world full of humans.
965
+
966
+ 29:17.200 --> 29:22.240
967
+ And for that, there feels like there has to be something like consciousness. So you think you
968
+
969
+ 29:22.240 --> 29:28.080
970
+ can have just the world's best natural NLP system, natural language, understanding a generation,
971
+
972
+ 29:28.080 --> 29:33.840
973
+ and that will be that will get us happy and say, you know what, we've created an AGI.
974
+
975
+ 29:33.840 --> 29:41.120
976
+ I don't know, happy. Yes, I do believe we can get what we call high level functional intelligence,
977
+
978
+ 29:41.120 --> 29:47.200
979
+ particularly sort of the G, you know, this fluid like intelligence that we charge,
980
+
981
+ 29:47.200 --> 29:52.640
982
+ particularly the place like MIT, right? In machines, I see a priori no reasons,
983
+
984
+ 29:52.640 --> 29:58.160
985
+ and I see a lot of reason to believe it's going to happen very over the next 50 years or 30 years.
986
+
987
+ 29:58.160 --> 30:04.160
988
+ So for beneficial AI, for creating an AI system that's, so you mentioned ethics,
989
+
990
+ 30:04.880 --> 30:10.880
991
+ that is exceptionally intelligent, but also does not do, does, you know, aligns its values
992
+
993
+ 30:10.880 --> 30:14.800
994
+ with our values of humanity. Do you think then it needs consciousness?
995
+
996
+ 30:14.800 --> 30:19.920
997
+ Yes, I think that that is a very good argument that if we're concerned about AI and the threat
998
+
999
+ 30:19.920 --> 30:26.160
1000
+ of AI, like Nick Bostrom, existentialist threat, I think having an intelligence that has empathy,
1001
+
1002
+ 30:26.160 --> 30:32.480
1003
+ right? Why do we find abusing a dog? Why do most of us find that abhorrent or abusing any animal?
1004
+
1005
+ 30:32.480 --> 30:37.600
1006
+ Right? Why do we find that abhorrent? Because we have this thing called empathy, which if you
1007
+
1008
+ 30:37.600 --> 30:42.960
1009
+ look at the Greek really means feeling with. I feel a pathos empathy. I have feeling with you.
1010
+
1011
+ 30:42.960 --> 30:48.400
1012
+ I see somebody else suffer. That isn't even my conspecific. It's not a person. It's not a love.
1013
+
1014
+ 30:48.400 --> 30:53.280
1015
+ It's not my wife or my kids. It's a dog. But I feel naturally, most of us, not all of us,
1016
+
1017
+ 30:53.280 --> 31:00.560
1018
+ most of us will feel emphatic. And so it may well be in the long term interest of survival
1019
+
1020
+ 31:00.560 --> 31:05.840
1021
+ of Homo sapiens sapiens, that if we do build AGI, and it's really becomes very powerful,
1022
+
1023
+ 31:05.840 --> 31:10.880
1024
+ that it has an emphatic response and doesn't just exterminate humanity.
1025
+
1026
+ 31:11.760 --> 31:17.920
1027
+ So as part of the full conscious experience to create a consciousness, artificial, or in our
1028
+
1029
+ 31:17.920 --> 31:24.320
1030
+ human consciousness, do you think fear, maybe we're going to get into your earlier days with
1031
+
1032
+ 31:24.320 --> 31:30.560
1033
+ Nietzsche and so on, but do you think fear and suffering are essential to have consciousness?
1034
+
1035
+ 31:30.560 --> 31:34.800
1036
+ Do you have to have the full range of experience to have a system that has experience?
1037
+
1038
+ 31:36.480 --> 31:41.600
1039
+ Or can you have a system that only has very particular kinds of very positive experiences?
1040
+
1041
+ 31:41.600 --> 31:46.880
1042
+ Look, you can have, in principle, people have done this in a rat where you implant an electrode
1043
+
1044
+ 31:46.880 --> 31:51.680
1045
+ in the hypothalamus, the pleasure center of the rat, and the rat stimulates itself above and
1046
+
1047
+ 31:51.680 --> 31:57.200
1048
+ beyond anything else. It doesn't care about food or natural sex or drink anymore, it just stimulates
1049
+
1050
+ 31:57.200 --> 32:02.960
1051
+ itself because it's such a pleasurable feeling. I guess it's like an orgasm, just you have all
1052
+
1053
+ 32:02.960 --> 32:11.040
1054
+ day long. And so a priori, I see no reason why you need different, why you need a great variety.
1055
+
1056
+ 32:11.040 --> 32:16.800
1057
+ Now, clearly to survive, that wouldn't work. But if I engineered artificially, I don't think
1058
+
1059
+ 32:18.720 --> 32:24.960
1060
+ you need a great variety of conscious experience. You could have just pleasure or just fear.
1061
+
1062
+ 32:25.680 --> 32:30.240
1063
+ It might be a terrible existence, but I think that's possible, at least on conceptual logical
1064
+
1065
+ 32:30.240 --> 32:34.640
1066
+ ground. Because any real creature, whether artificial or engineered, you want to give
1067
+
1068
+ 32:34.640 --> 32:40.080
1069
+ it fear, the fear of extinction that we all have. And you also want to give it a positive,
1070
+
1071
+ 32:40.080 --> 32:45.600
1072
+ repetitive states, states that you want the machine encouraged to do because they give
1073
+
1074
+ 32:45.600 --> 32:52.000
1075
+ the machine positive feedback. So you mentioned panpsychism to jump back a little bit.
1076
+
1077
+ 32:53.440 --> 33:00.400
1078
+ Everything having some kind of mental property. How do you go from there to something like human
1079
+
1080
+ 33:01.120 --> 33:05.760
1081
+ consciousness? So everything having some elements of consciousness. Is there something
1082
+
1083
+ 33:05.760 --> 33:12.320
1084
+ special about human consciousness? Well, so just it's not everything like a spoon. There's no
1085
+
1086
+ 33:13.600 --> 33:18.160
1087
+ the form of panpsychism, I think about doesn't ascribe consciousness to anything like this,
1088
+
1089
+ 33:18.160 --> 33:25.440
1090
+ the spoon on my liver. However, it is the theory of integrated information theory does say that
1091
+
1092
+ 33:25.440 --> 33:29.680
1093
+ system even ones that look from the outside relatively simple, at least if they have this
1094
+
1095
+ 33:29.680 --> 33:36.800
1096
+ internal causal power, they are they does feel like something. The theory doesn't say anything
1097
+
1098
+ 33:36.800 --> 33:41.920
1099
+ what's special about human. Biologically, we know what the one thing that's special about
1100
+
1101
+ 33:41.920 --> 33:48.800
1102
+ human is we speak. And we have an overblown sense of our own importance. Right. We believe we're
1103
+
1104
+ 33:48.800 --> 33:54.800
1105
+ exceptional. And we're just God's gift to to into the universe. But the but behaviorally,
1106
+
1107
+ 33:54.800 --> 33:58.720
1108
+ the main thing that we have, we can plan, we can plan over the long term, we have language,
1109
+
1110
+ 33:58.720 --> 34:04.080
1111
+ and that gives us enormous amount of power. And that's why we are the the condominant species
1112
+
1113
+ 34:04.080 --> 34:11.440
1114
+ on the planet. So you mentioned God, you grew up a devout Roman Catholic, you know, Roman Catholic
1115
+
1116
+ 34:11.440 --> 34:19.360
1117
+ family. So, you know, with consciousness, you're sort of exploring some really deeply fundamental
1118
+
1119
+ 34:19.360 --> 34:24.320
1120
+ human things that religion also touches on. So where does, where does religion fit into your
1121
+
1122
+ 34:24.320 --> 34:29.760
1123
+ thinking about consciousness? And you've you've grown throughout your life and changed your views
1124
+
1125
+ 34:29.760 --> 34:35.520
1126
+ on religion, as far as I understand. Yeah, I mean, so I'm not a Roman
1127
+
1128
+ 34:35.520 --> 34:40.960
1129
+ Catholic anymore. I don't believe there's sort of this God, the God I was, I was educated to
1130
+
1131
+ 34:40.960 --> 34:46.560
1132
+ believe in, you know, sit somewhere in the fullness of time, I'll be united in some sort of everlasting
1133
+
1134
+ 34:46.560 --> 34:52.160
1135
+ bliss. I just don't see any evidence for that. Look, the world, the night is large and full of
1136
+
1137
+ 34:52.160 --> 34:57.360
1138
+ wonders, right? There are many things that I don't understand. I think many things that we as a
1139
+
1140
+ 34:57.360 --> 35:01.520
1141
+ culture don't understand. Look, we don't even understand more than 4% of the universe, right? Dark
1142
+
1143
+ 35:01.520 --> 35:05.760
1144
+ matter, dark energy. We have no idea what it is. Maybe it's lost socks. What do I know? So,
1145
+
1146
+ 35:06.480 --> 35:12.960
1147
+ so all I can tell you is it's a sort of my current religious or spiritual sentiment is much closer
1148
+
1149
+ 35:12.960 --> 35:19.600
1150
+ to some form of Buddhism. Can you just without the reincarnation? Unfortunately, there's no evidence
1151
+
1152
+ 35:19.600 --> 35:25.280
1153
+ for reincarnation. So, can you describe the way Buddhism sees the world a little bit?
1154
+
1155
+ 35:25.280 --> 35:31.920
1156
+ Well, so, you know, they talk about so when I spent several meetings with the Dalai Lama and what
1157
+
1158
+ 35:31.920 --> 35:36.720
1159
+ always impressed me about him, he really unlike, for example, let's say the Pope or some Cardinal,
1160
+
1161
+ 35:36.720 --> 35:42.560
1162
+ he always emphasized minimizing the suffering of all creatures. So, they have this from the early
1163
+
1164
+ 35:42.560 --> 35:47.520
1165
+ beginning, they look at suffering in all creatures, not just in people, but in everybody, this
1166
+
1167
+ 35:47.520 --> 35:52.880
1168
+ universal. And of course, by degrees, right? An animal in general will be less capable
1169
+
1170
+ 35:52.880 --> 35:59.520
1171
+ of suffering than a well developed, normally developed human. And they think consciousness
1172
+
1173
+ 35:59.520 --> 36:05.440
1174
+ pervades in this universe. And they have these techniques, you know, you can think of them
1175
+
1176
+ 36:05.440 --> 36:10.880
1177
+ like mindfulness, etc. in meditation that tries to access sort of what they claim of this more
1178
+
1179
+ 36:10.880 --> 36:15.840
1180
+ fundamental aspect of reality. I'm not sure it's more fundamental. The way I think about it, there's
1181
+
1182
+ 36:15.840 --> 36:20.240
1183
+ a physical and then there's this inside view consciousness. And those are the two aspects
1184
+
1185
+ 36:20.240 --> 36:24.880
1186
+ that the only thing I have access to in my life. And you got to remember my conscious
1187
+
1188
+ 36:24.880 --> 36:28.640
1189
+ experience and your conscious experience comes prior to anything you know about physics,
1190
+
1191
+ 36:28.640 --> 36:33.680
1192
+ comes prior to knowledge about the universe and atoms and super strings and molecules and all of
1193
+
1194
+ 36:33.680 --> 36:39.040
1195
+ that. The only thing you directly are acquainted with is this world that's populated with things
1196
+
1197
+ 36:39.040 --> 36:42.720
1198
+ and images and sounds in your head and touches and all of that.
1199
+
1200
+ 36:42.720 --> 36:49.120
1201
+ I actually have a question. So it sounds like you kind of have a rich life. You talk about
1202
+
1203
+ 36:49.120 --> 36:55.040
1204
+ rock climbing and it seems like you really love literature and consciousness is all about
1205
+
1206
+ 36:55.040 --> 37:00.160
1207
+ experiencing things. So do you think that has helped your research on this topic?
1208
+
1209
+ 37:00.640 --> 37:05.440
1210
+ Yes, particularly if you think about the various states. I used to do rock
1211
+
1212
+ 37:05.440 --> 37:11.600
1213
+ climbing. And now I do rowing, crew rowing, and I bike every day. You can get into this thing called
1214
+
1215
+ 37:11.600 --> 37:16.320
1216
+ the zone. And I've always wondered about it, particularly with respect to consciousness,
1217
+
1218
+ 37:16.320 --> 37:21.120
1219
+ because it's a strangely addictive state. You want to get back to it. I mean, once people
1220
+
1221
+ 37:21.120 --> 37:25.360
1222
+ have it once, they want to keep on going back to it. And you wonder why what is it so addicting
1223
+
1224
+ 37:25.360 --> 37:31.840
1225
+ about it. And I think it's the experience of almost close to pure experience. Because in this
1226
+
1227
+ 37:31.840 --> 37:35.840
1228
+ zone, you're not conscious of your inner voice anymore. There's always an inner voice
1229
+
1230
+ 37:35.840 --> 37:38.960
1231
+ nagging you, right? You have to do this, you have to do that, you have to pay your taxes,
1232
+
1233
+ 37:38.960 --> 37:42.960
1234
+ you had this fight with your ex and all of those things are always there. But when you're in the
1235
+
1236
+ 37:42.960 --> 37:47.280
1237
+ zone, all of that is gone. And you're just this in this wonderful state where you're fully out in
1238
+
1239
+ 37:47.280 --> 37:53.440
1240
+ the world, right? You're climbing or you're rowing or biking or doing soccer, whatever you're doing.
1241
+
1242
+ 37:53.440 --> 37:59.280
1243
+ And sort of consciousness is all action, or, in this case of pure experience,
1244
+
1245
+ 37:59.280 --> 38:06.160
1246
+ you're not acting at all. But in both cases, you experience some aspect of it, you touch some basic
1247
+
1248
+ 38:06.160 --> 38:13.600
1249
+ part of conscious existence that is so basic and so deeply satisfying. I think you touch the
1250
+
1251
+ 38:13.600 --> 38:18.320
1252
+ root of being. That's really what you're touching there, you're getting close to the root of being.
1253
+
1254
+ 38:18.320 --> 38:24.400
1255
+ And that's very different from intelligence. So what do you think about the simulation hypothesis,
1256
+
1257
+ 38:24.400 --> 38:28.320
1258
+ simulation theory, the idea that we all live in a computer simulation? Have you even ever?
1259
+
1260
+ 38:28.320 --> 38:35.760
1261
+ Rapture for nerds. Rapture for nerds. I think it's as likely as the hypothesis that
1262
+
1263
+ 38:35.760 --> 38:41.440
1264
+ engaged hundreds of scholars for many centuries, are we all just existing in the mind of God?
1265
+
1266
+ 38:42.000 --> 38:46.320
1267
+ Right. And this is just a modern version of it. It's equally plausible.
1268
+
1269
+ 38:47.280 --> 38:51.280
1270
+ People love talking about these sorts of things. I know there are books written about the simulation
1271
+
1272
+ 38:51.280 --> 38:56.480
1273
+ hypothesis. If that's what people want to do, that's fine. It seems rather esoteric. It's never
1274
+
1275
+ 38:56.480 --> 39:02.240
1276
+ testable. But it's not useful for you to think of in those terms. So maybe connecting to the
1277
+
1278
+ 39:02.240 --> 39:07.200
1279
+ questions of free will, which you've talked about. I think I vaguely remember you saying
1280
+
1281
+ 39:07.200 --> 39:11.760
1282
+ that the idea that there's no free will, it makes you very uncomfortable.
1283
+
1284
+ 39:13.280 --> 39:17.840
1285
+ So what do you think about free will? And from the from a physics perspective,
1286
+
1287
+ 39:17.840 --> 39:22.400
1288
+ from a consciousness perspective, where does it all fit? Okay, so from the physics perspective,
1289
+
1290
+ 39:22.400 --> 39:26.960
1291
+ leaving aside quantum mechanics, we believe we live in a fully deterministic world, right?
1292
+
1293
+ 39:26.960 --> 39:30.880
1294
+ But then comes, of course, quantum mechanics. So now we know that certain things are in principle
1295
+
1296
+ 39:30.880 --> 39:37.040
1297
+ not predictable, which I, as you said, I prefer because the idea that at the initial condition
1298
+
1299
+ 39:37.040 --> 39:40.800
1300
+ of the universe, and then everything else, we're just acting out the initial condition of the
1301
+
1302
+ 39:40.800 --> 39:47.120
1303
+ universe, that doesn't, that's not a romantic notion. Certainly not. Right. Now,
1304
+
1305
+ 39:47.120 --> 39:52.400
1306
+ when it comes to consciousness, I think we do have certain freedom. We are much more constrained by
1307
+
1308
+ 39:52.400 --> 39:57.040
1309
+ physics, of course, and by our past and by our own conscious desires and what our parents told us
1310
+
1311
+ 39:57.040 --> 40:01.200
1312
+ and what our environment tells us, we all know that, right? There's hundreds of experiments
1313
+
1314
+ 40:01.200 --> 40:06.480
1315
+ that show how we can be influenced. But finally, in the final analysis, when you make a
1316
+
1317
+ 40:06.480 --> 40:10.560
1318
+ life decision, and I'm talking really about critical decisions, where you really think, should I marry,
1319
+
1320
+ 40:10.560 --> 40:14.800
1321
+ should I go to this school or that school, should I take this job or that job,
1322
+
1323
+ 40:14.800 --> 40:20.240
1324
+ should I cheat on my taxes or not? These sort of these are things where you really deliberate.
1325
+
1326
+ 40:20.240 --> 40:25.200
1327
+ And I think under those conditions, you are as free as you can be. When you bring your
1328
+
1329
+ 40:25.200 --> 40:33.360
1330
+ entire being your entire conscious being to that question and try to analyze it on all the
1331
+
1332
+ 40:33.360 --> 40:37.920
1333
+ various conditions, and then you make a decision, you are as free as you can ever be.
1334
+
1335
+ 40:38.560 --> 40:44.160
1336
+ That is, I think, what free will is. It's not a will that's totally free to do anything it wants.
1337
+
1338
+ 40:44.160 --> 40:50.880
1339
+ That's not possible. Right. So as Jack mentioned, you actually write a blog about books you've
1340
+
1341
+ 40:50.880 --> 41:01.680
1342
+ read, amazing books, from the Russian Bulgakov, yeah, to Neil Gaiman, Carl Sagan, Murakami. So what
1343
+
1344
+ 41:01.680 --> 41:07.360
1345
+ is a book that early in your life transformed the way you saw the world, something that changed your
1346
+
1347
+ 41:07.360 --> 41:14.160
1348
+ life? Nietzsche, I guess, did. Thus Spoke Zarathustra, because he talks about some of these problems.
1349
+
1350
+ 41:14.160 --> 41:18.720
1351
+ You know, he was one of the first discoverers of the unconscious. This is, you know, a little bit
1352
+
1353
+ 41:18.720 --> 41:26.240
1354
+ before Freud when it was in the air. And you know, he makes all these claims that people sort of
1355
+
1356
+ 41:26.240 --> 41:33.360
1357
+ under the guise of, under the mask of charity, actually are very uncharitable. So he is sort
1358
+
1359
+ 41:33.360 --> 41:40.480
1360
+ of really the first discoverer of the great land of the unconscious. And that really
1361
+
1362
+ 41:40.480 --> 41:44.000
1363
+ struck me. And what do you think? What do you think about the unconscious? What do you think
1364
+
1365
+ 41:44.000 --> 41:49.760
1366
+ about Freud? What do you think about these ideas? What, just like dark matter in the universe,
1367
+
1368
+ 41:49.760 --> 41:54.560
1369
+ What's over there in that unconscious? A lot. I mean, much more than we think, right? This is what
1370
+
1371
+ 41:54.560 --> 42:00.080
1372
+ a lot of last 100 years of research has shown. So I think he was a genius, misguided towards the
1373
+
1374
+ 42:00.080 --> 42:04.800
1375
+ end. But he started out as a neuroscientist, right? He did these studies on the
1376
+
1377
+ 42:06.000 --> 42:11.600
1378
+ lamprey. He contributed to the neuron hypothesis, the idea that there are discrete units
1379
+
1380
+ 42:11.600 --> 42:17.920
1381
+ that we call nerve cells now. And then he wrote, you know, about the unconscious.
1382
+
1383
+ 42:17.920 --> 42:22.720
1384
+ And I think it's true. There's lots of stuff happening. You feel this particular when you're
1385
+
1386
+ 42:22.720 --> 42:27.520
1387
+ in a relationship and it breaks asunder, right? And then you have this terrible, you can have
1388
+
1389
+ 42:27.520 --> 42:33.200
1390
+ love and hate and lust and anger and all of it is mixed in. And when you try to analyze yourself,
1391
+
1392
+ 42:33.200 --> 42:40.080
1393
+ why am I so upset? It's very, very difficult to penetrate to those basements, those caverns in
1394
+
1395
+ 42:40.080 --> 42:45.440
1396
+ your mind, because the prying eyes of consciousness don't have access to those. But they're there
1397
+
1398
+ 42:45.440 --> 42:50.240
1399
+ in the amygdala or, you know, in lots of other places, they make you upset or angry or sad or
1400
+
1401
+ 42:50.240 --> 42:55.360
1402
+ depressed. And it's very difficult to try to actually uncover the reason you can go to a shrink,
1403
+
1404
+ 42:55.360 --> 42:59.760
1405
+ you can talk with your friend endlessly, you construct finally a story why this happened,
1406
+
1407
+ 42:59.760 --> 43:03.760
1408
+ why you love her or don't love her or whatever. But you don't really know whether that's actually
1409
+
1410
+ 43:04.640 --> 43:08.400
1411
+ what actually happened, because you simply don't have access to those parts of the brain. And
1412
+
1413
+ 43:08.400 --> 43:13.120
1414
+ they're very powerful. Do you think that's a feature or a bug of our brain? The fact that we
1415
+
1416
+ 43:13.120 --> 43:19.120
1417
+ have this deep, difficult to dive into subconscious? I think it's a feature, because otherwise,
1418
+
1419
+ 43:19.120 --> 43:28.480
1420
+ look, we are like any other brain or nervous system or computer, we are severely band limited.
1421
+
1422
+ 43:28.480 --> 43:34.640
1423
+ If everything I do, every emotion I feel, every eye movement I make, if all of that had
1424
+
1425
+ 43:34.640 --> 43:41.520
1426
+ to be under the control of consciousness, I wouldn't be here. Right.
1427
+
1428
+ 43:41.520 --> 43:46.880
1429
+ So what you do early on, in your brain, you have to be conscious when you learn things like typing
1430
+
1431
+ 43:46.880 --> 43:52.640
1432
+ or riding a bike. But then what you do, you train up, I think that involves the
1433
+
1434
+ 43:52.640 --> 43:57.760
1435
+ basal ganglia and striatum, you train up different parts of your brain. And then once you do it
1436
+
1437
+ 43:57.760 --> 44:01.600
1438
+ automatically like typing, you can show you do it much faster without even thinking about it,
1439
+
1440
+ 44:01.600 --> 44:05.280
1441
+ because you've got these highly specialized, what Francis Crick and I called zombie agents,
1442
+
1443
+ 44:06.160 --> 44:09.680
1444
+ that sort of take care of that while your consciousness can sort of worry
1445
+
1446
+ 44:09.680 --> 44:14.640
1447
+ about the abstract sense of the text you want to write. I think that's true for many, many things.
1448
+
1449
+ 44:14.640 --> 44:20.400
1450
+ But for the things like all the fights you had with an ex girlfriend, things that
1451
+
1452
+ 44:20.960 --> 44:25.120
1453
+ you would think are not useful to still linger somewhere in the subconscious.
1454
+
1455
+ 44:25.120 --> 44:29.600
1456
+ So that seems like a bug that it would stick there. You think it would be better if you can
1457
+
1458
+ 44:29.600 --> 44:34.320
1459
+ analyze and then get it out of the system or just forget it ever happened. You know,
1460
+
1461
+ 44:34.320 --> 44:39.520
1462
+ that seems a very buggy kind of thing. Well, yeah, in general, we don't have, and that's
1463
+
1464
+ 44:39.520 --> 44:43.840
1465
+ probably functional, we don't have that ability, unless it's extreme, the outlier cases, clinical
1466
+
1467
+ 44:43.840 --> 44:48.880
1468
+ dissociations, right, when people are heavily abused and they completely repress
1469
+
1470
+ 44:48.880 --> 44:53.280
1471
+ the memory. But that doesn't happen in, you know, normal people,
1472
+
1473
+ 44:53.280 --> 44:58.800
1474
+ we don't have an ability to remove traumatic memories. And of course, we suffer from that.
1475
+
1476
+ 44:58.800 --> 45:03.680
1477
+ On the other hand, probably if you have the ability to constantly wipe your memory,
1478
+
1479
+ 45:03.680 --> 45:10.160
1480
+ you probably do it to an extent that isn't useful to you. So yeah, it's a good question.
1481
+
1482
+ 45:10.160 --> 45:15.760
1483
+ It's a balance. So on the books, as Jack mentioned, correct me if I'm wrong, but
1484
+
1485
+ 45:16.640 --> 45:21.280
1486
+ broadly speaking, academia and different scientific disciplines, certainly in engineering,
1487
+
1488
+ 45:21.920 --> 45:27.680
1489
+ reading literature seems to be a rare pursuit. Perhaps I'm wrong in this, but that's in my
1490
+
1491
+ 45:27.680 --> 45:34.880
1492
+ experience, most people read much more technical texts and do not sort of escape or seek truth
1493
+
1494
+ 45:34.880 --> 45:40.400
1495
+ in literature. It seems like you do. So what do you think is the value? What do you think
1496
+
1497
+ 45:40.400 --> 45:46.000
1498
+ literature adds to the pursuit of scientific truth? Do you think it's good? It's useful for
1499
+
1500
+ 45:47.120 --> 45:51.520
1501
+ giving you access to a much wider array of human experiences?
1502
+
1503
+ 45:52.320 --> 45:54.000
1504
+ How valuable do you think it is?
1505
+
1506
+ 45:54.000 --> 45:58.720
1507
+ Well, if you want to understand human nature and nature in general, then I think you have to
1508
+
1509
+ 45:58.720 --> 46:04.800
1510
+ better understand a wide variety of experiences, not just sitting in a lab staring at a screen and
1511
+
1512
+ 46:04.800 --> 46:09.200
1513
+ having a face flashed onto you for 100 milliseconds and pushing a button. That's what I used to do.
1514
+
1515
+ 46:09.200 --> 46:13.280
1516
+ That's what most psychologists do. There's nothing wrong with that, but you need to consider
1517
+
1518
+ 46:13.280 --> 46:18.800
1519
+ lots of other strange states. And literature is a shortcut for this.
1520
+
1521
+ 46:18.800 --> 46:22.880
1522
+ Well, yeah, because literature, that's what literature is all about. All sorts of interesting
1523
+
1524
+ 46:22.880 --> 46:28.640
1525
+ experiences that people have, the contingency of it, the fact that women experience the world
1526
+
1527
+ 46:28.640 --> 46:33.840
1528
+ differently, black people experience the world differently. One way to experience that is reading
1529
+
1530
+ 46:33.840 --> 46:38.320
1531
+ all this different literature and trying to find out. You see everything is so relative. You read
1532
+
1533
+ 46:38.320 --> 46:42.320
1534
+ books from 100 years ago, they thought about certain problems very, very differently than
1535
+
1536
+ 46:42.320 --> 46:47.120
1537
+ us today. We today, like any culture, think we know it all. That's common to every culture.
1538
+
1539
+ 46:47.120 --> 46:50.960
1540
+ Every culture believes, in its heyday, that they know it all. And then you realize, well, there are
1541
+
1542
+ 46:50.960 --> 46:55.360
1543
+ other ways of viewing the universe. And some of them may have lots of things in their favor.
1544
+
1545
+ 46:56.560 --> 47:03.200
1546
+ So this is a question I wanted to ask about timescale or scale in general. When you, with
1547
+
1548
+ 47:03.200 --> 47:07.520
1549
+ IIT or in general, try to think about consciousness, try to think about these ideas,
1550
+
1551
+ 47:08.880 --> 47:17.520
1552
+ you kind of naturally think in human timescales, and about entities that are sized
1553
+
1554
+ 47:17.520 --> 47:21.680
1555
+ close to humans? Do you think of things that are much larger, much smaller as containing
1556
+
1557
+ 47:21.680 --> 47:31.200
1558
+ consciousness? And do you think of things that take, you know, well, ages, eons to operate
1559
+
1560
+ 47:31.200 --> 47:36.560
1561
+ in their conscious cause effect? It's a very good question. So I think a lot about small
1562
+
1563
+ 47:36.560 --> 47:42.400
1564
+ creatures, because experimentally, a lot of people work on flies and bees. So most people
1565
+
1566
+ 47:42.400 --> 47:46.320
1567
+ just think they're automata. They're just bugs, for heaven's sake. But if you look at their behavior,
1568
+
1569
+ 47:46.320 --> 47:50.080
1570
+ like bees, they can recognize individual humans. They have this very complicated
1571
+
1572
+ 47:50.960 --> 47:54.880
1573
+ way to communicate. If you've ever been involved, or you know, your parents, when they bought a
1574
+
1575
+ 47:54.880 --> 47:59.760
1576
+ house, what sort of agonizing decision that is. And bees have to do that once a year, right?
1577
+
1578
+ 47:59.760 --> 48:03.200
1579
+ When they swarm in the spring, and then they have this very elaborate way, they have these
1580
+
1581
+ 48:03.200 --> 48:07.360
1582
+ nest scouts, they go to the individual sites, they come back, they have this, this dance,
1583
+
1584
+ 48:07.360 --> 48:11.520
1585
+ literally where they dance for several days, and they try to recruit others. It's this very complicated
1586
+
1587
+ 48:11.520 --> 48:16.480
1588
+ decision weighting. When they finally want to make a decision, the scouts warm up
1589
+
1590
+ 48:16.480 --> 48:20.320
1591
+ the entire swarm, and they go to one location. They don't go to 50 locations, they go to the one location
1592
+
1593
+ 48:20.320 --> 48:24.800
1594
+ that the scouts have agreed upon by themselves. That's awesome. If we look at the circuit complexity,
1595
+
1596
+ 48:24.800 --> 48:28.400
1597
+ it's 10 times denser than anything we have in our brain. Now, they only have a million
1598
+
1599
+ 48:28.400 --> 48:32.640
1600
+ neurons, but the neurons are amazingly complex, complex behavior, very complicated circuitry.
1601
+
1602
+ 48:32.640 --> 48:37.120
1603
+ So there's no question, they experience something, their life is very different, they're tiny,
1604
+
1605
+ 48:37.120 --> 48:44.240
1606
+ they only live for, well, workers live maybe for two months. So I think, and IIT tells you this,
1607
+
1608
+ 48:44.240 --> 48:49.680
1609
+ in principle, the substrate of consciousness is the substrate that maximizes the cause effect
1610
+
1611
+ 48:49.680 --> 48:54.640
1612
+ power over all possible spatiotemporal grains. So when I think about, for example, do you know
1613
+
1614
+ 48:54.640 --> 48:59.840
1615
+ the science fiction story, The Black Cloud? Okay, it's a classic by Fred Hoyle, the astronomer.
1616
+
1617
+ 48:59.840 --> 49:07.120
1618
+ He has this cloud intervening between the earth and the sun and leading to some sort of global
1619
+
1620
+ 49:07.120 --> 49:13.200
1621
+ cooling. It's written in the 50s. It turns out, using a radio dish, they can communicate
1622
+
1623
+ 49:13.200 --> 49:18.240
1624
+ with actually an entity, it's actually an intelligent entity. And they sort of, they
1625
+
1626
+ 49:18.240 --> 49:23.600
1627
+ convince it to move away. So here you have a radically different entity. And in principle,
1628
+
1629
+ 49:23.600 --> 49:28.640
1630
+ IIT says, well, you can measure the integrated information in principle at least. And yes,
1631
+
1632
+ 49:28.640 --> 49:34.640
1633
+ if the maximum of that occurs at a timescale of months, rather than at a fraction
1634
+
1635
+ 49:34.640 --> 49:40.720
1636
+ of a second, yes, then it would experience life where each moment is a month rather than a
1637
+
1638
+ 49:40.720 --> 49:47.600
1639
+ microsecond, right, rather than a fraction of a second in the human case. And so there may be
1640
+
1641
+ 49:47.600 --> 49:51.760
1642
+ forms of consciousness that we simply don't recognize for what they are, because they are so
1643
+
1644
+ 49:51.760 --> 49:56.960
1645
+ radically different from anything you and I are used to. Again, that's why it's good to read
1646
+
1647
+ 49:56.960 --> 50:03.120
1648
+ or to watch science fiction movies or to think about this. Like, do you know Stanislaw
1649
+
1650
+ 50:03.120 --> 50:07.520
1651
+ Lem, this Polish science fiction writer, he wrote Solaris, it was turned into a Hollywood movie?
1652
+
1653
+ 50:07.520 --> 50:13.280
1654
+ Yes. His best novel was in the 60s, a very, very ingenious, an ingenious background. His most
1655
+
1656
+ 50:13.280 --> 50:19.680
1657
+ interesting novel is called The Invincible, where human civilization, they have this mission to
1658
+
1659
+ 50:19.680 --> 50:25.280
1660
+ this planet and everything is destroyed and they discover machines, humans got killed and then
1661
+
1662
+ 50:25.280 --> 50:30.480
1663
+ these machines took over and there was a machine evolution, a Darwinian evolution, he talks about
1664
+
1665
+ 50:30.480 --> 50:37.200
1666
+ this very vividly. And finally, the dominant machine intelligence organisms that
1667
+
1668
+ 50:37.200 --> 50:42.960
1669
+ survived are gigantic clouds of little hexagonal universal cellular automata. This is written in the
1670
+
1671
+ 50:42.960 --> 50:48.320
1672
+ 60s. So typically they're all lying on the ground individually by themselves, but in times of crisis
1673
+
1674
+ 50:48.320 --> 50:54.080
1675
+ they can communicate, they assemble into gigantic nets, into clouds of trillions of these particles
1676
+
1677
+ 50:54.080 --> 50:59.760
1678
+ and then they become hyper intelligent and they can beat anything that humans can throw at it.
1679
+
1680
+ 50:59.760 --> 51:05.040
1681
+ It's a very beautiful and compelling story. You have an intelligence where finally the humans
1682
+
1683
+ 51:05.040 --> 51:09.600
1684
+ leave the planet; they're simply unable to understand and comprehend this creature. And they say,
1685
+
1686
+ 51:09.600 --> 51:14.000
1687
+ well, either we can nuke the entire planet and destroy it or we just have to leave because
1688
+
1689
+ 51:14.000 --> 51:19.520
1690
+ fundamentally it's an alien, it's so alien from us and our ideas that we cannot communicate with them.
1691
+
1692
+ 51:19.520 --> 51:25.440
1693
+ Yeah, actually, in a conversation about cellular automata, Stephen Wolfram
1694
+
1695
+ 51:25.440 --> 51:31.760
1696
+ brought up the idea that you could already have these artificial general
1697
+
1698
+ 51:31.760 --> 51:36.000
1699
+ intelligence, super smart or maybe conscious beings, in the cellular automata, we just don't
1700
+
1701
+ 51:36.000 --> 51:40.320
1702
+ know how to talk to them. So it's the link with the communication, but you don't know what to do
1703
+
1704
+ 51:40.320 --> 51:47.920
1705
+ with it. So one sort of view is that consciousness is only something you can measure. So it's not
1706
+
1707
+ 51:47.920 --> 51:52.880
1708
+ conscious if you can't measure it. But so you're making an ontological and an epistemic statement.
1709
+
1710
+ 51:52.880 --> 51:58.640
1711
+ One is, they're there; it's just like saying there are multiverses. That might be true, but I can't
1712
+
1713
+ 51:58.640 --> 52:03.600
1714
+ communicate with them. I don't have any knowledge of them. That's an epistemic argument. Those are
1715
+
1716
+ 52:03.600 --> 52:07.520
1717
+ two different things. So it may well be possible. Look, another case that's happening right now,
1718
+
1719
+ 52:07.520 --> 52:12.240
1720
+ people are building these mini organoids. Do you know about this? So you can take stem cells from
1721
+
1722
+ 52:12.240 --> 52:15.920
1723
+ under your arm, put them in a dish, add four transcription factors, and then you can induce
1724
+
1725
+ 52:15.920 --> 52:20.800
1726
+ them to grow into large, well large, they're a few millimeter, they're like a half a million
1727
+
1728
+ 52:20.800 --> 52:25.840
1729
+ neurons that look like nerve cells in a dish called mini organoids. At Harvard, at Stanford,
1730
+
1731
+ 52:25.840 --> 52:29.760
1732
+ everywhere they're building them. It may well be possible that they're beginning to feel like
1733
+
1734
+ 52:29.760 --> 52:34.560
1735
+ something. But we can't really communicate with them right now. So people are beginning to think
1736
+
1737
+ 52:34.560 --> 52:40.800
1738
+ about the ethics of this. So yes, he may be perfectly right. But it's one question, are
1739
+
1740
+ 52:40.800 --> 52:44.480
1741
+ they conscious or not? It's a totally separate question. How would I know? Those are two different
1742
+
1743
+ 52:44.480 --> 52:52.480
1744
+ things. Right. If you could give advice to a young researcher sort of dreaming of understanding or
1745
+
1746
+ 52:52.480 --> 52:58.560
1747
+ creating human level intelligence or consciousness, what would you say?
1748
+
1749
+ 52:59.840 --> 53:07.680
1750
+ Follow your dreams. Read widely. No, I mean, I suppose, which discipline? What is the pursuit
1751
+
1752
+ 53:07.680 --> 53:11.760
1753
+ that they should take on? Is it neuroscience? Is it computational cognitive science? Is it
1754
+
1755
+ 53:11.760 --> 53:19.600
1756
+ philosophy? Is it computer science or robotics? No, in a sense that, okay, so the only known
1757
+
1758
+ 53:20.960 --> 53:25.520
1759
+ system that has a high level of intelligence is Homo sapiens. So if you want to build it,
1760
+
1761
+ 53:25.520 --> 53:30.080
1762
+ it's probably good to continue to study closely what humans do. So cognitive neuroscience,
1763
+
1764
+ 53:30.080 --> 53:34.400
1765
+ you know, somewhere between cognitive neuroscience on the one hand, and some philosophy of mind,
1766
+
1767
+ 53:34.400 --> 53:40.160
1768
+ and then AI, computer science. If you look at all the original ideas, neural networks,
1769
+
1770
+ 53:40.160 --> 53:45.440
1771
+ they all came from neuroscience, right? And reinforcement learning, whether it's the SNARC, Minsky building
1772
+
1773
+ 53:45.440 --> 53:49.040
1774
+ the SNARC, or whether it's, you know, the early Hubel and Wiesel experiments at Harvard that
1775
+
1776
+ 53:49.040 --> 53:54.880
1777
+ then gave rise to networks and then multi layer networks. So it may well be possible. In fact,
1778
+
1779
+ 53:54.880 --> 53:59.760
1780
+ some people argue that to make the next big step in AI once we realize the limits of deep
1781
+
1782
+ 53:59.760 --> 54:03.680
1783
+ convolutional networks, they can do certain things, but they can't really understand.
1784
+
1785
+ 54:03.680 --> 54:09.440
1786
+ But they don't, they don't, they can't really, I can't really show them one image. I can show you a
1787
+
1788
+ 54:09.440 --> 54:15.600
1789
+ single image of somebody a pickpocket who steals a wallet from a purse, you immediately know that's
1790
+
1791
+ 54:15.600 --> 54:20.800
1792
+ a pickpocket. Now computer system would just say, well, it's a man, it's a woman, it's a purse, right?
1793
+
1794
+ 54:20.800 --> 54:25.200
1795
+ Unless you train this machine on showing it 100,000 pickpockets, right? So it doesn't,
1796
+
1797
+ 54:25.200 --> 54:31.040
1798
+ it doesn't have this easy understanding that you have, right? So some people make the argument
1799
+
1800
+ 54:31.040 --> 54:34.640
1801
+ in order to go to the next step where you really want to build machines that understand in a way
1802
+
1803
+ 54:34.640 --> 54:39.280
1804
+ you and I do, we have to go to psychology. We need to understand how we do it and how our brains
1805
+
1806
+ 54:39.280 --> 54:44.480
1807
+ enable us to do it. And so therefore being on the cusp, it's also so exciting to try to understand
1808
+
1809
+ 54:44.480 --> 54:49.040
1810
+ better our nature and then to build to take some of those insight and build them. So I think the
1811
+
1812
+ 54:49.040 --> 54:53.680
1813
+ most exciting thing is somewhere in the interface between cognitive science, neuroscience, AI,
1814
+
1815
+ 54:53.680 --> 54:55.520
1816
+ computer science and philosophy of mind.
1817
+
1818
+ 54:55.520 --> 55:00.160
1819
+ Beautiful. Yeah, I'd say, from the machine learning, from the computer science,
1820
+
1821
+ 55:00.160 --> 55:05.760
1822
+ computer vision perspective, many of the researchers kind of ignore the way the human brain works.
1823
+
1824
+ 55:05.760 --> 55:12.160
1825
+ They ignore even psychology or literature or studying the brain. I would hope, Josh Tenenbaum talks
1826
+
1827
+ 55:12.160 --> 55:18.640
1828
+ about bringing that in more and more. And that's, yeah. So you worked on some amazing
1829
+
1830
+ 55:18.640 --> 55:25.520
1831
+ stuff throughout your life. What's the thing that you're really excited about? What's the mystery
1832
+
1833
+ 55:25.520 --> 55:31.440
1834
+ that you would love to uncover in the near term beyond, beyond all the mysteries that you already
1835
+
1836
+ 55:31.440 --> 55:37.200
1837
+ surrounded by? Well, so there's this structure called the claustrum. Okay, this is a structure.
1838
+
1839
+ 55:37.200 --> 55:42.480
1840
+ It's underneath our cortex. You have a big one on the left and one on the right, underneath
1841
+
1842
+ 55:42.480 --> 55:46.720
1843
+ the insula. It's very thin, it's like one millimeter. It's embedded in wiring,
1844
+
1845
+ 55:46.720 --> 55:53.760
1846
+ in white matter. It's very difficult to image. And it has connections to every cortical
1847
+
1848
+ 55:53.760 --> 55:58.640
1849
+ region. And Francis Crick, the last paper he ever wrote, he dictated corrections the day he died
1850
+
1851
+ 55:58.640 --> 56:05.360
1852
+ in the hospital, on this paper. In it we hypothesize, well, because it has this unique anatomy, it gets
1853
+
1854
+ 56:05.360 --> 56:11.280
1855
+ input from every cortical area and projects back to every cortical area, that the function
1856
+
1857
+ 56:11.280 --> 56:18.560
1858
+ of this structure is similar, it's just a metaphor, to the role of a conductor in a symphony orchestra.
1859
+
1860
+ 56:18.560 --> 56:22.720
1861
+ You have all the different cortical players. You have some that do motion, some that do theory of
1862
+
1863
+ 56:22.720 --> 56:26.880
1864
+ mind, some that infer social interaction and color and hearing and all the different modules and
1865
+
1866
+ 56:26.880 --> 56:31.280
1867
+ cortex. But of course, what consciousness is, consciousness puts it all together into one
1868
+
1869
+ 56:31.280 --> 56:35.840
1870
+ package, right? The binding problem, all of that. And this is really the function because it has
1871
+
1872
+ 56:35.840 --> 56:41.280
1873
+ relatively few neurons compared to cortex, but it talks, it sort of receives input from all of
1874
+
1875
+ 56:41.280 --> 56:45.520
1876
+ them, and it projects back to all of them. And so we're testing that right now. We've got this
1877
+
1878
+ 56:45.520 --> 56:51.120
1879
+ beautiful neuronal reconstruction in the mouse of so-called crown of thorns neurons that
1880
+
1881
+ 56:51.120 --> 56:56.160
1882
+ are in the claustrum that have the most widespread connections of any neuron I've ever seen.
1883
+
1884
+ 56:56.160 --> 57:00.320
1885
+ They're very deep. You have individual neurons that sit in the claustrum, tiny, but then they have
1886
+
1887
+ 57:00.320 --> 57:06.000
1888
+ this single neuron that has this huge axonal tree that covers both ipsi and contralateral cortex
1889
+
1890
+ 57:06.880 --> 57:11.280
1891
+ and, using, you know, fancy tools like optogenetics, we're trying to turn those
1892
+
1893
+ 57:11.280 --> 57:14.720
1894
+ neurons on or off and study what happens in the mouse.
1895
+
1896
+ 57:14.720 --> 57:18.720
1897
+ So this thing is perhaps where the parts become the whole.
1898
+
1899
+ 57:19.920 --> 57:25.440
1900
+ Perhaps. It's one of the structures. That's a very good way of putting it, where the individual
1901
+
1902
+ 57:25.440 --> 57:31.760
1903
+ parts turn into the whole, the whole of the conscious experience. Well, with that,
1904
+
1905
+ 57:32.640 --> 57:36.000
1906
+ thank you very much for being here today. Thank you very much.
1907
+
1908
+ 57:36.000 --> 57:56.560
1909
+ I'll be back in a minute. Thanks Jack. Thank you very much.
1910
+
vtt/episode_003_small.vtt ADDED
@@ -0,0 +1,1118 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:06.400
4
+ You've studied the human mind, cognition, language, vision, evolution, psychology, from child to adult,
5
+
6
+ 00:07.360 --> 00:11.120
7
+ from the level of individual to the level of our entire civilization,
8
+
9
+ 00:11.680 --> 00:14.880
10
+ so I feel like I can start with a simple multiple choice question.
11
+
12
+ 00:16.240 --> 00:21.840
13
+ What is the meaning of life? Is it A, to attain knowledge, as Plato said,
14
+
15
+ 00:22.400 --> 00:28.000
16
+ B, to attain power, as Nietzsche said, C, to escape death, as Ernest Becker said,
17
+
18
+ 00:28.000 --> 00:35.040
19
+ D, to propagate our genes, as Darwin and others have said, E, there is no meaning,
20
+
21
+ 00:35.040 --> 00:41.440
22
+ as the nihilists have said, F, knowing the meaning of life is beyond our cognitive capabilities,
23
+
24
+ 00:41.440 --> 00:47.360
25
+ as Stephen Pinker said, based on my interpretation 20 years ago, and G, none of the above.
26
+
27
+ 00:48.160 --> 00:54.720
28
+ I'd say A comes closest, but I would amend that to attaining not only knowledge, but fulfillment
29
+
30
+ 00:54.720 --> 01:06.000
31
+ more generally. That is, life, health, stimulation, access to the living cultural and social world.
32
+
33
+ 01:06.000 --> 01:10.720
34
+ Now, this is our meaning of life. It's not the meaning of life, if you were to ask our genes.
35
+
36
+ 01:12.160 --> 01:17.600
37
+ Their meaning is to propagate copies of themselves, but that is distinct from the
38
+
39
+ 01:17.600 --> 01:26.640
40
+ meaning that the brain that they lead to sets for itself. So, to you, knowledge is a small subset
41
+
42
+ 01:26.640 --> 01:33.280
43
+ or a large subset? It's a large subset, but it's not the entirety of human striving, because we
44
+
45
+ 01:33.280 --> 01:39.120
46
+ also want to interact with people. We want to experience beauty. We want to experience the
47
+
48
+ 01:39.120 --> 01:47.840
49
+ richness of the natural world, but understanding what makes the universe tick is way up there.
50
+
51
+ 01:47.840 --> 01:54.000
52
+ For some of us more than others, certainly for me, that's one of the top five.
53
+
54
+ 01:54.560 --> 02:00.080
55
+ So, is that a fundamental aspect? Are you just describing your own preference, or is this a
56
+
57
+ 02:00.080 --> 02:05.920
58
+ fundamental aspect of human nature, is to seek knowledge? In your latest book, you talk about
59
+
60
+ 02:05.920 --> 02:11.760
61
+ the power, the usefulness of rationality and reason and so on. Is that a fundamental
62
+
63
+ 02:11.760 --> 02:16.160
64
+ nature of human beings, or is it something we should just strive for?
65
+
66
+ 02:16.960 --> 02:21.840
67
+ Both. We're capable of striving for it, because it is one of the things that
68
+
69
+ 02:22.640 --> 02:31.360
70
+ make us what we are, homo sapiens, wise men. We are unusual among our animals in the degree to
71
+
72
+ 02:31.360 --> 02:39.760
73
+ which we acquire knowledge and use it to survive. We make tools. We strike agreements via language.
74
+
75
+ 02:39.760 --> 02:47.760
76
+ We extract poisons. We predict the behavior of animals. We try to get at the workings of plants.
77
+
78
+ 02:47.760 --> 02:52.640
79
+ And when I say we, I don't just mean we in the modern west, but we as a species everywhere,
80
+
81
+ 02:52.640 --> 02:58.160
82
+ which is how we've managed to occupy every niche on the planet, how we've managed to drive other
83
+
84
+ 02:58.160 --> 03:06.480
85
+ animals to extinction. And the refinement of reason in pursuit of human well being, of health,
86
+
87
+ 03:06.480 --> 03:13.680
88
+ happiness, social richness, cultural richness, is our main challenge in the present. That is,
89
+
90
+ 03:14.480 --> 03:19.280
91
+ using our intellect, using our knowledge to figure out how the world works, how we work,
92
+
93
+ 03:19.280 --> 03:25.200
94
+ in order to make discoveries and strike agreements that make us all better off in the long run.
95
+
96
+ 03:25.200 --> 03:31.920
97
+ Right. And you do that almost undeniably in a data driven way in your recent book,
98
+
99
+ 03:31.920 --> 03:36.480
100
+ but I'd like to focus on the artificial intelligence aspect of things, and not just
101
+
102
+ 03:36.480 --> 03:41.840
103
+ artificial intelligence, but natural intelligence too. So 20 years ago in the book, you've written
104
+
105
+ 03:41.840 --> 03:49.600
106
+ on how the mind works, you conjecture, again, am I right to interpret things? You can correct me
107
+
108
+ 03:49.600 --> 03:55.200
109
+ if I'm wrong, but you conjecture that human thought in the brain may be a result of a network, a massive
110
+
111
+ 03:55.200 --> 04:00.560
112
+ network of highly interconnected neurons. So from this interconnectivity emerges thought,
113
+
114
+ 04:01.280 --> 04:05.600
115
+ compared to artificial neural networks, which we use for machine learning today,
116
+
117
+ 04:06.160 --> 04:12.640
118
+ is there something fundamentally more complex, mysterious, even magical about the biological
119
+
120
+ 04:12.640 --> 04:19.440
121
+ neural networks versus the ones we've been starting to use over the past 60 years and
122
+
123
+ 04:19.440 --> 04:24.960
124
+ have come to success in the past 10? There is something a little bit mysterious about
125
+
126
+ 04:25.840 --> 04:31.760
127
+ the human neural networks, which is that each one of us who is a neural network knows that we
128
+
129
+ 04:31.760 --> 04:36.960
130
+ ourselves are conscious, conscious not in the sense of registering our surroundings or even
131
+
132
+ 04:36.960 --> 04:42.720
133
+ registering our internal state, but in having subjective first person, present tense experience.
134
+
135
+ 04:42.720 --> 04:49.840
136
+ That is, when I see red, it's not just different from green, but there's a redness to it that I
137
+
138
+ 04:49.840 --> 04:54.720
139
+ feel. Whether an artificial system would experience that or not, I don't know and I don't think I
140
+
141
+ 04:54.720 --> 05:00.480
142
+ can know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally
143
+
144
+ 05:00.480 --> 05:06.800
145
+ indistinguishable from a human, would we attribute consciousness to it or ought we to attribute
146
+
147
+ 05:06.800 --> 05:12.160
148
+ consciousness to it? And that's something that it's very hard to know. But putting that aside,
149
+
150
+ 05:12.160 --> 05:19.040
151
+ putting aside that largely philosophical question, the question is, is there some difference between
152
+
153
+ 05:19.040 --> 05:23.920
154
+ the human neural network and the ones that we're building in artificial intelligence that will mean that
155
+
156
+ 05:23.920 --> 05:30.400
157
+ we're on the current trajectory not going to reach the point where we've got a lifelike robot
158
+
159
+ 05:30.400 --> 05:35.120
160
+ indistinguishable from a human because the way their so called neural networks are organized
161
+
162
+ 05:35.120 --> 05:40.560
163
+ are different from the way ours are organized. I think there's overlap, but I think there are some
164
+
165
+ 05:40.560 --> 05:48.720
166
+ big differences that their current neural networks, current so called deep learning systems are in
167
+
168
+ 05:48.720 --> 05:53.840
169
+ reality not all that deep. That is, they are very good at extracting high order statistical
170
+
171
+ 05:53.840 --> 06:00.640
172
+ regularities. But most of the systems don't have a semantic level, a level of actual understanding
173
+
174
+ 06:00.640 --> 06:06.400
175
+ of who did what to whom, why, where, how things work, what causes, what else.
176
+
177
+ 06:06.400 --> 06:10.960
178
+ Do you think that kind of thing can emerge as it does so artificial neural networks are much
179
+
180
+ 06:10.960 --> 06:16.480
181
+ smaller in the number of connections and so on than the current human biological networks. But do you
182
+
183
+ 06:16.480 --> 06:22.640
184
+ think sort of go to consciousness or to go to this higher level semantic reasoning about things?
185
+
186
+ 06:22.640 --> 06:28.960
187
+ Do you think that can emerge with just a larger network with a more richly, weirdly interconnected
188
+
189
+ 06:28.960 --> 06:33.280
190
+ network? Let me separate out consciousness, because consciousness isn't even just a matter of complexity.
191
+
192
+ 06:33.280 --> 06:37.920
193
+ A really weird one. Yeah, you could sensibly ask the question of whether shrimp are conscious,
194
+
195
+ 06:37.920 --> 06:43.200
196
+ for example. They're not terribly complex, but maybe they feel pain. So let's just put that
197
+
198
+ 06:43.200 --> 06:50.000
199
+ part of it aside. But I think sheer size of a neural network is not enough to give it
200
+
201
+ 06:50.960 --> 06:57.360
202
+ structure and knowledge. But if it's suitably engineered, then why not? That is, we're neural
203
+
204
+ 06:57.360 --> 07:03.680
205
+ networks. Natural selection did a kind of equivalent of engineering of our brains. So I don't think
206
+
207
+ 07:03.680 --> 07:10.880
208
+ there's anything mysterious in the sense that no system made of silicon could ever do what a human
209
+
210
+ 07:10.880 --> 07:16.080
211
+ brain can do. I think it's possible in principle. Whether it'll ever happen depends not only on
212
+
213
+ 07:16.080 --> 07:21.040
214
+ how clever we are in engineering these systems, but whether we even want to, whether that's
215
+
216
+ 07:21.040 --> 07:27.440
217
+ even a sensible goal. That is, you can ask the question, is there any locomotion system that is
218
+
219
+ 07:28.320 --> 07:32.960
220
+ as good as a human? Well, we kind of want to do better than a human ultimately in terms of
221
+
222
+ 07:32.960 --> 07:39.360
223
+ legged locomotion. There's no reason that humans should be our benchmark. They're tools that might
224
+
225
+ 07:39.360 --> 07:49.280
226
+ be better in some ways. It may be that we can't duplicate a natural system because at some point,
227
+
228
+ 07:49.280 --> 07:53.840
229
+ it's so much cheaper to use a natural system that we're not going to invest more brain power
230
+
231
+ 07:53.840 --> 08:00.000
232
+ and resources. So for example, we don't really have an exact substitute for wood. We still build
233
+
234
+ 08:00.000 --> 08:04.400
235
+ houses out of wood. We still build furniture out of wood. We like the look. We like the feel.
236
+
237
+ 08:04.400 --> 08:09.280
238
+ Wood has certain properties that synthetics don't. It's not that there's anything magical or
239
+
240
+ 08:09.280 --> 08:16.400
241
+ mysterious about wood. It's just that the extra steps of duplicating everything about wood is
242
+
243
+ 08:16.400 --> 08:20.480
244
+ something we just haven't bothered because we have wood. Likewise, cotton. I'm wearing cotton
245
+
246
+ 08:20.480 --> 08:26.880
247
+ clothing now. It feels much better than polyester. It's not that cotton has something magic in it,
248
+
249
+ 08:27.600 --> 08:33.120
250
+ and it's not that we couldn't ever synthesize something exactly like cotton,
251
+
252
+ 08:33.120 --> 08:37.760
253
+ but at some point, it's just not worth it. We've got cotton. Likewise, in the case of human
254
+
255
+ 08:37.760 --> 08:43.520
256
+ intelligence, the goal of making an artificial system that is exactly like the human brain
257
+
258
+ 08:43.520 --> 08:49.440
259
+ is a goal that probably no one is going to pursue to the bitter end, I suspect, because
260
+
261
+ 08:50.080 --> 08:53.600
262
+ if you want tools that do things better than humans, you're not going to care whether it
263
+
264
+ 08:53.600 --> 08:58.720
265
+ does something like humans. So for example, diagnosing cancer or predicting the weather,
266
+
267
+ 08:58.720 --> 09:07.360
268
+ why set humans as your benchmark? But in general, I suspect you also believe that even if the human
269
+
270
+ 09:07.360 --> 09:11.440
271
+ should not be a benchmark and we don't want to imitate humans in their system, there's a lot
272
+
273
+ 09:11.440 --> 09:16.800
274
+ to be learned about how to create an artificial intelligence system by studying the humans.
275
+
276
+ 09:16.800 --> 09:23.440
277
+ Yeah, I think that's right. In the same way that to build flying machines, we want to understand
278
+
279
+ 09:23.440 --> 09:28.880
280
+ the laws of aerodynamics, including birds, but not mimic the birds, but they're the same laws.
281
+
282
+ 09:30.480 --> 09:38.400
283
+ You have a view on AI, artificial intelligence and safety, that from my perspective,
284
+
285
+ 09:38.400 --> 09:49.360
286
+ is refreshingly rational, or perhaps more importantly, has elements of positivity to it,
287
+
288
+ 09:49.360 --> 09:55.440
289
+ which I think can be inspiring and empowering as opposed to paralyzing. For many people,
290
+
291
+ 09:55.440 --> 10:02.320
292
+ including AI researchers, the eventual existential threat of AI is obvious, not only possible but
293
+
294
+ 10:02.320 --> 10:08.640
295
+ obvious. And for many others, including AI researchers, the threat is not obvious. So
296
+
297
+ 10:09.520 --> 10:16.480
298
+ Elon Musk is famously in the highly concerned about AI camp, saying things like AI is far
299
+
300
+ 10:16.480 --> 10:22.240
301
+ more dangerous than nuclear weapons, and that AI will likely destroy human civilization.
302
+
303
+ 10:22.960 --> 10:30.400
304
+ So in February, you said that if Elon was really serious about AI, the threat of AI,
305
+
306
+ 10:30.400 --> 10:34.960
307
+ he would stop building self driving cars that he's doing very successfully as part of Tesla.
308
+
309
+ 10:35.840 --> 10:40.880
310
+ Then he said, wow, if even Pinker doesn't understand the difference between narrow AI
311
+
312
+ 10:40.880 --> 10:47.280
313
+ like a car and general AI, when the latter literally has a million times more compute power
314
+
315
+ 10:47.280 --> 10:54.240
316
+ and an open ended utility function, humanity is in deep trouble. So first, what did you mean by
317
+
318
+ 10:54.240 --> 10:59.200
319
+ the statement about Elon Musk should stop building self driving cars if he's deeply concerned?
320
+
321
+ 10:59.200 --> 11:03.520
322
+ Well, not the last time that Elon Musk has fired off an intemperate tweet.
323
+
324
+ 11:04.320 --> 11:07.600
325
+ Well, we live in a world where Twitter has power.
326
+
327
+ 11:07.600 --> 11:16.640
328
+ Yes. Yeah, I think there are two kinds of existential threat that have been discussed
329
+
330
+ 11:16.640 --> 11:19.760
331
+ in connection with artificial intelligence, and I think that they're both incoherent.
332
+
333
+ 11:20.480 --> 11:28.800
334
+ One of them is a vague fear of AI takeover, that just as we subjugated animals and less
335
+
336
+ 11:28.800 --> 11:33.360
337
+ technologically advanced peoples, so if we build something that's more advanced than us,
338
+
339
+ 11:33.360 --> 11:39.200
340
+ it will inevitably turn us into pets or slaves or domesticated animal equivalents.
341
+
342
+ 11:40.240 --> 11:46.720
343
+ I think this confuses intelligence with a will to power. It so happens that in the
344
+
345
+ 11:46.720 --> 11:52.240
346
+ intelligent system we are most familiar with, namely Homo sapiens, we are products of natural
347
+
348
+ 11:52.240 --> 11:56.800
349
+ selection, which is a competitive process. And so bundled together with our problem solving
350
+
351
+ 11:56.800 --> 12:05.200
352
+ capacity are a number of nasty traits like dominance and exploitation and maximization of
353
+
354
+ 12:05.200 --> 12:11.040
355
+ power and glory and resources and influence. There's no reason to think that sheer problem
356
+
357
+ 12:11.040 --> 12:16.640
358
+ solving capability will set that as one of its goals. Its goals will be whatever we set its goals
359
+
360
+ 12:16.640 --> 12:21.760
361
+ as, and as long as someone isn't building a megalomaniacal artificial intelligence,
362
+
363
+ 12:22.560 --> 12:25.360
364
+ then there's no reason to think that it would naturally evolve in that direction.
365
+
366
+ 12:25.360 --> 12:31.600
367
+ Now you might say, well, what if we gave it the goal of maximizing its own power source?
368
+
369
+ 12:31.600 --> 12:35.280
370
+ That's a pretty stupid goal to give an autonomous system. You don't give it that goal.
371
+
372
+ 12:36.000 --> 12:41.360
373
+ I mean, that's just self evidently idiotic. So if you look at the history of the world,
374
+
375
+ 12:41.360 --> 12:45.680
376
+ there's been a lot of opportunities where engineers could instill in a system destructive
377
+
378
+ 12:45.680 --> 12:49.520
379
+ power and they choose not to because that's the natural process of engineering.
380
+
381
+ 12:49.520 --> 12:52.880
382
+ Well, except for weapons. I mean, if you're building a weapon, its goal is to destroy
383
+
384
+ 12:52.880 --> 12:58.400
385
+ people. And so I think there are good reasons to not build certain kinds of weapons. I think
386
+
387
+ 12:58.400 --> 13:06.240
388
+ building nuclear weapons was a massive mistake. You do. So maybe pause on that because that is
389
+
390
+ 13:06.240 --> 13:12.880
391
+ one of the serious threats. Do you think that it was a mistake in a sense that it should have been
392
+
393
+ 13:12.880 --> 13:19.200
394
+ stopped early on? Or do you think it's just an unfortunate event of invention that this was
395
+
396
+ 13:19.200 --> 13:22.800
397
+ invented? Do you think it's possible to stop, I guess, is the question on that? Yeah, it's hard to
398
+
399
+ 13:22.800 --> 13:27.440
400
+ rewind the clock because, of course, it was invented in the context of World War II and the
401
+
402
+ 13:27.440 --> 13:33.120
403
+ fear that the Nazis might develop one first. Then once it was initiated for that reason,
404
+
405
+ 13:33.120 --> 13:40.800
406
+ it was hard to turn off, especially since winning the war against the Japanese and the Nazis was
407
+
408
+ 13:40.800 --> 13:46.160
409
+ such an overwhelming goal of every responsible person that there was just nothing that people
410
+
411
+ 13:46.160 --> 13:51.440
412
+ wouldn't have done then to ensure victory. It's quite possible if World War II hadn't happened
413
+
414
+ 13:51.440 --> 13:56.560
415
+ that nuclear weapons wouldn't have been invented. We can't know. But I don't think it was, by any
416
+
417
+ 13:56.560 --> 14:01.760
418
+ means, a necessity any more than some of the other weapon systems that were envisioned but never
419
+
420
+ 14:01.760 --> 14:09.040
421
+ implemented, like planes that would disperse poison gas over cities like crop dusters or systems to
422
+
423
+ 14:09.040 --> 14:16.080
424
+ try to create earthquakes and tsunamis in enemy countries, to weaponize the weather,
425
+
426
+ 14:16.080 --> 14:21.520
427
+ weaponize solar flares, all kinds of crazy schemes that we thought the better of. I think
428
+
429
+ 14:21.520 --> 14:26.800
430
+ analogies between nuclear weapons and artificial intelligence are fundamentally misguided because
431
+
432
+ 14:26.800 --> 14:31.520
433
+ the whole point of nuclear weapons is to destroy things. The point of artificial intelligence
434
+
435
+ 14:31.520 --> 14:37.360
436
+ is not to destroy things. So the analogy is misleading. So there's two artificial
437
+
438
+ 14:37.360 --> 14:42.320
439
+ intelligence you mentioned. The first one was the highly intelligent or power hungry. Yeah,
440
+
441
+ 14:42.320 --> 14:47.040
442
+ an assistant that we design ourselves where we give it the goals. Goals are external to the
443
+
444
+ 14:48.320 --> 14:55.840
445
+ means to attain the goals. If we don't design an artificially intelligent system to maximize
446
+
447
+ 14:56.560 --> 15:02.400
448
+ dominance, then it won't maximize dominance. It's just that we're so familiar with homo sapiens
449
+
450
+ 15:02.400 --> 15:08.800
451
+ where these two traits come bundled together, particularly in men, that we are apt to confuse
452
+
453
+ 15:08.800 --> 15:16.720
454
+ high intelligence with a will to power. But that's just an error. The other fear is that
455
+
456
+ 15:16.720 --> 15:23.040
457
+ we'll be collateral damage, that we'll give artificial intelligence a goal like make paper clips
458
+
459
+ 15:23.040 --> 15:28.320
460
+ and it will pursue that goal so brilliantly that before we can stop it, it turns us into paper
461
+
462
+ 15:28.320 --> 15:34.400
463
+ clips. We'll give it the goal of curing cancer and it will turn us into guinea pigs for lethal
464
+
465
+ 15:34.400 --> 15:40.000
466
+ experiments or give it the goal of world peace and its conception of world peace is no people,
467
+
468
+ 15:40.000 --> 15:44.480
469
+ therefore no fighting and so it will kill us all. Now, I think these are utterly fanciful. In fact,
470
+
471
+ 15:44.480 --> 15:49.600
472
+ I think they're actually self defeating. They first of all assume that we're going to be so
473
+
474
+ 15:49.600 --> 15:54.880
475
+ brilliant that we can design an artificial intelligence that can cure cancer. But so stupid
476
+
477
+ 15:54.880 --> 16:00.160
478
+ that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the
479
+
480
+ 16:00.160 --> 16:06.720
481
+ process. And it assumes that the system will be so smart that it can cure cancer. But so
482
+
483
+ 16:06.720 --> 16:11.520
484
+ idiotic that it can't figure out that what we mean by curing cancer is not killing
485
+
486
+ 16:11.520 --> 16:17.920
487
+ everyone. So I think that the collateral damage scenario, the value alignment problem is also
488
+
489
+ 16:17.920 --> 16:23.200
490
+ based on a misconception. So one of the challenges, of course, we don't know how to build either system
491
+
492
+ 16:23.200 --> 16:27.440
493
+ currently, or are we even close to knowing? Of course, those things can change overnight,
494
+
495
+ 16:27.440 --> 16:33.840
496
+ but at this time, theorizing about it is very challenging in either direction. So that's
497
+
498
+ 16:33.840 --> 16:39.600
499
+ probably at the core of the problem is without that ability to reason about the real engineering
500
+
501
+ 16:39.600 --> 16:45.200
502
+ things here at hand, your imagination runs away with things. Exactly. But let me sort of ask,
503
+
504
+ 16:45.920 --> 16:52.320
505
+ what do you think was the motivation, the thought process of Elon Musk? I build autonomous vehicles,
506
+
507
+ 16:52.320 --> 16:58.000
508
+ I study autonomous vehicles, I study Tesla autopilot. I think it is one of the greatest
509
+
510
+ 16:58.000 --> 17:02.880
511
+ current large scale applications of artificial intelligence in the world.
512
+
513
+ 17:02.880 --> 17:09.120
514
+ It has a potentially very positive impact on society. So how does a person who's creating this
515
+
516
+ 17:09.120 --> 17:17.680
517
+ very good, quote unquote, narrow AI system also seem to be so concerned about this other
518
+
519
+ 17:17.680 --> 17:21.120
520
+ general AI? What do you think is the motivation there? What do you think is the thing?
521
+
522
+ 17:21.120 --> 17:30.640
523
+ Well, you probably have to ask him, but he is notoriously flamboyant, impulsive to the,
524
+
525
+ 17:30.640 --> 17:35.120
526
+ as we have just seen, to the detriment of his own goals of the health of a company.
527
+
528
+ 17:36.000 --> 17:41.600
529
+ So I don't know what's going on in his mind. You probably have to ask him. But I don't think the,
530
+
531
+ 17:41.600 --> 17:48.160
532
+ and I don't think the distinction between special purpose AI and so called general AI is relevant
533
+
534
+ 17:48.160 --> 17:54.400
535
+ that in the same way that special purpose AI is not going to do anything conceivable in order to
536
+
537
+ 17:54.400 --> 18:00.560
538
+ attain a goal, all engineering systems are designed to trade off across multiple goals.
539
+
540
+ 18:00.560 --> 18:05.920
541
+ When we build cars in the first place, we didn't forget to install brakes because the goal of a
542
+
543
+ 18:05.920 --> 18:12.320
544
+ car is to go fast. It occurred to people, yes, you want to go fast, but not always. So you build
545
+
546
+ 18:12.320 --> 18:18.960
547
+ in brakes too. Likewise, if a car is going to be autonomous and we program it to take the
548
+
549
+ 18:18.960 --> 18:23.440
550
+ shortest route to the airport, it's not going to take the diagonal and mow down people and trees
551
+
552
+ 18:23.440 --> 18:28.000
553
+ and fences because that's the shortest route. That's not what we mean by the shortest route when we
554
+
555
+ 18:28.000 --> 18:34.720
556
+ program it. And that's just what an intelligence system is by definition. It takes into account
557
+
558
+ 18:34.720 --> 18:40.640
559
+ multiple constraints. The same is true, in fact, even more true of so called general intelligence.
560
+
561
+ 18:40.640 --> 18:47.040
562
+ That is, if it's genuinely intelligent, it's not going to pursue some goal single mindedly,
563
+
564
+ 18:47.040 --> 18:53.280
565
+ omitting every other consideration and collateral effect. That's not artificial
566
+
567
+ 18:53.280 --> 18:58.560
568
+ general intelligence. That's artificial stupidity. I agree with you, by the way,
569
+
570
+ 18:58.560 --> 19:03.280
571
+ on the promise of autonomous vehicles for improving human welfare. I think it's spectacular.
572
+
573
+ 19:03.280 --> 19:08.080
574
+ And I'm surprised at how little press coverage notes that in the United States alone,
575
+
576
+ 19:08.080 --> 19:13.200
577
+ something like 40,000 people die every year on the highways, vastly more than are killed by
578
+
579
+ 19:13.200 --> 19:19.440
580
+ terrorists. And we spend a trillion dollars on a war to combat deaths by terrorism,
581
+
582
+ 19:19.440 --> 19:24.080
583
+ about half a dozen a year, whereas year in and year out, 40,000 people are
584
+
585
+ 19:24.080 --> 19:27.600
586
+ massacred on the highways, which could be brought down to very close to zero.
587
+
588
+ 19:28.560 --> 19:31.840
589
+ So I'm with you on the humanitarian benefit.
590
+
591
+ 19:31.840 --> 19:36.240
592
+ Let me just mention that as a person who's building these cars, it is a little bit offensive to me
593
+
594
+ 19:36.240 --> 19:41.680
595
+ to say that engineers would be clueless enough not to engineer safety into systems. I often
596
+
597
+ 19:41.680 --> 19:46.400
598
+ stay up at night thinking about those 40,000 people that are dying. And everything I try to
599
+
600
+ 19:46.400 --> 19:52.000
601
+ engineer is to save those people's lives. So every new invention that I'm super excited about,
602
+
603
+ 19:52.000 --> 19:59.680
604
+ every new, all the deep learning literature and CVPR conferences and NIPS, everything I'm super
605
+
606
+ 19:59.680 --> 20:08.320
607
+ excited about is all grounded in making it safe and helping people. So I just don't see how that
608
+
609
+ 20:08.320 --> 20:13.200
610
+ trajectory can all of a sudden slip into a situation where intelligence will be highly
611
+
612
+ 20:13.200 --> 20:17.840
613
+ negative. You and I certainly agree on that. And I think that's only the beginning of the
614
+
615
+ 20:17.840 --> 20:23.760
616
+ potential humanitarian benefits of artificial intelligence. There's been enormous attention
617
+
618
+ 20:23.760 --> 20:27.680
619
+ to what are we going to do with the people whose jobs are made obsolete by artificial
620
+
621
+ 20:27.680 --> 20:31.600
622
+ intelligence. But very little attention given to the fact that the jobs that are going to be
623
+
624
+ 20:31.600 --> 20:37.600
625
+ made obsolete are horrible jobs. The fact that people aren't going to be picking crops and making
626
+
627
+ 20:37.600 --> 20:43.760
628
+ beds and driving trucks and mining coal, these are soul deadening jobs. And we have a whole
629
+
630
+ 20:43.760 --> 20:51.280
631
+ literature sympathizing with the people stuck in these menial, mind deadening, dangerous jobs.
632
+
633
+ 20:52.080 --> 20:56.160
634
+ If we can eliminate them, this is a fantastic boon to humanity. Now, granted,
635
+
636
+ 20:56.160 --> 21:02.160
637
+ we, you solve one problem and there's another one, namely, how do we get these people a decent
638
+
639
+ 21:02.160 --> 21:08.320
640
+ income? But if we're smart enough to invent machines that can make beds and put away dishes and
641
+
642
+ 21:09.520 --> 21:14.080
643
+ handle hospital patients, I think we're smart enough to figure out how to redistribute income
644
+
645
+ 21:14.080 --> 21:20.960
646
+ to apportion some of the vast economic savings to the human beings who will no longer be needed to
647
+
648
+ 21:20.960 --> 21:28.400
649
+ make beds. Okay. Sam Harris says that it's obvious that eventually AI will be an existential risk.
650
+
651
+ 21:29.280 --> 21:36.240
652
+ He's one of the people who says it's obvious. We don't know when, the claim goes, but eventually
653
+
654
+ 21:36.240 --> 21:41.760
655
+ it's obvious. And because we don't know when, we should worry about it now. It's a very interesting
656
+
657
+ 21:41.760 --> 21:49.120
658
+ argument in my eyes. So how do we think about timescale? How do we think about existential
659
+
660
+ 21:49.120 --> 21:55.040
661
+ threats when we don't really, we know so little about the threat, unlike nuclear weapons, perhaps,
662
+
663
+ 21:55.040 --> 22:02.400
664
+ about this particular threat, that it could happen tomorrow, right? So, but very likely it won't.
665
+
666
+ 22:03.120 --> 22:08.320
667
+ Very likely it'd be 100 years away. So how do, do we ignore it? Do, how do we talk about it?
668
+
669
+ 22:08.880 --> 22:13.040
670
+ Do we worry about it? What, how do we think about those? What is it?
671
+
672
+ 22:13.040 --> 22:19.600
673
+ A threat that we can imagine, it's within the limits of our imagination, but not within our
674
+
675
+ 22:19.600 --> 22:25.760
676
+ limits of understanding to sufficient, to accurately predict it. But what, what is, what is the it
677
+
678
+ 22:25.760 --> 22:31.280
679
+ that we're referring to? Oh, AI, sorry, AI, AI being the existential threat. AI can always...
680
+
681
+ 22:31.280 --> 22:34.400
682
+ How? But like enslaving us or turning us into paperclips?
683
+
684
+ 22:35.120 --> 22:38.800
685
+ I think the most compelling from the Sam Harris perspective would be the paperclip situation.
686
+
687
+ 22:38.800 --> 22:44.000
688
+ Yeah. I mean, I just think it's totally fanciful. I mean, that is, don't build a system. Don't give a,
689
+
690
+ 22:44.000 --> 22:50.400
691
+ don't... First of all, the code of engineering is you don't implement a system with massive
692
+
693
+ 22:50.400 --> 22:55.040
694
+ control before testing it. Now, perhaps the culture of engineering will radically change,
695
+
696
+ 22:55.040 --> 23:00.320
697
+ then I would worry, but I don't see any signs that engineers will suddenly do idiotic things,
698
+
699
+ 23:00.320 --> 23:05.440
700
+ like put an electrical power plant in control of a system that they haven't tested
701
+
702
+ 23:05.440 --> 23:14.720
703
+ first. Or all of these scenarios not only imagine an almost magically powered intelligence,
704
+
705
+ 23:15.360 --> 23:20.000
706
+ you know, including things like cure cancer, which is probably an incoherent goal because
707
+
708
+ 23:20.000 --> 23:25.440
709
+ there's so many different kinds of cancer or bring about world peace. I mean, how do you even specify
710
+
711
+ 23:25.440 --> 23:31.360
712
+ that as a goal? But the scenarios also imagine some degree of control of every molecule in the
713
+
714
+ 23:31.360 --> 23:38.480
715
+ universe, which not only is itself unlikely, but we would not start to connect these systems to
716
+
717
+ 23:39.200 --> 23:45.840
718
+ infrastructure without, without testing as we would any kind of engineering system. Now,
719
+
720
+ 23:45.840 --> 23:53.920
721
+ maybe some engineers will be irresponsible and we need regulatory and legal
722
+
723
+ 23:53.920 --> 23:59.440
724
+ responsibility implemented so that engineers don't do things that are stupid by their own standards.
725
+
726
+ 23:59.440 --> 24:08.560
727
+ But the, I've never seen enough of a plausible scenario of existential threat to devote large
728
+
729
+ 24:08.560 --> 24:14.720
730
+ amounts of brain power to, to forestall it. So you believe in the sort of the power en masse of
731
+
732
+ 24:14.720 --> 24:19.520
733
+ the engineering of reason as you argue in your latest book of reason and science to sort of
734
+
735
+ 24:20.400 --> 24:26.160
736
+ be the very thing that guides the development of new technology so it's safe and also keeps us
737
+
738
+ 24:26.160 --> 24:32.480
739
+ safe. Yeah, the same, you know, granted the same culture of safety that currently is part of the
740
+
741
+ 24:32.480 --> 24:38.960
742
+ engineering mindset for airplanes, for example. So yeah, I don't think that, that that should
743
+
744
+ 24:38.960 --> 24:44.800
745
+ be thrown out the window and that untested, all powerful systems should be suddenly implemented.
746
+
747
+ 24:44.800 --> 24:47.360
748
+ But there's no reason to think they are. And in fact, if you look at the
749
+
750
+ 24:48.160 --> 24:51.760
751
+ progress of artificial intelligence, it's been, you know, it's been impressive, especially in
752
+
753
+ 24:51.760 --> 24:56.960
754
+ the last 10 years or so. But the idea that suddenly there'll be a step function that all of a sudden
755
+
756
+ 24:56.960 --> 25:02.160
757
+ before we know it, it will be all powerful, that there'll be some kind of recursive self
758
+
759
+ 25:02.160 --> 25:11.200
760
+ improvement, some kind of foom, is also fanciful. Certainly by the technology that
761
+
762
+ 25:11.200 --> 25:16.720
763
+ now impresses us, such as deep learning, where you train something on hundreds of thousands or
764
+
765
+ 25:16.720 --> 25:22.720
766
+ millions of examples. There are not hundreds of thousands of problems of which curing cancer is
767
+
768
+ 25:24.320 --> 25:30.560
769
+ a typical example. And so the kind of techniques that have allowed AI to improve in the last
770
+
771
+ 25:30.560 --> 25:37.600
772
+ five years are not the kind that are going to lead to this fantasy of exponential sudden
773
+
774
+ 25:37.600 --> 25:43.680
775
+ self improvement. So I think it's kind of a magical thinking. It's not based on our understanding
776
+
777
+ 25:43.680 --> 25:49.200
778
+ of how AI actually works. Now, give me a chance here. So you said fanciful, magical thinking.
779
+
780
+ 25:50.240 --> 25:55.280
781
+ In his TED Talk, Sam Harris says that thinking about AI killing all human civilization is somehow
782
+
783
+ 25:55.280 --> 26:00.400
784
+ fun intellectually. Now, I have to say, as a scientist and engineer, I don't find it fun.
785
+
786
+ 26:01.200 --> 26:08.560
787
+ But when I'm having beer with my non AI friends, there is indeed something fun and appealing about
788
+
789
+ 26:08.560 --> 26:14.720
790
+ it. Like talking about an episode of Black Mirror, considering if a large meteor is headed towards
791
+
792
+ 26:14.720 --> 26:20.640
793
+ Earth, we were just told a large meteor is headed towards Earth, something like this. And can you
794
+
795
+ 26:20.640 --> 26:25.840
796
+ relate to this sense of fun? And do you understand the psychology of it? Yes, great. Good question.
797
+
798
+ 26:26.880 --> 26:33.440
799
+ I personally don't find it fun. I find it kind of actually a waste of time, because there are
800
+
801
+ 26:33.440 --> 26:39.840
802
+ genuine threats that we ought to be thinking about, like pandemics, like cybersecurity
803
+
804
+ 26:39.840 --> 26:47.040
805
+ vulnerabilities, like the possibility of nuclear war and certainly climate change. This is enough
806
+
807
+ 26:47.040 --> 26:55.280
808
+ to fill many conversations without. And I think Sam did put his finger on something, namely that
809
+
810
+ 26:55.280 --> 27:03.120
811
+ there is a community, sometimes called the rationality community, that delights in using its
812
+
813
+ 27:03.120 --> 27:10.160
814
+ brain power to come up with scenarios that would not occur to mere mortals, to less cerebral people.
815
+
816
+ 27:10.160 --> 27:15.360
817
+ So there is a kind of intellectual thrill in finding new things to worry about that no one
818
+
819
+ 27:15.360 --> 27:21.200
820
+ has worried about yet. I actually think, though, that not only is it a kind of fun that doesn't
821
+
822
+ 27:21.200 --> 27:27.280
823
+ give me particular pleasure. But I think there can be a pernicious side to it, namely that you
824
+
825
+ 27:27.280 --> 27:35.280
826
+ overcome people with such dread, such fatalism, that there's so many ways to die to annihilate
827
+
828
+ 27:35.280 --> 27:40.160
829
+ our civilization that we may as well enjoy life while we can. There's nothing we can do about it.
830
+
831
+ 27:40.160 --> 27:46.560
832
+ If climate change doesn't do us in, then runaway robots will. So let's enjoy ourselves now. We
833
+
834
+ 27:46.560 --> 27:55.200
835
+ got to prioritize. We have to look at threats that are close to certainty, such as climate change,
836
+
837
+ 27:55.200 --> 28:00.320
838
+ and distinguish those from ones that are merely imaginable, but with infinitesimal probabilities.
839
+
840
+ 28:01.360 --> 28:07.120
841
+ And we have to take into account people's worry budget. You can't worry about everything. And
842
+
843
+ 28:07.120 --> 28:13.920
844
+ if you sow dread and fear and terror and fatalism, it can lead to a kind of numbness. Well,
845
+
846
+ 28:13.920 --> 28:18.240
847
+ they're just these problems are overwhelming and the engineers are just going to kill us all.
848
+
849
+ 28:19.040 --> 28:25.760
850
+ So let's either destroy the entire infrastructure of science, technology,
851
+
852
+ 28:26.640 --> 28:32.080
853
+ or let's just enjoy life while we can. So there's a certain line of worry, which I'm
854
+
855
+ 28:32.080 --> 28:36.160
856
+ worried about a lot of things engineering. There's a certain line of worry when you cross,
857
+
858
+ 28:36.160 --> 28:42.800
859
+ you allow it to cross, that it becomes paralyzing fear as opposed to productive fear. And that's
860
+
861
+ 28:42.800 --> 28:49.760
862
+ kind of what you're highlighting. Exactly right. And we've seen some, we know that human effort is
863
+
864
+ 28:49.760 --> 28:58.080
865
+ not well calibrated against risk in that because a basic tenet of cognitive psychology is that
866
+
867
+ 28:59.440 --> 29:05.120
868
+ perception of risk and hence perception of fear is driven by imaginability, not by data.
869
+
870
+ 29:05.920 --> 29:11.200
871
+ And so we misallocate vast amounts of resources to avoiding terrorism,
872
+
873
+ 29:11.200 --> 29:16.240
874
+ which kills on average about six Americans a year, with the one exception of 9/11. We invade
875
+
876
+ 29:16.240 --> 29:23.920
877
+ countries, we invent entire new departments of government with massive, massive expenditure
878
+
879
+ 29:23.920 --> 29:30.800
880
+ of resources and lives to defend ourselves against a trivial risk. Whereas guaranteed risks,
881
+
882
+ 29:30.800 --> 29:36.720
883
+ you mentioned as one of them, you mentioned traffic fatalities and even risks that are
884
+
885
+ 29:36.720 --> 29:46.240
886
+ not here, but are plausible enough to worry about like pandemics, like nuclear war,
887
+
888
+ 29:47.120 --> 29:51.760
889
+ receive far too little attention. In presidential debates, there's no discussion of
890
+
891
+ 29:51.760 --> 29:56.720
892
+ how to minimize the risk of nuclear war, lots of discussion of terrorism, for example.
893
+
894
+ 29:57.840 --> 30:05.520
895
+ And so I think it's essential to calibrate our budget of fear, worry, concern and planning
896
+
897
+ 30:05.520 --> 30:12.640
898
+ to the actual probability of harm. Yep. So let me ask this in this question.
899
+
900
+ 30:13.520 --> 30:18.960
901
+ So speaking of imaginability, you said it's important to think about reason. And one of my
902
+
903
+ 30:18.960 --> 30:26.560
904
+ favorite people who likes to dip into the outskirts of reason through fascinating exploration of his
905
+
906
+ 30:26.560 --> 30:34.880
907
+ imagination is Joe Rogan. Oh, yes. So who has, through reason, used to believe a lot of conspiracies
908
+
909
+ 30:34.880 --> 30:40.000
910
+ and through reason has stripped away a lot of his beliefs in that way. So it's fascinating actually
911
+
912
+ 30:40.000 --> 30:47.920
913
+ to watch him through rationality, kind of throw away the ideas of Bigfoot and 9/11. I'm not sure
914
+
915
+ 30:47.920 --> 30:52.320
916
+ exactly. Chemtrails. I don't know what he believes in. Yes, okay. But he no longer believed in,
917
+
918
+ 30:52.320 --> 30:57.920
919
+ that's right. No, he's become a real force for good. So you were on the Joe Rogan podcast in
920
+
921
+ 30:57.920 --> 31:02.880
922
+ February and had a fascinating conversation, but as far as I remember, didn't talk much about
923
+
924
+ 31:02.880 --> 31:09.280
925
+ artificial intelligence. I will be on his podcast in a couple of weeks. Joe is very much concerned
926
+
927
+ 31:09.280 --> 31:14.640
928
+ about existential threat of AI. I'm not sure if you're, this is why I was hoping that you'll get
929
+
930
+ 31:14.640 --> 31:20.480
931
+ into that topic. And in this way, he represents quite a lot of people who look at the topic of AI
932
+
933
+ 31:20.480 --> 31:27.840
934
+ from 10,000 foot level. So as an exercise of communication, you said it's important to be
935
+
936
+ 31:27.840 --> 31:33.280
937
+ rational and reason about these things. Let me ask, if you were to coach me as an AI researcher
938
+
939
+ 31:33.280 --> 31:38.320
940
+ about how to speak to Joe and the general public about AI, what would you advise?
941
+
942
+ 31:38.320 --> 31:42.400
943
+ Well, the short answer would be to read the sections that I wrote in Enlightenment Now.
944
+
945
+ 31:44.080 --> 31:48.880
946
+ But the longer answer would be, I think, to emphasize, and I think you're very well positioned as an
947
+
948
+ 31:48.880 --> 31:54.800
949
+ engineer to remind people about the culture of engineering, that it really is safety oriented,
950
+
951
+ 31:54.800 --> 32:02.160
952
+ that in another discussion in Enlightenment Now, I plot rates of accidental death from various
953
+
954
+ 32:02.160 --> 32:09.280
955
+ causes, plane crashes, car crashes, occupational accidents, even death by lightning strikes,
956
+
957
+ 32:09.280 --> 32:16.560
958
+ and they all plummet. Because the culture of engineering is how do you squeeze out the lethal
959
+
960
+ 32:16.560 --> 32:23.360
961
+ risks, death by fire, death by drowning, death by asphyxiation, all of them drastically declined
962
+
963
+ 32:23.360 --> 32:28.160
964
+ because of advances in engineering, that I got to say, I did not appreciate until I saw those
965
+
966
+ 32:28.160 --> 32:34.000
967
+ graphs. And it is because exactly people like you who stay up at night thinking, oh my God,
968
+
969
+ 32:36.000 --> 32:42.560
970
+ is what I'm inventing likely to hurt people, and to deploy ingenuity to prevent that from happening.
971
+
972
+ 32:42.560 --> 32:47.360
973
+ Now, I'm not an engineer, although I spent 22 years at MIT, so I know something about the culture
974
+
975
+ 32:47.360 --> 32:51.360
976
+ of engineering. My understanding is that this is the way you think if you're an engineer.
977
+
978
+ 32:51.360 --> 32:58.160
979
+ And it's essential that that culture not be suddenly switched off when it comes to artificial
980
+
981
+ 32:58.160 --> 33:02.080
982
+ intelligence. So I mean, that could be a problem, but is there any reason to think it would be
983
+
984
+ 33:02.080 --> 33:07.360
985
+ switched off? I don't think so. And one, there's not enough engineers speaking up for this way,
986
+
987
+ 33:07.360 --> 33:13.680
988
+ for the excitement, for the positive view of human nature, what you're trying to create is
989
+
990
+ 33:13.680 --> 33:18.240
991
+ the positivity, like everything we try to invent is trying to do good for the world.
992
+
993
+ 33:18.240 --> 33:23.600
994
+ But let me ask you about the psychology of negativity. It seems just objectively,
995
+
996
+ 33:23.600 --> 33:27.680
997
+ not considering the topic, it seems that being negative about the future, it makes you sound
998
+
999
+ 33:27.680 --> 33:32.720
1000
+ smarter than being positive about the future, in regard to this topic. Am I correct in this
1001
+
1002
+ 33:32.720 --> 33:39.120
1003
+ observation? And if so, why do you think that is? Yeah, I think there is that phenomenon,
1004
+
1005
+ 33:39.120 --> 33:43.920
1006
+ that as Tom Lehrer, the satirist said, always predict the worst and you'll be hailed as a
1007
+
1008
+ 33:43.920 --> 33:51.840
1009
+ prophet. It may be part of our overall negativity bias. We are as a species more attuned to the
1010
+
1011
+ 33:51.840 --> 33:59.200
1012
+ negative than the positive. We dread losses more than we enjoy gains. And that might open up a
1013
+
1014
+ 33:59.200 --> 34:06.560
1015
+ space for prophets to remind us of harms and risks and losses that we may have overlooked.
1016
+
1017
+ 34:06.560 --> 34:15.040
1018
+ So I think there is that asymmetry. So you've written some of my favorite books
1019
+
1020
+ 34:16.080 --> 34:21.680
1021
+ all over the place. So starting from Enlightenment Now, to The Better Angels of Our Nature,
1022
+
1023
+ 34:21.680 --> 34:28.560
1024
+ The Blank Slate, How the Mind Works, the one about language, The Language Instinct. Bill Gates,
1025
+
1026
+ 34:28.560 --> 34:37.840
1027
+ big fan too, said of your most recent book that it's my new favorite book of all time. So for
1028
+
1029
+ 34:37.840 --> 34:44.000
1030
+ you as an author, what was the book early on in your life that had a profound impact on the way
1031
+
1032
+ 34:44.000 --> 34:50.560
1033
+ you saw the world? Certainly this book Enlightenment Now is influenced by David Deutsch's The Beginning
1034
+
1035
+ 34:50.560 --> 34:57.520
1036
+ of Infinity, which is a rather deep reflection on knowledge and the power of knowledge to improve
1037
+
1038
+ 34:57.520 --> 35:02.960
1039
+ the human condition, ending with bits of wisdom such as that problems are inevitable,
1040
+
1041
+ 35:02.960 --> 35:07.760
1042
+ but problems are solvable given the right knowledge and that solutions create new problems
1043
+
1044
+ 35:07.760 --> 35:12.480
1045
+ that have to be solved in their turn. That's I think a kind of wisdom about the human condition
1046
+
1047
+ 35:12.480 --> 35:16.960
1048
+ that influenced the writing of this book. There's some books that are excellent but obscure,
1049
+
1050
+ 35:16.960 --> 35:22.080
1051
+ some of which I have on my page on my website. I read a book called The History of Force,
1052
+
1053
+ 35:22.080 --> 35:27.920
1054
+ self published by a political scientist named James Payne on the historical decline of violence and
1055
+
1056
+ 35:27.920 --> 35:35.120
1057
+ that was one of the inspirations for the better angels of our nature. What about early on if
1058
+
1059
+ 35:35.120 --> 35:40.640
1060
+ you look back when you were maybe a teenager? I loved a book called One, Two, Three, Infinity.
1061
+
1062
+ 35:40.640 --> 35:45.920
1063
+ When I was a young adult, I read that book by George Gamow, the physicist, which had very
1064
+
1065
+ 35:45.920 --> 35:55.120
1066
+ accessible and humorous explanations of relativity, of number theory, of dimensionality, high
1067
+
1068
+ 35:56.080 --> 36:02.240
1069
+ multiple dimensional spaces in a way that I think is still delightful 70 years after it was published.
1070
+
1071
+ 36:03.120 --> 36:09.280
1072
+ I liked the Time Life Science series. These were books that arrived every month that my mother
1073
+
1074
+ 36:09.280 --> 36:15.600
1075
+ subscribed to. Each one on a different topic. One would be on electricity, one would be on
1076
+
1077
+ 36:15.600 --> 36:21.440
1078
+ forests, one would be on evolution, and then one was on the mind. I was just intrigued that there
1079
+
1080
+ 36:21.440 --> 36:27.040
1081
+ could be a science of mind. That book, I would cite as an influence as well. Then later on.
1082
+
1083
+ 36:27.040 --> 36:30.960
1084
+ That's when you fell in love with the idea of studying the mind. Was that the thing that grabbed
1085
+
1086
+ 36:30.960 --> 36:38.560
1087
+ you? It was one of the things, I would say. I read as a college student the book Reflections on
1088
+
1089
+ 36:38.560 --> 36:44.800
1090
+ Language by Noam Chomsky. He spent most of his career here at MIT. Richard Dawkins,
1091
+
1092
+ 36:44.800 --> 36:48.800
1093
+ two books, The Blind Watchmaker and The Selfish Gene, were enormously influential,
1094
+
1095
+ 36:49.520 --> 36:56.640
1096
+ partly mainly for the content, but also for the writing style, the ability to explain
1097
+
1098
+ 36:56.640 --> 37:03.760
1099
+ abstract concepts in lively prose. Stephen Jay Gould's first collection, Ever Since Darwin, also
1100
+
1101
+ 37:05.040 --> 37:11.120
1102
+ excellent example of lively writing. George Miller, the psychologist that most psychologists
1103
+
1104
+ 37:11.120 --> 37:17.440
1105
+ are familiar with, came up with the idea that human memory has a capacity of seven plus or minus
1106
+
1107
+ 37:17.440 --> 37:21.920
1108
+ two chunks. That's probably his biggest claim to fame. He wrote a couple of books on language
1109
+
1110
+ 37:21.920 --> 37:27.520
1111
+ and communication that I'd read as an undergraduate. Again, beautifully written and intellectually deep.
1112
+
1113
+ 37:28.400 --> 37:31.840
1114
+ Wonderful. Steven, thank you so much for taking the time today.
1115
+
1116
+ 37:31.840 --> 37:42.960
1117
+ My pleasure. Thanks a lot, Lex.
1118
+
vtt/episode_004_small.vtt ADDED
@@ -0,0 +1,1109 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:04.320
4
+ What difference between biological neural networks and artificial neural networks
5
+
6
+ 00:04.320 --> 00:07.680
7
+ is most mysterious, captivating and profound for you?
8
+
9
+ 00:11.120 --> 00:15.280
10
+ First of all, there's so much we don't know about biological neural networks,
11
+
12
+ 00:15.280 --> 00:21.840
13
+ and that's very mysterious and captivating because maybe it holds the key to improving
14
+
15
+ 00:21.840 --> 00:29.840
16
+ artificial neural networks. One of the things I studied recently is something that
17
+
18
+ 00:29.840 --> 00:36.160
19
+ we don't know how biological neural networks do, but would be really useful for artificial ones,
20
+
21
+ 00:37.120 --> 00:43.440
22
+ is the ability to do credit assignment through very long time spans.
23
+
24
+ 00:44.080 --> 00:49.680
25
+ There are things that we can in principle do with artificial neural nets, but it's not very
26
+
27
+ 00:49.680 --> 00:55.920
28
+ convenient and it's not biologically plausible. And this mismatch, I think this kind of mismatch,
29
+
30
+ 00:55.920 --> 01:03.600
31
+ maybe an interesting thing to study, to A, understand better how brains might do these
32
+
33
+ 01:03.600 --> 01:08.720
34
+ things because we don't have good corresponding theories with artificial neural nets, and B,
35
+
36
+ 01:10.240 --> 01:19.040
37
+ maybe provide new ideas that we could explore about things that brain do differently and
38
+
39
+ 01:19.040 --> 01:22.160
40
+ that we could incorporate in artificial neural nets.
41
+
42
+ 01:22.160 --> 01:27.680
43
+ So let's break credit assignment up a little bit. So what? It's a beautifully technical term,
44
+
45
+ 01:27.680 --> 01:34.560
46
+ but it could incorporate so many things. So is it more on the RNN memory side,
47
+
48
+ 01:35.840 --> 01:39.760
49
+ thinking like that, or is it something about knowledge, building up common sense knowledge
50
+
51
+ 01:39.760 --> 01:46.560
52
+ over time, or is it more in the reinforcement learning sense that you're picking up rewards
53
+
54
+ 01:46.560 --> 01:50.080
55
+ over time for a particular to achieve a certain kind of goal?
56
+
57
+ 01:50.080 --> 01:58.080
58
+ So I was thinking more about the first two meanings whereby we store all kinds of memories,
59
+
60
+ 01:59.120 --> 02:09.680
61
+ episodic memories in our brain, which we can access later in order to help us both infer
62
+
63
+ 02:10.560 --> 02:19.520
64
+ causes of things that we are observing now and assign credit to decisions or interpretations
65
+
66
+ 02:19.520 --> 02:26.960
67
+ we came up with a while ago when those memories were stored. And then we can change the way we
68
+
69
+ 02:26.960 --> 02:34.800
70
+ would have reacted or interpreted things in the past, and now that's credit assignment used for learning.
71
+
72
+ 02:36.320 --> 02:43.760
73
+ So in which way do you think artificial neural networks, the current LSTM,
74
+
75
+ 02:43.760 --> 02:52.240
76
+ the current architectures are not able to capture the presumably you're thinking of very long term?
77
+
78
+ 02:52.240 --> 03:00.720
79
+ Yes. So current, the current nets are doing a fairly good job for sequences with dozens or say
80
+
81
+ 03:00.720 --> 03:06.560
82
+ hundreds of time steps. And then it gets sort of harder and harder and depending on what you
83
+
84
+ 03:06.560 --> 03:13.120
85
+ have to remember and so on as you consider longer durations. Whereas humans seem to be able to
86
+
87
+ 03:13.120 --> 03:18.080
88
+ do credit assignment through essentially arbitrary times like I could remember something I did last
89
+
90
+ 03:18.080 --> 03:23.360
91
+ year. And then now because I see some new evidence, I'm going to change my mind about
92
+
93
+ 03:23.360 --> 03:29.040
94
+ the way I was thinking last year, and hopefully not do the same mistake again.
95
+
96
+ 03:31.040 --> 03:36.800
97
+ I think a big part of that is probably forgetting. You're only remembering the really important
98
+
99
+ 03:36.800 --> 03:43.680
100
+ things. It's very efficient forgetting. Yes. So there's a selection of what we remember.
101
+
102
+ 03:43.680 --> 03:49.120
103
+ And I think there are really cool connection to higher level cognitions here regarding
104
+
105
+ 03:49.120 --> 03:55.760
106
+ consciousness, deciding and emotions. So deciding what comes to consciousness and what gets stored
107
+
108
+ 03:55.760 --> 04:04.800
109
+ in memory, which are not trivial either. So you've been at the forefront there all along
110
+
111
+ 04:04.800 --> 04:10.800
112
+ showing some of the amazing things that neural networks, deep neural networks can do in the
113
+
114
+ 04:10.800 --> 04:16.560
115
+ field of artificial intelligence, and just broadly in all kinds of applications. But we can talk
116
+
117
+ 04:16.560 --> 04:23.200
118
+ about that forever. But what in your view, because we're thinking towards the future is the weakest
119
+
120
+ 04:23.200 --> 04:29.120
121
+ aspect of the way deep neural networks represent the world. What is that? What is in your view
122
+
123
+ 04:29.120 --> 04:41.200
124
+ is missing? So current state of the art neural nets trained on large quantities of images or texts
125
+
126
+ 04:43.840 --> 04:49.760
127
+ have some level of understanding of what explains those data sets, but it's very
128
+
129
+ 04:49.760 --> 05:01.440
130
+ basic. It's very low level. And it's not nearly as robust and abstract and general as our understanding.
131
+
132
+ 05:02.960 --> 05:09.760
133
+ Okay, so that doesn't tell us how to fix things. But I think it encourages us to think about
134
+
135
+ 05:09.760 --> 05:21.200
136
+ how we can maybe train our neural nets differently, so that they would focus, for example, on causal
137
+
138
+ 05:21.200 --> 05:30.000
139
+ explanations, something that we don't do currently with neural net training. Also, one thing I'll
140
+
141
+ 05:30.000 --> 05:37.920
142
+ talk about in my talk this afternoon is instead of learning separately from images and videos on
143
+
144
+ 05:37.920 --> 05:45.600
145
+ one hand and from texts on the other hand, we need to do a better job of jointly learning about
146
+
147
+ 05:45.600 --> 05:54.320
148
+ language and about the world to which it refers. So that, you know, both sides can help each other.
149
+
150
+ 05:54.880 --> 06:02.480
151
+ We need to have good world models in our neural nets for them to really understand sentences
152
+
153
+ 06:02.480 --> 06:10.000
154
+ which talk about what's going on in the world. And I think we need language input to help
155
+
156
+ 06:10.640 --> 06:17.760
157
+ provide clues about what high level concepts like semantic concepts should be represented
158
+
159
+ 06:17.760 --> 06:26.400
160
+ at the top levels of these neural nets. In fact, there is evidence that the purely unsupervised
161
+
162
+ 06:26.400 --> 06:33.840
163
+ learning of representations doesn't give rise to high level representations that are as powerful
164
+
165
+ 06:33.840 --> 06:40.320
166
+ as the ones we're getting from supervised learning. And so the clues we're getting just with the labels,
167
+
168
+ 06:40.320 --> 06:46.960
169
+ not even sentences, is already very powerful. Do you think that's an architecture challenge
170
+
171
+ 06:46.960 --> 06:55.920
172
+ or is it a data set challenge? Neither. I'm tempted to just end it there.
173
+
174
+ 07:02.960 --> 07:06.800
175
+ Of course, data sets and architectures are something you want to always play with. But
176
+
177
+ 07:06.800 --> 07:13.040
178
+ I think the crucial thing is more the training objectives, the training frameworks. For example,
179
+
180
+ 07:13.040 --> 07:20.240
181
+ going from passive observation of data to more active agents, which
182
+
183
+ 07:22.320 --> 07:27.280
184
+ learn by intervening in the world, the relationships between causes and effects,
185
+
186
+ 07:28.480 --> 07:36.240
187
+ the sort of objective functions which could be important to allow the highest level
188
+
189
+ 07:36.240 --> 07:44.000
190
+ of explanations to rise from the learning, which I don't think we have now. The kinds of
191
+
192
+ 07:44.000 --> 07:50.320
193
+ objective functions which could be used to reward exploration, the right kind of exploration. So
194
+
195
+ 07:50.320 --> 07:56.160
196
+ these kinds of questions are neither in the data set nor in the architecture, but more in
197
+
198
+ 07:56.800 --> 08:03.920
199
+ how we learn under what objectives and so on. Yeah, that's a, I've heard you mention in several
200
+
201
+ 08:03.920 --> 08:08.080
202
+ contexts, the idea of sort of the way children learn, they interact with objects in the world.
203
+
204
+ 08:08.080 --> 08:15.040
205
+ And it seems fascinating because in some sense, except with some cases in reinforcement learning,
206
+
207
+ 08:15.760 --> 08:23.600
208
+ that idea is not part of the learning process in artificial neural networks. It's almost like
209
+
210
+ 08:24.320 --> 08:33.120
211
+ do you envision something like an objective function saying, you know what, if you poke this
212
+
213
+ 08:33.120 --> 08:38.800
214
+ object in this kind of way, it would be really helpful for me to further, further learn.
215
+
216
+ 08:39.920 --> 08:44.880
217
+ Sort of almost guiding some aspect of learning. Right, right, right. So I was talking to Rebecca
218
+
219
+ 08:44.880 --> 08:54.240
220
+ Saxe just an hour ago and she was talking about lots and lots of evidence that infants seem to
221
+
222
+ 08:54.240 --> 09:04.880
223
+ clearly pick what interests them in a directed way. And so they're not passive learners.
224
+
225
+ 09:04.880 --> 09:11.680
226
+ They, they focus their attention on aspects of the world, which are most interesting,
227
+
228
+ 09:11.680 --> 09:17.760
229
+ surprising in a non trivial way that makes them change their theories of the world.
230
+
231
+ 09:17.760 --> 09:29.120
232
+ So that's a fascinating view of the future progress. But on a more maybe boring question,
233
+
234
+ 09:30.000 --> 09:37.440
235
+ do you think going deeper and larger? So do you think just increasing the size of the things
236
+
237
+ 09:37.440 --> 09:43.520
238
+ that have been increasing a lot in the past few years will, will also make significant progress?
239
+
240
+ 09:43.520 --> 09:49.760
241
+ So some of the representational issues that you, you mentioned, they're kind of shallow
242
+
243
+ 09:50.560 --> 09:54.880
244
+ in some sense. Oh, you mean in the sense of abstraction,
245
+
246
+ 09:54.880 --> 09:59.040
247
+ abstract in the sense of abstraction, they're not getting some, I don't think that having
248
+
249
+ 10:00.400 --> 10:05.520
250
+ more depth in the network in the sense of instead of 100 layers, we have 10,000 is going to solve
251
+
252
+ 10:05.520 --> 10:13.120
253
+ our problem. You don't think so? Is that obvious to you? Yes. What is clear to me is that
254
+
255
+ 10:13.120 --> 10:21.600
256
+ engineers and companies and labs, grad students will continue to tune architectures and explore
257
+
258
+ 10:21.600 --> 10:27.520
259
+ all kinds of tweaks to make the current state of the art slightly ever slightly better. But
260
+
261
+ 10:27.520 --> 10:31.840
262
+ I don't think that's going to be nearly enough. I think we need some fairly drastic changes in
263
+
264
+ 10:31.840 --> 10:39.680
265
+ the way that we're considering learning to achieve the goal that these learners actually
266
+
267
+ 10:39.680 --> 10:45.680
268
+ understand in a deep way the environment in which they are, you know, observing and acting.
269
+
270
+ 10:46.480 --> 10:51.920
271
+ But I guess I was trying to ask a question that's more interesting than just more layers
272
+
273
+ 10:53.040 --> 11:00.800
274
+ is basically once you figure out a way to learn through interacting, how many parameters does
275
+
276
+ 11:00.800 --> 11:07.760
277
+ it take to store that information? So I think our brain is quite bigger than most neural networks.
278
+
279
+ 11:07.760 --> 11:13.120
280
+ Right, right. Oh, I see what you mean. Oh, I'm with you there. So I agree that in order to
281
+
282
+ 11:14.240 --> 11:19.760
283
+ build neural nets with the kind of broad knowledge of the world that typical adult humans have,
284
+
285
+ 11:20.960 --> 11:24.880
286
+ probably the kind of computing power we have now is going to be insufficient.
287
+
288
+ 11:25.600 --> 11:30.320
289
+ So the good news is there are hardware companies building neural net chips. And so
290
+
291
+ 11:30.320 --> 11:39.280
292
+ it's going to get better. However, the good news in a way, which is also a bad news, is that even
293
+
294
+ 11:39.280 --> 11:47.840
295
+ our state of the art deep learning methods fail to learn models that understand even very simple
296
+
297
+ 11:47.840 --> 11:53.680
298
+ environments like some grid worlds that we have built. Even these fairly simple environments,
299
+
300
+ 11:53.680 --> 11:57.120
301
+ I mean, of course, if you train them with enough examples, eventually they get it,
302
+
303
+ 11:57.120 --> 12:05.200
304
+ but it's just like instead of what humans might need just dozens of examples, these things will
305
+
306
+ 12:05.200 --> 12:12.720
307
+ need millions, right, for very, very, very simple tasks. And so I think there's an opportunity
308
+
309
+ 12:13.520 --> 12:18.080
310
+ for academics who don't have the kind of computing power that say Google has
311
+
312
+ 12:19.280 --> 12:25.360
313
+ to do really important and exciting research to advance the state of the art in training
314
+
315
+ 12:25.360 --> 12:32.720
316
+ frameworks, learning models, agent learning in even simple environments that are synthetic,
317
+
318
+ 12:33.440 --> 12:37.200
319
+ that seem trivial, but yet current machine learning fails on.
320
+
321
+ 12:38.240 --> 12:48.240
322
+ We talked about priors and common sense knowledge. It seems like we humans take a lot of knowledge
323
+
324
+ 12:48.240 --> 12:57.040
325
+ for granted. So what's your view of these priors of forming this broad view of the world, this
326
+
327
+ 12:57.040 --> 13:02.560
328
+ accumulation of information, and how we can teach neural networks or learning systems to pick that
329
+
330
+ 13:02.560 --> 13:10.880
331
+ knowledge up? So knowledge, you know, for a while in artificial intelligence, maybe in the 80s,
332
+
333
+ 13:10.880 --> 13:16.880
334
+ like there's a time where knowledge representation, knowledge acquisition, expert systems, I mean,
335
+
336
+ 13:16.880 --> 13:24.080
337
+ though, the symbolic AI was a view, was an interesting problem set to solve. And it was kind
338
+
339
+ 13:24.080 --> 13:29.440
340
+ of put on hold a little bit, it seems like because it doesn't work. It doesn't work. That's right.
341
+
342
+ 13:29.440 --> 13:37.840
343
+ But that's right. But the goals of that remain important. Yes, remain important. And how do you
344
+
345
+ 13:37.840 --> 13:45.920
346
+ think those goals can be addressed? Right. So first of all, I believe that one reason why the
347
+
348
+ 13:45.920 --> 13:52.560
349
+ classical expert systems approach failed is because a lot of the knowledge we have, so you talked
350
+
351
+ 13:52.560 --> 14:01.760
352
+ about common sense and intuition, there's a lot of knowledge like this, which is not consciously
353
+
354
+ 14:01.760 --> 14:06.320
355
+ accessible. There are lots of decisions we're taking that we can't really explain, even if
356
+
357
+ 14:06.320 --> 14:16.160
358
+ sometimes we make up a story. And that knowledge is also necessary for machines to take good
359
+
360
+ 14:16.160 --> 14:22.320
361
+ decisions. And that knowledge is hard to codify in expert systems, rule based systems, and, you
362
+
363
+ 14:22.320 --> 14:27.920
364
+ know, classical AI formalism. And there are other issues, of course, with the old AI, like,
365
+
366
+ 14:29.680 --> 14:34.320
367
+ not really good ways of handling uncertainty, I would say something more subtle,
368
+
369
+ 14:34.320 --> 14:40.480
370
+ which we understand better now, but I think still isn't enough in the minds of people.
371
+
372
+ 14:41.360 --> 14:48.480
373
+ There's something really powerful that comes from distributed representations, the thing that really
374
+
375
+ 14:49.120 --> 14:58.480
376
+ makes neural nets work so well. And it's hard to replicate that kind of power in a symbolic world.
377
+
378
+ 14:58.480 --> 15:05.200
379
+ The knowledge in expert systems and so on is nicely decomposed into like a bunch of rules.
380
+
381
+ 15:05.760 --> 15:11.280
382
+ Whereas if you think about a neural net, it's the opposite. You have this big blob of parameters
383
+
384
+ 15:11.280 --> 15:16.480
385
+ which work intensely together to represent everything the network knows. And it's not
386
+
387
+ 15:16.480 --> 15:22.880
388
+ sufficiently factorized. And so I think this is one of the weaknesses of current neural nets,
389
+
390
+ 15:22.880 --> 15:30.080
391
+ that we have to take lessons from classical AI in order to bring in another kind of
392
+
393
+ 15:30.080 --> 15:35.920
394
+ compositionality, which is common in language, for example, and in these rules. But that isn't
395
+
396
+ 15:35.920 --> 15:45.040
397
+ so native to neural nets. And on that line of thinking, disentangled representations. Yes. So
398
+
399
+ 15:46.320 --> 15:51.680
400
+ let me connect with disentangled representations. If you might, if you don't mind. Yes, exactly.
401
+
402
+ 15:51.680 --> 15:58.080
403
+ Yeah. So for many years, I thought, and I still believe that it's really important that we come
404
+
405
+ 15:58.080 --> 16:04.080
406
+ up with learning algorithms, either unsupervised or supervised, but reinforcement, whatever,
407
+
408
+ 16:04.720 --> 16:11.600
409
+ that build representations in which the important factors, hopefully causal factors are nicely
410
+
411
+ 16:11.600 --> 16:16.240
412
+ separated and easy to pick up from the representation. So that's the idea of disentangled
413
+
414
+ 16:16.240 --> 16:22.560
415
+ representations. It says transfer the data into a space where everything becomes easy, we can maybe
416
+
417
+ 16:22.560 --> 16:29.360
418
+ just learn with linear models about the things we care about. And I still think this is important,
419
+
420
+ 16:29.360 --> 16:36.880
421
+ but I think this is missing out on a very important ingredient, which classical AI systems can remind
422
+
423
+ 16:36.880 --> 16:41.920
424
+ us of. So let's say we have these disentangled representations, you still need to learn about
425
+
426
+ 16:41.920 --> 16:47.120
427
+ the, the relationships between the variables, those high level semantic variables, they're not
428
+
429
+ 16:47.120 --> 16:52.000
430
+ going to be independent. I mean, this is like too much of an assumption. They're going to have some
431
+
432
+ 16:52.000 --> 16:56.400
433
+ interesting relationships that allow us to predict things in the future, to explain what happened in
434
+
435
+ 16:56.400 --> 17:01.840
436
+ the past. The kind of knowledge about those relationships in a classical AI system is
437
+
438
+ 17:01.840 --> 17:06.640
439
+ encoded in the rules, like a rule is just like a little piece of knowledge that says, oh, I have
440
+
441
+ 17:06.640 --> 17:12.160
442
+ these two, three, four variables that are linked in this interesting way. Then I can say something
443
+
444
+ 17:12.160 --> 17:17.280
445
+ about one or two of them given a couple of others, right? In addition to disentangling the,
446
+
447
+ 17:18.880 --> 17:23.520
448
+ the elements of the representation, which are like the variables in a rule based system,
449
+
450
+ 17:24.080 --> 17:33.200
451
+ you also need to disentangle the, the mechanisms that relate those variables to each other.
452
+
453
+ 17:33.200 --> 17:37.760
454
+ So like the rules. So if the rules are neatly separated, like each rule is, you know, living
455
+
456
+ 17:37.760 --> 17:44.960
457
+ on its own. And when I, I change a rule because I'm learning, it doesn't need to break other rules.
458
+
459
+ 17:44.960 --> 17:49.280
460
+ Whereas current neural nets, for example, are very sensitive to what's called catastrophic
461
+
462
+ 17:49.280 --> 17:54.800
463
+ forgetting, where after I've learned some things, and then they learn new things, they can destroy
464
+
465
+ 17:54.800 --> 18:00.480
466
+ the old things that I had learned, right? If the knowledge was better factorized and
467
+
468
+ 18:00.480 --> 18:08.240
469
+ separated and disentangled, then you would avoid a lot of that. Now you can't do this in the
470
+
471
+ 18:08.880 --> 18:17.200
472
+ sensory domain, like in a pixel space, but my idea is that when you project the
473
+
474
+ 18:17.200 --> 18:22.560
475
+ data in the right semantic space, it becomes possible to now represent this extra knowledge
476
+
477
+ 18:23.440 --> 18:27.760
478
+ beyond the transformation from input to representations, which is how representations
479
+
480
+ 18:27.760 --> 18:33.120
481
+ act on each other and predict the future and so on, in a way that can be neatly
482
+
483
+ 18:34.560 --> 18:38.560
484
+ disentangled. So now it's the rules that are disentangled from each other and not just the
485
+
486
+ 18:38.560 --> 18:43.680
487
+ variables that are disentangled from each other. And you draw distinction between semantic space
488
+
489
+ 18:43.680 --> 18:48.400
490
+ and pixel, like, does there need to be an architectural difference? Well, yeah. So, so
491
+
492
+ 18:48.400 --> 18:51.840
493
+ there's the sensory space like pixels, where everything is entangled,
494
+
495
+ 18:51.840 --> 18:58.000
496
+ and the information, like the variables are completely interdependent in very complicated
497
+
498
+ 18:58.000 --> 19:03.760
499
+ ways. And also computation, like the, it's not just variables, it's also how they are
500
+
501
+ 19:03.760 --> 19:10.240
502
+ related to each other is, is all intertwined. But, but I'm hypothesizing that in the right
503
+
504
+ 19:10.240 --> 19:16.800
505
+ high level representation space, both the variables and how they relate to each other
506
+
507
+ 19:16.800 --> 19:22.960
508
+ can be disentangled and that will provide a lot of generalization power. Generalization power.
509
+
510
+ 19:22.960 --> 19:29.760
511
+ Yes. The distribution of the test set is assumed to be the same as the distribution of the training
512
+
513
+ 19:29.760 --> 19:36.640
514
+ set. Right. This is where current machine learning is too weak. It doesn't tell us anything,
515
+
516
+ 19:36.640 --> 19:41.120
517
+ it's not able to tell us anything about how our neural nets, say, are going to generalize to a
518
+
519
+ 19:41.120 --> 19:46.160
520
+ new distribution. And, and, you know, people may think, well, but there's nothing we can say if
521
+
522
+ 19:46.160 --> 19:51.840
523
+ we don't know what the new distribution will be. The truth is, humans are able to generalize to
524
+
525
+ 19:51.840 --> 19:56.560
526
+ new distributions. Yeah, how are we able to do that? So yeah, because there is something, these
527
+
528
+ 19:56.560 --> 20:00.720
529
+ new distributions, even though they could look very different from the training distributions,
530
+
531
+ 20:01.520 --> 20:05.360
532
+ they have things in common. So let me give you a concrete example. You read a science fiction
533
+
534
+ 20:05.360 --> 20:12.560
535
+ novel, the science fiction novel, maybe, you know, brings you in some other planet where
536
+
537
+ 20:12.560 --> 20:17.760
538
+ things look very different on the surface, but it's still the same laws of physics.
539
+
540
+ 20:18.560 --> 20:21.440
541
+ All right. And so you can read the book and you understand what's going on.
542
+
543
+ 20:22.960 --> 20:29.200
544
+ So the distribution is very different. But because you can transport a lot of the knowledge you had
545
+
546
+ 20:29.200 --> 20:35.680
547
+ from Earth about the underlying cause and effect relationships and physical mechanisms and all
548
+
549
+ 20:35.680 --> 20:40.880
550
+ that, and maybe even social interactions, you can now make sense of what is going on on this
551
+
552
+ 20:40.880 --> 20:43.920
553
+ planet where like visually, for example, things are totally different.
554
+
555
+ 20:45.920 --> 20:52.000
556
+ Taking that analogy further and distorting it, let's enter a science fiction world of, say,
557
+
558
+ 20:52.000 --> 21:00.720
559
+ Space Odyssey 2001 with HAL. Yeah. Or maybe, which is probably one of my favorite AI movies.
560
+
561
+ 21:00.720 --> 21:06.080
562
+ Me too. And then there's another one that a lot of people love that may be a little bit outside
563
+
564
+ 21:06.080 --> 21:13.120
565
+ of the AI community is Ex Machina. I don't know if you've seen it. Yes. By the way, what are your
566
+
567
+ 21:13.120 --> 21:19.600
568
+ reviews on that movie? Are you able to enjoy it? So there are things I like and things I hate.
569
+
570
+ 21:21.120 --> 21:25.760
571
+ So let me, you could talk about that in the context of a question I want to ask,
572
+
573
+ 21:25.760 --> 21:31.920
574
+ which is there's quite a large community of people from different backgrounds off and outside of AI
575
+
576
+ 21:31.920 --> 21:36.480
577
+ who are concerned about existential threat of artificial intelligence. Right. You've seen
578
+
579
+ 21:36.480 --> 21:41.920
580
+ now this community develop over time. You've seen you have a perspective. So what do you think is
581
+
582
+ 21:41.920 --> 21:47.680
583
+ the best way to talk about AI safety, to think about it, to have discourse about it within AI
584
+
585
+ 21:47.680 --> 21:53.920
586
+ community and outside and grounded in the fact that Ex Machina is one of the main sources of
587
+
588
+ 21:53.920 --> 21:59.040
589
+ information for the general public about AI. So I think you're putting it right. There's a big
590
+
591
+ 21:59.040 --> 22:04.400
592
+ difference between the sort of discussion we ought to have within the AI community
593
+
594
+ 22:05.200 --> 22:11.600
595
+ and the sort of discussion that really matter in the general public. So I think the picture of
596
+
597
+ 22:11.600 --> 22:19.040
598
+ Terminator and, you know, AI loose and killing people and super intelligence that's going to
599
+
600
+ 22:19.040 --> 22:26.320
601
+ destroy us, whatever we try, isn't really so useful for the public discussion because
602
+
603
+ 22:26.320 --> 22:32.960
604
+ for the public discussion that things I believe really matter are the short term and
605
+
606
+ 22:32.960 --> 22:40.560
607
+ medium term, very likely negative impacts of AI on society, whether it's from security,
608
+
609
+ 22:40.560 --> 22:45.680
610
+ like, you know, big brother scenarios with face recognition or killer robots, or the impact on
611
+
612
+ 22:45.680 --> 22:52.400
613
+ the job market, or concentration of power and discrimination, all kinds of social issues,
614
+
615
+ 22:52.400 --> 22:58.240
616
+ which could actually, some of them could really threaten democracy, for example.
617
+
618
+ 22:58.800 --> 23:04.000
619
+ Just to clarify, when you said killer robots, you mean autonomous weapons as a weapon system?
620
+
621
+ 23:04.000 --> 23:10.400
622
+ Yes, I don't mean, no, that's right. So I think these short and medium term concerns
623
+
624
+ 23:11.280 --> 23:18.560
625
+ should be important parts of the public debate. Now, existential risk, for me, is a very unlikely
626
+
627
+ 23:18.560 --> 23:26.880
628
+ consideration, but still worth academic investigation. In the same way that you could say,
629
+
630
+ 23:26.880 --> 23:32.640
631
+ should we study what could happen if meteorite, you know, came to earth and destroyed it.
632
+
633
+ 23:32.640 --> 23:37.680
634
+ So I think it's very unlikely that this is going to happen in or happen in a reasonable future.
635
+
636
+ 23:37.680 --> 23:45.520
637
+ It's very, the sort of scenario of an AI getting loose goes against my understanding of at least
638
+
639
+ 23:45.520 --> 23:50.160
640
+ current machine learning and current neural nets and so on. It's not plausible to me.
641
+
642
+ 23:50.160 --> 23:54.320
643
+ But of course, I don't have a crystal ball and who knows what AI will be in 50 years from now.
644
+
645
+ 23:54.320 --> 23:59.280
646
+ So I think it is worth that scientists study those problems. It's just not a pressing question,
647
+
648
+ 23:59.280 --> 24:04.880
649
+ as far as I'm concerned. So before I continue down that line, I have a few questions there, but
650
+
651
+ 24:06.640 --> 24:11.440
652
+ what do you like and not like about Ex Machina as a movie? Because I actually watched it for the
653
+
654
+ 24:11.440 --> 24:17.840
655
+ second time and enjoyed it. I hated it the first time and I enjoyed it quite a bit more the second
656
+
657
+ 24:17.840 --> 24:26.080
658
+ time when I sort of learned to accept certain pieces of it. See it as a concept movie. What
659
+
660
+ 24:26.080 --> 24:36.160
661
+ was your experience? What were your thoughts? So the negative is the picture it paints of science
662
+
663
+ 24:36.160 --> 24:41.760
664
+ is totally wrong. Science in general and AI in particular. Science is not happening
665
+
666
+ 24:43.120 --> 24:51.840
667
+ in some hidden place by some really smart guy. One person. One person. This is totally unrealistic.
668
+
669
+ 24:51.840 --> 24:58.240
670
+ This is not how it happens. Even a team of people in some isolated place will not make it.
671
+
672
+ 24:58.240 --> 25:07.920
673
+ Science moves by small steps thanks to the collaboration and community of a large number
674
+
675
+ 25:07.920 --> 25:16.000
676
+ of people interacting and all the scientists who are expert in their field kind of know what is
677
+
678
+ 25:16.000 --> 25:24.000
679
+ going on even in the industrial labs. Information flows and leaks and so on. And the spirit of
680
+
681
+ 25:24.000 --> 25:30.320
682
+ it is very different from the way science is painted in this movie. Yeah, let me ask on that
683
+
684
+ 25:30.320 --> 25:36.400
685
+ point. It's been the case to this point that kind of even if the research happens inside
686
+
687
+ 25:36.400 --> 25:42.000
688
+ Google or Facebook, inside companies, it still kind of comes out. Do you think that will always be
689
+
690
+ 25:42.000 --> 25:48.960
691
+ the case with AI? Is it possible to bottle ideas to the point where there's a set of breakthroughs
692
+
693
+ 25:48.960 --> 25:53.120
694
+ that go completely undiscovered by the general research community? Do you think that's even
695
+
696
+ 25:53.120 --> 26:02.240
697
+ possible? It's possible, but it's unlikely. It's not how it is done now. It's not how I can force
698
+
699
+ 26:02.240 --> 26:13.120
700
+ it in in the foreseeable future. But of course, I don't have a crystal ball. And so who knows,
701
+
702
+ 26:13.120 --> 26:18.240
703
+ this is science fiction after all. But but usually ominous that the lights went off during
704
+
705
+ 26:18.240 --> 26:24.320
706
+ during that discussion. So the problem again, there's a you know, one thing is the movie and
707
+
708
+ 26:24.320 --> 26:28.720
709
+ you could imagine all kinds of science fiction. The problem with for me, maybe similar to the
710
+
711
+ 26:28.720 --> 26:37.120
712
+ question about existential risk is that this kind of movie paints such a wrong picture of what is
713
+
714
+ 26:37.120 --> 26:43.520
715
+ actual, you know, the actual science and how it's going on that that it can have unfortunate effects
716
+
717
+ 26:43.520 --> 26:49.040
718
+ on people's understanding of current science. And so that's kind of sad.
719
+
720
+ 26:50.560 --> 26:56.800
721
+ There's an important principle in research, which is diversity. So in other words,
722
+
723
+ 26:58.000 --> 27:02.720
724
+ research is exploration, research is exploration in the space of ideas. And different people
725
+
726
+ 27:03.440 --> 27:09.920
727
+ will focus on different directions. And this is not just good, it's essential. So I'm totally fine
728
+
729
+ 27:09.920 --> 27:16.640
730
+ with people exploring directions that are contrary to mine or look orthogonal to mine.
731
+
732
+ 27:18.560 --> 27:24.880
733
+ I am more than fine, I think it's important. I and my friends don't claim we have universal
734
+
735
+ 27:24.880 --> 27:29.680
736
+ truth about what will especially about what will happen in the future. Now that being said,
737
+
738
+ 27:30.320 --> 27:37.600
739
+ we have our intuitions and then we act accordingly, according to where we think we can be most useful
740
+
741
+ 27:37.600 --> 27:43.360
742
+ and where society has the most to gain or to lose. We should have those debates and
743
+
744
+ 27:45.920 --> 27:50.080
745
+ and not end up in a society where there's only one voice and one way of thinking and
746
+
747
+ 27:51.360 --> 27:59.120
748
+ research money is spread out. So disagreement is a sign of good research, good science. So
749
+
750
+ 27:59.120 --> 28:08.560
751
+ yes. The idea of bias in the human sense of bias. How do you think about instilling in machine
752
+
753
+ 28:08.560 --> 28:15.440
754
+ learning something that's aligned with human values in terms of bias? We intuitively assume
755
+
756
+ 28:15.440 --> 28:21.680
757
+ beings have a concept of what bias means, of what fundamental respect for other human beings means,
758
+
759
+ 28:21.680 --> 28:25.280
760
+ but how do we instill that into machine learning systems, do you think?
761
+
762
+ 28:25.280 --> 28:32.720
763
+ So I think there are short term things that are already happening and then there are long term
764
+
765
+ 28:32.720 --> 28:39.040
766
+ things that we need to do. In the short term, there are techniques that have been proposed and
767
+
768
+ 28:39.040 --> 28:44.800
769
+ I think will continue to be improved and maybe alternatives will come up to take data sets
770
+
771
+ 28:45.600 --> 28:51.200
772
+ in which we know there is bias, we can measure it. Pretty much any data set where humans are
773
+
774
+ 28:51.200 --> 28:56.080
775
+ being observed taking decisions will have some sort of bias discrimination against particular
776
+
777
+ 28:56.080 --> 29:04.000
778
+ groups and so on. And we can use machine learning techniques to try to build predictors, classifiers
779
+
780
+ 29:04.000 --> 29:11.920
781
+ that are going to be less biased. We can do it for example using adversarial methods to make our
782
+
783
+ 29:11.920 --> 29:19.520
784
+ systems less sensitive to these variables we should not be sensitive to. So these are clear,
785
+
786
+ 29:19.520 --> 29:24.240
787
+ well defined ways of trying to address the problem, maybe they have weaknesses and more
788
+
789
+ 29:24.240 --> 29:30.400
790
+ research is needed and so on, but I think in fact they're sufficiently mature that governments should
791
+
792
+ 29:30.400 --> 29:36.160
793
+ start regulating companies where it matters say like insurance companies so that they use those
794
+
795
+ 29:36.160 --> 29:43.840
796
+ techniques because those techniques will probably reduce the bias, but at a cost for example maybe
797
+
798
+ 29:43.840 --> 29:47.920
799
+ their predictions will be less accurate and so companies will not do it until you force them.
800
+
801
+ 29:47.920 --> 29:56.000
802
+ All right, so this is short term. Long term, I'm really interested in thinking how we can
803
+
804
+ 29:56.000 --> 30:02.160
805
+ instill moral values into computers. Obviously this is not something we'll achieve in the next five
806
+
807
+ 30:02.160 --> 30:11.680
808
+ or 10 years. There's already work in detecting emotions for example in images and sounds and
809
+
810
+ 30:11.680 --> 30:21.520
811
+ texts and also studying how different agents interacting in different ways may correspond to
812
+
813
+ 30:22.960 --> 30:30.000
814
+ patterns of say injustice which could trigger anger. So these are things we can do in the
815
+
816
+ 30:30.000 --> 30:42.160
817
+ medium term and eventually train computers to model for example how humans react emotionally. I would
818
+
819
+ 30:42.160 --> 30:49.920
820
+ say the simplest thing is unfair situations which trigger anger. This is one of the most basic
821
+
822
+ 30:49.920 --> 30:55.360
823
+ emotions that we share with other animals. I think it's quite feasible within the next few years so
824
+
825
+ 30:55.360 --> 31:00.800
826
+ we can build systems that can detect these kind of things to the extent unfortunately that they
827
+
828
+ 31:00.800 --> 31:07.840
829
+ understand enough about the world around us which is a long time away but maybe we can initially do
830
+
831
+ 31:07.840 --> 31:14.800
832
+ this in virtual environments so you can imagine like a video game where agents interact in some
833
+
834
+ 31:14.800 --> 31:21.760
835
+ ways and then some situations trigger an emotion. I think we could train machines to detect those
836
+
837
+ 31:21.760 --> 31:27.920
838
+ situations and predict that the particular emotion will likely be felt if a human was playing one
839
+
840
+ 31:27.920 --> 31:34.080
841
+ of the characters. You have shown excitement and done a lot of excellent work with unsupervised
842
+
843
+ 31:34.080 --> 31:42.800
844
+ learning but there's been a lot of success on the supervised learning. One of the things I'm
845
+
846
+ 31:42.800 --> 31:48.800
847
+ really passionate about is how humans and robots work together and in the context of supervised
848
+
849
+ 31:48.800 --> 31:54.800
850
+ learning that means the process of annotation. Do you think about the problem of annotation of
851
+
852
+ 31:55.520 --> 32:04.080
853
+ put in a more interesting way is humans teaching machines? Yes, I think it's an important subject.
854
+
855
+ 32:04.880 --> 32:11.280
856
+ Reducing it to annotation may be useful for somebody building a system tomorrow but
857
+
858
+ 32:12.560 --> 32:17.600
859
+ longer term the process of teaching I think is something that deserves a lot more attention
860
+
861
+ 32:17.600 --> 32:21.840
862
+ from the machine learning community so there are people of coin the term machine teaching.
863
+
864
+ 32:22.560 --> 32:30.480
865
+ So what are good strategies for teaching a learning agent and can we design, train a system
866
+
867
+ 32:30.480 --> 32:38.000
868
+ that is going to be a good teacher? So in my group we have a project called BabyAI, or BabyAI game
869
+
870
+ 32:38.640 --> 32:46.000
871
+ where there is a game or a scenario where there's a learning agent and a teaching agent
872
+
873
+ 32:46.000 --> 32:54.400
874
+ presumably the teaching agent would eventually be a human but we're not there yet and the
875
+
876
+ 32:56.000 --> 33:00.880
877
+ role of the teacher is to use its knowledge of the environment which it can acquire using
878
+
879
+ 33:00.880 --> 33:09.680
880
+ whatever way brute force to help the learner learn as quickly as possible. So the learner
881
+
882
+ 33:09.680 --> 33:13.920
883
+ is going to try to learn by itself maybe using some exploration and whatever
884
+
885
+ 33:13.920 --> 33:21.520
886
+ but the teacher can choose, can have an influence on the interaction with the learner
887
+
888
+ 33:21.520 --> 33:28.960
889
+ so as to guide the learner maybe teach it the things that the learner has most trouble with
890
+
891
+ 33:28.960 --> 33:34.320
892
+ or just add the boundary between what it knows and doesn't know and so on. So there's a tradition
893
+
894
+ 33:34.320 --> 33:41.280
895
+ of these kind of ideas from other fields and like tutorial systems for example and AI
896
+
897
+ 33:41.280 --> 33:46.880
898
+ and of course people in the humanities have been thinking about these questions but I think
899
+
900
+ 33:46.880 --> 33:52.560
901
+ it's time that machine learning people look at this because in the future we'll have more and more
902
+
903
+ 33:53.760 --> 33:59.680
904
+ human machine interaction with the human in the loop and I think understanding how to make this
905
+
906
+ 33:59.680 --> 34:04.080
907
+ work better. Oh the problems around that are very interesting and not sufficiently addressed.
908
+
909
+ 34:04.080 --> 34:11.440
910
+ You've done a lot of work with language too, what aspect of the traditionally formulated
911
+
912
+ 34:11.440 --> 34:17.040
913
+ Turing test, a test of natural language understanding and generation, in your eyes is the
914
+
915
+ 34:17.040 --> 34:22.960
916
+ most difficult of conversation, what in your eyes is the hardest part of conversation to solve for
917
+
918
+ 34:22.960 --> 34:30.640
919
+ machines. So I would say it's everything having to do with the non linguistic knowledge which
920
+
921
+ 34:30.640 --> 34:36.400
922
+ implicitly you need in order to make sense of sentences. Things like the Winograd schemas
923
+
924
+ 34:36.400 --> 34:42.400
925
+ so these sentences that are semantically ambiguous. In other words you need to understand enough about
926
+
927
+ 34:42.400 --> 34:48.720
928
+ the world in order to really interpret properly those sentences. I think these are interesting
929
+
930
+ 34:48.720 --> 34:55.840
931
+ challenges for machine learning because they point in the direction of building systems that
932
+
933
+ 34:55.840 --> 35:02.880
934
+ both understand how the world works and there's causal relationships in the world and associate
935
+
936
+ 35:03.520 --> 35:09.760
937
+ that knowledge with how to express it in language either for reading or writing.
938
+
939
+ 35:11.840 --> 35:17.600
940
+ You speak French? Yes, it's my mother tongue. It's one of the romance languages. Do you think
941
+
942
+ 35:17.600 --> 35:23.040
943
+ passing the Turing test and all the underlying challenges we just mentioned depend on language?
944
+
945
+ 35:23.040 --> 35:28.000
946
+ Do you think it might be easier in French than it is in English or is independent of language?
947
+
948
+ 35:28.800 --> 35:37.680
949
+ I think it's independent of language. I would like to build systems that can use the same
950
+
951
+ 35:37.680 --> 35:45.840
952
+ principles, the same learning mechanisms to learn from human agents, whatever their language.
953
+
954
+ 35:45.840 --> 35:53.600
955
+ Well, certainly us humans can talk more beautifully and smoothly in poetry. So I'm Russian originally.
956
+
957
+ 35:53.600 --> 36:01.360
958
+ I know poetry in Russian is maybe easier to convey complex ideas than it is in English
959
+
960
+ 36:02.320 --> 36:09.520
961
+ but maybe I'm showing my bias and some people could say that about French. But of course the
962
+
963
+ 36:09.520 --> 36:16.400
964
+ goal ultimately is our human brain is able to utilize any kind of those languages to use them
965
+
966
+ 36:16.400 --> 36:21.040
967
+ as tools to convey meaning. Yeah, of course there are differences between languages and maybe some
968
+
969
+ 36:21.040 --> 36:25.920
970
+ are slightly better at some things but in the grand scheme of things where we're trying to understand
971
+
972
+ 36:25.920 --> 36:31.040
973
+ how the brain works and language and so on, I think these differences are minute.
974
+
975
+ 36:31.040 --> 36:42.880
976
+ So you've lived perhaps through an AI winter of sorts. Yes. How did you stay warm and continue
977
+
978
+ 36:42.880 --> 36:48.480
979
+ with your research? Stay warm with friends. With friends. Okay, so it's important to have friends
980
+
981
+ 36:48.480 --> 36:57.200
982
+ and what have you learned from the experience? Listen to your inner voice. Don't, you know, be
983
+
984
+ 36:57.200 --> 37:07.680
985
+ trying to just please the crowds and the fashion and if you have a strong intuition about something
986
+
987
+ 37:08.480 --> 37:15.520
988
+ that is not contradicted by actual evidence, go for it. I mean, it could be contradicted by people.
989
+
990
+ 37:16.960 --> 37:21.920
991
+ Not your own instinct of based on everything you've learned. So of course you have to adapt
992
+
993
+ 37:21.920 --> 37:29.440
994
+ your beliefs when your experiments contradict those beliefs but you have to stick to your
995
+
996
+ 37:29.440 --> 37:36.160
997
+ beliefs otherwise. It's what allowed me to go through those years. It's what allowed me to
998
+
999
+ 37:37.120 --> 37:44.480
1000
+ persist in directions that, you know, took time, whatever other people think, took time to mature
1001
+
1002
+ 37:44.480 --> 37:53.680
1003
+ and bring fruits. So history of AI is marked with these, of course it's marked with technical
1004
+
1005
+ 37:53.680 --> 37:58.880
1006
+ breakthroughs but it's also marked with these seminal events that capture the imagination
1007
+
1008
+ 37:58.880 --> 38:06.000
1009
+ of the community. Most recent, I would say AlphaGo beating the world champion human go player
1010
+
1011
+ 38:06.000 --> 38:14.000
1012
+ was one of those moments. What do you think the next such moment might be? Okay, so first of all,
1013
+
1014
+ 38:14.000 --> 38:24.880
1015
+ I think that these so called seminal events are overrated. As I said, science really moves by
1016
+
1017
+ 38:24.880 --> 38:33.760
1018
+ small steps. Now what happens is you make one more small step and it's like the drop that,
1019
+
1020
+ 38:33.760 --> 38:40.560
1021
+ you know, allows to, that fills the bucket and then you have drastic consequences because now
1022
+
1023
+ 38:40.560 --> 38:46.240
1024
+ you're able to do something you were not able to do before or now say the cost of building some
1025
+
1026
+ 38:46.240 --> 38:51.920
1027
+ device or solving a problem becomes cheaper than what existed and you have a new market that opens
1028
+
1029
+ 38:51.920 --> 39:00.080
1030
+ up. So especially in the world of commerce and applications, the impact of a small scientific
1031
+
1032
+ 39:00.080 --> 39:07.520
1033
+ progress could be huge but in the science itself, I think it's very, very gradual and
1034
+
1035
+ 39:07.520 --> 39:15.280
1036
+ where are these steps being taken now? So there's unsupervised, right? So if I look at one trend
1037
+
1038
+ 39:15.280 --> 39:24.080
1039
+ that I like in my community, for example, and at Mila, my institute, what are the two hottest
1040
+
1041
+ 39:24.080 --> 39:32.800
1042
+ topics? GANs and reinforcement learning, even though in Montreal in particular, like reinforcement
1043
+
1044
+ 39:32.800 --> 39:39.600
1045
+ learning was something pretty much absent just two or three years ago. So it is really a big
1046
+
1047
+ 39:39.600 --> 39:48.400
1048
+ interest from students and there's a big interest from people like me. So I would say this is
1049
+
1050
+ 39:48.400 --> 39:54.960
1051
+ something where we're going to see more progress even though it hasn't yet provided much in terms of
1052
+
1053
+ 39:54.960 --> 40:01.280
1054
+ actual industrial fallout. Like even though there's AlphaGo, there's no, like Google is not making
1055
+
1056
+ 40:01.280 --> 40:06.320
1057
+ money on this right now. But I think over the long term, this is really, really important for many
1058
+
1059
+ 40:06.320 --> 40:13.760
1060
+ reasons. So in other words, I would say reinforcement learning maybe more generally agent learning
1061
+
1062
+ 40:13.760 --> 40:17.520
1063
+ because it doesn't have to be with rewards. It could be in all kinds of ways that an agent
1064
+
1065
+ 40:17.520 --> 40:23.040
1066
+ is learning about its environment. Now, reinforcement learning, you're excited about. Do you think
1067
+
1068
+ 40:23.040 --> 40:32.320
1069
+ GANs could provide something? Yes. Some moment in it. Well, GANs or other
1070
+
1071
+ 40:33.760 --> 40:41.360
1072
+ generative models, I believe, will be crucial ingredients in building agents that can understand
1073
+
1074
+ 40:41.360 --> 40:48.880
1075
+ the world. A lot of the successes in reinforcement learning in the past has been with policy
1076
+
1077
+ 40:48.880 --> 40:53.360
1078
+ gradient where you'll just learn a policy. You don't actually learn a model of the world. But
1079
+
1080
+ 40:53.360 --> 40:58.640
1081
+ there are lots of issues with that. And we don't know how to do model based RL right now. But I
1082
+
1083
+ 40:58.640 --> 41:06.080
1084
+ think this is where we have to go in order to build models that can generalize faster and better,
1085
+
1086
+ 41:06.080 --> 41:13.200
1087
+ like to new distributions that capture, to some extent, at least the underlying causal
1088
+
1089
+ 41:13.200 --> 41:20.320
1090
+ mechanisms in the world. Last question. What made you fall in love with artificial intelligence?
1091
+
1092
+ 41:20.960 --> 41:28.400
1093
+ If you look back, what was the first moment in your life when you were fascinated by either
1094
+
1095
+ 41:28.400 --> 41:33.600
1096
+ the human mind or the artificial mind? You know, when I was an adolescent, I was reading a lot.
1097
+
1098
+ 41:33.600 --> 41:41.920
1099
+ And then I started reading science fiction. There you go. That's it. That's where I got hooked.
1100
+
1101
+ 41:41.920 --> 41:50.160
1102
+ And then, you know, I had one of the first personal computers and I got hooked in programming.
1103
+
1104
+ 41:50.960 --> 41:55.040
1105
+ And so it just, you know, start with fiction and then make it a reality. That's right.
1106
+
1107
+ 41:55.040 --> 42:12.080
1108
+ Yoshua, thank you so much for talking to me. My pleasure.
1109
+
vtt/episode_005_small.vtt ADDED
@@ -0,0 +1,2654 @@
 
 
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.040
4
+ The following is a conversation with Vladimir Vapnik.
5
+
6
+ 00:03.040 --> 00:05.280
7
+ He's the coinventor of the Support Vector Machines,
8
+
9
+ 00:05.280 --> 00:07.920
10
+ Support Vector Clustering, VC Theory,
11
+
12
+ 00:07.920 --> 00:11.200
13
+ and many foundational ideas in statistical learning.
14
+
15
+ 00:11.200 --> 00:13.640
16
+ He was born in the Soviet Union and worked
17
+
18
+ 00:13.640 --> 00:16.320
19
+ at the Institute of Control Sciences in Moscow.
20
+
21
+ 00:16.320 --> 00:20.640
22
+ Then in the United States, he worked at AT&T, NEC Labs,
23
+
24
+ 00:20.640 --> 00:24.280
25
+ Facebook Research, and now as a professor at Columbia
26
+
27
+ 00:24.280 --> 00:25.960
28
+ University.
29
+
30
+ 00:25.960 --> 00:30.320
31
+ His work has been cited over 170,000 times.
32
+
33
+ 00:30.320 --> 00:31.840
34
+ He has some very interesting ideas
35
+
36
+ 00:31.840 --> 00:34.800
37
+ about artificial intelligence and the nature of learning,
38
+
39
+ 00:34.800 --> 00:37.600
40
+ especially on the limits of our current approaches
41
+
42
+ 00:37.600 --> 00:40.440
43
+ and the open problems in the field.
44
+
45
+ 00:40.440 --> 00:42.520
46
+ This conversation is part of MIT course
47
+
48
+ 00:42.520 --> 00:44.440
49
+ on artificial general intelligence
50
+
51
+ 00:44.440 --> 00:46.840
52
+ and the Artificial Intelligence Podcast.
53
+
54
+ 00:46.840 --> 00:49.600
55
+ If you enjoy it, please subscribe on YouTube
56
+
57
+ 00:49.600 --> 00:53.040
58
+ or rate it on iTunes or your podcast provider of choice
59
+
60
+ 00:53.040 --> 00:55.320
61
+ or simply connect with me on Twitter
62
+
63
+ 00:55.320 --> 01:00.200
64
+ or other social networks at Lex Fridman, spelled F R I D.
65
+
66
+ 01:00.200 --> 01:04.800
67
+ And now here's my conversation with Vladimir Vapnik.
68
+
69
+ 01:04.800 --> 01:08.840
70
+ Einstein famously said that God doesn't play dice.
71
+
72
+ 01:08.840 --> 01:10.000
73
+ Yeah.
74
+
75
+ 01:10.000 --> 01:12.880
76
+ You have studied the world through the eyes of statistics.
77
+
78
+ 01:12.880 --> 01:17.320
79
+ So let me ask you, in terms of the nature of reality,
80
+
81
+ 01:17.320 --> 01:21.360
82
+ fundamental nature of reality, does God play dice?
83
+
84
+ 01:21.360 --> 01:26.200
85
+ We don't know some factors, and because we
86
+
87
+ 01:26.200 --> 01:30.520
88
+ don't know some factors, which could be important,
89
+
90
+ 01:30.520 --> 01:38.000
91
+ it looks like God play dice, but we should describe it.
92
+
93
+ 01:38.000 --> 01:42.080
94
+ In philosophy, they distinguish between two positions,
95
+
96
+ 01:42.080 --> 01:45.480
97
+ positions of instrumentalism, where
98
+
99
+ 01:45.480 --> 01:48.720
100
+ you're creating theory for prediction
101
+
102
+ 01:48.720 --> 01:51.400
103
+ and position of realism, where you're
104
+
105
+ 01:51.400 --> 01:54.640
106
+ trying to understand what God's thought.
107
+
108
+ 01:54.640 --> 01:56.800
109
+ Can you describe instrumentalism and realism
110
+
111
+ 01:56.800 --> 01:58.400
112
+ a little bit?
113
+
114
+ 01:58.400 --> 02:06.320
115
+ For example, if you have some mechanical laws, what is that?
116
+
117
+ 02:06.320 --> 02:11.480
118
+ Is it law which is true always and everywhere?
119
+
120
+ 02:11.480 --> 02:14.880
121
+ Or it is law which allows you to predict
122
+
123
+ 02:14.880 --> 02:22.920
124
+ the position of moving element, what you believe.
125
+
126
+ 02:22.920 --> 02:28.480
127
+ You believe that it is God's law, that God created the world,
128
+
129
+ 02:28.480 --> 02:33.160
130
+ which obeyed to this physical law,
131
+
132
+ 02:33.160 --> 02:36.240
133
+ or it is just law for predictions?
134
+
135
+ 02:36.240 --> 02:38.400
136
+ And which one is instrumentalism?
137
+
138
+ 02:38.400 --> 02:39.880
139
+ For predictions.
140
+
141
+ 02:39.880 --> 02:45.400
142
+ If you believe that this is law of God, and it's always
143
+
144
+ 02:45.400 --> 02:50.040
145
+ true everywhere, that means that you're a realist.
146
+
147
+ 02:50.040 --> 02:55.480
148
+ So you're trying to really understand that God's thought.
149
+
150
+ 02:55.480 --> 03:00.040
151
+ So the way you see the world as an instrumentalist?
152
+
153
+ 03:00.040 --> 03:03.240
154
+ You know, I'm working for some models,
155
+
156
+ 03:03.240 --> 03:06.960
157
+ model of machine learning.
158
+
159
+ 03:06.960 --> 03:12.760
160
+ So in this model, we can see setting,
161
+
162
+ 03:12.760 --> 03:16.440
163
+ and we try to solve, resolve the setting,
164
+
165
+ 03:16.440 --> 03:18.240
166
+ to solve the problem.
167
+
168
+ 03:18.240 --> 03:20.760
169
+ And you can do it in two different ways,
170
+
171
+ 03:20.760 --> 03:23.840
172
+ from the point of view of instrumentalists.
173
+
174
+ 03:23.840 --> 03:27.120
175
+ And that's what everybody does now,
176
+
177
+ 03:27.120 --> 03:31.560
178
+ because they say that the goal of machine learning
179
+
180
+ 03:31.560 --> 03:36.800
181
+ is to find the rule for classification.
182
+
183
+ 03:36.800 --> 03:40.920
184
+ That is true, but it is an instrument for prediction.
185
+
186
+ 03:40.920 --> 03:46.160
187
+ But I can say the goal of machine learning
188
+
189
+ 03:46.160 --> 03:50.040
190
+ is to learn about conditional probability.
191
+
192
+ 03:50.040 --> 03:54.440
193
+ So how God plays dice, and how does He play?
194
+
195
+ 03:54.440 --> 03:55.960
196
+ What is probability for one?
197
+
198
+ 03:55.960 --> 03:59.960
199
+ What is probability for another given situation?
200
+
201
+ 03:59.960 --> 04:02.600
202
+ But for prediction, I don't need this.
203
+
204
+ 04:02.600 --> 04:04.240
205
+ I need the rule.
206
+
207
+ 04:04.240 --> 04:08.480
208
+ But for understanding, I need conditional probability.
209
+
210
+ 04:08.480 --> 04:11.800
211
+ So let me just step back a little bit first to talk about.
212
+
213
+ 04:11.800 --> 04:13.960
214
+ You mentioned, which I read last night,
215
+
216
+ 04:13.960 --> 04:21.280
217
+ the parts of the 1960 paper by Eugene Wigner,
218
+
219
+ 04:21.280 --> 04:23.520
220
+ unreasonable effectiveness of mathematics
221
+
222
+ 04:23.520 --> 04:24.880
223
+ and natural sciences.
224
+
225
+ 04:24.880 --> 04:29.400
226
+ Such a beautiful paper, by the way.
227
+
228
+ 04:29.400 --> 04:34.480
229
+ It made me feel, to be honest, to confess my own work
230
+
231
+ 04:34.480 --> 04:38.400
232
+ in the past few years on deep learning, heavily applied.
233
+
234
+ 04:38.400 --> 04:40.320
235
+ It made me feel that I was missing out
236
+
237
+ 04:40.320 --> 04:43.960
238
+ on some of the beauty of nature in the way
239
+
240
+ 04:43.960 --> 04:45.560
241
+ that math can uncover.
242
+
243
+ 04:45.560 --> 04:50.360
244
+ So let me just step away from the poetry of that for a second.
245
+
246
+ 04:50.360 --> 04:53.040
247
+ How do you see the role of math in your life?
248
+
249
+ 04:53.040 --> 04:54.080
250
+ Is it a tool?
251
+
252
+ 04:54.080 --> 04:55.840
253
+ Is it poetry?
254
+
255
+ 04:55.840 --> 04:56.960
256
+ Where does it sit?
257
+
258
+ 04:56.960 --> 05:01.400
259
+ And does math for you have limits of what it can describe?
260
+
261
+ 05:01.400 --> 05:08.280
262
+ Some people saying that math is language which use God.
263
+
264
+ 05:08.280 --> 05:10.280
265
+ So I believe in that.
266
+
267
+ 05:10.280 --> 05:12.000
268
+ Speak to God or use God.
269
+
270
+ 05:12.000 --> 05:12.760
271
+ Or use God.
272
+
273
+ 05:12.760 --> 05:14.080
274
+ Use God.
275
+
276
+ 05:14.080 --> 05:15.560
277
+ Yeah.
278
+
279
+ 05:15.560 --> 05:25.680
280
+ So I believe that this article about unreasonable
281
+
282
+ 05:25.680 --> 05:29.960
283
+ effectiveness of math is that if you're
284
+
285
+ 05:29.960 --> 05:33.960
286
+ looking in mathematical structures,
287
+
288
+ 05:33.960 --> 05:37.720
289
+ they know something about reality.
290
+
291
+ 05:37.720 --> 05:42.480
292
+ And the most scientists from natural science,
293
+
294
+ 05:42.480 --> 05:48.440
295
+ they're looking on equation and trying to understand reality.
296
+
297
+ 05:48.440 --> 05:51.280
298
+ So the same in machine learning.
299
+
300
+ 05:51.280 --> 05:57.560
301
+ If you're trying very carefully look on all equations
302
+
303
+ 05:57.560 --> 06:00.640
304
+ which define conditional probability,
305
+
306
+ 06:00.640 --> 06:05.680
307
+ you can understand something about reality more
308
+
309
+ 06:05.680 --> 06:08.160
310
+ than from your fantasy.
311
+
312
+ 06:08.160 --> 06:12.480
313
+ So math can reveal the simple underlying principles
314
+
315
+ 06:12.480 --> 06:13.880
316
+ of reality, perhaps.
317
+
318
+ 06:13.880 --> 06:16.880
319
+ You know, what means simple?
320
+
321
+ 06:16.880 --> 06:20.320
322
+ It is very hard to discover them.
323
+
324
+ 06:20.320 --> 06:23.800
325
+ But then when you discover them and look at them,
326
+
327
+ 06:23.800 --> 06:27.440
328
+ you see how beautiful they are.
329
+
330
+ 06:27.440 --> 06:33.560
331
+ And it is surprising why people did not see that before.
332
+
333
+ 06:33.560 --> 06:37.480
334
+ You're looking on equation and derive it from equations.
335
+
336
+ 06:37.480 --> 06:43.360
337
+ For example, I talked yesterday about least squares method.
338
+
339
+ 06:43.360 --> 06:48.120
340
+ And people had a lot of fantasy how to improve least squares method.
341
+
342
+ 06:48.120 --> 06:52.360
343
+ But if you're going step by step by solving some equations,
344
+
345
+ 06:52.360 --> 06:57.680
346
+ you suddenly will get some term which,
347
+
348
+ 06:57.680 --> 07:01.040
349
+ after thinking, you understand that it described
350
+
351
+ 07:01.040 --> 07:04.360
352
+ position of observation point.
353
+
354
+ 07:04.360 --> 07:08.240
355
+ In least squares method, we throw out a lot of information.
356
+
357
+ 07:08.240 --> 07:11.760
358
+ We don't look in composition of point of observations.
359
+
360
+ 07:11.760 --> 07:14.600
361
+ We're looking only on residuals.
362
+
363
+ 07:14.600 --> 07:19.400
364
+ But when you understood that, that's a very simple idea.
365
+
366
+ 07:19.400 --> 07:22.320
367
+ But it's not too simple to understand.
368
+
369
+ 07:22.320 --> 07:25.680
370
+ And you can derive this just from equations.
371
+
372
+ 07:25.680 --> 07:28.120
373
+ So some simple algebra, a few steps
374
+
375
+ 07:28.120 --> 07:31.040
376
+ will take you to something surprising
377
+
378
+ 07:31.040 --> 07:34.360
379
+ that when you think about, you understand.
380
+
381
+ 07:34.360 --> 07:41.120
382
+ And that is proof that human intuition not so rich
383
+
384
+ 07:41.120 --> 07:42.640
385
+ and very primitive.
386
+
387
+ 07:42.640 --> 07:48.520
388
+ And it does not see very simple situations.
389
+
390
+ 07:48.520 --> 07:51.760
391
+ So let me take a step back in general.
392
+
393
+ 07:51.760 --> 07:54.480
394
+ Yes, right?
395
+
396
+ 07:54.480 --> 07:58.840
397
+ But what about human as opposed to intuition and ingenuity?
398
+
399
+ 08:01.600 --> 08:02.960
400
+ Moments of brilliance.
401
+
402
+ 08:02.960 --> 08:09.480
403
+ So do you have to be so hard on human intuition?
404
+
405
+ 08:09.480 --> 08:11.840
406
+ Are there moments of brilliance in human intuition?
407
+
408
+ 08:11.840 --> 08:17.520
409
+ They can leap ahead of math, and then the math will catch up?
410
+
411
+ 08:17.520 --> 08:19.400
412
+ I don't think so.
413
+
414
+ 08:19.400 --> 08:23.560
415
+ I think that the best human intuition,
416
+
417
+ 08:23.560 --> 08:26.440
418
+ it is putting in axioms.
419
+
420
+ 08:26.440 --> 08:28.600
421
+ And then it is technical.
422
+
423
+ 08:28.600 --> 08:31.880
424
+ See where the axioms take you.
425
+
426
+ 08:31.880 --> 08:34.920
427
+ But if they correctly take axioms,
428
+
429
+ 08:34.920 --> 08:41.400
430
+ but it axiom polished during generations of scientists.
431
+
432
+ 08:41.400 --> 08:45.040
433
+ And this is integral wisdom.
434
+
435
+ 08:45.040 --> 08:47.480
436
+ So that's beautifully put.
437
+
438
+ 08:47.480 --> 08:54.040
439
+ But if you maybe look at when you think of Einstein
440
+
441
+ 08:54.040 --> 08:58.960
442
+ and special relativity, what is the role of imagination
443
+
444
+ 08:58.960 --> 09:04.480
445
+ coming first there in the moment of discovery of an idea?
446
+
447
+ 09:04.480 --> 09:06.440
448
+ So there is obviously a mix of math
449
+
450
+ 09:06.440 --> 09:10.800
451
+ and out of the box imagination there.
452
+
453
+ 09:10.800 --> 09:12.600
454
+ That I don't know.
455
+
456
+ 09:12.600 --> 09:18.080
457
+ Whatever I did, I exclude any imagination.
458
+
459
+ 09:18.080 --> 09:21.080
460
+ Because whatever I saw in machine learning that
461
+
462
+ 09:21.080 --> 09:26.440
463
+ come from imagination, like features, like deep learning,
464
+
465
+ 09:26.440 --> 09:29.320
466
+ they are not relevant to the problem.
467
+
468
+ 09:29.320 --> 09:31.960
469
+ When you're looking very carefully
470
+
471
+ 09:31.960 --> 09:34.280
472
+ for mathematical equations, you're
473
+
474
+ 09:34.280 --> 09:38.000
475
+ deriving very simple theory, which goes far by
476
+
477
+ 09:38.000 --> 09:42.040
478
+ no theory at school than whatever people can imagine.
479
+
480
+ 09:42.040 --> 09:44.760
481
+ Because it is not good fantasy.
482
+
483
+ 09:44.760 --> 09:46.720
484
+ It is just interpretation.
485
+
486
+ 09:46.720 --> 09:48.000
487
+ It is just fantasy.
488
+
489
+ 09:48.000 --> 09:51.320
490
+ But it is not what you need.
491
+
492
+ 09:51.320 --> 09:56.960
493
+ You don't need any imagination to derive, say,
494
+
495
+ 09:56.960 --> 10:00.040
496
+ main principle of machine learning.
497
+
498
+ 10:00.040 --> 10:02.760
499
+ When you think about learning and intelligence,
500
+
501
+ 10:02.760 --> 10:04.560
502
+ maybe thinking about the human brain
503
+
504
+ 10:04.560 --> 10:09.200
505
+ and trying to describe mathematically the process of learning
506
+
507
+ 10:09.200 --> 10:13.160
508
+ that is something like what happens in the human brain,
509
+
510
+ 10:13.160 --> 10:17.200
511
+ do you think we have the tools currently?
512
+
513
+ 10:17.200 --> 10:19.000
514
+ Do you think we will ever have the tools
515
+
516
+ 10:19.000 --> 10:22.680
517
+ to try to describe that process of learning?
518
+
519
+ 10:22.680 --> 10:25.800
520
+ It is not description of what's going on.
521
+
522
+ 10:25.800 --> 10:27.360
523
+ It is interpretation.
524
+
525
+ 10:27.360 --> 10:29.400
526
+ It is your interpretation.
527
+
528
+ 10:29.400 --> 10:32.080
529
+ Your vision can be wrong.
530
+
531
+ 10:32.080 --> 10:36.160
532
+ You know, when a guy invent microscope,
533
+
534
+ 10:36.160 --> 10:40.560
535
+ Leeuwenhoek for the first time, only he got this instrument
536
+
537
+ 10:40.560 --> 10:45.440
538
+ and nobody, he kept secrets about microscope.
539
+
540
+ 10:45.440 --> 10:49.080
541
+ But he wrote reports in London Academy of Science.
542
+
543
+ 10:49.080 --> 10:52.040
544
+ In his report, when he looked into the blood,
545
+
546
+ 10:52.040 --> 10:54.480
547
+ he looked everywhere, on the water, on the blood,
548
+
549
+ 10:54.480 --> 10:56.320
550
+ on the spin.
551
+
552
+ 10:56.320 --> 11:04.040
553
+ But he described blood like fight between queen and king.
554
+
555
+ 11:04.040 --> 11:08.120
556
+ So he saw blood cells, red cells,
557
+
558
+ 11:08.120 --> 11:12.400
559
+ and he imagines that it is army fighting each other.
560
+
561
+ 11:12.400 --> 11:16.960
562
+ And it was his interpretation of situation.
563
+
564
+ 11:16.960 --> 11:19.760
565
+ And he sent this report in Academy of Science.
566
+
567
+ 11:19.760 --> 11:22.640
568
+ They very carefully looked because they believed
569
+
570
+ 11:22.640 --> 11:25.160
571
+ that he is right, he saw something.
572
+
573
+ 11:25.160 --> 11:28.240
574
+ But he gave wrong interpretation.
575
+
576
+ 11:28.240 --> 11:32.280
577
+ And I believe the same can happen with brain.
578
+
579
+ 11:32.280 --> 11:35.280
580
+ Because the most important part, you know,
581
+
582
+ 11:35.280 --> 11:38.840
583
+ I believe in human language.
584
+
585
+ 11:38.840 --> 11:43.000
586
+ In some proverb, it's so much wisdom.
587
+
588
+ 11:43.000 --> 11:50.240
589
+ For example, people say that it is better than 1,000 days
590
+
591
+ 11:50.240 --> 11:53.960
592
+ of diligent studies one day with great teacher.
593
+
594
+ 11:53.960 --> 11:59.480
595
+ But if I will ask you what teacher does, nobody knows.
596
+
597
+ 11:59.480 --> 12:01.400
598
+ And that is intelligence.
599
+
600
+ 12:01.400 --> 12:07.320
601
+ And what we know from history, and now from math
602
+
603
+ 12:07.320 --> 12:12.080
604
+ and machine learning, that teacher can do a lot.
605
+
606
+ 12:12.080 --> 12:14.400
607
+ So what, from a mathematical point of view,
608
+
609
+ 12:14.400 --> 12:16.080
610
+ is the great teacher?
611
+
612
+ 12:16.080 --> 12:17.240
613
+ I don't know.
614
+
615
+ 12:17.240 --> 12:18.880
616
+ That's an awful question.
617
+
618
+ 12:18.880 --> 12:25.120
619
+ Now, what we can say what teacher can do,
620
+
621
+ 12:25.120 --> 12:29.440
622
+ he can introduce some invariance, some predicate
623
+
624
+ 12:29.440 --> 12:32.280
625
+ for creating invariance.
626
+
627
+ 12:32.280 --> 12:33.520
628
+ How he doing it?
629
+
630
+ 12:33.520 --> 12:34.080
631
+ I don't know.
632
+
633
+ 12:34.080 --> 12:37.560
634
+ Because teacher knows reality and can describe
635
+
636
+ 12:37.560 --> 12:41.200
637
+ from this reality a predicate invariance.
638
+
639
+ 12:41.200 --> 12:43.480
640
+ But he knows that when you're using invariant,
641
+
642
+ 12:43.480 --> 12:47.960
643
+ he can decrease number of observations 100 times.
644
+
645
+ 12:47.960 --> 12:52.960
646
+ But maybe try to pull that apart a little bit.
647
+
648
+ 12:52.960 --> 12:58.120
649
+ I think you mentioned a piano teacher saying to the student,
650
+
651
+ 12:58.120 --> 12:59.880
652
+ play like a butterfly.
653
+
654
+ 12:59.880 --> 13:03.720
655
+ I played piano, I played guitar for a long time.
656
+
657
+ 13:03.720 --> 13:09.800
658
+ Yeah, maybe it's romantic, poetic.
659
+
660
+ 13:09.800 --> 13:13.160
661
+ But it feels like there's a lot of truth in that statement.
662
+
663
+ 13:13.160 --> 13:15.440
664
+ There is a lot of instruction in that statement.
665
+
666
+ 13:15.440 --> 13:17.320
667
+ And so can you pull that apart?
668
+
669
+ 13:17.320 --> 13:19.760
670
+ What is that?
671
+
672
+ 13:19.760 --> 13:22.520
673
+ The language itself may not contain this information.
674
+
675
+ 13:22.520 --> 13:24.160
676
+ It's not blah, blah, blah.
677
+
678
+ 13:24.160 --> 13:25.640
679
+ It does not blah, blah, blah, yeah.
680
+
681
+ 13:25.640 --> 13:26.960
682
+ It affects you.
683
+
684
+ 13:26.960 --> 13:27.600
685
+ It's what?
686
+
687
+ 13:27.600 --> 13:28.600
688
+ It affects you.
689
+
690
+ 13:28.600 --> 13:29.800
691
+ It affects your playing.
692
+
693
+ 13:29.800 --> 13:30.640
694
+ Yes, it does.
695
+
696
+ 13:30.640 --> 13:33.640
697
+ But it's not the language.
698
+
699
+ 13:33.640 --> 13:38.000
700
+ It feels like what is the information being exchanged there?
701
+
702
+ 13:38.000 --> 13:39.760
703
+ What is the nature of information?
704
+
705
+ 13:39.760 --> 13:41.880
706
+ What is the representation of that information?
707
+
708
+ 13:41.880 --> 13:44.000
709
+ I believe that it is sort of predicate.
710
+
711
+ 13:44.000 --> 13:45.400
712
+ But I don't know.
713
+
714
+ 13:45.400 --> 13:48.880
715
+ That is exactly what intelligence in machine learning
716
+
717
+ 13:48.880 --> 13:50.080
718
+ should be.
719
+
720
+ 13:50.080 --> 13:53.200
721
+ Because the rest is just mathematical technique.
722
+
723
+ 13:53.200 --> 13:57.920
724
+ I think that what was discovered recently
725
+
726
+ 13:57.920 --> 14:03.280
727
+ is that there is two mechanisms of learning.
728
+
729
+ 14:03.280 --> 14:06.040
730
+ One called strong convergence mechanism
731
+
732
+ 14:06.040 --> 14:08.560
733
+ and weak convergence mechanism.
734
+
735
+ 14:08.560 --> 14:11.200
736
+ Before, people use only one convergence.
737
+
738
+ 14:11.200 --> 14:15.840
739
+ In weak convergence mechanism, you can use predicate.
740
+
741
+ 14:15.840 --> 14:19.360
742
+ That's what play like butterfly.
743
+
744
+ 14:19.360 --> 14:23.640
745
+ And it will immediately affect your playing.
746
+
747
+ 14:23.640 --> 14:26.360
748
+ You know, there is English proverb.
749
+
750
+ 14:26.360 --> 14:27.320
751
+ Great.
752
+
753
+ 14:27.320 --> 14:31.680
754
+ If it looks like a duck, swims like a duck,
755
+
756
+ 14:31.680 --> 14:35.200
757
+ and quack like a duck, then it is probably duck.
758
+
759
+ 14:35.200 --> 14:36.240
760
+ Yes.
761
+
762
+ 14:36.240 --> 14:40.400
763
+ But this is exact about predicate.
764
+
765
+ 14:40.400 --> 14:42.920
766
+ Looks like a duck, what it means.
767
+
768
+ 14:42.920 --> 14:46.720
769
+ So you saw many ducks that you're training data.
770
+
771
+ 14:46.720 --> 14:56.480
772
+ So you have description of how looks integral looks ducks.
773
+
774
+ 14:56.480 --> 14:59.360
775
+ Yeah, the visual characteristics of a duck.
776
+
777
+ 14:59.360 --> 15:00.840
778
+ Yeah, but you won't.
779
+
780
+ 15:00.840 --> 15:04.200
781
+ And you have model for the cognition ducks.
782
+
783
+ 15:04.200 --> 15:07.880
784
+ So you would like that theoretical description
785
+
786
+ 15:07.880 --> 15:12.720
787
+ from model coincide with empirical description, which
788
+
789
+ 15:12.720 --> 15:14.520
790
+ you saw on training data there.
791
+
792
+ 15:14.520 --> 15:18.440
793
+ So about looks like a duck, it is general.
794
+
795
+ 15:18.440 --> 15:21.480
796
+ But what about swims like a duck?
797
+
798
+ 15:21.480 --> 15:23.560
799
+ You should know that duck swims.
800
+
801
+ 15:23.560 --> 15:26.960
802
+ You can say it play chess like a duck, OK?
803
+
804
+ 15:26.960 --> 15:28.880
805
+ Duck doesn't play chess.
806
+
807
+ 15:28.880 --> 15:35.560
808
+ And it is completely legal predicate, but it is useless.
809
+
810
+ 15:35.560 --> 15:41.040
811
+ So how teacher can recognize not useless predicate.
812
+
813
+ 15:41.040 --> 15:44.640
814
+ So up to now, we don't use this predicate
815
+
816
+ 15:44.640 --> 15:46.680
817
+ in existing machine learning.
818
+
819
+ 15:46.680 --> 15:47.200
820
+ And you think that's not so useful?
821
+
822
+ 15:47.200 --> 15:50.600
823
+ So why we need billions of data?
824
+
825
+ 15:50.600 --> 15:55.560
826
+ But in this English proverb, they use only three predicate.
827
+
828
+ 15:55.560 --> 15:59.080
829
+ Looks like a duck, swims like a duck, and quack like a duck.
830
+
831
+ 15:59.080 --> 16:02.040
832
+ So you can't deny the fact that swims like a duck
833
+
834
+ 16:02.040 --> 16:08.520
835
+ and quacks like a duck has humor in it, has ambiguity.
836
+
837
+ 16:08.520 --> 16:12.600
838
+ Let's talk about swim like a duck.
839
+
840
+ 16:12.600 --> 16:16.520
841
+ It does not say jumps like a duck.
842
+
843
+ 16:16.520 --> 16:17.680
844
+ Why?
845
+
846
+ 16:17.680 --> 16:20.760
847
+ Because it's not relevant.
848
+
849
+ 16:20.760 --> 16:25.880
850
+ But that means that you know ducks, you know different birds,
851
+
852
+ 16:25.880 --> 16:27.600
853
+ you know animals.
854
+
855
+ 16:27.600 --> 16:32.440
856
+ And you derive from this that it is relevant to say swim like a duck.
857
+
858
+ 16:32.440 --> 16:36.680
859
+ So underneath, in order for us to understand swims like a duck,
860
+
861
+ 16:36.680 --> 16:41.200
862
+ it feels like we need to know millions of other little pieces
863
+
864
+ 16:41.200 --> 16:43.000
865
+ of information.
866
+
867
+ 16:43.000 --> 16:44.280
868
+ We pick up along the way.
869
+
870
+ 16:44.280 --> 16:45.120
871
+ You don't think so.
872
+
873
+ 16:45.120 --> 16:48.480
874
+ There doesn't need to be this knowledge base.
875
+
876
+ 16:48.480 --> 16:52.600
877
+ In those statements, carries some rich information
878
+
879
+ 16:52.600 --> 16:57.280
880
+ that helps us understand the essence of duck.
881
+
882
+ 16:57.280 --> 17:01.920
883
+ How far are we from integrating predicates?
884
+
885
+ 17:01.920 --> 17:06.000
886
+ You know that when you consider complete theory,
887
+
888
+ 17:06.000 --> 17:09.320
889
+ machine learning, so what it does,
890
+
891
+ 17:09.320 --> 17:12.400
892
+ you have a lot of functions.
893
+
894
+ 17:12.400 --> 17:17.480
895
+ And then you're talking, it looks like a duck.
896
+
897
+ 17:17.480 --> 17:20.720
898
+ You see your training data.
899
+
900
+ 17:20.720 --> 17:31.040
901
+ From training data, you recognize like expected duck should look.
902
+
903
+ 17:31.040 --> 17:37.640
904
+ Then you remove all functions, which does not look like you think
905
+
906
+ 17:37.640 --> 17:40.080
907
+ it should look from training data.
908
+
909
+ 17:40.080 --> 17:45.800
910
+ So you decrease amount of function from which you pick up one.
911
+
912
+ 17:45.800 --> 17:48.320
913
+ Then you give a second predicate.
914
+
915
+ 17:48.320 --> 17:51.840
916
+ And then, again, decrease the set of function.
917
+
918
+ 17:51.840 --> 17:55.800
919
+ And after that, you pick up the best function you can find.
920
+
921
+ 17:55.800 --> 17:58.120
922
+ It is standard machine learning.
923
+
924
+ 17:58.120 --> 18:03.280
925
+ So why you need not too many examples?
926
+
927
+ 18:03.280 --> 18:06.600
928
+ Because your predicates aren't very good, or you're not.
929
+
930
+ 18:06.600 --> 18:09.200
931
+ That means that predicate very good.
932
+
933
+ 18:09.200 --> 18:12.520
934
+ Because every predicate is invented
935
+
936
+ 18:12.520 --> 18:17.720
937
+ to decrease admissible set of functions.
938
+
939
+ 18:17.720 --> 18:20.320
940
+ So you talk about admissible set of functions,
941
+
942
+ 18:20.320 --> 18:22.440
943
+ and you talk about good functions.
944
+
945
+ 18:22.440 --> 18:24.280
946
+ So what makes a good function?
947
+
948
+ 18:24.280 --> 18:28.600
949
+ So admissible set of function is set of function
950
+
951
+ 18:28.600 --> 18:32.760
952
+ which has small capacity, or small diversity,
953
+
954
+ 18:32.760 --> 18:36.960
955
+ small VC dimension example, which contain good function.
956
+
957
+ 18:36.960 --> 18:38.760
958
+ So by the way, for people who don't know,
959
+
960
+ 18:38.760 --> 18:42.440
961
+ VC, you're the V in the VC.
962
+
963
+ 18:42.440 --> 18:50.440
964
+ So how would you describe to a lay person what VC theory is?
965
+
966
+ 18:50.440 --> 18:51.440
967
+ How would you describe VC?
968
+
969
+ 18:51.440 --> 18:56.480
970
+ So when you have a machine, so a machine
971
+
972
+ 18:56.480 --> 19:00.240
973
+ capable to pick up one function from the admissible set
974
+
975
+ 19:00.240 --> 19:02.520
976
+ of function.
977
+
978
+ 19:02.520 --> 19:07.640
979
+ But set of admissible functions can be big.
980
+
981
+ 19:07.640 --> 19:11.600
982
+ They contain all continuous functions and it's useless.
983
+
984
+ 19:11.600 --> 19:15.280
985
+ You don't have so many examples to pick up function.
986
+
987
+ 19:15.280 --> 19:17.280
988
+ But it can be small.
989
+
990
+ 19:17.280 --> 19:24.560
991
+ Small, we call it capacity, but maybe better called diversity.
992
+
993
+ 19:24.560 --> 19:27.160
994
+ So not very different function in the set
995
+
996
+ 19:27.160 --> 19:31.280
997
+ is infinite set of function, but not very diverse.
998
+
999
+ 19:31.280 --> 19:34.280
1000
+ So it is small VC dimension.
1001
+
1002
+ 19:34.280 --> 19:39.360
1003
+ When VC dimension is small, you need small amount
1004
+
1005
+ 19:39.360 --> 19:41.760
1006
+ of training data.
1007
+
1008
+ 19:41.760 --> 19:47.360
1009
+ So the goal is to create admissible set of functions
1010
+
1011
+ 19:47.360 --> 19:53.200
1012
+ which have small VC dimension and contain good function.
1013
+
1014
+ 19:53.200 --> 19:58.160
1015
+ Then you will be able to pick up the function
1016
+
1017
+ 19:58.160 --> 20:02.400
1018
+ using small amount of observations.
1019
+
1020
+ 20:02.400 --> 20:06.760
1021
+ So that is the task of learning.
1022
+
1023
+ 20:06.760 --> 20:11.360
1024
+ It is creating a set of admissible functions
1025
+
1026
+ 20:11.360 --> 20:13.120
1027
+ that has a small VC dimension.
1028
+
1029
+ 20:13.120 --> 20:17.320
1030
+ And then you've figured out a clever way of picking up.
1031
+
1032
+ 20:17.320 --> 20:22.440
1033
+ No, that is goal of learning, which I formulated yesterday.
1034
+
1035
+ 20:22.440 --> 20:25.760
1036
+ Statistical learning theory does not
1037
+
1038
+ 20:25.760 --> 20:30.360
1039
+ involve in creating admissible set of function.
1040
+
1041
+ 20:30.360 --> 20:35.520
1042
+ In classical learning theory, everywhere, 100% in textbook,
1043
+
1044
+ 20:35.520 --> 20:39.200
1045
+ the set of function admissible set of function is given.
1046
+
1047
+ 20:39.200 --> 20:41.760
1048
+ But this is science about nothing,
1049
+
1050
+ 20:41.760 --> 20:44.040
1051
+ because the most difficult problem
1052
+
1053
+ 20:44.040 --> 20:50.120
1054
+ to create admissible set of functions, given, say,
1055
+
1056
+ 20:50.120 --> 20:53.080
1057
+ a lot of functions, continuum set of functions,
1058
+
1059
+ 20:53.080 --> 20:54.960
1060
+ create admissible set of functions,
1061
+
1062
+ 20:54.960 --> 20:58.760
1063
+ that means that it has finite VC dimension,
1064
+
1065
+ 20:58.760 --> 21:02.280
1066
+ small VC dimension, and contain good function.
1067
+
1068
+ 21:02.280 --> 21:05.280
1069
+ So this was out of consideration.
1070
+
1071
+ 21:05.280 --> 21:07.240
1072
+ So what's the process of doing that?
1073
+
1074
+ 21:07.240 --> 21:08.240
1075
+ I mean, it's fascinating.
1076
+
1077
+ 21:08.240 --> 21:13.200
1078
+ What is the process of creating this admissible set of functions?
1079
+
1080
+ 21:13.200 --> 21:14.920
1081
+ That is invariant.
1082
+
1083
+ 21:14.920 --> 21:15.760
1084
+ That's invariance.
1085
+
1086
+ 21:15.760 --> 21:17.280
1087
+ Can you describe invariance?
1088
+
1089
+ 21:17.280 --> 21:22.440
1090
+ Yeah, you're looking of properties of training data.
1091
+
1092
+ 21:22.440 --> 21:30.120
1093
+ And properties means that you have some function,
1094
+
1095
+ 21:30.120 --> 21:36.520
1096
+ and you just count what is the average value of function
1097
+
1098
+ 21:36.520 --> 21:38.960
1099
+ on training data.
1100
+
1101
+ 21:38.960 --> 21:43.040
1102
+ You have a model, and what is the expectation
1103
+
1104
+ 21:43.040 --> 21:44.960
1105
+ of this function on the model.
1106
+
1107
+ 21:44.960 --> 21:46.720
1108
+ And they should coincide.
1109
+
1110
+ 21:46.720 --> 21:51.800
1111
+ So the problem is about how to pick up functions.
1112
+
1113
+ 21:51.800 --> 21:53.200
1114
+ It can be any function.
1115
+
1116
+ 21:53.200 --> 21:59.280
1117
+ In fact, it is true for all functions.
1118
+
1119
+ 21:59.280 --> 22:05.000
1120
+ But because when I talking set, say,
1121
+
1122
+ 22:05.000 --> 22:09.920
1123
+ duck does not jumping, so you don't ask question, jump like a duck.
1124
+
1125
+ 22:09.920 --> 22:13.360
1126
+ Because it is trivial, it does not jumping,
1127
+
1128
+ 22:13.360 --> 22:15.560
1129
+ it doesn't help you to recognize jump.
1130
+
1131
+ 22:15.560 --> 22:19.000
1132
+ But you know something, which question to ask,
1133
+
1134
+ 22:19.000 --> 22:23.840
1135
+ when you're asking, it swims like a jump, like a duck.
1136
+
1137
+ 22:23.840 --> 22:26.840
1138
+ But looks like a duck, it is general situation.
1139
+
1140
+ 22:26.840 --> 22:34.440
1141
+ Looks like, say, guy who have this illness, this disease,
1142
+
1143
+ 22:34.440 --> 22:42.280
1144
+ it is legal, so there is a general type of predicate
1145
+
1146
+ 22:42.280 --> 22:46.440
1147
+ looks like, and special type of predicate,
1148
+
1149
+ 22:46.440 --> 22:50.040
1150
+ which related to this specific problem.
1151
+
1152
+ 22:50.040 --> 22:53.440
1153
+ And that is intelligence part of all this business.
1154
+
1155
+ 22:53.440 --> 22:55.440
1156
+ And that we are teachers in world.
1157
+
1158
+ 22:55.440 --> 22:58.440
1159
+ Incorporating those specialized predicates.
1160
+
1161
+ 22:58.440 --> 23:04.840
1162
+ What do you think about deep learning as neural networks,
1163
+
1164
+ 23:04.840 --> 23:11.440
1165
+ these arbitrary architectures as helping accomplish some of the tasks
1166
+
1167
+ 23:11.440 --> 23:14.440
1168
+ you're thinking about, their effectiveness or lack thereof,
1169
+
1170
+ 23:14.440 --> 23:19.440
1171
+ what are the weaknesses and what are the possible strengths?
1172
+
1173
+ 23:19.440 --> 23:22.440
1174
+ You know, I think that this is fantasy.
1175
+
1176
+ 23:22.440 --> 23:28.440
1177
+ Everything which like deep learning, like features.
1178
+
1179
+ 23:28.440 --> 23:32.440
1180
+ Let me give you this example.
1181
+
1182
+ 23:32.440 --> 23:38.440
1183
+ One of the greatest book, this Churchill book about history of Second World War.
1184
+
1185
+ 23:38.440 --> 23:47.440
1186
+ And he's starting this book describing that in all time, when war is over,
1187
+
1188
+ 23:47.440 --> 23:54.440
1189
+ so the great kings, they gathered together,
1190
+
1191
+ 23:54.440 --> 23:57.440
1192
+ almost all of them were relatives,
1193
+
1194
+ 23:57.440 --> 24:02.440
1195
+ and they discussed what should be done, how to create peace.
1196
+
1197
+ 24:02.440 --> 24:04.440
1198
+ And they came to agreement.
1199
+
1200
+ 24:04.440 --> 24:13.440
1201
+ And when happens First World War, the general public came in power.
1202
+
1203
+ 24:13.440 --> 24:17.440
1204
+ And they were so greedy that robbed Germany.
1205
+
1206
+ 24:17.440 --> 24:21.440
1207
+ And it was clear for everybody that it is not peace.
1208
+
1209
+ 24:21.440 --> 24:28.440
1210
+ That peace will last only 20 years, because they were not professionals.
1211
+
1212
+ 24:28.440 --> 24:31.440
1213
+ It's the same I see in machine learning.
1214
+
1215
+ 24:31.440 --> 24:37.440
1216
+ There are mathematicians who are looking for the problem from a very deep point of view,
1217
+
1218
+ 24:37.440 --> 24:39.440
1219
+ a mathematical point of view.
1220
+
1221
+ 24:39.440 --> 24:45.440
1222
+ And there are computer scientists who mostly does not know mathematics.
1223
+
1224
+ 24:45.440 --> 24:48.440
1225
+ They just have interpretation of that.
1226
+
1227
+ 24:48.440 --> 24:53.440
1228
+ And they invented a lot of blah, blah, blah interpretations like deep learning.
1229
+
1230
+ 24:53.440 --> 24:55.440
1231
+ Why you need deep learning?
1232
+
1233
+ 24:55.440 --> 24:57.440
1234
+ Mathematics does not know deep learning.
1235
+
1236
+ 24:57.440 --> 25:00.440
1237
+ Mathematics does not know neurons.
1238
+
1239
+ 25:00.440 --> 25:02.440
1240
+ It is just function.
1241
+
1242
+ 25:02.440 --> 25:06.440
1243
+ If you like to say piecewise linear function, say that,
1244
+
1245
+ 25:06.440 --> 25:10.440
1246
+ and do it in class of piecewise linear function.
1247
+
1248
+ 25:10.440 --> 25:12.440
1249
+ But they invent something.
1250
+
1251
+ 25:12.440 --> 25:20.440
1252
+ And then they try to prove the advantage of that through interpretations,
1253
+
1254
+ 25:20.440 --> 25:22.440
1255
+ which mostly wrong.
1256
+
1257
+ 25:22.440 --> 25:25.440
1258
+ And when not enough they appeal to brain,
1259
+
1260
+ 25:25.440 --> 25:27.440
1261
+ which they know nothing about that.
1262
+
1263
+ 25:27.440 --> 25:29.440
1264
+ Nobody knows what's going on in the brain.
1265
+
1266
+ 25:29.440 --> 25:34.440
1267
+ So I think that more reliable look on maths.
1268
+
1269
+ 25:34.440 --> 25:36.440
1270
+ This is a mathematical problem.
1271
+
1272
+ 25:36.440 --> 25:38.440
1273
+ Do your best to solve this problem.
1274
+
1275
+ 25:38.440 --> 25:43.440
1276
+ Try to understand that there is not only one way of convergence,
1277
+
1278
+ 25:43.440 --> 25:45.440
1279
+ which is strong way of convergence.
1280
+
1281
+ 25:45.440 --> 25:49.440
1282
+ There is a weak way of convergence, which requires predicate.
1283
+
1284
+ 25:49.440 --> 25:52.440
1285
+ And if you will go through all this stuff,
1286
+
1287
+ 25:52.440 --> 25:55.440
1288
+ you will see that you don't need deep learning.
1289
+
1290
+ 25:55.440 --> 26:00.440
1291
+ Even more, I would say one of the theorem,
1292
+
1293
+ 26:00.440 --> 26:02.440
1294
+ which is called representer theorem.
1295
+
1296
+ 26:02.440 --> 26:10.440
1297
+ It says that optimal solution of mathematical problem,
1298
+
1299
+ 26:10.440 --> 26:20.440
1300
+ which described learning, is on shallow network, not on deep learning.
1301
+
1302
+ 26:20.440 --> 26:22.440
1303
+ And a shallow network, yeah.
1304
+
1305
+ 26:22.440 --> 26:24.440
1306
+ The ultimate problem is there.
1307
+
1308
+ 26:24.440 --> 26:25.440
1309
+ Absolutely.
1310
+
1311
+ 26:25.440 --> 26:29.440
1312
+ So in the end, what you're saying is exactly right.
1313
+
1314
+ 26:29.440 --> 26:35.440
1315
+ The question is, you have no value for throwing something on the table,
1316
+
1317
+ 26:35.440 --> 26:38.440
1318
+ playing with it, not math.
1319
+
1320
+ 26:38.440 --> 26:41.440
1321
+ It's like in your old network where you said throwing something in the bucket
1322
+
1323
+ 26:41.440 --> 26:45.440
1324
+ or the biological example and looking at kings and queens
1325
+
1326
+ 26:45.440 --> 26:47.440
1327
+ or the cells or the microscope.
1328
+
1329
+ 26:47.440 --> 26:52.440
1330
+ You don't see value in imagining the cells or kings and queens
1331
+
1332
+ 26:52.440 --> 26:56.440
1333
+ and using that as inspiration and imagination
1334
+
1335
+ 26:56.440 --> 26:59.440
1336
+ for where the math will eventually lead you.
1337
+
1338
+ 26:59.440 --> 27:06.440
1339
+ You think that interpretation basically deceives you in a way that's not productive.
1340
+
1341
+ 27:06.440 --> 27:14.440
1342
+ I think that if you're trying to analyze this business of learning
1343
+
1344
+ 27:14.440 --> 27:18.440
1345
+ and especially discussion about deep learning,
1346
+
1347
+ 27:18.440 --> 27:21.440
1348
+ it is discussion about interpretation.
1349
+
1350
+ 27:21.440 --> 27:26.440
1351
+ It's discussion about things, about what you can say about things.
1352
+
1353
+ 27:26.440 --> 27:29.440
1354
+ That's right, but aren't you surprised by the beauty of it?
1355
+
1356
+ 27:29.440 --> 27:36.440
1357
+ Not mathematical beauty, but the fact that it works at all.
1358
+
1359
+ 27:36.440 --> 27:39.440
1360
+ Or are you criticizing that very beauty,
1361
+
1362
+ 27:39.440 --> 27:45.440
1363
+ our human desire to interpret,
1364
+
1365
+ 27:45.440 --> 27:49.440
1366
+ to find our silly interpretations in these constructs?
1367
+
1368
+ 27:49.440 --> 27:51.440
1369
+ Let me ask you this.
1370
+
1371
+ 27:51.440 --> 27:55.440
1372
+ Are you surprised?
1373
+
1374
+ 27:55.440 --> 27:57.440
1375
+ Does it inspire you?
1376
+
1377
+ 27:57.440 --> 28:00.440
1378
+ How do you feel about the success of a system like AlphaGo
1379
+
1380
+ 28:00.440 --> 28:03.440
1381
+ at beating the game of Go?
1382
+
1383
+ 28:03.440 --> 28:09.440
1384
+ Using neural networks to estimate the quality of a board
1385
+
1386
+ 28:09.440 --> 28:11.440
1387
+ and the quality of the board?
1388
+
1389
+ 28:11.440 --> 28:14.440
1390
+ That is your interpretation quality of the board.
1391
+
1392
+ 28:14.440 --> 28:17.440
1393
+ Yes.
1394
+
1395
+ 28:17.440 --> 28:20.440
1396
+ It's not our interpretation.
1397
+
1398
+ 28:20.440 --> 28:23.440
1399
+ The fact is, a neural network system doesn't matter.
1400
+
1401
+ 28:23.440 --> 28:27.440
1402
+ A learning system that we don't mathematically understand
1403
+
1404
+ 28:27.440 --> 28:29.440
1405
+ that beats the best human player.
1406
+
1407
+ 28:29.440 --> 28:31.440
1408
+ It does something that was thought impossible.
1409
+
1410
+ 28:31.440 --> 28:35.440
1411
+ That means that it's not very difficult problem.
1412
+
1413
+ 28:35.440 --> 28:41.440
1414
+ We've empirically discovered that this is not a very difficult problem.
1415
+
1416
+ 28:41.440 --> 28:43.440
1417
+ That's true.
1418
+
1419
+ 28:43.440 --> 28:49.440
1420
+ Maybe I can't argue.
1421
+
1422
+ 28:49.440 --> 28:52.440
1423
+ Even more, I would say,
1424
+
1425
+ 28:52.440 --> 28:54.440
1426
+ that if they use deep learning,
1427
+
1428
+ 28:54.440 --> 28:59.440
1429
+ it is not the most effective way of learning theory.
1430
+
1431
+ 28:59.440 --> 29:03.440
1432
+ Usually, when people use deep learning,
1433
+
1434
+ 29:03.440 --> 29:09.440
1435
+ they're using zillions of training data.
1436
+
1437
+ 29:09.440 --> 29:13.440
1438
+ But you don't need this.
1439
+
1440
+ 29:13.440 --> 29:15.440
1441
+ I describe the challenge.
1442
+
1443
+ 29:15.440 --> 29:22.440
1444
+ Can we do some problems with deep learning method
1445
+
1446
+ 29:22.440 --> 29:27.440
1447
+ with deep net using 100 times less training data?
1448
+
1449
+ 29:27.440 --> 29:33.440
1450
+ Even more, some problems deep learning cannot solve
1451
+
1452
+ 29:33.440 --> 29:37.440
1453
+ because it's not necessary.
1454
+
1455
+ 29:37.440 --> 29:40.440
1456
+ They create admissible set of functions.
1457
+
1458
+ 29:40.440 --> 29:45.440
1459
+ Deep architecture means to create admissible set of functions.
1460
+
1461
+ 29:45.440 --> 29:49.440
1462
+ You cannot say that you're creating good admissible set of functions.
1463
+
1464
+ 29:49.440 --> 29:52.440
1465
+ It's your fantasy.
1466
+
1467
+ 29:52.440 --> 29:54.440
1468
+ It does not come from math.
1469
+
1470
+ 29:54.440 --> 29:58.440
1471
+ But it is possible to create admissible set of functions
1472
+
1473
+ 29:58.440 --> 30:01.440
1474
+ because you have your training data.
1475
+
1476
+ 30:01.440 --> 30:08.440
1477
+ Actually, for mathematicians, when you consider invariant,
1478
+
1479
+ 30:08.440 --> 30:11.440
1480
+ you need to use law of large numbers.
1481
+
1482
+ 30:11.440 --> 30:17.440
1483
+ When you're making training in existing algorithm,
1484
+
1485
+ 30:17.440 --> 30:20.440
1486
+ you need uniform law of large numbers,
1487
+
1488
+ 30:20.440 --> 30:22.440
1489
+ which is much more difficult.
1490
+
1491
+ 30:22.440 --> 30:24.440
1492
+ You see dimension and all this stuff.
1493
+
1494
+ 30:24.440 --> 30:32.440
1495
+ Nevertheless, if you use both weak and strong way of convergence,
1496
+
1497
+ 30:32.440 --> 30:34.440
1498
+ you can decrease a lot of training data.
1499
+
1500
+ 30:34.440 --> 30:39.440
1501
+ You could do the three, the Swims like a duck and Quacks like a duck.
1502
+
1503
+ 30:39.440 --> 30:47.440
1504
+ Let's step back and think about human intelligence in general.
1505
+
1506
+ 30:47.440 --> 30:52.440
1507
+ Clearly, that has evolved in a nonmathematical way.
1508
+
1509
+ 30:52.440 --> 31:00.440
1510
+ As far as we know, God, or whoever,
1511
+
1512
+ 31:00.440 --> 31:05.440
1513
+ didn't come up with a model in place in our brain of admissible functions.
1514
+
1515
+ 31:05.440 --> 31:06.440
1516
+ It kind of evolved.
1517
+
1518
+ 31:06.440 --> 31:07.440
1519
+ I don't know.
1520
+
1521
+ 31:07.440 --> 31:08.440
1522
+ Maybe you have a view on this.
1523
+
1524
+ 31:08.440 --> 31:15.440
1525
+ Alan Turing in the 50s in his paper asked and rejected the question,
1526
+
1527
+ 31:15.440 --> 31:16.440
1528
+ can machines think?
1529
+
1530
+ 31:16.440 --> 31:18.440
1531
+ It's not a very useful question.
1532
+
1533
+ 31:18.440 --> 31:23.440
1534
+ But can you briefly entertain this useless question?
1535
+
1536
+ 31:23.440 --> 31:25.440
1537
+ Can machines think?
1538
+
1539
+ 31:25.440 --> 31:28.440
1540
+ So talk about intelligence and your view of it.
1541
+
1542
+ 31:28.440 --> 31:29.440
1543
+ I don't know that.
1544
+
1545
+ 31:29.440 --> 31:34.440
1546
+ I know that Turing described imitation.
1547
+
1548
+ 31:34.440 --> 31:41.440
1549
+ If computer can imitate human being, let's call it intelligent.
1550
+
1551
+ 31:41.440 --> 31:45.440
1552
+ And he understands that it is not thinking computer.
1553
+
1554
+ 31:45.440 --> 31:46.440
1555
+ Yes.
1556
+
1557
+ 31:46.440 --> 31:49.440
1558
+ He completely understands what he's doing.
1559
+
1560
+ 31:49.440 --> 31:53.440
1561
+ But he's set up a problem of imitation.
1562
+
1563
+ 31:53.440 --> 31:57.440
1564
+ So now we understand that the problem is not in imitation.
1565
+
1566
+ 31:57.440 --> 32:04.440
1567
+ I'm not sure that intelligence is just inside of us.
1568
+
1569
+ 32:04.440 --> 32:06.440
1570
+ It may be also outside of us.
1571
+
1572
+ 32:06.440 --> 32:09.440
1573
+ I have several observations.
1574
+
1575
+ 32:09.440 --> 32:15.440
1576
+ So when I prove some theorem, it's a very difficult theorem.
1577
+
1578
+ 32:15.440 --> 32:22.440
1579
+ But in a couple of years, in several places, people proved the same theorem.
1580
+
1581
+ 32:22.440 --> 32:26.440
1582
+ Say, Sauer lemma after us was done.
1583
+
1584
+ 32:26.440 --> 32:29.440
1585
+ Then another guy proved the same theorem.
1586
+
1587
+ 32:29.440 --> 32:32.440
1588
+ In the history of science, it's happened all the time.
1589
+
1590
+ 32:32.440 --> 32:35.440
1591
+ For example, geometry.
1592
+
1593
+ 32:35.440 --> 32:37.440
1594
+ It's happened simultaneously.
1595
+
1596
+ 32:37.440 --> 32:43.440
1597
+ First it did Lobachevsky and then Gauss and Bolyai and other guys.
1598
+
1599
+ 32:43.440 --> 32:48.440
1600
+ It happened simultaneously in 10 years period of time.
1601
+
1602
+ 32:48.440 --> 32:51.440
1603
+ And I saw a lot of examples like that.
1604
+
1605
+ 32:51.440 --> 32:56.440
1606
+ And many mathematicians think that when they develop something,
1607
+
1608
+ 32:56.440 --> 33:01.440
1609
+ they develop something in general which affects everybody.
1610
+
1611
+ 33:01.440 --> 33:07.440
1612
+ So maybe our model that intelligence is only inside of us is incorrect.
1613
+
1614
+ 33:07.440 --> 33:09.440
1615
+ It's our interpretation.
1616
+
1617
+ 33:09.440 --> 33:15.440
1618
+ Maybe there exists some connection with world intelligence.
1619
+
1620
+ 33:15.440 --> 33:16.440
1621
+ I don't know.
1622
+
1623
+ 33:16.440 --> 33:19.440
1624
+ You're almost like plugging in into...
1625
+
1626
+ 33:19.440 --> 33:20.440
1627
+ Yeah, exactly.
1628
+
1629
+ 33:20.440 --> 33:22.440
1630
+ ...and contributing to this...
1631
+
1632
+ 33:22.440 --> 33:23.440
1633
+ Into a big network.
1634
+
1635
+ 33:23.440 --> 33:26.440
1636
+ ...into a big, maybe in your own network.
1637
+
1638
+ 33:26.440 --> 33:27.440
1639
+ No, no, no.
1640
+
1641
+ 33:27.440 --> 33:34.440
1642
+ On the flip side of that, maybe you can comment on big O complexity
1643
+
1644
+ 33:34.440 --> 33:40.440
1645
+ and how you see classifying algorithms by worst case running time
1646
+
1647
+ 33:40.440 --> 33:42.440
1648
+ in relation to their input.
1649
+
1650
+ 33:42.440 --> 33:45.440
1651
+ So that way of thinking about functions.
1652
+
1653
+ 33:45.440 --> 33:47.440
1654
+ Do you think P equals NP?
1655
+
1656
+ 33:47.440 --> 33:49.440
1657
+ Do you think that's an interesting question?
1658
+
1659
+ 33:49.440 --> 33:51.440
1660
+ Yeah, it is an interesting question.
1661
+
1662
+ 33:51.440 --> 34:01.440
1663
+ But let me talk about complexity and about worst case scenario.
1664
+
1665
+ 34:01.440 --> 34:03.440
1666
+ There is a mathematical setting.
1667
+
1668
+ 34:03.440 --> 34:07.440
1669
+ When I came to the United States in 1990,
1670
+
1671
+ 34:07.440 --> 34:09.440
1672
+ people did not know this theory.
1673
+
1674
+ 34:09.440 --> 34:12.440
1675
+ They did not know statistical learning theory.
1676
+
1677
+ 34:12.440 --> 34:17.440
1678
+ So in Russia it was published in two monographs,
1679
+
1680
+ 34:17.440 --> 34:19.440
1681
+ but in America they didn't know.
1682
+
1683
+ 34:19.440 --> 34:22.440
1684
+ Then they learned.
1685
+
1686
+ 34:22.440 --> 34:25.440
1687
+ And somebody told me that if it's worst case theory,
1688
+
1689
+ 34:25.440 --> 34:27.440
1690
+ and they will create real case theory,
1691
+
1692
+ 34:27.440 --> 34:30.440
1693
+ but till now it did not.
1694
+
1695
+ 34:30.440 --> 34:33.440
1696
+ Because it is a mathematical tool.
1697
+
1698
+ 34:33.440 --> 34:38.440
1699
+ You can do only what you can do using mathematics,
1700
+
1701
+ 34:38.440 --> 34:45.440
1702
+ which has a clear understanding and clear description.
1703
+
1704
+ 34:45.440 --> 34:52.440
1705
+ And for this reason we introduced complexity.
1706
+
1707
+ 34:52.440 --> 34:54.440
1708
+ And we need this.
1709
+
1710
+ 34:54.440 --> 35:01.440
1711
+ Because actually it is diversity, I like this one more.
1712
+
1713
+ 35:01.440 --> 35:04.440
1714
+ This dimension you can prove some theorems.
1715
+
1716
+ 35:04.440 --> 35:12.440
1717
+ But we also create theory for case when you know probability measure.
1718
+
1719
+ 35:12.440 --> 35:14.440
1720
+ And that is the best case which can happen.
1721
+
1722
+ 35:14.440 --> 35:17.440
1723
+ It is entropy theory.
1724
+
1725
+ 35:17.440 --> 35:20.440
1726
+ So from a mathematical point of view,
1727
+
1728
+ 35:20.440 --> 35:25.440
1729
+ you know the best possible case and the worst possible case.
1730
+
1731
+ 35:25.440 --> 35:28.440
1732
+ You can derive different model in medium.
1733
+
1734
+ 35:28.440 --> 35:30.440
1735
+ But it's not so interesting.
1736
+
1737
+ 35:30.440 --> 35:33.440
1738
+ You think the edges are interesting?
1739
+
1740
+ 35:33.440 --> 35:35.440
1741
+ The edges are interesting.
1742
+
1743
+ 35:35.440 --> 35:44.440
1744
+ Because it is not so easy to get a good bound, exact bound.
1745
+
1746
+ 35:44.440 --> 35:47.440
1747
+ It's not many cases where you have.
1748
+
1749
+ 35:47.440 --> 35:49.440
1750
+ The bound is not exact.
1751
+
1752
+ 35:49.440 --> 35:54.440
1753
+ But interesting principles which discover the math.
1754
+
1755
+ 35:54.440 --> 35:57.440
1756
+ Do you think it's interesting because it's challenging
1757
+
1758
+ 35:57.440 --> 36:02.440
1759
+ and reveals interesting principles that allow you to get those bounds?
1760
+
1761
+ 36:02.440 --> 36:05.440
1762
+ Or do you think it's interesting because it's actually very useful
1763
+
1764
+ 36:05.440 --> 36:10.440
1765
+ for understanding the essence of a function of an algorithm?
1766
+
1767
+ 36:10.440 --> 36:15.440
1768
+ So it's like me judging your life as a human being
1769
+
1770
+ 36:15.440 --> 36:19.440
1771
+ by the worst thing you did and the best thing you did
1772
+
1773
+ 36:19.440 --> 36:21.440
1774
+ versus all the stuff in the middle.
1775
+
1776
+ 36:21.440 --> 36:25.440
1777
+ It seems not productive.
1778
+
1779
+ 36:25.440 --> 36:31.440
1780
+ I don't think so because you cannot describe situation in the middle.
1781
+
1782
+ 36:31.440 --> 36:34.440
1783
+ Or it will be not general.
1784
+
1785
+ 36:34.440 --> 36:38.440
1786
+ So you can describe edge cases.
1787
+
1788
+ 36:38.440 --> 36:41.440
1789
+ And it is clear it has some model.
1790
+
1791
+ 36:41.440 --> 36:47.440
1792
+ But you cannot describe model for every new case.
1793
+
1794
+ 36:47.440 --> 36:53.440
1795
+ So you will be never accurate when you're using model.
1796
+
1797
+ 36:53.440 --> 36:55.440
1798
+ But from a statistical point of view,
1799
+
1800
+ 36:55.440 --> 37:00.440
1801
+ the way you've studied functions and the nature of learning
1802
+
1803
+ 37:00.440 --> 37:07.440
1804
+ and the world, don't you think that the real world has a very long tail
1805
+
1806
+ 37:07.440 --> 37:13.440
1807
+ that the edge cases are very far away from the mean,
1808
+
1809
+ 37:13.440 --> 37:19.440
1810
+ the stuff in the middle, or no?
1811
+
1812
+ 37:19.440 --> 37:21.440
1813
+ I don't know that.
1814
+
1815
+ 37:21.440 --> 37:29.440
1816
+ I think that from my point of view,
1817
+
1818
+ 37:29.440 --> 37:39.440
1819
+ if you will use formal statistic, uniform law of large numbers,
1820
+
1821
+ 37:39.440 --> 37:47.440
1822
+ if you will use this invariance business,
1823
+
1824
+ 37:47.440 --> 37:51.440
1825
+ you will need just law of large numbers.
1826
+
1827
+ 37:51.440 --> 37:55.440
1828
+ And there's a huge difference between uniform law of large numbers
1829
+
1830
+ 37:55.440 --> 37:57.440
1831
+ and law of large numbers.
1832
+
1833
+ 37:57.440 --> 37:59.440
1834
+ Can you describe that a little more?
1835
+
1836
+ 37:59.440 --> 38:01.440
1837
+ Or should we just take it to...
1838
+
1839
+ 38:01.440 --> 38:05.440
1840
+ No, for example, when I'm talking about duck,
1841
+
1842
+ 38:05.440 --> 38:09.440
1843
+ I gave three predicates and it was enough.
1844
+
1845
+ 38:09.440 --> 38:14.440
1846
+ But if you will try to do formal distinguish,
1847
+
1848
+ 38:14.440 --> 38:17.440
1849
+ you will need a lot of observations.
1850
+
1851
+ 38:17.440 --> 38:19.440
1852
+ I got you.
1853
+
1854
+ 38:19.440 --> 38:24.440
1855
+ And so that means that information about looks like a duck
1856
+
1857
+ 38:24.440 --> 38:27.440
1858
+ contain a lot of bits of information,
1859
+
1860
+ 38:27.440 --> 38:29.440
1861
+ formal bits of information.
1862
+
1863
+ 38:29.440 --> 38:35.440
1864
+ So we don't know that how much bit of information
1865
+
1866
+ 38:35.440 --> 38:39.440
1867
+ contain things from artificial intelligence.
1868
+
1869
+ 38:39.440 --> 38:42.440
1870
+ And that is the subject of analysis.
1871
+
1872
+ 38:42.440 --> 38:47.440
1873
+ Till now, old business,
1874
+
1875
+ 38:47.440 --> 38:54.440
1876
+ I don't like how people consider artificial intelligence.
1877
+
1878
+ 38:54.440 --> 39:00.440
1879
+ They consider us some codes which imitate activity of human being.
1880
+
1881
+ 39:00.440 --> 39:02.440
1882
+ It is not science.
1883
+
1884
+ 39:02.440 --> 39:04.440
1885
+ It is applications.
1886
+
1887
+ 39:04.440 --> 39:06.440
1888
+ You would like to imitate God.
1889
+
1890
+ 39:06.440 --> 39:09.440
1891
+ It is very useful and we have good problem.
1892
+
1893
+ 39:09.440 --> 39:15.440
1894
+ But you need to learn something more.
1895
+
1896
+ 39:15.440 --> 39:23.440
1897
+ How people can to develop predicates,
1898
+
1899
+ 39:23.440 --> 39:25.440
1900
+ swims like a duck,
1901
+
1902
+ 39:25.440 --> 39:28.440
1903
+ or play like butterfly or something like that.
1904
+
1905
+ 39:28.440 --> 39:33.440
1906
+ Not the teacher tells you how it came in his mind.
1907
+
1908
+ 39:33.440 --> 39:36.440
1909
+ How he choose this image.
1910
+
1911
+ 39:36.440 --> 39:39.440
1912
+ That is problem of intelligence.
1913
+
1914
+ 39:39.440 --> 39:41.440
1915
+ That is the problem of intelligence.
1916
+
1917
+ 39:41.440 --> 39:44.440
1918
+ And you see that connected to the problem of learning?
1919
+
1920
+ 39:44.440 --> 39:45.440
1921
+ Absolutely.
1922
+
1923
+ 39:45.440 --> 39:48.440
1924
+ Because you immediately give this predicate
1925
+
1926
+ 39:48.440 --> 39:52.440
1927
+ like specific predicate, swims like a duck,
1928
+
1929
+ 39:52.440 --> 39:54.440
1930
+ or quack like a duck.
1931
+
1932
+ 39:54.440 --> 39:57.440
1933
+ It was chosen somehow.
1934
+
1935
+ 39:57.440 --> 40:00.440
1936
+ So what is the line of work, would you say?
1937
+
1938
+ 40:00.440 --> 40:05.440
1939
+ If you were to formulate as a set of open problems,
1940
+
1941
+ 40:05.440 --> 40:07.440
1942
+ that will take us there.
1943
+
1944
+ 40:07.440 --> 40:09.440
1945
+ Play like a butterfly.
1946
+
1947
+ 40:09.440 --> 40:11.440
1948
+ We will get a system to be able to...
1949
+
1950
+ 40:11.440 --> 40:13.440
1951
+ Let's separate two stories.
1952
+
1953
+ 40:13.440 --> 40:15.440
1954
+ One mathematical story.
1955
+
1956
+ 40:15.440 --> 40:19.440
1957
+ That if you have predicate, you can do something.
1958
+
1959
+ 40:19.440 --> 40:22.440
1960
+ And another story you have to get predicate.
1961
+
1962
+ 40:22.440 --> 40:26.440
1963
+ It is intelligence problem.
1964
+
1965
+ 40:26.440 --> 40:31.440
1966
+ And people even did not start understanding intelligence.
1967
+
1968
+ 40:31.440 --> 40:34.440
1969
+ Because to understand intelligence, first of all,
1970
+
1971
+ 40:34.440 --> 40:37.440
1972
+ try to understand what doing teachers.
1973
+
1974
+ 40:37.440 --> 40:40.440
1975
+ How teacher teach.
1976
+
1977
+ 40:40.440 --> 40:43.440
1978
+ Why one teacher better than another one?
1979
+
1980
+ 40:43.440 --> 40:44.440
1981
+ Yeah.
1982
+
1983
+ 40:44.440 --> 40:48.440
1984
+ So you think we really even haven't started on the journey
1985
+
1986
+ 40:48.440 --> 40:50.440
1987
+ of generating the predicate?
1988
+
1989
+ 40:50.440 --> 40:51.440
1990
+ No.
1991
+
1992
+ 40:51.440 --> 40:52.440
1993
+ We don't understand.
1994
+
1995
+ 40:52.440 --> 40:56.440
1996
+ We even don't understand that this problem exists.
1997
+
1998
+ 40:56.440 --> 40:58.440
1999
+ Because did you hear?
2000
+
2001
+ 40:58.440 --> 40:59.440
2002
+ You do.
2003
+
2004
+ 40:59.440 --> 41:02.440
2005
+ No, I just know name.
2006
+
2007
+ 41:02.440 --> 41:07.440
2008
+ I want to understand why one teacher better than another.
2009
+
2010
+ 41:07.440 --> 41:12.440
2011
+ And how affect teacher student.
2012
+
2013
+ 41:12.440 --> 41:17.440
2014
+ It is not because he repeating the problem which is in textbook.
2015
+
2016
+ 41:17.440 --> 41:18.440
2017
+ Yes.
2018
+
2019
+ 41:18.440 --> 41:20.440
2020
+ He make some remarks.
2021
+
2022
+ 41:20.440 --> 41:23.440
2023
+ He make some philosophy of reasoning.
2024
+
2025
+ 41:23.440 --> 41:24.440
2026
+ Yeah, that's a beautiful...
2027
+
2028
+ 41:24.440 --> 41:31.440
2029
+ So it is a formulation of a question that is the open problem.
2030
+
2031
+ 41:31.440 --> 41:33.440
2032
+ Why is one teacher better than another?
2033
+
2034
+ 41:33.440 --> 41:34.440
2035
+ Right.
2036
+
2037
+ 41:34.440 --> 41:37.440
2038
+ What he does better.
2039
+
2040
+ 41:37.440 --> 41:38.440
2041
+ Yeah.
2042
+
2043
+ 41:38.440 --> 41:39.440
2044
+ What...
2045
+
2046
+ 41:39.440 --> 41:42.440
2047
+ Why at every level?
2048
+
2049
+ 41:42.440 --> 41:44.440
2050
+ How do they get better?
2051
+
2052
+ 41:44.440 --> 41:47.440
2053
+ What does it mean to be better?
2054
+
2055
+ 41:47.440 --> 41:49.440
2056
+ The whole...
2057
+
2058
+ 41:49.440 --> 41:50.440
2059
+ Yeah.
2060
+
2061
+ 41:50.440 --> 41:53.440
2062
+ From whatever model I have.
2063
+
2064
+ 41:53.440 --> 41:56.440
2065
+ One teacher can give a very good predicate.
2066
+
2067
+ 41:56.440 --> 42:00.440
2068
+ One teacher can say swims like a dog.
2069
+
2070
+ 42:00.440 --> 42:03.440
2071
+ And another can say jump like a dog.
2072
+
2073
+ 42:03.440 --> 42:05.440
2074
+ And jump like a dog.
2075
+
2076
+ 42:05.440 --> 42:07.440
2077
+ Carries zero information.
2078
+
2079
+ 42:07.440 --> 42:08.440
2080
+ Yeah.
2081
+
2082
+ 42:08.440 --> 42:13.440
2083
+ So what is the most exciting problem in statistical learning you've ever worked on?
2084
+
2085
+ 42:13.440 --> 42:16.440
2086
+ Or are working on now?
2087
+
2088
+ 42:16.440 --> 42:22.440
2089
+ I just finished this invariant story.
2090
+
2091
+ 42:22.440 --> 42:24.440
2092
+ And I'm happy that...
2093
+
2094
+ 42:24.440 --> 42:30.440
2095
+ I believe that it is ultimate learning story.
2096
+
2097
+ 42:30.440 --> 42:37.440
2098
+ At least I can show that there are no another mechanism, only two mechanisms.
2099
+
2100
+ 42:37.440 --> 42:44.440
2101
+ But they separate statistical part from intelligent part.
2102
+
2103
+ 42:44.440 --> 42:48.440
2104
+ And I know nothing about intelligent part.
2105
+
2106
+ 42:48.440 --> 42:52.440
2107
+ And if we will know this intelligent part,
2108
+
2109
+ 42:52.440 --> 42:59.440
2110
+ so it will help us a lot in teaching, in learning.
2111
+
2112
+ 42:59.440 --> 43:02.440
2113
+ You don't know it when we see it?
2114
+
2115
+ 43:02.440 --> 43:06.440
2116
+ So for example, in my talk, the last slide was the challenge.
2117
+
2118
+ 43:06.440 --> 43:11.440
2119
+ So you have, say, MNIST digit recognition problem.
2120
+
2121
+ 43:11.440 --> 43:16.440
2122
+ And deep learning claims that they did it very well.
2123
+
2124
+ 43:16.440 --> 43:21.440
2125
+ Say 99.5% of correct answers.
2126
+
2127
+ 43:21.440 --> 43:24.440
2128
+ But they use 60,000 observations.
2129
+
2130
+ 43:24.440 --> 43:26.440
2131
+ Can you do the same?
2132
+
2133
+ 43:26.440 --> 43:29.440
2134
+ 100 times less.
2135
+
2136
+ 43:29.440 --> 43:31.440
2137
+ But incorporating invariants.
2138
+
2139
+ 43:31.440 --> 43:34.440
2140
+ What it means, you know, digit one, two, three.
2141
+
2142
+ 43:34.440 --> 43:35.440
2143
+ Yeah.
2144
+
2145
+ 43:35.440 --> 43:37.440
2146
+ Just looking at that.
2147
+
2148
+ 43:37.440 --> 43:40.440
2149
+ Explain to me which invariant I should keep.
2150
+
2151
+ 43:40.440 --> 43:43.440
2152
+ To use 100 examples.
2153
+
2154
+ 43:43.440 --> 43:48.440
2155
+ Or say 100 times less examples to do the same job.
2156
+
2157
+ 43:48.440 --> 43:49.440
2158
+ Yeah.
2159
+
2160
+ 43:49.440 --> 43:55.440
2161
+ That last slide, unfortunately, your talk ended quickly.
2162
+
2163
+ 43:55.440 --> 43:59.440
2164
+ The last slide was a powerful open challenge
2165
+
2166
+ 43:59.440 --> 44:02.440
2167
+ and a formulation of the essence here.
2168
+
2169
+ 44:02.440 --> 44:06.440
2170
+ That is the exact problem of intelligence.
2171
+
2172
+ 44:06.440 --> 44:12.440
2173
+ Because everybody, when machine learning started,
2174
+
2175
+ 44:12.440 --> 44:15.440
2176
+ it was developed by mathematicians,
2177
+
2178
+ 44:15.440 --> 44:19.440
2179
+ they immediately recognized that we use much more
2180
+
2181
+ 44:19.440 --> 44:22.440
2182
+ training data than humans needed.
2183
+
2184
+ 44:22.440 --> 44:25.440
2185
+ But now again, we came to the same story.
2186
+
2187
+ 44:25.440 --> 44:27.440
2188
+ Have to decrease.
2189
+
2190
+ 44:27.440 --> 44:30.440
2191
+ That is the problem of learning.
2192
+
2193
+ 44:30.440 --> 44:32.440
2194
+ It is not like in deep learning,
2195
+
2196
+ 44:32.440 --> 44:35.440
2197
+ they use zillions of training data.
2198
+
2199
+ 44:35.440 --> 44:38.440
2200
+ Because maybe zillions are not enough
2201
+
2202
+ 44:38.440 --> 44:44.440
2203
+ if you have a good invariance.
2204
+
2205
+ 44:44.440 --> 44:49.440
2206
+ Maybe you'll never collect some number of observations.
2207
+
2208
+ 44:49.440 --> 44:53.440
2209
+ But now it is a question to intelligence.
2210
+
2211
+ 44:53.440 --> 44:55.440
2212
+ How to do that.
2213
+
2214
+ 44:55.440 --> 44:58.440
2215
+ Because statistical part is ready.
2216
+
2217
+ 44:58.440 --> 45:02.440
2218
+ As soon as you supply us with predicate,
2219
+
2220
+ 45:02.440 --> 45:06.440
2221
+ we can do good job with small amount of observations.
2222
+
2223
+ 45:06.440 --> 45:10.440
2224
+ And the very first challenge is well known digit recognition.
2225
+
2226
+ 45:10.440 --> 45:12.440
2227
+ And you know digits.
2228
+
2229
+ 45:12.440 --> 45:15.440
2230
+ And please tell me invariance.
2231
+
2232
+ 45:15.440 --> 45:16.440
2233
+ I think about that.
2234
+
2235
+ 45:16.440 --> 45:20.440
2236
+ I can say for digit 3, I would introduce
2237
+
2238
+ 45:20.440 --> 45:24.440
2239
+ concept of horizontal symmetry.
2240
+
2241
+ 45:24.440 --> 45:29.440
2242
+ So the digit 3 has horizontal symmetry
2243
+
2244
+ 45:29.440 --> 45:33.440
2245
+ more than say digit 2 or something like that.
2246
+
2247
+ 45:33.440 --> 45:37.440
2248
+ But as soon as I get the idea of horizontal symmetry,
2249
+
2250
+ 45:37.440 --> 45:40.440
2251
+ I can mathematically invent a lot of
2252
+
2253
+ 45:40.440 --> 45:43.440
2254
+ measure of horizontal symmetry
2255
+
2256
+ 45:43.440 --> 45:46.440
2257
+ or vertical symmetry or diagonal symmetry,
2258
+
2259
+ 45:46.440 --> 45:49.440
2260
+ whatever, if I have idea of symmetry.
2261
+
2262
+ 45:49.440 --> 45:52.440
2263
+ But what else?
2264
+
2265
+ 45:52.440 --> 46:00.440
2266
+ Looking on digit, I see that it is metapredicate,
2267
+
2268
+ 46:00.440 --> 46:04.440
2269
+ which is not shape.
2270
+
2271
+ 46:04.440 --> 46:07.440
2272
+ It is something like symmetry,
2273
+
2274
+ 46:07.440 --> 46:12.440
2275
+ like how dark is whole picture, something like that.
2276
+
2277
+ 46:12.440 --> 46:15.440
2278
+ Which can self rise up predicate.
2279
+
2280
+ 46:15.440 --> 46:18.440
2281
+ You think such a predicate could rise
2282
+
2283
+ 46:18.440 --> 46:26.440
2284
+ out of something that is not general.
2285
+
2286
+ 46:26.440 --> 46:31.440
2287
+ Meaning it feels like for me to be able to
2288
+
2289
+ 46:31.440 --> 46:34.440
2290
+ understand the difference between a 2 and a 3,
2291
+
2292
+ 46:34.440 --> 46:39.440
2293
+ I would need to have had a childhood
2294
+
2295
+ 46:39.440 --> 46:45.440
2296
+ of 10 to 15 years playing with kids,
2297
+
2298
+ 46:45.440 --> 46:50.440
2299
+ going to school, being yelled by parents.
2300
+
2301
+ 46:50.440 --> 46:55.440
2302
+ All of that, walking, jumping, looking at ducks.
2303
+
2304
+ 46:55.440 --> 46:58.440
2305
+ And now then I would be able to generate
2306
+
2307
+ 46:58.440 --> 47:01.440
2308
+ the right predicate for telling the difference
2309
+
2310
+ 47:01.440 --> 47:03.440
2311
+ between 2 and a 3.
2312
+
2313
+ 47:03.440 --> 47:06.440
2314
+ Or do you think there is a more efficient way?
2315
+
2316
+ 47:06.440 --> 47:10.440
2317
+ I know for sure that you must know
2318
+
2319
+ 47:10.440 --> 47:12.440
2320
+ something more than digits.
2321
+
2322
+ 47:12.440 --> 47:15.440
2323
+ That's a powerful statement.
2324
+
2325
+ 47:15.440 --> 47:19.440
2326
+ But maybe there are several languages
2327
+
2328
+ 47:19.440 --> 47:24.440
2329
+ of description, these elements of digits.
2330
+
2331
+ 47:24.440 --> 47:27.440
2332
+ So I'm talking about symmetry,
2333
+
2334
+ 47:27.440 --> 47:30.440
2335
+ about some properties of geometry,
2336
+
2337
+ 47:30.440 --> 47:33.440
2338
+ I'm talking about something abstract.
2339
+
2340
+ 47:33.440 --> 47:38.440
2341
+ But this is a problem of intelligence.
2342
+
2343
+ 47:38.440 --> 47:42.440
2344
+ So in one of our articles, it is trivial to show
2345
+
2346
+ 47:42.440 --> 47:46.440
2347
+ that every example can carry
2348
+
2349
+ 47:46.440 --> 47:49.440
2350
+ not more than one bit of information in real.
2351
+
2352
+ 47:49.440 --> 47:54.440
2353
+ Because when you show example
2354
+
2355
+ 47:54.440 --> 47:59.440
2356
+ and you say this is one, you can remove, say,
2357
+
2358
+ 47:59.440 --> 48:03.440
2359
+ a function which does not tell you one, say,
2360
+
2361
+ 48:03.440 --> 48:06.440
2362
+ the best strategy, if you can do it perfectly,
2363
+
2364
+ 48:06.440 --> 48:09.440
2365
+ it's remove half of the functions.
2366
+
2367
+ 48:09.440 --> 48:14.440
2368
+ But when you use one predicate, which looks like a duck,
2369
+
2370
+ 48:14.440 --> 48:18.440
2371
+ you can remove much more functions than half.
2372
+
2373
+ 48:18.440 --> 48:20.440
2374
+ And that means that it contains
2375
+
2376
+ 48:20.440 --> 48:25.440
2377
+ a lot of bit of information from a formal point of view.
2378
+
2379
+ 48:25.440 --> 48:31.440
2380
+ But when you have a general picture
2381
+
2382
+ 48:31.440 --> 48:33.440
2383
+ of what you want to recognize,
2384
+
2385
+ 48:33.440 --> 48:36.440
2386
+ a general picture of the world,
2387
+
2388
+ 48:36.440 --> 48:40.440
2389
+ can you invent this predicate?
2390
+
2391
+ 48:40.440 --> 48:46.440
2392
+ And that predicate carries a lot of information.
2393
+
2394
+ 48:46.440 --> 48:49.440
2395
+ Beautifully put, maybe just me,
2396
+
2397
+ 48:49.440 --> 48:53.440
2398
+ but in all the math you show, in your work,
2399
+
2400
+ 48:53.440 --> 48:57.440
2401
+ which is some of the most profound mathematical work
2402
+
2403
+ 48:57.440 --> 49:01.440
2404
+ in the field of learning AI and just math in general.
2405
+
2406
+ 49:01.440 --> 49:04.440
2407
+ I hear a lot of poetry and philosophy.
2408
+
2409
+ 49:04.440 --> 49:09.440
2410
+ You really kind of talk about philosophy of science.
2411
+
2412
+ 49:09.440 --> 49:12.440
2413
+ There's a poetry and music to a lot of the work you're doing
2414
+
2415
+ 49:12.440 --> 49:14.440
2416
+ and the way you're thinking about it.
2417
+
2418
+ 49:14.440 --> 49:16.440
2419
+ So where does that come from?
2420
+
2421
+ 49:16.440 --> 49:20.440
2422
+ Do you escape to poetry? Do you escape to music?
2423
+
2424
+ 49:20.440 --> 49:24.440
2425
+ I think that there exists ground truth.
2426
+
2427
+ 49:24.440 --> 49:26.440
2428
+ There exists ground truth?
2429
+
2430
+ 49:26.440 --> 49:30.440
2431
+ Yeah, and that can be seen everywhere.
2432
+
2433
+ 49:30.440 --> 49:32.440
2434
+ The smart guy, philosopher,
2435
+
2436
+ 49:32.440 --> 49:38.440
2437
+ sometimes I surprise how they deep see.
2438
+
2439
+ 49:38.440 --> 49:45.440
2440
+ Sometimes I see that some of them are completely out of subject.
2441
+
2442
+ 49:45.440 --> 49:50.440
2443
+ But the ground truth I see in music.
2444
+
2445
+ 49:50.440 --> 49:52.440
2446
+ Music is the ground truth?
2447
+
2448
+ 49:52.440 --> 49:53.440
2449
+ Yeah.
2450
+
2451
+ 49:53.440 --> 50:01.440
2452
+ And in poetry, many poets, they believe they take dictation.
2453
+
2454
+ 50:01.440 --> 50:06.440
2455
+ So what piece of music,
2456
+
2457
+ 50:06.440 --> 50:08.440
2458
+ as a piece of empirical evidence,
2459
+
2460
+ 50:08.440 --> 50:14.440
2461
+ gave you a sense that they are touching something in the ground truth?
2462
+
2463
+ 50:14.440 --> 50:16.440
2464
+ It is structure.
2465
+
2466
+ 50:16.440 --> 50:18.440
2467
+ The structure with the math of music.
2468
+
2469
+ 50:18.440 --> 50:20.440
2470
+ Because when you're listening to Bach,
2471
+
2472
+ 50:20.440 --> 50:22.440
2473
+ you see this structure.
2474
+
2475
+ 50:22.440 --> 50:25.440
2476
+ Very clear, very classic, very simple.
2477
+
2478
+ 50:25.440 --> 50:31.440
2479
+ And the same in math, when you have axioms in geometry,
2480
+
2481
+ 50:31.440 --> 50:33.440
2482
+ you have the same feeling.
2483
+
2484
+ 50:33.440 --> 50:36.440
2485
+ And in poetry, sometimes you see the same.
2486
+
2487
+ 50:36.440 --> 50:40.440
2488
+ And if you look back at your childhood,
2489
+
2490
+ 50:40.440 --> 50:42.440
2491
+ you grew up in Russia,
2492
+
2493
+ 50:42.440 --> 50:46.440
2494
+ you maybe were born as a researcher in Russia,
2495
+
2496
+ 50:46.440 --> 50:48.440
2497
+ you developed as a researcher in Russia,
2498
+
2499
+ 50:48.440 --> 50:51.440
2500
+ you came to the United States in a few places.
2501
+
2502
+ 50:51.440 --> 50:53.440
2503
+ If you look back,
2504
+
2505
+ 50:53.440 --> 50:59.440
2506
+ what were some of your happiest moments as a researcher?
2507
+
2508
+ 50:59.440 --> 51:02.440
2509
+ Some of the most profound moments.
2510
+
2511
+ 51:02.440 --> 51:06.440
2512
+ Not in terms of their impact on society,
2513
+
2514
+ 51:06.440 --> 51:12.440
2515
+ but in terms of their impact on how damn good you feel that day,
2516
+
2517
+ 51:12.440 --> 51:15.440
2518
+ and you remember that moment.
2519
+
2520
+ 51:15.440 --> 51:20.440
2521
+ You know, every time when you found something,
2522
+
2523
+ 51:20.440 --> 51:22.440
2524
+ it is great.
2525
+
2526
+ 51:22.440 --> 51:24.440
2527
+ It's a life.
2528
+
2529
+ 51:24.440 --> 51:26.440
2530
+ Every simple thing.
2531
+
2532
+ 51:26.440 --> 51:32.440
2533
+ But my general feeling that most of my time was wrong.
2534
+
2535
+ 51:32.440 --> 51:35.440
2536
+ You should go again and again and again
2537
+
2538
+ 51:35.440 --> 51:39.440
2539
+ and try to be honest in front of yourself.
2540
+
2541
+ 51:39.440 --> 51:41.440
2542
+ Not to make interpretation,
2543
+
2544
+ 51:41.440 --> 51:46.440
2545
+ but try to understand that it's related to ground truth.
2546
+
2547
+ 51:46.440 --> 51:52.440
2548
+ It is not my blah, blah, blah interpretation or something like that.
2549
+
2550
+ 51:52.440 --> 51:57.440
2551
+ But you're allowed to get excited at the possibility of discovery.
2552
+
2553
+ 51:57.440 --> 52:00.440
2554
+ You have to double check it, but...
2555
+
2556
+ 52:00.440 --> 52:04.440
2557
+ No, but how it's related to the other ground truth
2558
+
2559
+ 52:04.440 --> 52:10.440
2560
+ is it just temporary or it is forever?
2561
+
2562
+ 52:10.440 --> 52:13.440
2563
+ You know, you always have a feeling
2564
+
2565
+ 52:13.440 --> 52:17.440
2566
+ when you found something,
2567
+
2568
+ 52:17.440 --> 52:19.440
2569
+ how big is that?
2570
+
2571
+ 52:19.440 --> 52:23.440
2572
+ So, 20 years ago, when we discovered statistical learning,
2573
+
2574
+ 52:23.440 --> 52:25.440
2575
+ so nobody believed.
2576
+
2577
+ 52:25.440 --> 52:31.440
2578
+ Except for one guy, Dudley from MIT.
2579
+
2580
+ 52:31.440 --> 52:36.440
2581
+ And then in 20 years, it became fashion.
2582
+
2583
+ 52:36.440 --> 52:39.440
2584
+ And the same with support vector machines.
2585
+
2586
+ 52:39.440 --> 52:41.440
2587
+ That's kernel machines.
2588
+
2589
+ 52:41.440 --> 52:44.440
2590
+ So with support vector machines and learning theory,
2591
+
2592
+ 52:44.440 --> 52:48.440
2593
+ when you were working on it,
2594
+
2595
+ 52:48.440 --> 52:55.440
2596
+ you had a sense that you had a sense of the profundity of it,
2597
+
2598
+ 52:55.440 --> 52:59.440
2599
+ how this seems to be right.
2600
+
2601
+ 52:59.440 --> 53:01.440
2602
+ It seems to be powerful.
2603
+
2604
+ 53:01.440 --> 53:04.440
2605
+ Right, absolutely, immediately.
2606
+
2607
+ 53:04.440 --> 53:08.440
2608
+ I recognize that it will last forever.
2609
+
2610
+ 53:08.440 --> 53:17.440
2611
+ And now, when I found this invariance story,
2612
+
2613
+ 53:17.440 --> 53:21.440
2614
+ I have a feeling that it is completely wrong.
2615
+
2616
+ 53:21.440 --> 53:25.440
2617
+ Because I have proved that there are no different mechanisms.
2618
+
2619
+ 53:25.440 --> 53:30.440
2620
+ Some say cosmetic improvement you can do,
2621
+
2622
+ 53:30.440 --> 53:34.440
2623
+ but in terms of invariance,
2624
+
2625
+ 53:34.440 --> 53:38.440
2626
+ you need both invariance and statistical learning
2627
+
2628
+ 53:38.440 --> 53:41.440
2629
+ and they should work together.
2630
+
2631
+ 53:41.440 --> 53:47.440
2632
+ But also, I'm happy that we can formulate
2633
+
2634
+ 53:47.440 --> 53:51.440
2635
+ what is intelligence from that
2636
+
2637
+ 53:51.440 --> 53:54.440
2638
+ and to separate from technical part.
2639
+
2640
+ 53:54.440 --> 53:56.440
2641
+ And that is completely different.
2642
+
2643
+ 53:56.440 --> 53:58.440
2644
+ Absolutely.
2645
+
2646
+ 53:58.440 --> 54:00.440
2647
+ Well, Vladimir, thank you so much for talking today.
2648
+
2649
+ 54:00.440 --> 54:01.440
2650
+ Thank you.
2651
+
2652
+ 54:01.440 --> 54:28.440
2653
+ Thank you very much.
2654
+
vtt/episode_006_large.vtt ADDED
@@ -0,0 +1,2603 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:05.680
4
+ The following is a conversation with Guido van Rossum, creator of Python, one of the most popular
5
+
6
+ 00:05.680 --> 00:11.120
7
+ programming languages in the world, used in almost any application that involves computers
8
+
9
+ 00:11.120 --> 00:17.760
10
+ from web back end development to psychology, neuroscience, computer vision, robotics, deep
11
+
12
+ 00:17.760 --> 00:24.560
13
+ learning, natural language processing, and almost any subfield of AI. This conversation is part of
14
+
15
+ 00:24.560 --> 00:29.280
16
+ MIT course on artificial general intelligence and the artificial intelligence podcast.
17
+
18
+ 00:29.280 --> 00:36.080
19
+ If you enjoy it, subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect
20
+
21
+ 00:36.080 --> 00:44.720
22
+ with me on Twitter at Lex Friedman, spelled F R I D. And now, here's my conversation with Guido van
23
+
24
+ 00:44.720 --> 00:53.120
25
+ Rossum. You were born in the Netherlands in 1956. Your parents and the world around you was deeply
26
+
27
+ 00:53.120 --> 01:00.080
28
+ deeply impacted by World War Two, as was my family from the Soviet Union. So with that context,
29
+
30
+ 01:02.000 --> 01:07.360
31
+ what is your view of human nature? Are some humans inherently good,
32
+
33
+ 01:07.360 --> 01:12.240
34
+ and some inherently evil? Or do we all have both good and evil within us?
35
+
36
+ 01:12.240 --> 01:23.920
37
+ Guido van Rossum Ouch, I did not expect such a deep one. I, I guess we all have good and evil
38
+
39
+ 01:24.880 --> 01:31.440
40
+ potential in us. And a lot of it depends on circumstances and context.
41
+
42
+ 01:31.440 --> 01:38.800
43
+ Lex Fridman Out of that world, at least on the Soviet Union side in Europe, sort of out of
44
+
45
+ 01:38.800 --> 01:46.480
46
+ suffering, out of challenge, out of that kind of set of traumatic events, often emerges beautiful
47
+
48
+ 01:46.480 --> 01:53.200
49
+ art, music, literature. In an interview I read or heard, you said you enjoyed Dutch literature
50
+
51
+ 01:54.320 --> 01:59.760
52
+ when you were a child. Can you tell me about the books that had an influence on you in your
53
+
54
+ 01:59.760 --> 02:01.520
55
+ childhood? Guido van Rossum
56
+
57
+ 02:01.520 --> 02:09.120
58
+ Well, with as a teenager, my favorite writer was my favorite Dutch author was a guy named Willem
59
+
60
+ 02:09.120 --> 02:19.440
61
+ Frederik Hermans, who's writing, certainly his early novels were all about sort of
62
+
63
+ 02:19.440 --> 02:30.480
64
+ ambiguous things that happened during World War Two. I think he was a young adult during that time.
65
+
66
+ 02:31.600 --> 02:40.800
67
+ And he wrote about it a lot, and very interesting, very good books, I thought, I think.
68
+
69
+ 02:40.800 --> 02:42.560
70
+ Lex Fridman In a nonfiction way?
71
+
72
+ 02:42.560 --> 02:46.400
73
+ Guido van Rossum No, it was all fiction, but it was
74
+
75
+ 02:46.400 --> 02:52.640
76
+ very much set in the ambiguous world of resistance against the Germans,
77
+
78
+ 02:54.560 --> 03:03.840
79
+ where often you couldn't tell whether someone was truly in the resistance or really a spy for the
80
+
81
+ 03:03.840 --> 03:11.280
82
+ Germans. And some of the characters in his novels sort of crossed that line, and you never really
83
+
84
+ 03:11.280 --> 03:13.840
85
+ find out what exactly happened.
86
+
87
+ 03:13.840 --> 03:16.880
88
+ Lex Fridman And in his novels, there's always a
89
+
90
+ 03:16.880 --> 03:22.160
91
+ good guy and a bad guy, the nature of good and evil. Is it clear there's a hero?
92
+
93
+ 03:22.160 --> 03:25.120
94
+ Guido van Rossum No, his heroes are often more,
95
+
96
+ 03:25.120 --> 03:34.000
97
+ his main characters are often anti heroes. And so they're not very heroic. They're often,
98
+
99
+ 03:36.640 --> 03:40.800
100
+ they fail at some level to accomplish their lofty goals.
101
+
102
+ 03:40.800 --> 03:43.040
103
+ Lex Fridman And looking at the trajectory
104
+
105
+ 03:43.040 --> 03:48.880
106
+ through the rest of your life, has literature, Dutch or English or translation had an impact
107
+
108
+ 03:50.560 --> 03:54.160
109
+ outside the technical world that you existed in?
110
+
111
+ 03:54.160 --> 03:59.920
112
+ Guido van Rossum I still read novels.
113
+
114
+ 04:00.640 --> 04:05.200
115
+ I don't think that it impacts me that much directly.
116
+
117
+ 04:05.200 --> 04:07.280
118
+ Lex Fridman It doesn't impact your work.
119
+
120
+ 04:07.280 --> 04:10.080
121
+ Guido van Rossum It's a separate world.
122
+
123
+ 04:10.080 --> 04:17.440
124
+ My work is highly technical and sort of the world of art and literature doesn't really
125
+
126
+ 04:17.440 --> 04:19.120
127
+ directly have any bearing on it.
128
+
129
+ 04:19.120 --> 04:22.400
130
+ Lex Fridman You don't think there's a creative element
131
+
132
+ 04:22.400 --> 04:26.880
133
+ to the design? You know, some would say design of a language is art.
134
+
135
+ 04:26.880 --> 04:32.160
136
+ Guido van Rossum I'm not disagreeing with that.
137
+
138
+ 04:32.160 --> 04:39.360
139
+ I'm just saying that sort of I don't feel direct influences from more traditional art
140
+
141
+ 04:39.360 --> 04:40.880
142
+ on my own creativity.
143
+
144
+ 04:40.880 --> 04:43.280
145
+ Lex Fridman Right. Of course, just because you don't feel it doesn't mean
146
+
147
+ 04:43.280 --> 04:46.000
148
+ it's not somehow deeply there in your subconscious.
149
+
150
+ 04:46.000 --> 04:48.240
151
+ Guido van Rossum Who knows?
152
+
153
+ 04:48.240 --> 04:51.200
154
+ Lex Fridman Who knows? So let's go back to your early
155
+
156
+ 04:51.200 --> 04:57.440
157
+ teens. Your hobbies were building electronic circuits, building mechanical models.
158
+
159
+ 04:57.440 --> 05:06.080
160
+ What if you can just put yourself back in the mind of that young Guido 12, 13, 14, was
161
+
162
+ 05:06.080 --> 05:12.240
163
+ that grounded in a desire to create a system? So to create something? Or was it more just
164
+
165
+ 05:12.240 --> 05:14.720
166
+ tinkering? Just the joy of puzzle solving?
167
+
168
+ 05:14.720 --> 05:18.720
169
+ Guido van Rossum I think it was more the latter, actually.
170
+
171
+ 05:18.720 --> 05:29.920
172
+ I maybe towards the end of my high school period, I felt confident enough that that
173
+
174
+ 05:29.920 --> 05:39.120
175
+ I designed my own circuits that were sort of interesting somewhat. But a lot of that
176
+
177
+ 05:39.120 --> 05:46.000
178
+ time, I literally just took a model kit and follow the instructions, putting the things
179
+
180
+ 05:46.000 --> 05:51.680
181
+ together. I mean, I think the first few years that I built electronics kits, I really did
182
+
183
+ 05:51.680 --> 05:59.760
184
+ not have enough understanding of sort of electronics to really understand what I was doing. I mean,
185
+
186
+ 05:59.760 --> 06:06.480
187
+ I could debug it, and I could sort of follow the instructions very carefully, which has
188
+
189
+ 06:06.480 --> 06:14.560
190
+ always stayed with me. But I had a very naive model of, like, how do I build a circuit?
191
+
192
+ 06:14.560 --> 06:22.800
193
+ Of, like, how a transistor works? And I don't think that in those days, I had any understanding
194
+
195
+ 06:22.800 --> 06:32.560
196
+ of coils and capacitors, which actually sort of was a major problem when I started to build
197
+
198
+ 06:32.560 --> 06:39.840
199
+ more complex digital circuits, because I was unaware of the sort of the analog part of
200
+
201
+ 06:39.840 --> 06:50.080
202
+ the – how they actually work. And I would have things that – the schematic looked
203
+
204
+ 06:50.080 --> 06:57.440
205
+ – everything looked fine, and it didn't work. And what I didn't realize was that
206
+
207
+ 06:57.440 --> 07:02.720
208
+ there was some megahertz level oscillation that was throwing the circuit off, because
209
+
210
+ 07:02.720 --> 07:13.360
211
+ I had a sort of – two wires were too close, or the switches were kind of poorly built.
212
+
213
+ 07:13.360 --> 07:19.280
214
+ But through that time, I think it's really interesting and instructive to think about,
215
+
216
+ 07:19.280 --> 07:24.600
217
+ because echoes of it are in this time now. So in the 1970s, the personal computer was
218
+
219
+ 07:24.600 --> 07:33.200
220
+ being born. So did you sense, in tinkering with these circuits, did you sense the encroaching
221
+
222
+ 07:33.200 --> 07:39.320
223
+ revolution in personal computing? So if at that point, we would sit you down and ask
224
+
225
+ 07:39.320 --> 07:46.040
226
+ you to predict the 80s and the 90s, do you think you would be able to do so successfully
227
+
228
+ 07:46.040 --> 07:55.560
229
+ to unroll the process that's happening? No, I had no clue. I remember, I think, in
230
+
231
+ 07:55.560 --> 08:03.060
232
+ the summer after my senior year – or maybe it was the summer after my junior year – well,
233
+
234
+ 08:03.060 --> 08:11.600
235
+ at some point, I think, when I was 18, I went on a trip to the Math Olympiad in Eastern
236
+
237
+ 08:11.600 --> 08:16.920
238
+ Europe, and there was like – I was part of the Dutch team, and there were other nerdy
239
+
240
+ 08:16.920 --> 08:23.040
241
+ kids that sort of had different experiences, and one of them told me about this amazing
242
+
243
+ 08:23.040 --> 08:31.840
244
+ thing called a computer. And I had never heard that word. My own explorations in electronics
245
+
246
+ 08:31.840 --> 08:40.420
247
+ were sort of about very simple digital circuits, and I had sort of – I had the idea that
248
+
249
+ 08:40.420 --> 08:49.760
250
+ I somewhat understood how a digital calculator worked. And so there is maybe some echoes
251
+
252
+ 08:49.760 --> 08:56.440
253
+ of computers there, but I never made that connection. I didn't know that when my parents
254
+
255
+ 08:56.440 --> 09:03.520
256
+ were paying for magazine subscriptions using punched cards, that there was something called
257
+
258
+ 09:03.520 --> 09:08.260
259
+ a computer that was involved that read those cards and transferred the money between accounts.
260
+
261
+ 09:08.260 --> 09:15.880
262
+ I was also not really interested in those things. It was only when I went to university
263
+
264
+ 09:15.880 --> 09:23.120
265
+ to study math that I found out that they had a computer, and students were allowed to use
266
+
267
+ 09:23.120 --> 09:24.120
268
+ it.
269
+
270
+ 09:24.120 --> 09:27.800
271
+ And there were some – you're supposed to talk to that computer by programming it.
272
+
273
+ 09:27.800 --> 09:29.920
274
+ What did that feel like, finding –
275
+
276
+ 09:29.920 --> 09:35.440
277
+ Yeah, that was the only thing you could do with it. The computer wasn't really connected
278
+
279
+ 09:35.440 --> 09:41.400
280
+ to the real world. The only thing you could do was sort of – you typed your program
281
+
282
+ 09:41.400 --> 09:47.840
283
+ on a bunch of punched cards. You gave the punched cards to the operator, and an hour
284
+
285
+ 09:47.840 --> 09:55.520
286
+ later the operator gave you back your printout. And so all you could do was write a program
287
+
288
+ 09:55.520 --> 10:04.080
289
+ that did something very abstract. And I don't even remember what my first forays into programming
290
+
291
+ 10:04.080 --> 10:13.440
292
+ were, but they were sort of doing simple math exercises and just to learn how a programming
293
+
294
+ 10:13.440 --> 10:15.560
295
+ language worked.
296
+
297
+ 10:15.560 --> 10:21.680
298
+ Did you sense, okay, first year of college, you see this computer, you're able to have
299
+
300
+ 10:21.680 --> 10:29.420
301
+ a program and it generates some output. Did you start seeing the possibility of this,
302
+
303
+ 10:29.420 --> 10:34.920
304
+ or was it a continuation of the tinkering with circuits? Did you start to imagine that
305
+
306
+ 10:34.920 --> 10:42.460
307
+ one, the personal computer, but did you see it as something that is a tool, like a word
308
+
309
+ 10:42.460 --> 10:47.160
310
+ processing tool, maybe for gaming or something? Or did you start to imagine that it could
311
+
312
+ 10:47.160 --> 10:53.860
313
+ be going to the world of robotics, like the Frankenstein picture that you could create
314
+
315
+ 10:53.860 --> 10:59.640
316
+ an artificial being? There's like another entity in front of you. You did not see the
317
+
318
+ 10:59.640 --> 11:00.640
319
+ computer.
320
+
321
+ 11:00.640 --> 11:05.840
322
+ I don't think I really saw it that way. I was really more interested in the tinkering.
323
+
324
+ 11:05.840 --> 11:14.920
325
+ It's maybe not a sort of a complete coincidence that I ended up sort of creating a programming
326
+
327
+ 11:14.920 --> 11:20.360
328
+ language which is a tool for other programmers. I've always been very focused on the sort
329
+
330
+ 11:20.360 --> 11:28.920
331
+ of activity of programming itself and not so much what happens with the program you
332
+
333
+ 11:28.920 --> 11:29.920
334
+ write.
335
+
336
+ 11:29.920 --> 11:30.920
337
+ Right.
338
+
339
+ 11:30.920 --> 11:37.800
340
+ I do remember, and I don't remember, maybe in my second or third year, probably my second
341
+
342
+ 11:37.800 --> 11:46.680
343
+ actually, someone pointed out to me that there was this thing called Conway's Game of Life.
344
+
345
+ 11:46.680 --> 11:50.480
346
+ You're probably familiar with it. I think –
347
+
348
+ 11:50.480 --> 11:53.200
349
+ In the 70s, I think is when they came up with it.
350
+
351
+ 11:53.200 --> 12:00.840
352
+ So there was a Scientific American column by someone who did a monthly column about
353
+
354
+ 12:00.840 --> 12:06.580
355
+ mathematical diversions. I'm also blanking out on the guy's name. It was very famous
356
+
357
+ 12:06.580 --> 12:12.440
358
+ at the time and I think up to the 90s or so. And one of his columns was about Conway's
359
+
360
+ 12:12.440 --> 12:18.160
361
+ Game of Life and he had some illustrations and he wrote down all the rules and sort of
362
+
363
+ 12:18.160 --> 12:23.720
364
+ there was the suggestion that this was philosophically interesting, that that was why Conway had
365
+
366
+ 12:23.720 --> 12:31.480
367
+ called it that. And all I had was like the two pages photocopy of that article. I don't
368
+
369
+ 12:31.480 --> 12:40.200
370
+ even remember where I got it. But it spoke to me and I remember implementing a version
371
+
372
+ 12:40.200 --> 12:49.000
373
+ of that game for the batch computer we were using where I had a whole Pascal program that
374
+
375
+ 12:49.000 --> 12:56.480
376
+ sort of read an initial situation from input and read some numbers that said do so many
377
+
378
+ 12:56.480 --> 13:05.960
379
+ generations and print every so many generations and then out would come pages and pages of
380
+
381
+ 13:05.960 --> 13:08.480
382
+ sort of things.
383
+
384
+ 13:08.480 --> 13:18.360
385
+ I remember much later I've done a similar thing using Python but that original version
386
+
387
+ 13:18.360 --> 13:27.700
388
+ I wrote at the time I found interesting because I combined it with some trick I had learned
389
+
390
+ 13:27.700 --> 13:36.000
391
+ during my electronics hobbyist times. I essentially first on paper I designed a simple circuit
392
+
393
+ 13:36.000 --> 13:45.780
394
+ built out of logic gates that took nine bits of input which is sort of the cell and its
395
+
396
+ 13:45.780 --> 13:54.040
397
+ neighbors and produced a new value for that cell and it's like a combination of a half
398
+
399
+ 13:54.040 --> 14:01.040
400
+ adder and some other clipping. It's actually a full adder. And so I had worked that out
401
+
402
+ 14:01.040 --> 14:10.520
403
+ and then I translated that into a series of Boolean operations on Pascal integers where
404
+
405
+ 14:10.520 --> 14:21.740
406
+ you could use the integers as bitwise values. And so I could basically generate 60 bits
407
+
408
+ 14:21.740 --> 14:28.800
409
+ of a generation in like eight instructions or so.
410
+
411
+ 14:28.800 --> 14:29.800
412
+ Nice.
413
+
414
+ 14:29.800 --> 14:32.560
415
+ So I was proud of that.
416
+
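For readers curious what the bit-twiddling trick described here might look like, below is a rough Python sketch of the idea, not Guido's original Pascal: pack each row of the Life board into the bits of an integer and compute a whole row of the next generation with Boolean adder logic rather than looping over cells. The 64-bit row width and the dead-border handling are assumptions of this illustration.

```python
# Bit-packed Game of Life row update using half/full adder logic (illustrative sketch).
WIDTH = 64
MASK = (1 << WIDTH) - 1          # fixed row width; cells outside the board stay dead

def half_add(a, b):
    return a ^ b, a & b                               # per-bit sum and carry

def full_add(a, b, c):
    return a ^ b ^ c, (a & b) | (a & c) | (b & c)     # per-bit sum and carry

def next_row(above, row, below):
    """Next generation for one bit-packed row, given the rows above and below."""
    sh = lambda x: ((x << 1) & MASK, x >> 1)          # align left/right neighbours
    al, ar = sh(above); rl, rr = sh(row); bl, br = sh(below)
    # Add the eight neighbour bit-planes with adder logic, one bit position at a time.
    s0, c0 = full_add(al, above, ar)                  # three neighbours from the row above
    s1, c1 = full_add(bl, below, br)                  # three neighbours from the row below
    s2, c2 = half_add(rl, rr)                         # two neighbours in the same row
    bit0, c3 = full_add(s0, s1, s2)                   # 1s place of the neighbour count
    t, c4 = full_add(c0, c1, c2)                      # partial 2s place
    bit1, c5 = half_add(t, c3)                        # 2s place of the neighbour count
    bit2, bit3 = c4 ^ c5, c4 & c5                     # 4s and 8s places
    # Alive next generation iff the count is 3, or the count is 2 and the cell is alive.
    return bit1 & ~bit2 & ~bit3 & (bit0 | row) & MASK
```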
417
+ 14:32.560 --> 14:38.120
418
+ It's funny that you mentioned, so for people who don't know Conway's Game of Life, it's
419
+
420
+ 14:38.120 --> 14:44.840
421
+ a cellular automata where there's single compute units that kind of look at their neighbors
422
+
423
+ 14:44.840 --> 14:50.080
424
+ and figure out what they look like in the next generation based on the state of their
425
+
426
+ 14:50.080 --> 14:57.840
427
+ neighbors and this is deeply distributed system in concept at least. And then there's simple
428
+
429
+ 14:57.840 --> 15:04.400
430
+ rules that all of them follow and somehow out of this simple rule when you step back
431
+
432
+ 15:04.400 --> 15:13.160
433
+ and look at what occurs, it's beautiful. There's an emergent complexity. Even though the underlying
434
+
435
+ 15:13.160 --> 15:17.440
436
+ rules are simple, there's an emergent complexity. Now the funny thing is you've implemented
437
+
438
+ 15:17.440 --> 15:23.660
439
+ this and the thing you're commenting on is you're proud of a hack you did to make it
440
+
441
+ 15:23.660 --> 15:30.800
442
+ run efficiently. When you're not commenting on, it's a beautiful implementation, you're
443
+
444
+ 15:30.800 --> 15:36.780
445
+ not commenting on the fact that there's an emergent complexity that you've coded a simple
446
+
447
+ 15:36.780 --> 15:42.960
448
+ program and when you step back and you print out the following generation after generation,
449
+
450
+ 15:42.960 --> 15:48.400
451
+ that's stuff that you may have not predicted would happen is happening.
452
+
453
+ 15:48.400 --> 15:53.600
454
+ And is that magic? I mean, that's the magic that all of us feel when we program. When
455
+
456
+ 15:53.600 --> 15:59.240
457
+ you create a program and then you run it and whether it's Hello World or it shows something
458
+
459
+ 15:59.240 --> 16:03.840
460
+ on screen, if there's a graphical component, are you seeing the magic in the mechanism
461
+
462
+ 16:03.840 --> 16:05.200
463
+ of creating that?
464
+
465
+ 16:05.200 --> 16:14.440
466
+ I think I went back and forth. As a student, we had an incredibly small budget of computer
467
+
468
+ 16:14.440 --> 16:20.280
469
+ time that we could use. It was actually measured. I once got in trouble with one of my professors
470
+
471
+ 16:20.280 --> 16:29.640
472
+ because I had overspent the department's budget. It's a different story.
473
+
474
+ 16:29.640 --> 16:36.900
475
+ I actually wanted the efficient implementation because I also wanted to explore what would
476
+
477
+ 16:36.900 --> 16:48.560
478
+ happen with a larger number of generations and a larger size of the board. Once the implementation
479
+
480
+ 16:48.560 --> 16:57.000
481
+ was flawless, I would feed it different patterns and then I think maybe there was a follow
482
+
483
+ 16:57.000 --> 17:03.620
484
+ up article where there were patterns that were like gliders, patterns that repeated
485
+
486
+ 17:03.620 --> 17:13.200
487
+ themselves after a number of generations but translated one or two positions to the right
488
+
489
+ 17:13.200 --> 17:21.720
490
+ or up or something like that. I remember things like glider guns. Well, you can Google Conway's
491
+
492
+ 17:21.720 --> 17:27.560
493
+ Game of Life. People still go aww and ooh over it.
494
+
495
+ 17:27.560 --> 17:32.680
496
+ For a reason because it's not really well understood why. I mean, this is what Stephen
497
+
498
+ 17:32.680 --> 17:40.240
499
+ Wolfram is obsessed about. We don't have the mathematical tools to describe the kind of
500
+
501
+ 17:40.240 --> 17:45.120
502
+ complexity that emerges in these kinds of systems. The only way you can do is to run
503
+
504
+ 17:45.120 --> 17:47.120
505
+ it.
506
+
507
+ 17:47.120 --> 17:55.720
508
+ I'm not convinced that it's sort of a problem that lends itself to classic mathematical
509
+
510
+ 17:55.720 --> 17:59.920
511
+ analysis.
512
+
513
+ 17:59.920 --> 18:05.120
514
+ One theory of how you create an artificial intelligence or artificial being is you kind
515
+
516
+ 18:05.120 --> 18:10.120
517
+ of have to, same with the Game of Life, you kind of have to create a universe and let
518
+
519
+ 18:10.120 --> 18:17.520
520
+ it run. That creating it from scratch in a design way, coding up a Python program that
521
+
522
+ 18:17.520 --> 18:22.760
523
+ creates a fully intelligent system may be quite challenging. You might need to create
524
+
525
+ 18:22.760 --> 18:27.120
526
+ a universe just like the Game of Life.
527
+
528
+ 18:27.120 --> 18:33.200
529
+ You might have to experiment with a lot of different universes before there is a set
530
+
531
+ 18:33.200 --> 18:41.480
532
+ of rules that doesn't essentially always just end up repeating itself in a trivial
533
+
534
+ 18:41.480 --> 18:42.480
535
+ way.
536
+
537
+ 18:42.480 --> 18:49.840
538
+ Yeah, and Stephen Wolfram works with these simple rules, says that it's kind of surprising
539
+
540
+ 18:49.840 --> 18:55.280
541
+ how quickly you find rules that create interesting things. You shouldn't be able to, but somehow
542
+
543
+ 18:55.280 --> 19:02.120
544
+ you do. And so maybe our universe is laden with rules that will create interesting things
545
+
546
+ 19:02.120 --> 19:07.440
547
+ that might not look like humans, but emergent phenomena that's interesting may not be as
548
+
549
+ 19:07.440 --> 19:09.440
550
+ difficult to create as we think.
551
+
552
+ 19:09.440 --> 19:10.440
553
+ Sure.
554
+
555
+ 19:10.440 --> 19:17.440
556
+ But let me sort of ask, at that time, some of the world, at least in popular press, was
557
+
558
+ 19:17.440 --> 19:25.680
559
+ kind of captivated, perhaps at least in America, by the idea of artificial intelligence, that
560
+
561
+ 19:25.680 --> 19:33.240
562
+ these computers would be able to think pretty soon. And did that touch you at all? In science
563
+
564
+ 19:33.240 --> 19:37.800
565
+ fiction or in reality in any way?
566
+
567
+ 19:37.800 --> 19:49.000
568
+ I didn't really start reading science fiction until much, much later. I think as a teenager
569
+
570
+ 19:49.000 --> 19:54.520
571
+ I read maybe one bundle of science fiction stories.
572
+
573
+ 19:54.520 --> 19:57.960
574
+ Was it in the background somewhere, like in your thoughts?
575
+
576
+ 19:57.960 --> 20:04.720
577
+ That sort of the using computers to build something intelligent always felt to me, because
578
+
579
+ 20:04.720 --> 20:12.920
580
+ I felt I had so much understanding of what actually goes on inside a computer. I knew
581
+
582
+ 20:12.920 --> 20:22.880
583
+ how many bits of memory it had and how difficult it was to program. And sort of, I didn't believe
584
+
585
+ 20:22.880 --> 20:30.560
586
+ at all that you could just build something intelligent out of that, that would really
587
+
588
+ 20:30.560 --> 20:40.600
589
+ sort of satisfy my definition of intelligence. I think the most influential thing that I
590
+
591
+ 20:40.600 --> 20:48.680
592
+ read in my early twenties was Gödel, Escher, Bach. That was about consciousness, and that was
593
+
594
+ 20:48.680 --> 20:54.040
595
+ a big eye opener in some sense.
596
+
597
+ 20:54.040 --> 21:00.760
598
+ In what sense? So, on your own brain, did you at the time or do you now see your own
599
+
600
+ 21:00.760 --> 21:07.720
601
+ brain as a computer? Or is there a total separation of the way? So yeah, you're very pragmatically
602
+
603
+ 21:07.720 --> 21:14.600
604
+ practically know the limits of memory, the limits of this sequential computing or weakly
605
+
606
+ 21:14.600 --> 21:21.000
607
+ parallelized computing, and you just know what we have now, and it's hard to see how it creates.
608
+
609
+ 21:21.000 --> 21:29.920
610
+ But it's also easy to see, it was in the 40s, 50s, 60s, and now at least similarities between
611
+
612
+ 21:29.920 --> 21:31.680
613
+ the brain and our computers.
614
+
615
+ 21:31.680 --> 21:43.200
616
+ Oh yeah, I mean, I totally believe that brains are computers in some sense. I mean, the rules
617
+
618
+ 21:43.200 --> 21:51.200
619
+ they use to play by are pretty different from the rules we can sort of implement in our
620
+
621
+ 21:51.200 --> 22:02.960
622
+ current hardware, but I don't believe in, like, a separate thing that infuses us with
623
+
624
+ 22:02.960 --> 22:10.480
625
+ intelligence or consciousness or any of that. There's no soul, I've been an atheist
626
+
627
+ 22:10.480 --> 22:18.800
628
+ probably from when I was 10 years old, just by thinking a bit about math and the universe,
629
+
630
+ 22:18.800 --> 22:26.640
631
+ and well, my parents were atheists. Now, I know that you could be an atheist and still
632
+
633
+ 22:26.640 --> 22:34.080
634
+ believe that there is something sort of about intelligence or consciousness that cannot
635
+
636
+ 22:34.080 --> 22:44.560
637
+ possibly emerge from a fixed set of rules. I am not in that camp. I totally see that,
638
+
639
+ 22:44.560 --> 22:53.680
640
+ sort of, given how many millions of years evolution took its time, DNA is a particular
641
+
642
+ 22:53.680 --> 23:07.040
643
+ machine that sort of encodes information and an unlimited amount of information in chemical
644
+
645
+ 23:07.040 --> 23:12.320
646
+ form and has figured out a way to replicate itself.
647
+
648
+ 23:12.320 --> 23:16.880
649
+ I thought that that was, maybe it's 300 million years ago, but I thought it was closer
650
+
651
+ 23:16.880 --> 23:25.120
652
+ to half a billion years ago, that that's sort of originated and it hasn't really changed,
653
+
654
+ 23:25.120 --> 23:32.040
655
+ that the sort of the structure of DNA hasn't changed ever since. That is like our binary
656
+
657
+ 23:32.040 --> 23:35.200
658
+ code that we have in hardware. I mean...
659
+
660
+ 23:35.200 --> 23:39.760
661
+ The basic programming language hasn't changed, but maybe the programming itself...
662
+
663
+ 23:39.760 --> 23:48.320
664
+ Obviously, it did sort of, it happened to be a set of rules that was good enough to
665
+
666
+ 23:48.320 --> 23:59.520
667
+ sort of develop endless variability and sort of the idea of self replicating molecules
668
+
669
+ 23:59.520 --> 24:05.360
670
+ competing with each other for resources and one type eventually sort of always taking
671
+
672
+ 24:05.360 --> 24:12.320
673
+ over. That happened before there were any fossils, so we don't know how that exactly
674
+
675
+ 24:12.320 --> 24:17.920
676
+ happened, but I believe it's clear that that did happen.
677
+
678
+ 24:17.920 --> 24:25.360
679
+ Can you comment on consciousness and how you see it? Because I think we'll talk about
680
+
681
+ 24:25.360 --> 24:30.080
682
+ programming quite a bit. We'll talk about, you know, intelligence connecting to programming
683
+
684
+ 24:30.080 --> 24:38.080
685
+ fundamentally, but consciousness is this whole other thing. Do you think about it often as
686
+
687
+ 24:38.080 --> 24:45.440
688
+ a developer of a programming language and as a human?
689
+
690
+ 24:45.440 --> 24:55.000
691
+ Those are pretty sort of separate topics. Sort of my line of work working with programming
692
+
693
+ 24:55.000 --> 25:02.720
694
+ does not involve anything that goes in the direction of developing intelligence or consciousness,
695
+
696
+ 25:02.720 --> 25:13.880
697
+ but sort of privately as an avid reader of popular science writing, I have some thoughts
698
+
699
+ 25:13.880 --> 25:25.680
700
+ which is mostly that I don't actually believe that consciousness is an all or nothing thing.
701
+
702
+ 25:25.680 --> 25:35.960
703
+ I have a feeling that, and I forget what I read that influenced this, but I feel that
704
+
705
+ 25:35.960 --> 25:41.400
706
+ if you look at a cat or a dog or a mouse, they have some form of intelligence. If you
707
+
708
+ 25:41.400 --> 25:54.040
709
+ look at a fish, it has some form of intelligence, and that evolution just took a long time,
710
+
711
+ 25:54.040 --> 26:01.320
712
+ but I feel that the sort of evolution of more and more intelligence that led to sort of
713
+
714
+ 26:01.320 --> 26:12.920
715
+ the human form of intelligence followed the evolution of the senses, especially the visual
716
+
717
+ 26:12.920 --> 26:20.480
718
+ sense. I mean, there is an enormous amount of processing that's needed to interpret
719
+
720
+ 26:20.480 --> 26:28.240
721
+ a scene, and humans are still better at that than computers are.
722
+
723
+ 26:28.240 --> 26:39.660
724
+ And I have a feeling that there is a sort of, the reason that like mammals in particular
725
+
726
+ 26:39.660 --> 26:47.960
727
+ developed the levels of consciousness that they have and that eventually sort of going
728
+
729
+ 26:47.960 --> 26:55.360
730
+ from intelligence to self awareness and consciousness has to do with sort of being a robot that
731
+
732
+ 26:55.360 --> 26:58.920
733
+ has very highly developed senses.
734
+
735
+ 26:58.920 --> 27:04.760
736
+ Has a lot of rich sensory information coming in, so that's a really interesting thought
737
+
738
+ 27:04.760 --> 27:14.200
739
+ that whatever that basic mechanism of DNA, whatever that basic building blocks of programming,
740
+
741
+ 27:14.200 --> 27:21.080
742
+ if you just add more abilities, more high resolution sensors, more sensors, you just
743
+
744
+ 27:21.080 --> 27:26.760
745
+ keep stacking those things on top that this basic programming in trying to survive develops
746
+
747
+ 27:26.760 --> 27:35.000
748
+ very interesting things that start to us humans to appear like intelligence and consciousness.
749
+
750
+ 27:35.000 --> 27:42.280
751
+ As far as robots go, I think that the self driving cars have that sort of the greatest
752
+
753
+ 27:42.280 --> 27:50.400
754
+ opportunity of developing something like that, because when I drive myself, I don't just
755
+
756
+ 27:50.400 --> 27:53.800
757
+ pay attention to the rules of the road.
758
+
759
+ 27:53.800 --> 28:01.220
760
+ I also look around and I get clues from that, oh, this is a shopping district, oh, here's
761
+
762
+ 28:01.220 --> 28:08.960
763
+ an old lady crossing the street, oh, here is someone carrying a pile of mail, there's
764
+
765
+ 28:08.960 --> 28:14.040
766
+ a mailbox, I bet you they're going to cross the street to reach that mailbox.
767
+
768
+ 28:14.040 --> 28:17.520
769
+ And I slow down, and I don't even think about that.
770
+
771
+ 28:17.520 --> 28:25.780
772
+ And so, there is so much where you turn your observations into an understanding of what
773
+
774
+ 28:25.780 --> 28:32.680
775
+ other consciousnesses are going to do, or what other systems in the world are going
776
+
777
+ 28:32.680 --> 28:37.400
778
+ to be, oh, that tree is going to fall.
779
+
780
+ 28:37.400 --> 28:46.800
781
+ I see sort of, I see much more of, I expect somehow that if anything is going to become
782
+
783
+ 28:46.800 --> 28:55.520
784
+ conscious, it's going to be the self driving car and not the network of a bazillion computers
785
+
786
+ 28:55.520 --> 29:03.160
787
+ in a Google or Amazon data center that are all networked together to do whatever they
788
+
789
+ 29:03.160 --> 29:04.160
790
+ do.
791
+
792
+ 29:04.160 --> 29:09.640
793
+ So, in that sense, so you actually highlight, because that's what I work in, autonomous vehicles,
794
+
795
+ 29:09.640 --> 29:15.600
796
+ you highlight the big gap between what we currently can do and what we truly need
797
+
798
+ 29:15.600 --> 29:18.500
799
+ to be able to do to solve the problem.
800
+
801
+ 29:18.500 --> 29:24.600
802
+ Under that formulation, then consciousness and intelligence is something that basically
803
+
804
+ 29:24.600 --> 29:30.020
805
+ a system should have in order to interact with us humans, as opposed to some kind of
806
+
807
+ 29:30.020 --> 29:35.280
808
+ abstract notion of a consciousness.
809
+
810
+ 29:35.280 --> 29:39.200
811
+ Consciousness is something that you need to have to be able to empathize, to be able to
812
+
813
+ 29:39.200 --> 29:47.440
814
+ fear, understand what the fear of death is, all these aspects that are important for interacting
815
+
816
+ 29:47.440 --> 29:56.160
817
+ with pedestrians, you need to be able to do basic computation based on our human desires
818
+
819
+ 29:56.160 --> 29:57.160
820
+ and thoughts.
821
+
822
+ 29:57.160 --> 30:02.280
823
+ And if you sort of, yeah, if you look at the dog, the dog clearly knows, I mean, I'm
824
+
825
+ 30:02.280 --> 30:07.340
826
+ not the dog owner, but I have friends who have dogs, the dogs clearly know what the
827
+
828
+ 30:07.340 --> 30:11.400
829
+ humans around them are going to do, or at least they have a model of what those humans
830
+
831
+ 30:11.400 --> 30:14.160
832
+ are going to do and they learn.
833
+
834
+ 30:14.160 --> 30:19.060
835
+ Some dogs know when you're going out and they want to go out with you, they're sad when
836
+
837
+ 30:19.060 --> 30:26.080
838
+ you leave them alone, they cry, they're afraid because they were mistreated when they were
839
+
840
+ 30:26.080 --> 30:31.040
841
+ younger.
842
+
843
+ 30:31.040 --> 30:39.280
844
+ We don't assign sort of consciousness to dogs, or at least not all that much, but I also
845
+
846
+ 30:39.280 --> 30:42.500
847
+ don't think they have none of that.
848
+
849
+ 30:42.500 --> 30:50.360
850
+ So I think it's consciousness and intelligence are not all or nothing.
851
+
852
+ 30:50.360 --> 30:52.780
853
+ The spectrum is really interesting.
854
+
855
+ 30:52.780 --> 30:58.760
856
+ But in returning to programming languages and the way we think about building these
857
+
858
+ 30:58.760 --> 31:03.260
859
+ kinds of things, about building intelligence, building consciousness, building artificial
860
+
861
+ 31:03.260 --> 31:04.260
862
+ beings.
863
+
864
+ 31:04.260 --> 31:10.920
865
+ So I think one of the exciting ideas came in the 17th century and with Leibniz, Hobbes,
866
+
867
+ 31:10.920 --> 31:18.520
868
+ Descartes, where there's this feeling that you can convert all thought, all reasoning,
869
+
870
+ 31:18.520 --> 31:24.480
871
+ all the thing that we find very special in our brains, you can convert all of that into
872
+
873
+ 31:24.480 --> 31:25.480
874
+ logic.
875
+
876
+ 31:25.480 --> 31:30.400
877
+ So you can formalize it, formal reasoning, and then once you formalize everything, all
878
+
879
+ 31:30.400 --> 31:34.400
880
+ of knowledge, then you can just calculate and that's what we're doing with our brains
881
+
882
+ 31:34.400 --> 31:35.400
883
+ is we're calculating.
884
+
885
+ 31:35.400 --> 31:40.240
886
+ So there's this whole idea that this is possible, that this we can actually program.
887
+
888
+ 31:40.240 --> 31:46.520
889
+ But they weren't aware of the concept of pattern matching in the sense that we are aware of
890
+
891
+ 31:46.520 --> 31:47.640
892
+ it now.
893
+
894
+ 31:47.640 --> 31:57.640
895
+ They sort of thought they had discovered incredible bits of mathematics like Newton's calculus
896
+
897
+ 31:57.640 --> 32:06.840
898
+ and their sort of idealism, their sort of extension of what they could do with logic
899
+
900
+ 32:06.840 --> 32:18.000
901
+ and math sort of went along those lines and they thought there's like, yeah, logic.
902
+
903
+ 32:18.000 --> 32:22.020
904
+ There's like a bunch of rules and a bunch of input.
905
+
906
+ 32:22.020 --> 32:28.600
907
+ They didn't realize that how you recognize a face is not just a bunch of rules but is
908
+
909
+ 32:28.600 --> 32:39.160
910
+ a shit ton of data plus a circuit that sort of interprets the visual clues and the context
911
+
912
+ 32:39.160 --> 32:49.400
913
+ and everything else and somehow can massively parallel pattern match against stored rules.
914
+
915
+ 32:49.400 --> 32:56.320
916
+ I mean, if I see you tomorrow here in front of the Dropbox office, I might recognize you.
917
+
918
+ 32:56.320 --> 33:01.320
919
+ Even if I'm wearing a different shirt, yeah, but if I see you tomorrow in a coffee shop
920
+
921
+ 33:01.320 --> 33:06.640
922
+ in Belmont, I might have no idea that it was you or on the beach or whatever.
923
+
924
+ 33:06.640 --> 33:10.160
925
+ I make those kind of mistakes myself all the time.
926
+
927
+ 33:10.160 --> 33:16.320
928
+ I see someone that I only know as like, oh, this person is a colleague of my wife's and
929
+
930
+ 33:16.320 --> 33:20.860
931
+ then I see them at the movies and I didn't recognize them.
932
+
933
+ 33:20.860 --> 33:29.320
934
+ But do you see those, you call it pattern matching, do you see that rules is unable
935
+
936
+ 33:29.320 --> 33:32.380
937
+ to encode that?
938
+
939
+ 33:32.380 --> 33:36.320
940
+ Everything you see, all the pieces of information you look around this room, I'm wearing a black
941
+
942
+ 33:36.320 --> 33:41.720
943
+ shirt, I have a certain height, I'm a human, all these, there's probably tens of thousands
944
+
945
+ 33:41.720 --> 33:45.680
946
+ of facts you pick up moment by moment about this scene.
947
+
948
+ 33:45.680 --> 33:50.000
949
+ You take them for granted and you aggregate them together to understand the scene.
950
+
951
+ 33:50.000 --> 33:53.800
952
+ You don't think all of that could be encoded to where at the end of the day, you can just
953
+
954
+ 33:53.800 --> 33:57.440
955
+ put it all on the table and calculate?
956
+
957
+ 33:57.440 --> 33:58.840
958
+ I don't know what that means.
959
+
960
+ 33:58.840 --> 34:08.680
961
+ I mean, yes, in the sense that there is no actual magic there, but there are enough layers
962
+
963
+ 34:08.680 --> 34:17.640
964
+ of abstraction from the facts as they enter my eyes and my ears to the understanding of
965
+
966
+ 34:17.640 --> 34:29.240
967
+ the scene that I don't think that AI has really covered enough of that distance.
968
+
969
+ 34:29.240 --> 34:37.800
970
+ It's like if you take a human body and you realize it's built out of atoms, well, that
971
+
972
+ 34:37.800 --> 34:41.960
973
+ is a uselessly reductionist view, right?
974
+
975
+ 34:41.960 --> 34:46.380
976
+ The body is built out of organs, the organs are built out of cells, the cells are built
977
+
978
+ 34:46.380 --> 34:53.240
979
+ out of proteins, the proteins are built out of amino acids, the amino acids are built
980
+
981
+ 34:53.240 --> 34:58.040
982
+ out of atoms and then you get to quantum mechanics.
983
+
984
+ 34:58.040 --> 34:59.920
985
+ So that's a very pragmatic view.
986
+
987
+ 34:59.920 --> 35:03.720
988
+ I mean, obviously as an engineer, I agree with that kind of view, but you also have
989
+
990
+ 35:03.720 --> 35:13.160
991
+ to consider the Sam Harris view of, well, intelligence is just information processing.
992
+
993
+ 35:13.160 --> 35:17.320
994
+ Like you said, you take in sensory information, you do some stuff with it and you come up
995
+
996
+ 35:17.320 --> 35:20.760
997
+ with actions that are intelligent.
998
+
999
+ 35:20.760 --> 35:22.480
1000
+ That makes it sound so easy.
1001
+
1002
+ 35:22.480 --> 35:24.280
1003
+ I don't know who Sam Harris is.
1004
+
1005
+ 35:24.280 --> 35:26.400
1006
+ Oh, well, it's a philosopher.
1007
+
1008
+ 35:26.400 --> 35:29.680
1009
+ So like this is how philosophers often think, right?
1010
+
1011
+ 35:29.680 --> 35:33.760
1012
+ And essentially that's what Descartes was, is wait a minute, if there is, like you said,
1013
+
1014
+ 35:33.760 --> 35:39.320
1015
+ no magic, so he basically says it doesn't appear like there's any magic, but we know
1016
+
1017
+ 35:39.320 --> 35:44.280
1018
+ so little about it that it might as well be magic.
1019
+
1020
+ 35:44.280 --> 35:47.800
1021
+ So just because we know that we're made of atoms, just because we know we're made
1022
+
1023
+ 35:47.800 --> 35:53.280
1024
+ of organs, the fact that we know very little how to get from the atoms to organs in a way
1025
+
1026
+ 35:53.280 --> 36:00.400
1027
+ that's recreatable means that you shouldn't get too excited just yet about the fact that
1028
+
1029
+ 36:00.400 --> 36:02.240
1030
+ you figured out that we're made of atoms.
1031
+
1032
+ 36:02.240 --> 36:11.920
1033
+ Right, and the same about taking facts as our sensory organs take them in and turning
1034
+
1035
+ 36:11.920 --> 36:19.820
1036
+ that into reasons and actions, that sort of, there are a lot of abstractions that we haven't
1037
+
1038
+ 36:19.820 --> 36:23.960
1039
+ quite figured out how to deal with those.
1040
+
1041
+ 36:23.960 --> 36:37.440
1042
+ I mean, sometimes, I don't know if I can go on a tangent or not, so if I take a simple
1043
+
1044
+ 36:37.440 --> 36:45.640
1045
+ program that parses, say I have a compiler that parses a program, in a sense the input
1046
+
1047
+ 36:45.640 --> 36:55.640
1048
+ routine of that compiler, of that parser, is a sensing organ, and it builds up a mighty
1049
+
1050
+ 36:55.640 --> 37:01.960
1051
+ complicated internal representation of the program it just saw, it doesn't just have
1052
+
1053
+ 37:01.960 --> 37:08.200
1054
+ a linear sequence of bytes representing the text of the program anymore, it has an abstract
1055
+
1056
+ 37:08.200 --> 37:15.480
1057
+ syntax tree, and I don't know how many of your viewers or listeners are familiar with
1058
+
1059
+ 37:15.480 --> 37:18.680
1060
+ compiler technology, but there's…
1061
+
1062
+ 37:18.680 --> 37:21.880
1063
+ Fewer and fewer these days, right?
1064
+
1065
+ 37:21.880 --> 37:24.920
1066
+ That's also true, probably.
1067
+
1068
+ 37:24.920 --> 37:30.360
1069
+ People want to take a shortcut, but there's sort of, this abstraction is a data structure
1070
+
1071
+ 37:30.360 --> 37:37.480
1072
+ that the compiler then uses to produce outputs that is relevant, like a translation of that
1073
+
1074
+ 37:37.480 --> 37:47.880
1075
+ program to machine code that can be executed by hardware, and then that data structure
1076
+
1077
+ 37:47.880 --> 37:50.600
1078
+ gets thrown away.
1079
+
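As a concrete illustration of the parser-as-sensing-organ analogy, Python's own standard-library front end can be poked at directly with the `ast` module; the toy source string below is just an example, and `ast.dump`'s indent argument needs Python 3.9 or later.

```python
import ast

source = "total = price * quantity + tax"
tree = ast.parse(source)            # the "sense organ": a string of text in,
print(ast.dump(tree, indent=2))     # a structured abstract syntax tree out

code = compile(tree, "<example>", "exec")            # the AST is lowered to bytecode...
exec(code, {"price": 3, "quantity": 4, "tax": 1})
# ...and once the bytecode exists, the tree itself can be thrown away.
```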
1080
+ 37:50.600 --> 38:02.560
1081
+ When a fish or a fly sees, sort of gets visual impulses, I'm sure it also builds up some
1082
+
1083
+ 38:02.560 --> 38:10.000
1084
+ data structure, and for the fly that may be very minimal, a fly may have only a few, I
1085
+
1086
+ 38:10.000 --> 38:17.680
1087
+ mean, in the case of a fly's brain, I could imagine that there are few enough layers of
1088
+
1089
+ 38:17.680 --> 38:24.040
1090
+ abstraction that it's not much more than when it's darker here than it is here, well
1091
+
1092
+ 38:24.040 --> 38:29.880
1093
+ it can sense motion, because a fly sort of responds when you move your arm towards it,
1094
+
1095
+ 38:29.880 --> 38:39.240
1096
+ so clearly its visual processing is intelligent, well, not intelligent, but it has an abstraction
1097
+
1098
+ 38:39.240 --> 38:46.440
1099
+ for motion, and we still have similar things in, but much more complicated in our brains,
1100
+
1101
+ 38:46.440 --> 38:50.400
1102
+ I mean, otherwise you couldn't drive a car if you couldn't, if you didn't have an
1103
+
1104
+ 38:50.400 --> 38:53.480
1105
+ incredibly good abstraction for motion.
1106
+
1107
+ 38:53.480 --> 38:59.160
1108
+ Yeah, in some sense, the same abstraction for motion is probably one of the primary
1109
+
1110
+ 38:59.160 --> 39:05.080
1111
+ sources of our, of information for us, we just know what to do, I think we know what
1112
+
1113
+ 39:05.080 --> 39:08.280
1114
+ to do with that, we've built up other abstractions on top.
1115
+
1116
+ 39:08.280 --> 39:14.120
1117
+ We build much more complicated data structures based on that, and we build more persistent
1118
+
1119
+ 39:14.120 --> 39:20.320
1120
+ data structures, sort of after some processing, some information sort of gets stored in our
1121
+
1122
+ 39:20.320 --> 39:27.240
1123
+ memory pretty much permanently, and is available on recall, I mean, there are some things that
1124
+
1125
+ 39:27.240 --> 39:34.040
1126
+ you sort of, you're conscious that you're remembering it, like, you give me your phone
1127
+
1128
+ 39:34.040 --> 39:39.560
1129
+ number, I, well, at my age I have to write it down, but I could imagine, I could remember
1130
+
1131
+ 39:39.560 --> 39:46.240
1132
+ those seven numbers, or ten digits, and reproduce them in a while, if I sort of repeat them
1133
+
1134
+ 39:46.240 --> 39:53.320
1135
+ to myself a few times, so that's a fairly conscious form of memorization.
1136
+
1137
+ 39:53.320 --> 39:57.800
1138
+ On the other hand, how do I recognize your face, I have no idea.
1139
+
1140
+ 39:57.800 --> 40:04.080
1141
+ My brain has a whole bunch of specialized hardware that knows how to recognize faces,
1142
+
1143
+ 40:04.080 --> 40:10.200
1144
+ I don't know how much of that is sort of coded in our DNA, and how much of that is
1145
+
1146
+ 40:10.200 --> 40:17.960
1147
+ trained over and over between the ages of zero and three, but somehow our brains know
1148
+
1149
+ 40:17.960 --> 40:26.000
1150
+ how to do lots of things like that, that are useful in our interactions with other humans,
1151
+
1152
+ 40:26.000 --> 40:29.880
1153
+ without really being conscious of how it's done anymore.
1154
+
1155
+ 40:29.880 --> 40:36.200
1156
+ Right, so our actual day to day lives, we're operating at the very highest level of abstraction,
1157
+
1158
+ 40:36.200 --> 40:39.760
1159
+ we're just not even conscious of all the little details underlying it.
1160
+
1161
+ 40:39.760 --> 40:43.360
1162
+ There's compilers on top of, it's like turtles on top of turtles, or turtles all the way
1163
+
1164
+ 40:43.360 --> 40:48.200
1165
+ down, there's compilers all the way down, but that's essentially, you say that there's
1166
+
1167
+ 40:48.200 --> 40:54.920
1168
+ no magic, that's what I, what I was trying to get at, I think, is with Descartes started
1169
+
1170
+ 40:54.920 --> 40:59.600
1171
+ this whole train of saying that there's no magic, I mean, there's all this beforehand.
1172
+
1173
+ 40:59.600 --> 41:06.120
1174
+ Well didn't Descartes also have the notion though that the soul and the body were fundamentally
1175
+
1176
+ 41:06.120 --> 41:07.120
1177
+ separate?
1178
+
1179
+ 41:07.120 --> 41:11.800
1180
+ Separate, yeah, I think he had to write in God in there for political reasons, so I don't
1181
+
1182
+ 41:11.800 --> 41:17.880
1183
+ know actually, I'm not a historian, but there's notions in there that all of reasoning, all
1184
+
1185
+ 41:17.880 --> 41:20.120
1186
+ of human thought can be formalized.
1187
+
1188
+ 41:20.120 --> 41:28.480
1189
+ I think that continued in the 20th century with Russell and with Gödel's incompleteness
1190
+
1191
+ 41:28.480 --> 41:33.120
1192
+ theorem, this debate of what are the limits of the things that could be formalized, that's
1193
+
1194
+ 41:33.120 --> 41:37.960
1195
+ where the Turing machine came along, and this exciting idea, I mean, underlying a lot of
1196
+
1197
+ 41:37.960 --> 41:43.160
1198
+ computing that you can do quite a lot with a computer.
1199
+
1200
+ 41:43.160 --> 41:47.640
1201
+ You can encode a lot of the stuff we're talking about in terms of recognizing faces and so
1202
+
1203
+ 41:47.640 --> 41:53.960
1204
+ on, theoretically, in an algorithm that can then run on a computer.
1205
+
1206
+ 41:53.960 --> 42:05.040
1207
+ And in that context, I'd like to ask programming in a philosophical way, what does it mean
1208
+
1209
+ 42:05.040 --> 42:06.480
1210
+ to program a computer?
1211
+
1212
+ 42:06.480 --> 42:13.360
1213
+ So you said you write a Python program or compiled a C++ program that compiles to some
1214
+
1215
+ 42:13.360 --> 42:21.200
1216
+ byte code, it's forming layers, you're programming a layer of abstraction that's higher, how
1217
+
1218
+ 42:21.200 --> 42:24.920
1219
+ do you see programming in that context?
1220
+
1221
+ 42:24.920 --> 42:29.800
1222
+ Can it keep getting higher and higher levels of abstraction?
1223
+
1224
+ 42:29.800 --> 42:35.960
1225
+ I think at some point the higher levels of abstraction will not be called programming
1226
+
1227
+ 42:35.960 --> 42:44.720
1228
+ and they will not resemble what we call programming at the moment.
1229
+
1230
+ 42:44.720 --> 42:52.080
1231
+ There will not be source code, I mean, there will still be source code sort of at a lower
1232
+
1233
+ 42:52.080 --> 42:59.320
1234
+ level of the machine, just like there are still molecules and electrons and sort of
1235
+
1236
+ 42:59.320 --> 43:09.120
1237
+ proteins in our brains, but, and so there's still programming and system administration
1238
+
1239
+ 43:09.120 --> 43:15.960
1240
+ and who knows what, to keep the machine running, but what the machine does is a different level
1241
+
1242
+ 43:15.960 --> 43:23.060
1243
+ of abstraction in a sense, and as far as I understand the way that for the last decade
1244
+
1245
+ 43:23.060 --> 43:28.440
1246
+ or more people have made progress with things like facial recognition or the self driving
1247
+
1248
+ 43:28.440 --> 43:38.200
1249
+ cars is all by endless, endless amounts of training data where at least as a lay person,
1250
+
1251
+ 43:38.200 --> 43:47.420
1252
+ and I feel myself totally as a lay person in that field, it looks like the researchers
1253
+
1254
+ 43:47.420 --> 43:57.400
1255
+ who publish the results don't necessarily know exactly how their algorithms work, and
1256
+
1257
+ 43:57.400 --> 44:04.840
1258
+ I often get upset when I sort of read a sort of a fluff piece about Facebook in the newspaper
1259
+
1260
+ 44:04.840 --> 44:12.680
1261
+ or social networks and they say, well, algorithms, and that's like a totally different interpretation
1262
+
1263
+ 44:12.680 --> 44:19.240
1264
+ of the word algorithm, because for me, the way I was trained or what I learned when I
1265
+
1266
+ 44:19.240 --> 44:25.920
1267
+ was eight or ten years old, an algorithm is a set of rules that you completely understand
1268
+
1269
+ 44:25.920 --> 44:30.720
1270
+ that can be mathematically analyzed and you can prove things.
1271
+
1272
+ 44:30.720 --> 44:37.840
1273
+ You can like prove that the sieve of Eratosthenes produces all prime numbers and only prime
1274
+
1275
+ 44:37.840 --> 44:38.840
1276
+ numbers.
1277
+
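For reference, a minimal Python version of the sieve of Eratosthenes shows why it is the kind of rule set you can read in full, analyze mathematically, and prove correct:

```python
def primes_up_to(n):
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]              # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False      # cross out every multiple of p
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```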
1278
+ 44:38.840 --> 44:39.840
1279
+ Yeah.
1280
+
1281
+ 44:39.840 --> 44:44.360
1282
+ So I don't know if you know who Andrej Karpathy is, I'm afraid not.
1283
+
1284
+ 44:44.360 --> 44:51.980
1285
+ So he's a head of AI at Tesla now, but he was at Stanford before and he has this cheeky
1286
+
1287
+ 44:51.980 --> 44:56.480
1288
+ way of calling this concept software 2.0.
1289
+
1290
+ 44:56.480 --> 45:00.120
1291
+ So let me disentangle that for a second.
1292
+
1293
+ 45:00.120 --> 45:06.080
1294
+ So kind of what you're referring to is the traditional, the algorithm, the concept of
1295
+
1296
+ 45:06.080 --> 45:09.560
1297
+ an algorithm, something that's there, it's clear, you can read it, you understand it,
1298
+
1299
+ 45:09.560 --> 45:14.800
1300
+ you can prove it's functioning as kind of software 1.0.
1301
+
1302
+ 45:14.800 --> 45:21.920
1303
+ And what software 2.0 is, is exactly what you described, which is you have neural networks,
1304
+
1305
+ 45:21.920 --> 45:26.600
1306
+ which is a type of machine learning that you feed a bunch of data and that neural network
1307
+
1308
+ 45:26.600 --> 45:30.200
1309
+ learns to do a function.
1310
+
1311
+ 45:30.200 --> 45:35.220
1312
+ All you specify is the inputs and the outputs you want and you can't look inside.
1313
+
1314
+ 45:35.220 --> 45:37.040
1315
+ You can't analyze it.
1316
+
1317
+ 45:37.040 --> 45:41.920
1318
+ All you can do is train this function to map the inputs to the outputs by giving a lot
1319
+
1320
+ 45:41.920 --> 45:42.920
1321
+ of data.
1322
+
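A deliberately tiny sketch of that contrast, in plain Python: instead of writing down the rule y = 2x + 1, we hand over example input/output pairs and let gradient descent recover the parameters. Real "software 2.0" systems use deep neural networks and vastly more data; this toy is only meant to make the idea concrete.

```python
# "Here's what the function should do" — examples instead of explicit rules.
examples = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in examples:
        err = (w * x + b) - y        # how wrong the current guess is
        w -= 0.01 * err * x          # nudge the parameters to reduce the error
        b -= 0.01 * err

print(round(w, 2), round(b, 2))      # ≈ 2.0 and 1.0, learned rather than written down
```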
1323
+ 45:42.920 --> 45:47.040
1324
+ And that's as programming becomes getting a lot of data.
1325
+
1326
+ 45:47.040 --> 45:48.920
1327
+ That's what programming is.
1328
+
1329
+ 45:48.920 --> 45:52.120
1330
+ Well, that would be programming 2.0.
1331
+
1332
+ 45:52.120 --> 45:53.800
1333
+ To programming 2.0.
1334
+
1335
+ 45:53.800 --> 45:55.600
1336
+ I wouldn't call that programming.
1337
+
1338
+ 45:55.600 --> 45:57.480
1339
+ It's just a different activity.
1340
+
1341
+ 45:57.480 --> 46:02.640
1342
+ Just like building organs out of cells is not called chemistry.
1343
+
1344
+ 46:02.640 --> 46:09.680
1345
+ Well, so let's just step back and think sort of more generally, of course.
1346
+
1347
+ 46:09.680 --> 46:18.080
1348
+ But you know, it's like as a parent teaching your kids, things can be called programming.
1349
+
1350
+ 46:18.080 --> 46:22.720
1351
+ In that same sense, that's how programming is being used.
1352
+
1353
+ 46:22.720 --> 46:27.080
1354
+ You're providing them data, examples, use cases.
1355
+
1356
+ 46:27.080 --> 46:36.680
1357
+ So imagine writing a function not by, not with for loops and clearly readable text,
1358
+
1359
+ 46:36.680 --> 46:42.760
1360
+ but more saying, well, here's a lot of examples of what this function should take.
1361
+
1362
+ 46:42.760 --> 46:47.860
1363
+ And here's a lot of examples of when it takes those functions, it should do this.
1364
+
1365
+ 46:47.860 --> 46:50.280
1366
+ And then figure out the rest.
1367
+
1368
+ 46:50.280 --> 46:52.640
1369
+ So that's the 2.0 concept.
1370
+
1371
+ 46:52.640 --> 46:58.560
1372
+ And so the question I have for you is like, it's a very fuzzy way.
1373
+
1374
+ 46:58.560 --> 47:01.680
1375
+ This is the reality of a lot of these pattern recognition systems and so on.
1376
+
1377
+ 47:01.680 --> 47:05.400
1378
+ It's a fuzzy way of quote unquote programming.
1379
+
1380
+ 47:05.400 --> 47:09.160
1381
+ What do you think about this kind of world?
1382
+
1383
+ 47:09.160 --> 47:13.640
1384
+ Should it be called something totally different than programming?
1385
+
1386
+ 47:13.640 --> 47:21.000
1387
+ If you're a software engineer, does that mean you're designing systems that are very, can
1388
+
1389
+ 47:21.000 --> 47:28.140
1390
+ be systematically tested, evaluated, they have a very specific specification and then this
1391
+
1392
+ 47:28.140 --> 47:33.520
1393
+ other fuzzy software 2.0 world, machine learning world, that's something else totally?
1394
+
1395
+ 47:33.520 --> 47:41.000
1396
+ Or is there some intermixing that's possible?
1397
+
1398
+ 47:41.000 --> 47:48.600
1399
+ Well the question is probably only being asked because we don't quite know what that software
1400
+
1401
+ 47:48.600 --> 47:51.400
1402
+ 2.0 actually is.
1403
+
1404
+ 47:51.400 --> 48:02.960
1405
+ And I think there is a truism that every task that AI has tackled in the past, at some point
1406
+
1407
+ 48:02.960 --> 48:09.160
1408
+ we realized how it was done and then it was no longer considered part of artificial intelligence
1409
+
1410
+ 48:09.160 --> 48:15.200
1411
+ because it was no longer necessary to use that term.
1412
+
1413
+ 48:15.200 --> 48:21.600
1414
+ It was just, oh now we know how to do this.
1415
+
1416
+ 48:21.600 --> 48:30.320
1417
+ And a new field of science or engineering has been developed and I don't know if sort
1418
+
1419
+ 48:30.320 --> 48:39.000
1420
+ of every form of learning or sort of controlling computer systems should always be called programming.
1421
+
1422
+ 48:39.000 --> 48:43.720
1423
+ So I don't know, maybe I'm focused too much on the terminology.
1424
+
1425
+ 48:43.720 --> 48:56.200
1426
+ But I expect that there just will be different concepts where people with sort of different
1427
+
1428
+ 48:56.200 --> 49:07.920
1429
+ education and a different model of what they're trying to do will develop those concepts.
1430
+
1431
+ 49:07.920 --> 49:17.240
1432
+ I guess if you could comment on another way to put this concept is, I think the kind of
1433
+
1434
+ 49:17.240 --> 49:23.480
1435
+ functions that neural networks provide is things as opposed to being able to upfront
1436
+
1437
+ 49:23.480 --> 49:28.720
1438
+ prove that this should work for all cases you throw at it.
1439
+
1440
+ 49:28.720 --> 49:32.320
1441
+ All you're able, it's the worst case analysis versus average case analysis.
1442
+
1443
+ 49:32.320 --> 49:39.800
1444
+ All you're able to say is it seems on everything we've tested to work 99.9% of the time, but
1445
+
1446
+ 49:39.800 --> 49:44.160
1447
+ we can't guarantee it and it fails in unexpected ways.
1448
+
1449
+ 49:44.160 --> 49:48.080
1450
+ We can't even give you examples of how it fails in unexpected ways, but it's like really
1451
+
1452
+ 49:48.080 --> 49:50.120
1453
+ good most of the time.
1454
+
1455
+ 49:50.120 --> 50:00.720
1456
+ Is there no room for that in current ways we think about programming?
1457
+
1458
+ 50:00.720 --> 50:11.080
1459
+ programming 1.0 is actually sort of getting to that point too, where the sort of the ideal
1460
+
1461
+ 50:11.080 --> 50:21.120
1462
+ of a bug free program has been abandoned long ago by most software developers.
1463
+
1464
+ 50:21.120 --> 50:30.120
1465
+ We only care about bugs that manifest themselves often enough to be annoying.
1466
+
1467
+ 50:30.120 --> 50:40.680
1468
+ And we're willing to take the occasional crash or outage or incorrect result for granted
1469
+
1470
+ 50:40.680 --> 50:47.600
1471
+ because we can't possibly, we don't have enough programmers to make all the code bug free
1472
+
1473
+ 50:47.600 --> 50:50.200
1474
+ and it would be an incredibly tedious business.
1475
+
1476
+ 50:50.200 --> 50:56.320
1477
+ And if you try to throw formal methods at it, it becomes even more tedious.
1478
+
1479
+ 50:56.320 --> 51:05.520
1480
+ So every once in a while the user clicks on a link and somehow they get an error and the
1481
+
1482
+ 51:05.520 --> 51:07.360
1483
+ average user doesn't panic.
1484
+
1485
+ 51:07.360 --> 51:14.840
1486
+ They just click again and see if it works better the second time, which often magically
1487
+
1488
+ 51:14.840 --> 51:21.600
1489
+ it does, or they go up and they try some other way of performing their tasks.
1490
+
1491
+ 51:21.600 --> 51:29.880
1492
+ So that's sort of an end to end recovery mechanism and inside systems there is all
1493
+
1494
+ 51:29.880 --> 51:39.120
1495
+ sorts of retries and timeouts and fallbacks and I imagine that that sort of biological
1496
+
1497
+ 51:39.120 --> 51:46.320
1498
+ systems are even more full of that because otherwise they wouldn't survive.
1499
+
1500
+ 51:46.320 --> 51:54.160
1501
+ Do you think programming should be taught and thought of as exactly what you just said?
1502
+
1503
+ 51:54.160 --> 52:01.560
1504
+ I come from this kind of, you're always denying that fact always.
1505
+
1506
+ 52:01.560 --> 52:12.680
1507
+ In sort of basic programming education, the sort of the programs you're having students
1508
+
1509
+ 52:12.680 --> 52:23.480
1510
+ write are so small and simple that if there is a bug you can always find it and fix it.
1511
+
1512
+ 52:23.480 --> 52:29.720
1513
+ Because the sort of programming as it's being taught in some, even elementary, middle schools,
1514
+
1515
+ 52:29.720 --> 52:36.680
1516
+ in high school, introduction to programming classes in college typically, it's programming
1517
+
1518
+ 52:36.680 --> 52:38.920
1519
+ in the small.
1520
+
1521
+ 52:38.920 --> 52:47.560
1522
+ Very few classes sort of actually teach software engineering, building large systems.
1523
+
1524
+ 52:47.560 --> 52:51.360
1525
+ Every summer here at Dropbox we have a large number of interns.
1526
+
1527
+ 52:51.360 --> 52:56.720
1528
+ Every tech company on the West Coast has the same thing.
1529
+
1530
+ 52:56.720 --> 53:02.520
1531
+ These interns are always amazed because this is the first time in their life that they
1532
+
1533
+ 53:02.520 --> 53:12.920
1534
+ see what goes on in a really large software development environment.
1535
+
1536
+ 53:12.920 --> 53:20.280
1537
+ Everything they've learned in college was almost always about a much smaller scale and
1538
+
1539
+ 53:20.280 --> 53:27.840
1540
+ somehow that difference in scale makes a qualitative difference in how you do things and how you
1541
+
1542
+ 53:27.840 --> 53:29.600
1543
+ think about it.
1544
+
1545
+ 53:29.600 --> 53:36.300
1546
+ If you then take a few steps back into decades, 70s and 80s, when you were first thinking
1547
+
1548
+ 53:36.300 --> 53:41.840
1549
+ about Python or just that world of programming languages, did you ever think that there would
1550
+
1551
+ 53:41.840 --> 53:46.720
1552
+ be systems as large as underlying Google, Facebook, and Dropbox?
1553
+
1554
+ 53:46.720 --> 53:51.440
1555
+ Did you, when you were thinking about Python?
1556
+
1557
+ 53:51.440 --> 53:57.520
1558
+ I was actually always caught by surprise by sort of this, yeah, pretty much every stage
1559
+
1560
+ 53:57.520 --> 53:59.680
1561
+ of computing.
1562
+
1563
+ 53:59.680 --> 54:07.280
1564
+ So maybe just because you've spoken in other interviews, but I think the evolution of programming
1565
+
1566
+ 54:07.280 --> 54:13.080
1567
+ languages are fascinating and it's especially because it leads from my perspective towards
1568
+
1569
+ 54:13.080 --> 54:15.640
1570
+ greater and greater degrees of intelligence.
1571
+
1572
+ 54:15.640 --> 54:21.880
1573
+ I learned the first programming language I played with in Russia was with the Turtle
1574
+
1575
+ 54:21.880 --> 54:22.880
1576
+ logo.
1577
+
1578
+ 54:22.880 --> 54:24.840
1579
+ Logo, yeah.
1580
+
1581
+ 54:24.840 --> 54:29.960
1582
+ And if you look, I just have a list of programming languages, all of which I've now played with
1583
+
1584
+ 54:29.960 --> 54:30.960
1585
+ a little bit.
1586
+
1587
+ 54:30.960 --> 54:36.640
1588
+ I mean, they're all beautiful in different ways from Fortran, COBOL, Lisp, Algol 60,
1589
+
1590
+ 54:36.640 --> 54:46.160
1591
+ Basic, Logo again, C, as a few, the object oriented came along in the 60s, Simula, Pascal,
1592
+
1593
+ 54:46.160 --> 54:47.560
1594
+ Smalltalk.
1595
+
1596
+ 54:47.560 --> 54:48.560
1597
+ All of that leads.
1598
+
1599
+ 54:48.560 --> 54:49.560
1600
+ They're all the classics.
1601
+
1602
+ 54:49.560 --> 54:50.560
1603
+ The classics.
1604
+
1605
+ 54:50.560 --> 54:51.560
1606
+ Yeah.
1607
+
1608
+ 54:51.560 --> 54:52.560
1609
+ The classic hits, right?
1610
+
1611
+ 54:52.560 --> 54:58.280
1612
+ Scheme, that's built on top of Lisp.
1613
+
1614
+ 54:58.280 --> 55:05.900
1615
+ On the database side, SQL, C++, and all of that leads up to Python, Pascal too, and that's
1616
+
1617
+ 55:05.900 --> 55:10.960
1618
+ before Python, MATLAB, these kind of different communities, different languages.
1619
+
1620
+ 55:10.960 --> 55:13.240
1621
+ So can you talk about that world?
1622
+
1623
+ 55:13.240 --> 55:18.680
1624
+ I know that sort of Python came out of ABC, which I actually never knew that language.
1625
+
1626
+ 55:18.680 --> 55:24.400
1627
+ I just, having researched this conversation, went back to ABC and it looks remarkably,
1628
+
1629
+ 55:24.400 --> 55:31.240
1630
+ it has a lot of annoying qualities, but underneath those, like all caps and so on, but underneath
1631
+
1632
+ 55:31.240 --> 55:35.720
1633
+ that, there's elements of Python that are quite, they're already there.
1634
+
1635
+ 55:35.720 --> 55:37.540
1636
+ That's where I got all the good stuff.
1637
+
1638
+ 55:37.540 --> 55:38.540
1639
+ All the good stuff.
1640
+
1641
+ 55:38.540 --> 55:41.580
1642
+ So, but in that world, you're swimming in these programming languages, were you focused on
1643
+
1644
+ 55:41.580 --> 55:48.080
1645
+ just the good stuff in your specific circle, or did you have a sense of what is everyone
1646
+
1647
+ 55:48.080 --> 55:49.080
1648
+ chasing?
1649
+
1650
+ 55:49.080 --> 55:57.000
1651
+ You said that every programming language is built to scratch an itch.
1652
+
1653
+ 55:57.000 --> 55:59.920
1654
+ Were you aware of all the itches in the community?
1655
+
1656
+ 55:59.920 --> 56:05.080
1657
+ And if not, or if yes, I mean, what itch were you trying to scratch with Python?
1658
+
1659
+ 56:05.080 --> 56:12.040
1660
+ Well, I'm glad I wasn't aware of all the itches because I would probably not have been able
1661
+
1662
+ 56:12.040 --> 56:14.040
1663
+ to do anything.
1664
+
1665
+ 56:14.040 --> 56:19.760
1666
+ I mean, if you're trying to solve every problem at once, you'll solve nothing.
1667
+
1668
+ 56:19.760 --> 56:23.880
1669
+ Well, yeah, it's too overwhelming.
1670
+
1671
+ 56:23.880 --> 56:28.360
1672
+ And so I had a very, very focused problem.
1673
+
1674
+ 56:28.360 --> 56:41.880
1675
+ I wanted a programming language that sat somewhere in between shell scripting and C. And now,
1676
+
1677
+ 56:41.880 --> 56:48.720
1678
+ arguably, there is like, one is higher level, one is lower level.
1679
+
1680
+ 56:48.720 --> 56:56.760
1681
+ And Python is sort of a language of an intermediate level, although it's still pretty much at
1682
+
1683
+ 56:56.760 --> 57:00.560
1684
+ the high level end.
1685
+
1686
+ 57:00.560 --> 57:11.200
1687
+ I was thinking about much more about, I want a tool that I can use to be more productive
1688
+
1689
+ 57:11.200 --> 57:16.640
1690
+ as a programmer in a very specific environment.
1691
+
1692
+ 57:16.640 --> 57:22.280
1693
+ And I also had given myself a time budget for the development of the tool.
1694
+
1695
+ 57:22.280 --> 57:29.340
1696
+ And that was sort of about three months for both the design, like thinking through what
1697
+
1698
+ 57:29.340 --> 57:38.900
1699
+ are all the features of the language syntactically and semantically, and how do I implement the
1700
+
1701
+ 57:38.900 --> 57:43.680
1702
+ whole pipeline from parsing the source code to executing it.
1703
+
1704
+ 57:43.680 --> 57:51.440
1705
+ So I think both with the timeline and the goals, it seems like productivity was at the
1706
+
1707
+ 57:51.440 --> 57:54.040
1708
+ core of it as a goal.
1709
+
1710
+ 57:54.040 --> 58:01.280
1711
+ So like, for me in the 90s, and the first decade of the 21st century, I was always doing
1712
+
1713
+ 58:01.280 --> 58:07.620
1714
+ machine learning, AI programming for my research was always in C++.
1715
+
1716
+ 58:07.620 --> 58:14.240
1717
+ And then the other people who are a little more mechanical engineering, electrical engineering,
1718
+
1719
+ 58:14.240 --> 58:15.240
1720
+ are MATLABby.
1721
+
1722
+ 58:15.240 --> 58:18.520
1723
+ They're a little bit more MATLAB focused.
1724
+
1725
+ 58:18.520 --> 58:21.200
1726
+ Those are the world, and maybe a little bit Java too.
1727
+
1728
+ 58:21.200 --> 58:29.160
1729
+ But people who are more interested in emphasizing the object oriented nature of things.
1730
+
1731
+ 58:29.160 --> 58:34.920
1732
+ So within the last 10 years or so, especially with the oncoming of neural networks and these
1733
+
1734
+ 58:34.920 --> 58:41.360
1735
+ packages that are built on Python to interface with neural networks, I switched to Python
1736
+
1737
+ 58:41.360 --> 58:47.120
1738
+ and it's just, I've noticed a significant boost that I can't exactly, because I don't
1739
+
1740
+ 58:47.120 --> 58:52.840
1741
+ think about it, but I can't exactly put into words why I'm just much, much more productive.
1742
+
1743
+ 58:52.840 --> 58:56.400
1744
+ Just being able to get the job done much, much faster.
1745
+
1746
+ 58:56.400 --> 59:01.880
1747
+ So how do you think, whatever that qualitative difference is, I don't know if it's quantitative,
1748
+
1749
+ 59:01.880 --> 59:07.280
1750
+ it could be just a feeling, I don't know if I'm actually more productive, but how
1751
+
1752
+ 59:07.280 --> 59:08.280
1753
+ do you think about...
1754
+
1755
+ 59:08.280 --> 59:09.280
1756
+ You probably are.
1757
+
1758
+ 59:09.280 --> 59:10.280
1759
+ Yeah.
1760
+
1761
+ 59:10.280 --> 59:11.880
1762
+ Well, that's right.
1763
+
1764
+ 59:11.880 --> 59:15.400
1765
+ I think there's elements, let me just speak to one aspect that I think that was affecting
1766
+
1767
+ 59:15.400 --> 59:26.160
1768
+ my productivity is C++ was, I really enjoyed creating performant code and creating a beautiful
1769
+
1770
+ 59:26.160 --> 59:31.000
1771
+ structure where everything that, you know, this kind of going into this, especially with
1772
+
1773
+ 59:31.000 --> 59:37.080
1774
+ the newer and newer standards of templated programming of just really creating this beautiful
1775
+
1776
+ 59:37.080 --> 59:42.000
1777
+ formal structure that I found myself spending most of my time doing that as opposed to getting
1778
+
1779
+ 59:42.000 --> 59:47.520
1780
+ it, parsing a file and extracting a few keywords or whatever the task was trying to do.
1781
+
1782
+ 59:47.520 --> 59:49.980
1783
+ So what is it about Python?
1784
+
1785
+ 59:49.980 --> 59:54.520
1786
+ How do you think of productivity in general as you were designing it now, sort of through
1787
+
1788
+ 59:54.520 --> 1:00:00.120
1789
+ the decades, last three decades, what do you think it means to be a productive programmer?
1790
+
1791
+ 1:00:00.120 --> 1:00:03.560
1792
+ And how did you try to design it into the language?
1793
+
1794
+ 1:00:03.560 --> 1:00:10.400
1795
+ There are different tasks and as a programmer, it's useful to have different tools available
1796
+
1797
+ 1:00:10.400 --> 1:00:13.940
1798
+ that sort of are suitable for different tasks.
1799
+
1800
+ 1:00:13.940 --> 1:00:25.600
1801
+ So I still write C code, I still write shell code, but I write most of my things in Python.
1802
+
1803
+ 1:00:25.600 --> 1:00:33.000
1804
+ Why do I still use those other languages, because sometimes the task just demands it.
1805
+
1806
+ 1:00:33.000 --> 1:00:39.000
1807
+ And well, I would say most of the time the task actually demands a certain language because
1808
+
1809
+ 1:00:39.000 --> 1:00:45.600
1810
+ the task is not write a program that solves problem X from scratch, but it's more like
1811
+
1812
+ 1:00:45.600 --> 1:00:56.680
1813
+ fix a bug in existing program X or add a small feature to an existing large program.
1814
+
1815
+ 1:00:56.680 --> 1:01:10.160
1816
+ But even if you're not constrained in your choice of language by context like that, there
1817
+
1818
+ 1:01:10.160 --> 1:01:21.360
1819
+ is still the fact that if you write it in a certain language, then you have this balance
1820
+
1821
+ 1:01:21.360 --> 1:01:31.840
1822
+ between how long does it take you to write the code and how long does the code run?
1823
+
1824
+ 1:01:31.840 --> 1:01:42.760
1825
+ And when you're in the phase of exploring solutions, you often spend much more time
1826
+
1827
+ 1:01:42.760 --> 1:01:50.720
1828
+ writing the code than running it because every time you've run it, you see that the output
1829
+
1830
+ 1:01:50.720 --> 1:01:58.480
1831
+ is not quite what you wanted and you spend some more time coding.
1832
+
1833
+ 1:01:58.480 --> 1:02:06.760
1834
+ And a language like Python just makes that iteration much faster because there are fewer
1835
+
1836
+ 1:02:06.760 --> 1:02:19.480
1837
+ details that you have to get right before your program compiles and runs.
1838
+
1839
+ 1:02:19.480 --> 1:02:26.400
1840
+ There are libraries that do all sorts of stuff for you, so you can sort of very quickly take
1841
+
1842
+ 1:02:26.400 --> 1:02:36.320
1843
+ a bunch of existing components, put them together, and get your prototype application running.
1844
+
1845
+ 1:02:36.320 --> 1:02:42.860
1846
+ Just like when I was building electronics, I was using a breadboard most of the time,
1847
+
1848
+ 1:02:42.860 --> 1:02:51.320
1849
+ so I had this sprawled out circuit that if you shook it, it would stop working because it
1850
+
1851
+ 1:02:51.320 --> 1:02:58.800
1852
+ was not put together very well, but it functioned and all I wanted was to see that it worked
1853
+
1854
+ 1:02:58.800 --> 1:03:05.000
1855
+ and then move on to the next schematic or design or add something to it.
1856
+
1857
+ 1:03:05.000 --> 1:03:10.500
1858
+ Once you've sort of figured out, oh, this is the perfect design for my radio or light
1859
+
1860
+ 1:03:10.500 --> 1:03:15.800
1861
+ sensor or whatever, then you can say, okay, how do we design a PCB for this?
1862
+
1863
+ 1:03:15.800 --> 1:03:19.920
1864
+ How do we solder the components in a small space?
1865
+
1866
+ 1:03:19.920 --> 1:03:32.840
1867
+ How do we make it so that it is robust against, say, voltage fluctuations or mechanical disruption?
1868
+
1869
+ 1:03:32.840 --> 1:03:37.320
1870
+ I know nothing about that when it comes to designing electronics, but I know a lot about
1871
+
1872
+ 1:03:37.320 --> 1:03:40.400
1873
+ that when it comes to writing code.
1874
+
1875
+ 1:03:40.400 --> 1:03:46.080
1876
+ So the initial steps are efficient, fast, and there's not much stuff that gets in the
1877
+
1878
+ 1:03:46.080 --> 1:03:56.680
1879
+ way, but you're kind of describing, like Darwin described the evolution of species, right?
1880
+
1881
+ 1:03:56.680 --> 1:04:00.520
1882
+ You're observing of what is true about Python.
1883
+
1884
+ 1:04:00.520 --> 1:04:07.800
1885
+ Now if you take a step back, if the act of creating languages is art and you had three
1886
+
1887
+ 1:04:07.800 --> 1:04:15.640
1888
+ months to do it, initial steps, so you just specified a bunch of goals, sort of things
1889
+
1890
+ 1:04:15.640 --> 1:04:19.400
1891
+ that you observe about Python, perhaps you had those goals, but how do you create the
1892
+
1893
+ 1:04:19.400 --> 1:04:25.600
1894
+ rules, the syntactic structure, the features that result in those?
1895
+
1896
+ 1:04:25.600 --> 1:04:29.880
1897
+ So I have in the beginning and I have follow up questions about through the evolution of
1898
+
1899
+ 1:04:29.880 --> 1:04:35.440
1900
+ Python too, but in the very beginning when you were sitting there creating the lexical
1901
+
1902
+ 1:04:35.440 --> 1:04:37.440
1903
+ analyzer or whatever.
1904
+
1905
+ 1:04:37.440 --> 1:04:47.240
1906
+ Python was still a big part of it because I sort of, I said to myself, I don't want
1907
+
1908
+ 1:04:47.240 --> 1:04:53.640
1909
+ to have to design everything from scratch, I'm going to borrow features from other languages
1910
+
1911
+ 1:04:53.640 --> 1:04:54.640
1912
+ that I like.
1913
+
1914
+ 1:04:54.640 --> 1:04:55.640
1915
+ Oh, interesting.
1916
+
1917
+ 1:04:55.640 --> 1:04:58.360
1918
+ So you basically, exactly, you first observe what you like.
1919
+
1920
+ 1:04:58.360 --> 1:05:05.240
1921
+ Yeah, and so that's why if you're 17 years old and you want to sort of create a programming
1922
+
1923
+ 1:05:05.240 --> 1:05:11.600
1924
+ language, you're not going to be very successful at it because you have no experience with
1925
+
1926
+ 1:05:11.600 --> 1:05:24.300
1927
+ other languages, whereas I was in my, let's say mid 30s, I had written parsers before,
1928
+
1929
+ 1:05:24.300 --> 1:05:30.880
1930
+ so I had worked on the implementation of ABC, I had spent years debating the design of ABC
1931
+
1932
+ 1:05:30.880 --> 1:05:37.520
1933
+ with its authors, with its designers, I had nothing to do with the design, it was designed
1934
+
1935
+ 1:05:37.520 --> 1:05:42.080
1936
+ fully as it ended up being implemented when I joined the team.
1937
+
1938
+ 1:05:42.080 --> 1:05:51.440
1939
+ But so you borrow ideas and concepts and very concrete sort of local rules from different
1940
+
1941
+ 1:05:51.440 --> 1:05:58.920
1942
+ languages like the indentation and certain other syntactic features from ABC, but I chose
1943
+
1944
+ 1:05:58.920 --> 1:06:07.960
1945
+ to borrow string literals and how numbers work from C and various other things.
1946
+
1947
+ 1:06:07.960 --> 1:06:13.800
1948
+ So in then, if you take that further, so yet you've had this funny sounding, but I think
1949
+
1950
+ 1:06:13.800 --> 1:06:21.000
1951
+ surprisingly accurate and at least practical title of benevolent dictator for life for
1952
+
1953
+ 1:06:21.000 --> 1:06:25.240
1954
+ quite, you know, for the last three decades or whatever, or no, not the actual title,
1955
+
1956
+ 1:06:25.240 --> 1:06:27.940
1957
+ but functionally speaking.
1958
+
1959
+ 1:06:27.940 --> 1:06:34.280
1960
+ So you had to make decisions, design decisions.
1961
+
1962
+ 1:06:34.280 --> 1:06:41.960
1963
+ Can you maybe, let's take Python 2, so releasing Python 3 as an example.
1964
+
1965
+ 1:06:41.960 --> 1:06:47.240
1966
+ It's not backward compatible to Python 2 in ways that a lot of people know.
1967
+
1968
+ 1:06:47.240 --> 1:06:50.640
1969
+ So what was that deliberation, discussion, decision like?
1970
+
1971
+ 1:06:50.640 --> 1:06:51.640
1972
+ Yeah.
1973
+
1974
+ 1:06:51.640 --> 1:06:54.520
1975
+ What was the psychology of that experience?
1976
+
1977
+ 1:06:54.520 --> 1:06:58.520
1978
+ Do you regret any aspects of how that experience undergone that?
1979
+
1980
+ 1:06:58.520 --> 1:07:03.040
1981
+ Well, yeah, so it was a group process really.
1982
+
1983
+ 1:07:03.040 --> 1:07:11.880
1984
+ At that point, even though I was BDFL in name and certainly everybody sort of respected
1985
+
1986
+ 1:07:11.880 --> 1:07:22.160
1987
+ my position as the creator and the current sort of owner of the language design, I was
1988
+
1989
+ 1:07:22.160 --> 1:07:26.560
1990
+ looking at everyone else for feedback.
1991
+
1992
+ 1:07:26.560 --> 1:07:35.280
1993
+ Sort of Python 3.0 in some sense was sparked by other people in the community pointing
1994
+
1995
+ 1:07:35.280 --> 1:07:46.360
1996
+ out, oh, well, there are a few issues that sort of bite users over and over.
1997
+
1998
+ 1:07:46.360 --> 1:07:48.920
1999
+ Can we do something about that?
2000
+
2001
+ 1:07:48.920 --> 1:07:56.360
2002
+ And for Python 3, we took a number of those Python warts, as they were called at the time,
2003
+
2004
+ 1:07:56.360 --> 1:08:04.800
2005
+ and we said, can we try to sort of make small changes to the language that address those
2006
+
2007
+ 1:08:04.800 --> 1:08:06.560
2008
+ warts?
2009
+
2010
+ 1:08:06.560 --> 1:08:15.360
2011
+ And we had sort of in the past, we had always taken backwards compatibility very seriously.
2012
+
2013
+ 1:08:15.360 --> 1:08:20.420
2014
+ And so many Python warts in earlier versions had already been resolved because they could
2015
+
2016
+ 1:08:20.420 --> 1:08:29.740
2017
+ be resolved while maintaining backwards compatibility or sort of using a very gradual path of evolution
2018
+
2019
+ 1:08:29.740 --> 1:08:31.960
2020
+ of the language in a certain area.
2021
+
2022
+ 1:08:31.960 --> 1:08:39.760
2023
+ And so we were stuck with a number of warts that were widely recognized as problems, not
2024
+
2025
+ 1:08:39.760 --> 1:08:47.680
2026
+ like roadblocks, but nevertheless sort of things that some people trip over and you know that
2027
+
2028
+ 1:08:47.680 --> 1:08:52.080
2029
+ that's always the same thing that people trip over when they trip.
2030
+
2031
+ 1:08:52.080 --> 1:08:58.480
2032
+ And we could not think of a backwards compatible way of resolving those issues.
2033
+
2034
+ 1:08:58.480 --> 1:09:01.920
2035
+ But it's still an option to not resolve the issues, right?
2036
+
2037
+ 1:09:01.920 --> 1:09:07.920
2038
+ And so yes, for a long time, we had sort of resigned ourselves to, well, okay, the language
2039
+
2040
+ 1:09:07.920 --> 1:09:13.400
2041
+ is not going to be perfect in this way and that way and that way.
2042
+
2043
+ 1:09:13.400 --> 1:09:19.440
2044
+ And we sort of, certain of these, I mean, there are still plenty of things where you
2045
+
2046
+ 1:09:19.440 --> 1:09:32.680
2047
+ can say, well, that particular detail is better in Java or in R or in Visual Basic or whatever.
2048
+
2049
+ 1:09:32.680 --> 1:09:37.960
2050
+ And we're okay with that because, well, we can't easily change it.
2051
+
2052
+ 1:09:37.960 --> 1:09:38.960
2053
+ It's not too bad.
2054
+
2055
+ 1:09:38.960 --> 1:09:47.180
2056
+ We can do a little bit with user education or we can have a static analyzer or warnings
2057
+
2058
+ 1:09:47.180 --> 1:09:49.440
2059
+ in the parse or something.
2060
+
2061
+ 1:09:49.440 --> 1:09:54.880
2062
+ But there were things where we thought, well, these are really problems that are not going
2063
+
2064
+ 1:09:54.880 --> 1:09:55.880
2065
+ away.
2066
+
2067
+ 1:09:55.880 --> 1:10:00.840
2068
+ They are getting worse in the future.
2069
+
2070
+ 1:10:00.840 --> 1:10:03.040
2071
+ We should do something about that.
2072
+
2073
+ 1:10:03.040 --> 1:10:05.640
2074
+ But ultimately there is a decision to be made, right?
2075
+
2076
+ 1:10:05.640 --> 1:10:13.320
2077
+ So was that the toughest decision in the history of Python you had to make as the benevolent
2078
+
2079
+ 1:10:13.320 --> 1:10:15.180
2080
+ dictator for life?
2081
+
2082
+ 1:10:15.180 --> 1:10:20.160
2083
+ Or if not, what are there, maybe even on the smaller scale, what was the decision where
2084
+
2085
+ 1:10:20.160 --> 1:10:22.040
2086
+ you were really torn up about?
2087
+
2088
+ 1:10:22.040 --> 1:10:25.800
2089
+ Well, the toughest decision was probably to resign.
2090
+
2091
+ 1:10:25.800 --> 1:10:28.120
2092
+ All right, let's go there.
2093
+
2094
+ 1:10:28.120 --> 1:10:29.360
2095
+ Hold on a second then.
2096
+
2097
+ 1:10:29.360 --> 1:10:33.200
2098
+ Let me just, because in the interest of time too, because I have a few cool questions for
2099
+
2100
+ 1:10:33.200 --> 1:10:38.160
2101
+ you and let's touch a really important one because it was quite dramatic and beautiful
2102
+
2103
+ 1:10:38.160 --> 1:10:40.400
2104
+ in certain kinds of ways.
2105
+
2106
+ 1:10:40.400 --> 1:10:47.320
2107
+ In July this year, three months ago, you wrote, now that PEP 572 is done, I don't ever want
2108
+
2109
+ 1:10:47.320 --> 1:10:52.680
2110
+ to have to fight so hard for a PEP and find that so many people despise my decisions.
2111
+
2112
+ 1:10:52.680 --> 1:10:56.240
2113
+ I would like to remove myself entirely from the decision process.
2114
+
2115
+ 1:10:56.240 --> 1:11:01.520
2116
+ I'll still be there for a while as an ordinary core developer and I'll still be available
2117
+
2118
+ 1:11:01.520 --> 1:11:05.440
2119
+ to mentor people, possibly more available.
2120
+
2121
+ 1:11:05.440 --> 1:11:11.000
2122
+ But I'm basically giving myself a permanent vacation from being BDFL, benevolent dictator
2123
+
2124
+ 1:11:11.000 --> 1:11:12.000
2125
+ for life.
2126
+
2127
+ 1:11:12.000 --> 1:11:14.240
2128
+ And you all will be on your own.
2129
+
2130
+ 1:11:14.240 --> 1:11:19.720
2131
+ First of all, it's almost Shakespearean.
2132
+
2133
+ 1:11:19.720 --> 1:11:22.300
2134
+ I'm not going to appoint a successor.
2135
+
2136
+ 1:11:22.300 --> 1:11:24.640
2137
+ So what are you all going to do?
2138
+
2139
+ 1:11:24.640 --> 1:11:29.240
2140
+ Create a democracy, anarchy, a dictatorship, a federation?
2141
+
2142
+ 1:11:29.240 --> 1:11:34.560
2143
+ So that was a very dramatic and beautiful set of statements.
2144
+
2145
+ 1:11:34.560 --> 1:11:40.080
2146
+ It's almost, it's open ended nature called the community to create a future for Python.
2147
+
2148
+ 1:11:40.080 --> 1:11:43.280
2149
+ It's just kind of a beautiful aspect to it.
2150
+
2151
+ 1:11:43.280 --> 1:11:48.320
2152
+ So what, and dramatic, you know, what was making that decision like?
2153
+
2154
+ 1:11:48.320 --> 1:11:54.560
2155
+ What was on your heart, on your mind, stepping back now a few months later?
2156
+
2157
+ 1:11:54.560 --> 1:12:02.940
2158
+ I'm glad you liked the writing because it was actually written pretty quickly.
2159
+
2160
+ 1:12:02.940 --> 1:12:14.240
2161
+ It was literally something like after months and months of going around in circles, I had
2162
+
2163
+ 1:12:14.240 --> 1:12:26.240
2164
+ finally approved PEP 572, which I had a big hand in its design, although I didn't initiate
2165
+
2166
+ 1:12:26.240 --> 1:12:27.760
2167
+ it originally.
2168
+
2169
+ 1:12:27.760 --> 1:12:36.320
2170
+ I sort of gave it a bunch of nudges in a direction that would be better for the language.
2171
+
2172
+ 1:12:36.320 --> 1:12:40.320
2173
+ So sorry, just to ask, is async IO, that's the one or no?
2174
+
2175
+ 1:12:40.320 --> 1:12:49.320
2176
+ PEP 572 was actually a small feature, which is assignment expressions.
2177
+
2178
+ 1:12:49.320 --> 1:12:58.200
2179
+ That had been, there was just a lot of debate where a lot of people claimed that they knew
2180
+
2181
+ 1:12:58.200 --> 1:13:04.800
2182
+ what was Pythonic and what was not Pythonic, and they knew that this was going to destroy
2183
+
2184
+ 1:13:04.800 --> 1:13:06.080
2185
+ the language.
2186
+
2187
+ 1:13:06.080 --> 1:13:11.800
2188
+ This was like a violation of Python's most fundamental design philosophy, and I thought
2189
+
2190
+ 1:13:11.800 --> 1:13:17.200
2191
+ that was all bullshit because I was in favor of it, and I would think I know something
2192
+
2193
+ 1:13:17.200 --> 1:13:19.120
2194
+ about Python's design philosophy.
2195
+
2196
+ 1:13:19.120 --> 1:13:26.340
2197
+ So I was really tired and also stressed of that thing, and literally after sort of announcing
2198
+
2199
+ 1:13:26.340 --> 1:13:34.560
2200
+ I was going to accept it, a certain Wednesday evening I had finally sent the email, it's
2201
+
2202
+ 1:13:34.560 --> 1:13:35.560
2203
+ accepted.
2204
+
2205
+ 1:13:35.560 --> 1:13:38.920
2206
+ I can just go implement it.
2207
+
2208
+ 1:13:38.920 --> 1:13:44.120
2209
+ So I went to bed feeling really relieved, that's behind me.
2210
+
2211
+ 1:13:44.120 --> 1:13:54.320
2212
+ And I wake up Thursday morning, 7 a.m., and I think, well, that was the last one that's
2213
+
2214
+ 1:13:54.320 --> 1:14:03.880
2215
+ going to be such a terrible debate, and that's the last time that I let myself be so stressed
2216
+
2217
+ 1:14:03.880 --> 1:14:06.520
2218
+ out about a PEP decision.
2219
+
2220
+ 1:14:06.520 --> 1:14:07.920
2221
+ I should just resign.
2222
+
2223
+ 1:14:07.920 --> 1:14:15.520
2224
+ I've been sort of thinking about retirement for half a decade, I've been joking and sort
2225
+
2226
+ 1:14:15.520 --> 1:14:22.460
2227
+ of mentioning retirement, sort of telling the community at some point in the future
2228
+
2229
+ 1:14:22.460 --> 1:14:29.400
2230
+ I'm going to retire, don't take that FL part of my title too literally.
2231
+
2232
+ 1:14:29.400 --> 1:14:32.080
2233
+ And I thought, okay, this is it.
2234
+
2235
+ 1:14:32.080 --> 1:14:39.200
2236
+ I'm done, I had the day off, I wanted to have a good time with my wife, we were going to
2237
+
2238
+ 1:14:39.200 --> 1:14:48.480
2239
+ a little beach town nearby, and in I think maybe 15, 20 minutes I wrote that thing that
2240
+
2241
+ 1:14:48.480 --> 1:14:51.320
2242
+ you just called Shakespearean.
2243
+
2244
+ 1:14:51.320 --> 1:15:01.560
2245
+ The funny thing is I didn't even realize what a monumental decision it was, because
2246
+
2247
+ 1:15:01.560 --> 1:15:09.200
2248
+ five minutes later I read that link to my message back on Twitter, where people were
2249
+
2250
+ 1:15:09.200 --> 1:15:15.280
2251
+ already discussing on Twitter, Guido resigned as the BDFL.
2252
+
2253
+ 1:15:15.280 --> 1:15:22.440
2254
+ And I had posted it on an internal forum that I thought was only read by core developers,
2255
+
2256
+ 1:15:22.440 --> 1:15:28.520
2257
+ so I thought I would at least have one day before the news would sort of get out.
2258
+
2259
+ 1:15:28.520 --> 1:15:36.200
2260
+ The on your own aspects had also an element of quite, it was quite a powerful element
2261
+
2262
+ 1:15:36.200 --> 1:15:43.080
2263
+ of the uncertainty that lies ahead, but can you also just briefly talk about, for example
2264
+
2265
+ 1:15:43.080 --> 1:15:49.920
2266
+ I play guitar as a hobby for fun, and whenever I play people are super positive, super friendly,
2267
+
2268
+ 1:15:49.920 --> 1:15:52.680
2269
+ they're like, this is awesome, this is great.
2270
+
2271
+ 1:15:52.680 --> 1:15:57.520
2272
+ But sometimes I enter as an outside observer, I enter the programming community and there
2273
+
2274
+ 1:15:57.520 --> 1:16:05.560
2275
+ seems to sometimes be camps on whatever the topic, and the two camps, the two or plus
2276
+
2277
+ 1:16:05.560 --> 1:16:11.700
2278
+ camps, are often pretty harsh at criticizing the opposing camps.
2279
+
2280
+ 1:16:11.700 --> 1:16:14.880
2281
+ As an onlooker, I may be totally wrong on this, but what do you think of this?
2282
+
2283
+ 1:16:14.880 --> 1:16:19.760
2284
+ Yeah, holy wars are sort of a favorite activity in the programming community.
2285
+
2286
+ 1:16:19.760 --> 1:16:22.120
2287
+ And what is the psychology behind that?
2288
+
2289
+ 1:16:22.120 --> 1:16:25.120
2290
+ Is that okay for a healthy community to have?
2291
+
2292
+ 1:16:25.120 --> 1:16:29.760
2293
+ Is that a productive force ultimately for the evolution of a language?
2294
+
2295
+ 1:16:29.760 --> 1:16:39.080
2296
+ Well, if everybody is patting each other on the back and never telling the truth, it would
2297
+
2298
+ 1:16:39.080 --> 1:16:40.840
2299
+ not be a good thing.
2300
+
2301
+ 1:16:40.840 --> 1:16:52.760
2302
+ I think there is a middle ground where sort of being nasty to each other is not okay,
2303
+
2304
+ 1:16:52.760 --> 1:17:01.760
2305
+ but there is a middle ground where there is healthy ongoing criticism and feedback that
2306
+
2307
+ 1:17:01.760 --> 1:17:04.780
2308
+ is very productive.
2309
+
2310
+ 1:17:04.780 --> 1:17:07.760
2311
+ And you mean at every level you see that.
2312
+
2313
+ 1:17:07.760 --> 1:17:17.760
2314
+ I mean, someone proposes to fix a very small issue in a code base, chances are that some
2315
+
2316
+ 1:17:17.760 --> 1:17:27.080
2317
+ reviewer will sort of respond by saying, well, actually, you can do it better the other way.
2318
+
2319
+ 1:17:27.080 --> 1:17:34.360
2320
+ When it comes to deciding on the future of the Python core developer community, we now
2321
+
2322
+ 1:17:34.360 --> 1:17:41.160
2323
+ have, I think, five or six competing proposals for a constitution.
2324
+
2325
+ 1:17:41.160 --> 1:17:48.040
2326
+ So that future, do you have a fear of that future, do you have a hope for that future?
2327
+
2328
+ 1:17:48.040 --> 1:17:51.280
2329
+ I'm very confident about that future.
2330
+
2331
+ 1:17:51.280 --> 1:17:58.920
2332
+ By and large, I think that the debate has been very healthy and productive.
2333
+
2334
+ 1:17:58.920 --> 1:18:07.680
2335
+ And I actually, when I wrote that resignation email, I knew that Python was in a very good
2336
+
2337
+ 1:18:07.680 --> 1:18:16.840
2338
+ spot and that the Python core developer community, the group of 50 or 100 people who sort of
2339
+
2340
+ 1:18:16.840 --> 1:18:24.720
2341
+ write or review most of the code that goes into Python, those people get along very well
2342
+
2343
+ 1:18:24.720 --> 1:18:27.680
2344
+ most of the time.
2345
+
2346
+ 1:18:27.680 --> 1:18:40.120
2347
+ A large number of different areas of expertise are represented, different levels of experience
2348
+
2349
+ 1:18:40.120 --> 1:18:45.440
2350
+ in the Python core dev community, different levels of experience completely outside it
2351
+
2352
+ 1:18:45.440 --> 1:18:53.040
2353
+ in software development in general, large systems, small systems, embedded systems.
2354
+
2355
+ 1:18:53.040 --> 1:19:03.880
2356
+ So I felt okay resigning because I knew that the community can really take care of itself.
2357
+
2358
+ 1:19:03.880 --> 1:19:12.360
2359
+ And out of a grab bag of future feature developments, let me ask if you can comment, maybe on all
2360
+
2361
+ 1:19:12.360 --> 1:19:19.120
2362
+ very quickly, concurrent programming, parallel computing, async IO.
2363
+
2364
+ 1:19:19.120 --> 1:19:24.880
2365
+ These are things that people have expressed hope, complained about, whatever, have discussed
2366
+
2367
+ 1:19:24.880 --> 1:19:25.880
2368
+ on Reddit.
2369
+
2370
+ 1:19:25.880 --> 1:19:32.200
2371
+ Async IO, so the parallelization in general, packaging, I was totally clueless on this.
2372
+
2373
+ 1:19:32.200 --> 1:19:38.600
2374
+ I just used pip to install stuff, but apparently there's pipenv, poetry, there's these dependency
2375
+
2376
+ 1:19:38.600 --> 1:19:41.300
2377
+ packaging systems that manage dependencies and so on.
2378
+
2379
+ 1:19:41.300 --> 1:19:45.520
2380
+ They're emerging and there's a lot of confusion about what's the right thing to use.
2381
+
2382
+ 1:19:45.520 --> 1:19:56.360
2383
+ Then also functional programming, are we going to get more functional programming or not,
2384
+
2385
+ 1:19:56.360 --> 1:19:59.040
2386
+ this kind of idea.
2387
+
2388
+ 1:19:59.040 --> 1:20:08.280
2389
+ And of course the GIL connected to the parallelization, I suppose, the global interpreter lock problem.
2390
+
2391
+ 1:20:08.280 --> 1:20:12.800
2392
+ Can you just comment on whichever you want to comment on?
2393
+
2394
+ 1:20:12.800 --> 1:20:25.440
2395
+ Well, let's take the GIL and parallelization and async IO as one topic.
2396
+
2397
+ 1:20:25.440 --> 1:20:35.820
2398
+ I'm not that hopeful that Python will develop into a sort of high concurrency, high parallelism
2399
+
2400
+ 1:20:35.820 --> 1:20:37.960
2401
+ language.
2402
+
2403
+ 1:20:37.960 --> 1:20:44.800
2404
+ That's sort of the way the language is designed, the way most users use the language, the way
2405
+
2406
+ 1:20:44.800 --> 1:20:50.280
2407
+ the language is implemented, all make that a pretty unlikely future.
2408
+
2409
+ 1:20:50.280 --> 1:20:56.040
2410
+ So you think it might not even need to, really the way people use it, it might not be something
2411
+
2412
+ 1:20:56.040 --> 1:20:58.160
2413
+ that should be of great concern.
2414
+
2415
+ 1:20:58.160 --> 1:21:05.620
2416
+ I think async IO is a special case because it sort of allows overlapping IO and only
2417
+
2418
+ 1:21:05.620 --> 1:21:18.160
2419
+ IO and that is a sort of best practice of supporting very high throughput IO, many connections
2420
+
2421
+ 1:21:18.160 --> 1:21:21.680
2422
+ per second.
2423
+
2424
+ 1:21:21.680 --> 1:21:22.780
2425
+ I'm not worried about that.
2426
+
2427
+ 1:21:22.780 --> 1:21:25.280
2428
+ I think async IO will evolve.
2429
+
2430
+ 1:21:25.280 --> 1:21:27.440
2431
+ There are a couple of competing packages.
2432
+
2433
+ 1:21:27.440 --> 1:21:36.800
2434
+ We have some very smart people who are sort of pushing us to make async IO better.
2435
+
2436
+ 1:21:36.800 --> 1:21:43.800
2437
+ Parallel computing, I think that Python is not the language for that.
2438
+
2439
+ 1:21:43.800 --> 1:21:53.560
2440
+ There are ways to work around it, but you can't expect to write an algorithm in Python
2441
+
2442
+ 1:21:53.560 --> 1:21:57.440
2443
+ and have a compiler automatically parallelize that.
2444
+
2445
+ 1:21:57.440 --> 1:22:03.520
2446
+ What you can do is use a package like NumPy and there are a bunch of other very powerful
2447
+
2448
+ 1:22:03.520 --> 1:22:12.480
2449
+ packages that sort of use all the CPUs available because you tell the package, here's the data,
2450
+
2451
+ 1:22:12.480 --> 1:22:19.040
2452
+ here's the abstract operation to apply over it, go at it, and then we're back in the C++
2453
+
2454
+ 1:22:19.040 --> 1:22:20.040
2455
+ world.
2456
+
2457
+ 1:22:20.040 --> 1:22:24.600
2458
+ Those packages are themselves implemented usually in C++.
2459
+
2460
+ 1:22:24.600 --> 1:22:28.000
2461
+ That's where TensorFlow and all these packages come in, where they parallelize across GPUs,
2462
+
2463
+ 1:22:28.000 --> 1:22:30.480
2464
+ for example, they take care of that for you.
2465
+
2466
+ 1:22:30.480 --> 1:22:36.600
2467
+ In terms of packaging, can you comment on the future of packaging in Python?
2468
+
2469
+ 1:22:36.600 --> 1:22:42.640
2470
+ Packaging has always been my least favorite topic.
2471
+
2472
+ 1:22:42.640 --> 1:22:55.600
2473
+ It's a really tough problem because the OS and the platform want to own packaging, but
2474
+
2475
+ 1:22:55.600 --> 1:23:01.000
2476
+ their packaging solution is not specific to a language.
2477
+
2478
+ 1:23:01.000 --> 1:23:07.480
2479
+ If you take Linux, there are two competing packaging solutions for Linux or for Unix
2480
+
2481
+ 1:23:07.480 --> 1:23:15.000
2482
+ in general, but they all work across all languages.
2483
+
2484
+ 1:23:15.000 --> 1:23:24.760
2485
+ Several languages like Node, JavaScript, Ruby, and Python all have their own packaging solutions
2486
+
2487
+ 1:23:24.760 --> 1:23:29.480
2488
+ that only work within the ecosystem of that language.
2489
+
2490
+ 1:23:29.480 --> 1:23:31.920
2491
+ What should you use?
2492
+
2493
+ 1:23:31.920 --> 1:23:34.560
2494
+ That is a tough problem.
2495
+
2496
+ 1:23:34.560 --> 1:23:43.520
2497
+ My own approach is I use the system packaging system to install Python, and I use the Python
2498
+
2499
+ 1:23:43.520 --> 1:23:49.280
2500
+ packaging system then to install third party Python packages.
2501
+
2502
+ 1:23:49.280 --> 1:23:51.480
2503
+ That's what most people do.
2504
+
2505
+ 1:23:51.480 --> 1:23:56.400
2506
+ Ten years ago, Python packaging was really a terrible situation.
2507
+
2508
+ 1:23:56.400 --> 1:24:05.360
2509
+ Nowadays, pip is the future, there is a separate ecosystem for numerical and scientific Python
2510
+
2511
+ 1:24:05.360 --> 1:24:08.200
2512
+ based on Anaconda.
2513
+
2514
+ 1:24:08.200 --> 1:24:09.760
2515
+ Those two can live together.
2516
+
2517
+ 1:24:09.760 --> 1:24:13.600
2518
+ I don't think there is a need for more than that.
2519
+
2520
+ 1:24:13.600 --> 1:24:14.600
2521
+ That's packaging.
2522
+
2523
+ 1:24:14.600 --> 1:24:18.720
2524
+ Well, at least for me, that's where I've been extremely happy.
2525
+
2526
+ 1:24:18.720 --> 1:24:22.320
2527
+ I didn't even know this was an issue until it was brought up.
2528
+
2529
+ 1:24:22.320 --> 1:24:27.600
2530
+ In the interest of time, let me sort of skip through a million other questions I have.
2531
+
2532
+ 1:24:27.600 --> 1:24:32.880
2533
+ So I watched the five and a half hour oral history that you've done with the Computer
2534
+
2535
+ 1:24:32.880 --> 1:24:37.600
2536
+ History Museum, and the nice thing about it, it gave this, because of the linear progression
2537
+
2538
+ 1:24:37.600 --> 1:24:44.480
2539
+ of the interview, it gave this feeling of a life, you know, a life well lived with interesting
2540
+
2541
+ 1:24:44.480 --> 1:24:52.160
2542
+ things in it, sort of a pretty, I would say a good spend of this little existence we have
2543
+
2544
+ 1:24:52.160 --> 1:24:53.160
2545
+ on Earth.
2546
+
2547
+ 1:24:53.160 --> 1:24:59.840
2548
+ So, outside of your family, looking back, what about this journey are you really proud
2549
+
2550
+ 1:24:59.840 --> 1:25:00.840
2551
+ of?
2552
+
2553
+ 1:25:00.840 --> 1:25:07.040
2554
+ Are there moments that stand out, accomplishments, ideas?
2555
+
2556
+ 1:25:07.040 --> 1:25:14.040
2557
+ Is it the creation of Python itself that stands out as a thing that you look back and say,
2558
+
2559
+ 1:25:14.040 --> 1:25:16.480
2560
+ damn, I did pretty good there?
2561
+
2562
+ 1:25:16.480 --> 1:25:25.520
2563
+ Well, I would say that Python is definitely the best thing I've ever done, and I wouldn't
2564
+
2565
+ 1:25:25.520 --> 1:25:36.560
2566
+ sort of say just the creation of Python, but the way I sort of raised Python, like a baby.
2567
+
2568
+ 1:25:36.560 --> 1:25:42.480
2569
+ I didn't just conceive a child, but I raised a child, and now I'm setting the child free
2570
+
2571
+ 1:25:42.480 --> 1:25:50.200
2572
+ in the world, and I've set up the child to sort of be able to take care of himself, and
2573
+
2574
+ 1:25:50.200 --> 1:25:52.640
2575
+ I'm very proud of that.
2576
+
2577
+ 1:25:52.640 --> 1:25:56.740
2578
+ And as the announcer of Monty Python's Flying Circus used to say, and now for something
2579
+
2580
+ 1:25:56.740 --> 1:26:02.280
2581
+ completely different, do you have a favorite Monty Python moment, or a moment in Hitchhiker's
2582
+
2583
+ 1:26:02.280 --> 1:26:07.720
2584
+ Guide, or any other literature show or movie that cracks you up when you think about it?
2585
+
2586
+ 1:26:07.720 --> 1:26:11.320
2587
+ You can always play me the dead parrot sketch.
2588
+
2589
+ 1:26:11.320 --> 1:26:13.680
2590
+ Oh, that's brilliant.
2591
+
2592
+ 1:26:13.680 --> 1:26:14.680
2593
+ That's my favorite as well.
2594
+
2595
+ 1:26:14.680 --> 1:26:15.680
2596
+ It's pushing up the daisies.
2597
+
2598
+ 1:26:15.680 --> 1:26:20.680
2599
+ Okay, Guido, thank you so much for talking with me today.
2600
+
2601
+ 1:26:20.680 --> 1:26:44.080
2602
+ Lex, this has been a great conversation.
2603
+
vtt/episode_006_small.vtt ADDED
@@ -0,0 +1,2051 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:04.640
4
+ The following is a conversation with Guido Van Rossum, creator of Python,
5
+
6
+ 00:04.640 --> 00:09.520
7
+ one of the most popular programming languages in the world, used in almost any application
8
+
9
+ 00:09.520 --> 00:16.000
10
+ that involves computers, from web backend development to psychology, neuroscience,
11
+
12
+ 00:16.000 --> 00:21.040
13
+ computer vision, robotics, deep learning, natural language processing, and almost any
14
+
15
+ 00:21.040 --> 00:27.360
16
+ subfield of AI. This conversation is part of MIT course on artificial general intelligence
17
+
18
+ 00:27.360 --> 00:33.760
19
+ and the artificial intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes,
20
+
21
+ 00:33.760 --> 00:39.040
22
+ or your podcast provider of choice, or simply connect with me on Twitter at Lex Fridman,
23
+
24
+ 00:39.040 --> 00:45.520
25
+ spelled F R I D. And now here's my conversation with Guido Van Rossum.
26
+
27
+ 00:46.320 --> 00:53.600
28
+ You were born in the Netherlands in 1956. Your parents and the world around you was deeply
29
+
30
+ 00:53.600 --> 01:00.080
31
+ impacted by World War Two, as was my family from the Soviet Union. So with that context,
32
+
33
+ 01:01.920 --> 01:08.240
34
+ what is your view of human nature? Are some humans inherently good and some inherently
35
+
36
+ 01:08.240 --> 01:12.240
37
+ evil, or do we all have both good and evil within us?
38
+
39
+ 01:12.240 --> 01:26.320
40
+ Ouch, I did not expect such a deep one. I guess we all have good and evil potential in us,
41
+
42
+ 01:26.320 --> 01:31.440
43
+ and a lot of it depends on circumstances and context.
44
+
45
+ 01:32.960 --> 01:39.360
46
+ Out of that world, at least on the Soviet Union side in Europe, sort of out of suffering,
47
+
48
+ 01:39.360 --> 01:46.800
49
+ out of challenge, out of that kind of set of traumatic events, often emerges beautiful art,
50
+
51
+ 01:46.800 --> 01:53.200
52
+ music, literature. In an interview I read or heard, you said you enjoyed Dutch literature
53
+
54
+ 01:54.320 --> 01:59.680
55
+ when you were a child. Can you tell me about the books that had an influence on you in your
56
+
57
+ 01:59.680 --> 02:06.960
58
+ childhood? Well, as a teenager, my favorite writer was, my favorite Dutch author was
59
+
60
+ 02:06.960 --> 02:17.920
61
+ a guy named Willem Frederik Hermans, whose writing, certainly his early novels, were all about
62
+
63
+ 02:19.040 --> 02:29.520
64
+ sort of ambiguous things that happened during World War Two. I think he was a young adult
65
+
66
+ 02:29.520 --> 02:40.400
67
+ during that time, and he wrote about it a lot and very interesting, very good books, I thought,
68
+
69
+ 02:40.400 --> 02:50.000
70
+ I think. In a nonfiction way? No, it was all fiction, but it was very much set in the ambiguous
71
+
72
+ 02:50.000 --> 02:57.680
73
+ world of resistance against the Germans, where often you couldn't tell whether someone
74
+
75
+ 02:57.680 --> 03:07.280
76
+ was truly in the resistance or really a spy for the Germans, and some of the characters in his
77
+
78
+ 03:07.280 --> 03:13.840
79
+ novels sort of crossed that line, and you never really find out what exactly happened.
80
+
81
+ 03:14.800 --> 03:20.080
82
+ And in his novels, there's always a good guy and a bad guy, in the nature of good and evil,
83
+
84
+ 03:20.080 --> 03:30.320
85
+ is it clear there's a hero? No, his main characters are often antiheroes, and so they're
86
+
87
+ 03:30.320 --> 03:40.800
88
+ not very heroic. They fail at some level to accomplish their lofty goals.
89
+
90
+ 03:41.680 --> 03:45.120
91
+ And looking at the trajectory through the rest of your life, has literature,
92
+
93
+ 03:45.120 --> 03:54.240
94
+ Dutch or English or translation had an impact outside the technical world that you existed in?
95
+
96
+ 03:58.160 --> 04:05.200
97
+ I still read novels. I don't think that it impacts me that much directly.
98
+
99
+ 04:06.240 --> 04:14.320
100
+ It doesn't impact your work. It's a separate world. My work is highly technical and sort of
101
+
102
+ 04:14.320 --> 04:19.200
103
+ the world of art and literature doesn't really directly have any bearing on it.
104
+
105
+ 04:20.320 --> 04:26.880
106
+ You don't think there's a creative element to the design of a language's art?
107
+
108
+ 04:30.560 --> 04:36.080
109
+ I'm not disagreeing with that. I'm just saying that I don't feel
110
+
111
+ 04:36.800 --> 04:41.840
112
+ direct influences from more traditional art on my own creativity.
113
+
114
+ 04:41.840 --> 04:46.560
115
+ All right, of course, you don't feel doesn't mean it's not somehow deeply there in your subconscious.
116
+
117
+ 04:50.240 --> 04:56.880
118
+ So let's go back to your early teens. Your hobbies were building electronic circuits,
119
+
120
+ 04:56.880 --> 05:03.920
121
+ building mechanical models. If you can just put yourself back in the mind of that
122
+
123
+ 05:03.920 --> 05:13.200
124
+ young Guido, 12, 13, 14, was that grounded in a desire to create a system? So to create
125
+
126
+ 05:13.200 --> 05:17.600
127
+ something? Or was it more just tinkering? Just the joy of puzzle solving?
128
+
129
+ 05:19.280 --> 05:27.200
130
+ I think it was more the latter, actually. Maybe towards the end of my high school
131
+
132
+ 05:27.200 --> 05:36.720
133
+ period, I felt confident enough that I designed my own circuits that were sort of interesting.
134
+
135
+ 05:38.720 --> 05:48.000
136
+ Somewhat. But a lot of that time, I literally just took a model kit and followed the instructions,
137
+
138
+ 05:48.000 --> 05:53.840
139
+ putting the things together. I mean, I think the first few years that I built electronics kits,
140
+
141
+ 05:53.840 --> 06:01.520
142
+ I really did not have enough understanding of electronics to really understand what I was
143
+
144
+ 06:01.520 --> 06:09.600
145
+ doing. I could debug it and I could follow the instructions very carefully, which has always
146
+
147
+ 06:09.600 --> 06:20.400
148
+ stayed with me. But I had a very naive model of how a transistor works. I don't think that in
149
+
150
+ 06:20.400 --> 06:31.360
151
+ those days, I had any understanding of coils and capacitors, which actually was a major problem
152
+
153
+ 06:31.360 --> 06:39.120
154
+ when I started to build more complex digital circuits, because I was unaware of the analog
155
+
156
+ 06:39.120 --> 06:50.960
157
+ part of how they actually work. And I would have things that the schematic looked, everything
158
+
159
+ 06:50.960 --> 06:57.920
160
+ looked fine, and it didn't work. And what I didn't realize was that there was some
161
+
162
+ 06:58.640 --> 07:04.880
163
+ megahertz level oscillation that was throwing the circuit off, because I had a sort of,
164
+
165
+ 07:04.880 --> 07:12.080
166
+ two wires were too close or the switches were kind of poorly built.
167
+
168
+ 07:13.040 --> 07:18.960
169
+ But through that time, I think it's really interesting and instructive to think about,
170
+
171
+ 07:18.960 --> 07:24.880
172
+ because there's echoes of it in this time now. So in the 1970s, the personal computer was being
173
+
174
+ 07:24.880 --> 07:33.920
175
+ born. So did you sense in tinkering with these circuits, did you sense the encroaching revolution
176
+
177
+ 07:33.920 --> 07:40.000
178
+ in personal computing? So if at that point, you're sitting, we'll sit you down and ask you to predict
179
+
180
+ 07:40.000 --> 07:47.920
181
+ the 80s and the 90s, do you think you would be able to do so successfully to unroll this,
182
+
183
+ 07:47.920 --> 07:57.840
184
+ the process? No, I had no clue. I, I remember, I think in the summer after my senior year,
185
+
186
+ 07:57.840 --> 08:04.240
187
+ or maybe it was the summer after my junior year. Well, at some point, I think when I was 18,
188
+
189
+ 08:04.240 --> 08:13.200
190
+ I went on a trip to the math Olympiad in Eastern Europe. And there was like, I was part of the
191
+
192
+ 08:13.200 --> 08:20.080
193
+ Dutch team. And there were other nerdy kids that sort of had different experiences. And one of
194
+
195
+ 08:20.080 --> 08:26.240
196
+ them told me about this amazing thing called a computer. And I had never heard that word.
197
+
198
+ 08:26.240 --> 08:35.680
199
+ My own explorations in electronics were sort of about very simple digital circuits. And I,
200
+
201
+ 08:35.680 --> 08:43.680
202
+ I had sort of, I had the idea that I somewhat understood how a digital calculator worked. And
203
+
204
+ 08:43.680 --> 08:51.440
205
+ so there is maybe some echoes of computers there, but I didn't, didn't, I never made that connection.
206
+
207
+ 08:51.440 --> 08:59.360
208
+ I didn't know that when my parents were paying for magazine subscriptions using punched cards,
209
+
210
+ 08:59.360 --> 09:04.480
211
+ that there was something called a computer that was involved that read those cards and
212
+
213
+ 09:04.480 --> 09:09.600
214
+ transferred the money between accounts. I was actually also not really interested in those
215
+
216
+ 09:09.600 --> 09:18.640
217
+ things. It was only when I went to university to study math that I found out that they had a
218
+
219
+ 09:18.640 --> 09:24.560
220
+ computer and students were allowed to use it. And there were some, you're supposed to talk to that
221
+
222
+ 09:24.560 --> 09:30.080
223
+ computer by programming it. What did that feel like? Yeah, that was the only thing you could do
224
+
225
+ 09:30.080 --> 09:36.560
226
+ with it. The computer wasn't really connected to the real world. The only thing you could do was
227
+
228
+ 09:36.560 --> 09:43.840
229
+ sort of, you typed your program on a bunch of punched cards. You gave the punched cards to
230
+
231
+ 09:43.840 --> 09:52.000
232
+ the operator. And an hour later, the operator gave you back your printout. And so all you could do
233
+
234
+ 09:52.000 --> 10:00.080
235
+ was write a program that did something very abstract. And I don't even remember what my
236
+
237
+ 10:00.080 --> 10:10.400
238
+ first forays into programming were, but they were sort of doing simple math exercises and just to
239
+
240
+ 10:10.400 --> 10:18.320
241
+ learn how a programming language worked. Did you sense, okay, first year of college, you see this
242
+
243
+ 10:18.320 --> 10:25.360
244
+ computer, you're able to have a program and it generates some output. Did you start seeing the
245
+
246
+ 10:25.360 --> 10:32.560
247
+ possibility of this? Or was it a continuation of the tinkering with circuits? Did you start to
248
+
249
+ 10:32.560 --> 10:39.040
250
+ imagine that one, the personal computer, but did you see it as something that is a tool
251
+
252
+ 10:39.040 --> 10:44.880
253
+ to get tools like a word processing tool, maybe maybe for gaming or something? Or did you start
254
+
255
+ 10:44.880 --> 10:50.320
256
+ to imagine that it could be, you know, going to the world of robotics, like you, you know, the
257
+
258
+ 10:50.320 --> 10:55.280
259
+ Frankenstein picture that you could create an artificial being. There's like another entity
260
+
261
+ 10:55.280 --> 11:02.720
262
+ in front of you. You did not see it. I don't think I really saw it that way. I was really more
263
+
264
+ 11:02.720 --> 11:09.440
265
+ interested in the tinkering. It's maybe not a sort of a complete coincidence that I ended up
266
+
267
+ 11:10.720 --> 11:17.120
268
+ sort of creating a programming language, which is a tool for other programmers. I've always been
269
+
270
+ 11:17.120 --> 11:24.560
271
+ very focused on the sort of activity of programming itself and not so much what happens with
272
+
273
+ 11:24.560 --> 11:34.800
274
+ what happens with the program you write. I do remember, and I don't remember, maybe in my second
275
+
276
+ 11:34.800 --> 11:42.240
277
+ or third year, probably my second, actually, someone pointed out to me that there was this
278
+
279
+ 11:42.240 --> 11:51.120
280
+ thing called Conway's Game of Life. You're probably familiar with it. I think in the 70s,
281
+
282
+ 11:51.120 --> 11:55.760
283
+ I think, is when Conway came up with it. So there was a Scientific American column by
284
+
285
+ 11:57.680 --> 12:04.880
286
+ someone who did a monthly column about mathematical diversions and I'm also blanking out on the guy's
287
+
288
+ 12:04.880 --> 12:11.360
289
+ name. It was very famous at the time and I think up to the 90s or so. And one of his columns was
290
+
291
+ 12:11.360 --> 12:16.160
292
+ about Conway's Game of Life and he had some illustrations and he wrote down all the rules
293
+
294
+ 12:16.160 --> 12:23.040
295
+ and sort of there was the suggestion that this was philosophically interesting, that that was why
296
+
297
+ 12:23.040 --> 12:30.400
298
+ Conway had called it that. And all I had was like the two pages photocopy of that article.
299
+
300
+ 12:31.040 --> 12:37.520
301
+ I don't even remember where I got it. But it spoke to me and I remember implementing
302
+
303
+ 12:37.520 --> 12:48.800
304
+ a version of that game for the batch computer we were using where I had a whole Pascal program
305
+
306
+ 12:48.800 --> 12:55.200
307
+ that sort of read an initial situation from input and read some numbers that said,
308
+
309
+ 12:55.760 --> 13:03.120
310
+ do so many generations and print every so many generations and then out would come pages and
311
+
312
+ 13:03.120 --> 13:12.560
313
+ pages of sort of things. Patterns of different kinds and yeah. Yeah. And I remember much later
314
+
315
+ 13:13.120 --> 13:19.520
316
+ I've done a similar thing using Python, but I sort of that original version I wrote at the time
317
+
318
+ 13:20.560 --> 13:28.960
319
+ I found interesting because I combined it with some trick I had learned during my electronics
320
+
321
+ 13:28.960 --> 13:38.880
322
+ hobbyist times. I essentially first on paper I designed a simple circuit built out of logic gates
323
+
324
+ 13:39.840 --> 13:46.320
325
+ that took nine bits of input, which is the sort of the cell and its neighbors
326
+
327
+ 13:47.680 --> 13:55.360
328
+ and produce the new value for that cell. And it's like a combination of a half adder and some
329
+
330
+ 13:55.360 --> 14:02.400
331
+ other clipping. No, it's actually a full adder. And so I had worked that out and then I translated
332
+
333
+ 14:02.400 --> 14:12.400
334
+ that into a series of Boolean operations on Pascal integers where you could use the integers as
335
+
336
+ 14:12.400 --> 14:27.920
337
+ bitwise values. And so I could basically generate 60 bits of a generation in like eight instructions
338
+
339
+ 14:27.920 --> 14:35.360
340
+ or so. Nice. So I was proud of that. It's funny that you mentioned so for people who don't know
341
+
342
+ 14:35.360 --> 14:42.160
343
+ Conway's Game of Life is a cellular automaton where there are single compute units that kind of
344
+
345
+ 14:42.160 --> 14:49.520
346
+ look at their neighbors and figure out what they look like in the next generation based on the
347
+
348
+ 14:49.520 --> 14:57.040
349
+ state of their neighbors and this is deeply distributed system in concept at least. And then
350
+
351
+ 14:57.040 --> 15:04.400
352
+ there's simple rules that all of them follow and somehow out of the simple rule when you step back
353
+
354
+ 15:04.400 --> 15:13.120
355
+ and look at what occurs, it's beautiful. There's an emergent complexity, even though the underlying
356
+
357
+ 15:13.120 --> 15:17.600
358
+ rules are simple, there's an emergent complexity. Now the funny thing is you've implemented this
359
+
360
+ 15:17.600 --> 15:24.480
361
+ and the thing you're commenting on is you're proud of a hack you did to make it run efficiently.
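For readers curious about the kind of hack being described, here is a minimal Python sketch (mine, not Guido's original Pascal) of the idea: pack a row of the Life board into an integer and update every cell in it at once with bitwise half and full adders. The 60-cell word width and the function names are illustrative assumptions, not details from the episode.

```python
WIDTH = 60                     # Guido mentions roughly 60 bits of a row per word
MASK = (1 << WIDTH) - 1

def full_add(a, b, c):
    """Bitwise full adder: per-position sum and carry of three 1-bit values."""
    return a ^ b ^ c, (a & b) | (a & c) | (b & c)

def next_row(up, cur, down):
    """One generation of one packed row, given the rows above and below."""
    # Align each cell's eight neighbours onto that cell's bit position.
    tl, tc, tr = (up << 1) & MASK, up, up >> 1
    ml, mr = (cur << 1) & MASK, cur >> 1
    bl, bc, br = (down << 1) & MASK, down, down >> 1
    # Reduce the eight 1-bit neighbour values with adders (carry-save style).
    s1, c1 = full_add(tl, tc, tr)
    s2, c2 = full_add(ml, mr, bl)
    s3, c3 = bc ^ br, bc & br          # half adder
    ones, c4 = full_add(s1, s2, s3)    # bit 0 of the neighbour count
    t1, c5 = full_add(c1, c2, c3)
    twos, c6 = t1 ^ c4, t1 & c4        # bit 1 of the neighbour count
    high = c5 | c6                     # set where the count is 4 or more
    # Alive next step iff the count is 3, or 2 for an already-live cell.
    return twos & ~high & (ones | cur) & MASK

def step(rows):
    """Advance a whole board (list of packed rows) by one generation."""
    return [next_row(rows[i - 1] if i else 0,
                     rows[i],
                     rows[i + 1] if i + 1 < len(rows) else 0)
            for i in range(len(rows))]

if __name__ == "__main__":
    board = [0b01000, 0b10000, 0b11100] + [0] * 12   # a glider
    for gen in range(4):
        print(f"generation {gen}")
        print("\n".join("".join("#" if r >> c & 1 else "."
                                for c in range(20)) for r in board))
        board = step(board)
```

The point of the design is that a dozen or so bitwise operations update a whole row of cells in parallel, which is why the original ran in only a handful of instructions per word per generation.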
362
+
363
+ 15:25.280 --> 15:29.360
364
+ When you're not commenting on what like this is a beautiful implementation.
365
+
366
+ 15:29.360 --> 15:33.680
367
+ You're not commenting on the fact that there's an emergent complexity
368
+
369
+ 15:34.480 --> 15:40.240
370
+ that you've coded a simple program and when you step back and you print out the
371
+
372
+ 15:40.240 --> 15:45.360
373
+ following generation after generation, that's stuff that you may have not predicted what
374
+
375
+ 15:45.360 --> 15:52.480
376
+ happened is happening. And is that magic? I mean that's the magic that all of us feel when we
377
+
378
+ 15:52.480 --> 15:58.880
379
+ program. When you create a program and then you run it and whether it's Hello World or it shows
380
+
381
+ 15:58.880 --> 16:03.200
382
+ something on screen if there's a graphical component, are you seeing the magic and the
383
+
384
+ 16:03.200 --> 16:11.120
385
+ mechanism of creating that? I think I went back and forth. As a student, we had an incredibly
386
+
387
+ 16:11.120 --> 16:19.120
388
+ small budget of computer time that we could use. It was actually measured. I once got in trouble with
389
+
390
+ 16:19.120 --> 16:24.560
391
+ one of my professors because I had overspent the department's budget. It's a different story.
392
+
393
+ 16:24.560 --> 16:35.520
394
+ But so I actually wanted the efficient implementation because I also wanted to explore
395
+
396
+ 16:36.400 --> 16:44.080
397
+ what would happen with a larger number of generations and a larger sort of size of the
398
+
399
+ 16:44.080 --> 16:55.280
400
+ board. And so once the implementation was flawless, I would feed it different patterns and then I
401
+
402
+ 16:55.280 --> 17:01.520
403
+ think maybe there was a follow up article where there were patterns that were like gliders,
404
+
405
+ 17:02.400 --> 17:12.320
406
+ patterns that repeated themselves after a number of generations but translated one or two positions
407
+
408
+ 17:12.320 --> 17:19.840
409
+ to the right or up or something like that. And there were, I remember things like glider guns.
410
+
411
+ 17:19.840 --> 17:28.240
412
+ Well, you can Google Conway's Game of Life. People still go on over it. For a reason because
413
+
414
+ 17:28.240 --> 17:33.760
415
+ it's not really well understood. I mean this is what Stephen Wolfram is obsessed about.
416
+
417
+ 17:37.440 --> 17:41.520
418
+ We don't have the mathematical tools to describe the kind of complexity that emerges
419
+
420
+ 17:41.520 --> 17:45.120
421
+ in these kinds of systems. And the only way you can do is to run it.
422
+
423
+ 17:46.960 --> 17:56.160
424
+ I'm not convinced that it's sort of a problem that lends itself to classic mathematical analysis.
425
+
426
+ 17:56.720 --> 18:04.560
427
+ No. And so one theory of how you create an artificial intelligence or an artificial being
428
+
429
+ 18:04.560 --> 18:08.960
430
+ is you kind of have to, same with the game of life, you kind of have to create a universe
431
+
432
+ 18:08.960 --> 18:16.400
433
+ and let it run. That creating it from scratch in a design way in the, you know, coding up a
434
+
435
+ 18:16.400 --> 18:22.080
436
+ Python program that creates a fully intelligent system may be quite challenging that you might
437
+
438
+ 18:22.080 --> 18:28.640
439
+ need to create a universe just like the game of life is. Well, you might have to experiment with
440
+
441
+ 18:28.640 --> 18:36.560
442
+ a lot of different universes before. There is a set of rules that doesn't essentially always just
443
+
444
+ 18:36.560 --> 18:46.320
445
+ end up repeating itself in a trivial way. Yeah. And Steve Wolfram, Stephen Wolfram works with
446
+
447
+ 18:46.320 --> 18:51.520
448
+ these simple rules, says that it's kind of surprising how quickly you find rules that
449
+
450
+ 18:51.520 --> 18:58.240
451
+ create interesting things. You shouldn't be able to, but somehow you do. And so maybe our universe
452
+
453
+ 18:58.240 --> 19:03.440
454
+ is laden with rules that will create interesting things that might not look like humans, but
455
+
456
+ 19:03.440 --> 19:08.640
457
+ you know, emergent phenomena that's interesting may not be as difficult to create as we think.
458
+
459
+ 19:08.640 --> 19:15.120
460
+ Sure. But let me sort of ask, at that time, you know, some of the world, at least in popular press,
461
+
462
+ 19:17.120 --> 19:23.360
463
+ was kind of captivated, perhaps at least in America, by the idea of artificial intelligence,
464
+
465
+ 19:24.000 --> 19:31.520
466
+ that these computers would be able to think pretty soon. And did that touch you at all? Did
467
+
468
+ 19:31.520 --> 19:40.560
469
+ that in science fiction or in reality, in any way? I didn't really start reading science fiction
470
+
471
+ 19:40.560 --> 19:52.560
472
+ until much, much later. I think as a teenager, I read maybe one bundle of science fiction stories.
473
+
474
+ 19:54.160 --> 19:56.960
475
+ Was it in the background somewhere, like in your thoughts?
476
+
477
+ 19:56.960 --> 20:04.160
478
+ That sort of the using computers to build something intelligent always fell to me,
479
+
480
+ 20:04.160 --> 20:10.320
481
+ because I felt I had so much understanding of what actually goes on inside a computer.
482
+
483
+ 20:11.600 --> 20:19.280
484
+ I knew how many bits of memory it had and how difficult it was to program and sort of
485
+
486
+ 20:19.280 --> 20:29.440
487
+ I didn't believe at all that that you could just build something intelligent out of that,
488
+
489
+ 20:29.440 --> 20:38.080
490
+ that that would really sort of satisfy my definition of intelligence. I think the most
491
+
492
+ 20:38.080 --> 20:44.960
493
+ the most influential thing that I read in my early 20s was Gödel, Escher, Bach.
494
+
495
+ 20:44.960 --> 20:51.760
496
+ That was about consciousness and that was a big eye opener, in some sense.
497
+
498
+ 20:53.600 --> 21:00.560
499
+ In what sense? So on your own brain, did you at the time or do you now see your
500
+
501
+ 21:00.560 --> 21:06.880
502
+ own brain as a computer? Or is there a total separation of the way? So yeah, you're very
503
+
504
+ 21:06.880 --> 21:13.840
505
+ pragmatically, practically know the limits of memory, the limits of this sequential computing,
506
+
507
+ 21:13.840 --> 21:19.120
508
+ or weakly parallelized computing, and you just know what we have now and it's hard to see
509
+
510
+ 21:19.120 --> 21:26.160
511
+ how it creates, but it's also easy to see it was in the in the 40s, 50s, 60s, and now
512
+
513
+ 21:27.120 --> 21:32.480
514
+ at least similarities between the brain and our computers. Oh yeah, I mean, I
515
+
516
+ 21:32.480 --> 21:44.400
517
+ I totally believe that brains are computers in some sense. I mean, the rules they they use to
518
+
519
+ 21:44.400 --> 21:51.680
520
+ play by are pretty different from the rules we we can sort of implement in in our current
521
+
522
+ 21:51.680 --> 22:04.160
523
+ hardware. But I don't believe in like a separate thing that infuses us with intelligence or
524
+
525
+ 22:06.160 --> 22:10.880
526
+ consciousness or any of that. There's no soul. I've been an atheist probably
527
+
528
+ 22:11.840 --> 22:18.720
529
+ from when I was 10 years old, just by thinking a bit about math and the universe.
530
+
531
+ 22:18.720 --> 22:26.640
532
+ And well, my parents were atheists. Now, I know that you you you could be an atheist and still
533
+
534
+ 22:26.640 --> 22:34.720
535
+ believe that there is something sort of about intelligence or consciousness that cannot possibly
536
+
537
+ 22:34.720 --> 22:42.560
538
+ emerge from a fixed set of rules. I am not in that camp. I totally see that
539
+
540
+ 22:42.560 --> 22:53.840
541
+ that sort of given how many millions of years evolution took its time. DNA is is a particular
542
+
543
+ 22:53.840 --> 23:04.560
544
+ machine that that sort of encodes information and an unlimited amount of information in in
545
+
546
+ 23:04.560 --> 23:14.000
547
+ chemical form and has figured out a way to replicate itself. I thought that that was maybe
548
+
549
+ 23:14.000 --> 23:19.520
550
+ it's 300 million years ago, but I thought it was closer to half a half a billion years ago that that's
551
+
552
+ 23:20.480 --> 23:27.200
553
+ sort of originated and it hasn't really changed that the sort of the structure of DNA hasn't
554
+
555
+ 23:27.200 --> 23:35.520
556
+ changed ever since that is like our binary code that we have in hardware. I mean, the basic
557
+
558
+ 23:35.520 --> 23:43.360
559
+ programming language hasn't changed, but maybe the programming itself, obviously did sort of it.
560
+
561
+ 23:43.360 --> 23:49.120
562
+ It happened to be a set of rules that was good enough to to sort of develop
563
+
564
+ 23:49.120 --> 23:58.560
565
+ of endless variability and and sort of the the idea of self replicating molecules
566
+
567
+ 23:59.440 --> 24:05.680
568
+ competing with each other for resources and and one type eventually sort of always taking over
569
+
570
+ 24:07.120 --> 24:12.560
571
+ that happened before there were any fossils. So we don't know how that exactly happened, but
572
+
573
+ 24:12.560 --> 24:21.440
574
+ I believe it it's it's clear that that did happen and can you comment on consciousness and how you
575
+
576
+ 24:22.320 --> 24:27.760
577
+ see it? Because I think we'll talk about programming quite a bit. We'll talk about,
578
+
579
+ 24:27.760 --> 24:33.600
580
+ you know, intelligence connecting to programming fundamentally, but consciousness consciousness
581
+
582
+ 24:33.600 --> 24:39.680
583
+ is this whole other other thing. Do you think about it often as a developer of a programming
584
+
585
+ 24:39.680 --> 24:48.000
586
+ language and and as a human? Those those are pretty sort of separate topics.
587
+
588
+ 24:49.440 --> 24:58.800
589
+ Sort of my line of work working with programming does not involve anything that that goes in the
590
+
591
+ 24:58.800 --> 25:06.320
592
+ direction of developing intelligence or consciousness, but sort of privately as an avid reader of
593
+
594
+ 25:06.320 --> 25:16.880
595
+ popular science writing. I have some thoughts which which is mostly that
596
+
597
+ 25:18.400 --> 25:27.840
598
+ I don't actually believe that consciousness is an all or nothing thing. I have a feeling that and
599
+
600
+ 25:27.840 --> 25:37.840
601
+ and I forget what I read that influenced this, but I feel that if you look at a cat or a dog or a
602
+
603
+ 25:37.840 --> 25:47.280
604
+ mouse, they have some form of intelligence. If you look at a fish, it has some form of intelligence
605
+
606
+ 25:47.280 --> 25:56.560
607
+ and that evolution just took a long time. But I feel that the the sort of evolution of
608
+
609
+ 25:58.240 --> 26:02.880
610
+ more and more intelligence that led to to sort of the human form of intelligence
611
+
612
+ 26:04.160 --> 26:12.880
613
+ followed the evolution of the senses, especially the visual sense.
614
+
615
+ 26:12.880 --> 26:21.120
616
+ I mean, there is an enormous amount of processing that's needed to interpret a scene. And humans are
617
+
618
+ 26:21.120 --> 26:28.480
619
+ still better at that than than computers are. Yeah, and so and and I have a feeling that
620
+
621
+ 26:29.680 --> 26:41.680
622
+ there is a sort of the reason that that like mammals is in particular developed the levels of
623
+
624
+ 26:41.680 --> 26:49.280
625
+ consciousness that they have and that eventually sort of going from intelligence to to self
626
+
627
+ 26:49.280 --> 26:56.880
628
+ awareness and consciousness has to do with sort of being a robot that has very highly developed
629
+
630
+ 26:56.880 --> 27:03.840
631
+ senses. Has a lot of rich sensory information coming in. So the that's a really interesting
632
+
633
+ 27:03.840 --> 27:12.240
634
+ thought that that whatever that basic mechanism of DNA, whatever that basic building blocks of
635
+
636
+ 27:12.240 --> 27:19.840
637
+ programming is you if you just add more abilities, more more high resolution sensors, more sensors,
638
+
639
+ 27:20.400 --> 27:25.760
640
+ you just keep stacking those things on top that this basic programming in trying to survive
641
+
642
+ 27:25.760 --> 27:31.360
643
+ develops very interesting things that start to us humans to appear like intelligence and
644
+
645
+ 27:31.360 --> 27:39.200
646
+ consciousness. Yeah, so in in as far as robots go, I think that the self driving cars have that sort
647
+
648
+ 27:39.200 --> 27:49.520
649
+ of the greatest opportunity of developing something like that because when I drive myself, I don't
650
+
651
+ 27:49.520 --> 27:57.040
652
+ just pay attention to the rules of the road. I also look around and I get clues from that Oh,
653
+
654
+ 27:57.040 --> 28:04.800
655
+ this is a shopping district. Oh, here's an old lady crossing the street. Oh, here is someone
656
+
657
+ 28:04.800 --> 28:12.400
658
+ carrying a pile of mail. There's a mailbox. I bet you they're gonna cross the street to reach
659
+
660
+ 28:12.400 --> 28:18.640
661
+ that mailbox. And I slow down. And I don't even think about that. Yeah. And so there is there is
662
+
663
+ 28:18.640 --> 28:27.760
664
+ so much where you turn your observations into an understanding of what other consciousnesses
665
+
666
+ 28:28.480 --> 28:34.960
667
+ are going to do or what what other systems in the world are going to be Oh, that tree is going to
668
+
669
+ 28:34.960 --> 28:46.320
670
+ fall. Yeah, I see sort of I see much more of I expect somehow that if anything is going to
671
+
672
+ 28:46.320 --> 28:52.640
673
+ become conscious, it's going to be the self driving car and not the network of a bazillion
674
+
675
+ 28:54.080 --> 29:00.240
676
+ computers at in a Google or Amazon data center that are all networked together to
677
+
678
+ 29:02.080 --> 29:08.320
679
+ to do whatever they do. So in that sense, so you actually highlight because that's what I work in
680
+
681
+ 29:08.320 --> 29:14.480
682
+ is an autonomous vehicles, you highlight the big gap between what we currently can't do and
683
+
684
+ 29:14.480 --> 29:20.400
685
+ what we truly need to be able to do to solve the problem. Under that formulation, then consciousness
686
+
687
+ 29:20.400 --> 29:27.280
688
+ and intelligence is something that basically a system should have in order to interact with us
689
+
690
+ 29:27.280 --> 29:35.440
691
+ humans, as opposed to some kind of abstract notion of consciousness. Consciousness is
692
+
693
+ 29:35.440 --> 29:39.200
694
+ something that you need to have to be able to empathize to be able to
695
+
696
+ 29:39.200 --> 29:46.960
697
+ to fear, to understand what the fear of death is. All these aspects that are important for
698
+
699
+ 29:46.960 --> 29:54.080
700
+ interacting with pedestrians need to be able to do basic computation based on our human
701
+
702
+ 29:55.600 --> 30:02.080
703
+ desires and if you sort of Yeah, if you if you look at the dog, the dog clearly knows, I mean,
704
+
705
+ 30:02.080 --> 30:06.320
706
+ I'm not the dog owner, my brother, I have friends who have dogs, the dogs clearly know
707
+
708
+ 30:06.320 --> 30:11.440
709
+ what the humans around them are going to do or at least they have a model of what those humans
710
+
711
+ 30:11.440 --> 30:17.360
712
+ are going to do when they learn the dog some dogs know when you're going out and they want to go
713
+
714
+ 30:17.360 --> 30:23.920
715
+ out with you, they're sad when you leave them alone, they cry. They're afraid because they were
716
+
717
+ 30:24.480 --> 30:35.680
718
+ mistreated when they were younger. We don't assign sort of consciousness to dogs or at least
719
+
720
+ 30:35.680 --> 30:43.520
721
+ not not all that much but I also don't think they have none of that. So I think it's it's
722
+
723
+ 30:45.280 --> 30:48.960
724
+ consciousness and intelligence are not all or nothing.
725
+
726
+ 30:50.160 --> 30:54.320
727
+ The spectrum is really interesting. But in returning to
728
+
729
+ 30:56.000 --> 31:00.560
730
+ programming languages and the way we think about building these kinds of things about building
731
+
732
+ 31:00.560 --> 31:05.440
733
+ intelligence, building consciousness, building artificial beings. So I think one of the exciting
734
+
735
+ 31:05.440 --> 31:13.360
736
+ ideas came in the 17th century. And with Leibniz, Hobbes, Descartes, where there's this feeling that
737
+
738
+ 31:13.360 --> 31:23.120
739
+ you can convert all thought all reasoning, all the thing that we find very special in our brains,
740
+
741
+ 31:23.120 --> 31:28.800
742
+ you can convert all of that into logic. You can formalize it, former reasoning. And then once
743
+
744
+ 31:28.800 --> 31:33.280
745
+ you formalize everything, all of knowledge, then you can just calculate. And that's what
746
+
747
+ 31:33.280 --> 31:39.120
748
+ we're doing with our brains is we're calculating. So there's this whole idea that we that this is
749
+
750
+ 31:39.120 --> 31:45.920
751
+ possible that this but they weren't aware of the concept of pattern matching in the sense that we
752
+
753
+ 31:45.920 --> 31:53.840
754
+ are aware of it now. They sort of thought you they had discovered incredible bits of mathematics
755
+
756
+ 31:53.840 --> 32:04.720
757
+ like Newton's calculus. And their sort of idealism there, their sort of extension of what they could
758
+
759
+ 32:04.720 --> 32:16.480
760
+ do with logic and math sort of went along those lines. And they thought there's there's like,
761
+
762
+ 32:16.480 --> 32:23.920
763
+ yeah, logic, there's there's like a bunch of rules, and a bunch of input, they didn't realize that how
764
+
765
+ 32:23.920 --> 32:33.520
766
+ you recognize a face is not just a bunch of rules, but is a shit ton of data, plus a circuit that
767
+
768
+ 32:34.560 --> 32:42.800
769
+ that sort of interprets the visual clues and the context and everything else. And somehow
770
+
771
+ 32:42.800 --> 32:53.120
772
+ that can massively parallel pattern match against stored rules. I mean, if I see you tomorrow here
773
+
774
+ 32:53.120 --> 32:58.320
775
+ in front of the Dropbox office, I might recognize you even if I'm wearing a different shirt. Yeah,
776
+
777
+ 32:58.320 --> 33:04.240
778
+ but if I if I see you tomorrow in a coffee shop in Belmont, I might have no idea that it was you
779
+
780
+ 33:04.240 --> 33:11.920
781
+ or on the beach or whatever. I make those mistakes myself all the time. I see someone that I only
782
+
783
+ 33:11.920 --> 33:17.840
784
+ know as like, Oh, this person is a colleague of my wife's. And then I see them at the movies and
785
+
786
+ 33:18.640 --> 33:26.400
787
+ I don't recognize them. But do you see those you call it pattern matching? Do you see that rules is
788
+
789
+ 33:28.880 --> 33:34.880
790
+ unable to encode that to you? Everything you see all the piece of information you look around
791
+
792
+ 33:34.880 --> 33:39.520
793
+ this room, I'm wearing a black shirt, I have a certain height, I'm a human all these you can
794
+
795
+ 33:39.520 --> 33:45.440
796
+ there's probably tens of thousands of facts you pick up moment by moment about this scene,
797
+
798
+ 33:45.440 --> 33:49.760
799
+ you take them for granted and you accumulate aggregate them together to understand the scene.
800
+
801
+ 33:49.760 --> 33:53.760
802
+ You don't think all of that could be encoded to where, at the end of the day, you just put
803
+
804
+ 33:53.760 --> 34:02.160
805
+ it on the table and calculate. Oh, I don't know what that means. I mean, yes, in the sense that
806
+
807
+ 34:02.160 --> 34:10.880
808
+ there is no, there is no actual magic there, but there are enough layers of abstraction from sort
809
+
810
+ 34:10.880 --> 34:19.440
811
+ of from the facts as they enter my eyes and my ears to the understanding of the scene that I don't
812
+
813
+ 34:19.440 --> 34:31.200
814
+ think that that AI has really covered enough of that distance. It's like if you take a human body
815
+
816
+ 34:31.200 --> 34:40.960
817
+ and you realize it's built out of atoms, well, that that is a uselessly reductionist view, right?
818
+
819
+ 34:41.760 --> 34:46.640
820
+ The body is built out of organs, the organs are built out of cells, the cells are built out of
821
+
822
+ 34:46.640 --> 34:54.240
823
+ proteins, the proteins are built out of amino acids, the amino acids are built out of atoms,
824
+
825
+ 34:54.240 --> 35:00.800
826
+ and then you get to quantum mechanics. So that's a very pragmatic view. I mean, obviously as an
827
+
828
+ 35:00.800 --> 35:06.720
829
+ engineer, I agree with that kind of view, but I also you also have to consider the the with the
830
+
831
+ 35:06.720 --> 35:13.120
832
+ Sam Harris view of well, well, intelligence is just information processing. Do you just like
833
+
834
+ 35:13.120 --> 35:17.840
835
+ you said you take in sensory information, you do some stuff with it and you come up with actions
836
+
837
+ 35:17.840 --> 35:25.680
838
+ that are intelligent. That makes it sound so easy. I don't know who Sam Harris is. Oh, it's
839
+
840
+ 35:25.680 --> 35:30.240
841
+ philosopher. So like this is how philosophers often think, right? And essentially, that's what
842
+
843
+ 35:30.240 --> 35:37.040
844
+ Descartes was is, wait a minute, if there is, like you said, no magic. So you basically says it
845
+
846
+ 35:37.040 --> 35:43.280
847
+ doesn't appear like there's any magic, but we know so little about it that it might as well be magic.
848
+
849
+ 35:43.280 --> 35:48.240
850
+ So just because we know that we're made of atoms, just because we know we're made of organs,
851
+
852
+ 35:48.240 --> 35:54.320
853
+ the fact that we know very little how to get from the atoms to organs in a way that's recreatable
854
+
855
+ 35:54.320 --> 36:01.280
856
+ means it that you shouldn't get too excited just yet about the fact that you figured out that we're
857
+
858
+ 36:01.280 --> 36:09.600
859
+ made of atoms. Right. And and and the same about taking facts as our our sensory organs take them
860
+
861
+ 36:09.600 --> 36:19.440
862
+ in and turning that into reasons and actions that sort of there are a lot of abstractions that we
863
+
864
+ 36:19.440 --> 36:30.000
865
+ haven't quite figured out how to how to deal with those. I mean, I sometimes I don't know if I can
866
+
867
+ 36:30.000 --> 36:38.880
868
+ go on a tangent or not. Please. Drag you back in. Sure. So if I take a simple program that parses,
869
+
870
+ 36:40.880 --> 36:47.760
871
+ say I have a compiler, it parses a program. In a sense, the input routine of that compiler
872
+
873
+ 36:48.320 --> 36:57.120
874
+ of that parser is a sense, a sensing organ. And it builds up a mighty complicated internal
875
+
876
+ 36:57.120 --> 37:03.920
877
+ representation of the program it just saw it doesn't just have a linear sequence of bytes
878
+
879
+ 37:03.920 --> 37:10.640
880
+ representing the text of the program anymore, it has an abstract syntax tree. And I don't know how
881
+
882
+ 37:10.640 --> 37:18.800
883
+ many of your viewers or listeners are familiar with compiler technology, but there is fewer and
884
+
885
+ 37:18.800 --> 37:26.720
886
+ fewer these days, right? That's also true, probably. People want to take a shortcut, but there's sort
887
+
888
+ 37:26.720 --> 37:35.200
889
+ of this abstraction is a data structure that the compiler then uses to produce outputs that is
890
+
891
+ 37:35.200 --> 37:41.280
892
+ relevant like a translation of that program to machine code that can be executed by by hardware.
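As a small illustration of the parser-as-sensing-organ point (my example, not one from the conversation), Python's own standard `ast` module shows the jump from a flat string of source text to a structured tree that later stages consume and then throw away:

```python
import ast

source = "def area(r):\n    return 3.14159 * r * r\n"

tree = ast.parse(source)           # the "input routine": text in, abstract syntax tree out
print(ast.dump(tree, indent=2))    # the structured representation (indent= needs Python 3.9+)

code = compile(tree, "<example>", "exec")   # a later stage consumes the tree...
print(code)                                 # ...after which the tree itself is no longer needed
```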
893
+
894
+ 37:45.360 --> 37:53.360
895
+ And then that data structure gets thrown away. When a fish or a fly sees
896
+
897
+ 37:53.360 --> 38:03.920
898
+ these sort of gets visual impulses. I'm sure it also builds up some data structure and for
899
+
900
+ 38:03.920 --> 38:11.920
901
+ the fly that may be very minimal, a fly may may have only a few. I mean, in the case of a fly's
902
+
903
+ 38:11.920 --> 38:20.720
904
+ brain, I could imagine that there are few enough layers of abstraction that it's not much more
905
+
906
+ 38:20.720 --> 38:28.000
907
+ than when it's darker here than it is here. Well, it can sense motion, because a fly sort of responds
908
+
909
+ 38:28.000 --> 38:35.440
910
+ when you move your arm towards it. So clearly, it's visual processing is intelligent, or well,
911
+
912
+ 38:35.440 --> 38:43.600
913
+ not intelligent, but is has an abstraction for motion. And we still have similar things in in
914
+
915
+ 38:43.600 --> 38:48.880
916
+ but much more complicated in our brains. I mean, otherwise, you couldn't drive a car if you,
917
+
918
+ 38:48.880 --> 38:52.880
919
+ you couldn't sort if you didn't have an incredibly good abstraction for motion.
920
+
921
+ 38:54.560 --> 39:00.160
922
+ Yeah, in some sense, the same abstraction for motion is probably one of the primary sources of
923
+
924
+ 39:00.160 --> 39:06.160
925
+ our of information for us, we just know what to do. I think we know what to do with that.
926
+
927
+ 39:06.160 --> 39:11.200
928
+ We've built up other abstractions on top. We build much more complicated data structures
929
+
930
+ 39:11.200 --> 39:17.280
931
+ based on that. And we build more persistent data structures, sort of after some processing,
932
+
933
+ 39:17.280 --> 39:24.240
934
+ some information sort of gets stored in our memory, pretty much permanently, and is available on
935
+
936
+ 39:24.240 --> 39:31.680
937
+ recall. I mean, there are some things that you sort of, you're conscious that you're remembering it,
938
+
939
+ 39:31.680 --> 39:37.840
940
+ like you give me your phone number, I, well, at my age, I have to write it down, but I could
941
+
942
+ 39:37.840 --> 39:44.480
943
+ imagine I could remember those seven numbers or 10, 10 digits, and reproduce them in a while.
944
+
945
+ 39:44.480 --> 39:53.120
946
+ If I sort of repeat them to myself a few times. So that's a fairly conscious form of memorization.
947
+
948
+ 39:53.120 --> 40:00.880
949
+ On the other hand, how do I recognize your face? I have no idea. My brain has a whole bunch of
950
+
951
+ 40:00.880 --> 40:07.120
952
+ specialized hardware that knows how to recognize faces. I don't know how much of that is sort of
953
+
954
+ 40:07.120 --> 40:15.200
955
+ coded in our DNA and how much of that is trained over and over between the ages of zero and three.
956
+
957
+ 40:16.240 --> 40:23.120
958
+ But somehow our brains know how to do lots of things like that that are useful in our interactions
959
+
960
+ 40:23.120 --> 40:30.080
961
+ with other humans without really being conscious of how it's done anymore.
962
+
963
+ 40:30.080 --> 40:35.920
964
+ Right. So our actual day to day lives, we're operating at the very highest level of abstraction.
965
+
966
+ 40:35.920 --> 40:40.720
967
+ We're just not even conscious of all the little details underlying it. There's compilers on top
968
+
969
+ 40:40.720 --> 40:45.040
970
+ of, it's like turtles on top of turtles or turtles all the way down. It's compilers all the way down.
971
+
972
+ 40:46.160 --> 40:52.800
973
+ But that's essentially, you say that there's no magic. That's what I, what I was trying to get at,
974
+
975
+ 40:52.800 --> 40:58.640
976
+ I think, is with Descartes started this whole train of saying that there's no magic. I mean,
977
+
978
+ 40:58.640 --> 41:03.280
979
+ there's others beforehand. Well, didn't Descartes also have the notion, though, that the soul and
980
+
981
+ 41:03.280 --> 41:09.520
982
+ the body were fundamentally separate? Yeah, I think he had to write in God in there for
983
+
984
+ 41:10.320 --> 41:16.480
985
+ political reasons. So I don't actually, I'm not historian, but there's notions in there that
986
+
987
+ 41:16.480 --> 41:22.720
988
+ all of reasoning, all of human thought can be formalized. I think that continued in the 20th
989
+
990
+ 41:22.720 --> 41:30.880
991
+ century with Russell and with Gödel's incompleteness theorem, this debate of what are
992
+
993
+ 41:30.880 --> 41:35.280
994
+ the limits of the things that could be formalized? That's where the Turing machine came along.
995
+
996
+ 41:35.280 --> 41:40.800
997
+ And this exciting idea, I mean, underlying a lot of computing, that you can do quite a lot
998
+
999
+ 41:40.800 --> 41:46.320
1000
+ with a computer. You can, you can encode a lot of the stuff we're talking about in terms of
1001
+
1002
+ 41:46.320 --> 41:52.400
1003
+ recognizing faces and so on, theoretically, in an algorithm that can then run on the computer.
1004
+
1005
+ 41:52.400 --> 42:01.120
1006
+ And in that context, I'd like to ask programming in a philosophical way.
1007
+
1008
+ 42:02.960 --> 42:08.160
1009
+ What, so what does it mean to program a computer? So you said you write a Python program
1010
+
1011
+ 42:08.880 --> 42:16.800
1012
+ or compile a C++ program that compiles to assembly code. It's forming layers.
1013
+
1014
+ 42:16.800 --> 42:22.400
1015
+ You're, you're, you're programming in a layer of abstraction that's higher. How do you see programming
1016
+
1017
+ 42:22.960 --> 42:27.760
1018
+ in that context? Can it keep getting higher and higher levels of abstraction?
1019
+
1020
+ 42:29.680 --> 42:35.120
1021
+ I think at some, at some point, the higher level of levels of abstraction will not be called
1022
+
1023
+ 42:35.120 --> 42:44.800
1024
+ programming and they will not resemble what we, we call programming at the moment. There will
1025
+
1026
+ 42:44.800 --> 42:53.600
1027
+ not be source code. I mean, there will still be source code sort of at a lower level of the machine,
1028
+
1029
+ 42:53.600 --> 43:04.480
1030
+ just like there's still molecules and electrons and sort of proteins in our brains. But, and so
1031
+
1032
+ 43:04.480 --> 43:11.520
1033
+ there's still programming and system administration and who knows what keeping to keep the machine
1034
+
1035
+ 43:11.520 --> 43:17.600
1036
+ running. But what the machine does is, is a different level of abstraction in a sense. And
1037
+
1038
+ 43:18.160 --> 43:25.120
1039
+ as far as I understand the way that for last decade or more people have made progress with
1040
+
1041
+ 43:25.120 --> 43:32.000
1042
+ things like facial recognition or the self driving cars is all by endless, endless amounts of
1043
+
1044
+ 43:32.000 --> 43:42.240
1045
+ training data where at least as, as, as a layperson and I feel myself totally as a layperson in that
1046
+
1047
+ 43:42.240 --> 43:52.240
1048
+ field, it looks like the researchers who publish the results don't necessarily know exactly how,
1049
+
1050
+ 43:52.240 --> 44:02.480
1051
+ how their algorithms work. And I often get upset when I sort of read a sort of a fluff piece about
1052
+
1053
+ 44:02.480 --> 44:10.000
1054
+ Facebook in the newspaper or social networks and they say, well, algorithms. And that's like a totally
1055
+
1056
+ 44:10.000 --> 44:18.640
1057
+ different interpretation of the word algorithm. Because for me, the way I was trained or what I
1058
+
1059
+ 44:18.640 --> 44:25.200
1060
+ learned when I was eight or 10 years old, an algorithm is a set of rules that you completely
1061
+
1062
+ 44:25.200 --> 44:31.600
1063
+ understand that can be mathematically analyzed. And, and, and you can prove things, you can like
1064
+
1065
+ 44:31.600 --> 44:37.920
1066
+ prove that the sieve of Eratosthenes produces all prime numbers and only prime numbers.
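For contrast with the learned systems discussed next, the sieve of Eratosthenes is exactly the kind of classical algorithm whose behavior can be argued line by line. A minimal Python version (my sketch, not quoted from the episode):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: every prime <= n, and nothing else."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]                  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:                             # p is prime; cross out its multiples
            sieve[p * p::p] = [False] * len(range(p * p, n + 1, p))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```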
1067
+
1068
+ 44:39.120 --> 44:45.360
1069
+ Yeah. So the, I don't know if you know who Andrej Karpathy is. I'm afraid not. So he's a
1070
+
1071
+ 44:45.360 --> 44:52.880
1072
+ head of AI at Tesla now, but he was at Stanford before, and he has this cheeky way of calling
1073
+
1074
+ 44:53.520 --> 45:01.200
1075
+ this concept software 2.0. So let me disentangle that for a second. So kind of what you're
1076
+
1077
+ 45:01.200 --> 45:06.560
1078
+ referring to is the traditional, traditional, the algorithm, the concept of an algorithm,
1079
+
1080
+ 45:06.560 --> 45:10.320
1081
+ something that's there, it's clear, you can read it, you understand it, you can prove it's
1082
+
1083
+ 45:10.320 --> 45:19.280
1084
+ functioning as kind of software 1.0. And what software 2.0 is, is exactly what you describe,
1085
+
1086
+ 45:19.280 --> 45:24.560
1087
+ which is you have neural networks, which is a type of machine learning that you feed a bunch
1088
+
1089
+ 45:24.560 --> 45:31.600
1090
+ of data, and that neural network learns to do a function. All you specify is the inputs and
1091
+
1092
+ 45:31.600 --> 45:38.560
1093
+ the outputs you want, and you can't look inside. You can't analyze it. All you can do is train
1094
+
1095
+ 45:38.560 --> 45:43.840
1096
+ this function to map the inputs, the outputs by giving a lot of data. In that sense, programming
1097
+
1098
+ 45:43.840 --> 45:49.360
1099
+ becomes getting a lot of data, cleaning a lot of data. That's what programming is in this.
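A toy, dependency-free sketch of that "specify examples, not rules" style (my illustration; real software 2.0 systems use neural networks and far more data): the "program" here is just two numbers found by gradient descent from input/output pairs.

```python
# The "specification" is nothing but input/output examples (here of y = 2x + 1).
examples = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0            # the learned "program": just two parameters
lr = 0.01                  # learning rate
for _ in range(2000):      # plain stochastic gradient descent on squared error
    for x, y in examples:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(f"learned w={w:.3f}, b={b:.3f}")   # should land close to w=2, b=1
```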
1100
+
1101
+ 45:49.360 --> 45:52.880
1102
+ Well, that would be programming 2.0. 2.0 to programming 2.0.
1103
+
1104
+ 45:53.680 --> 45:57.680
1105
+ I wouldn't call that programming. It's just a different activity, just like
1106
+
1107
+ 45:58.400 --> 46:01.600
1108
+ building organs out of cells is not called chemistry.
1109
+
1110
+ 46:01.600 --> 46:10.560
1111
+ Well, so let's just step back and think sort of more generally, of course, but it's like
1112
+
1113
+ 46:12.560 --> 46:20.000
1114
+ as a parent teaching your kids, things can be called programming. In that same sense,
1115
+
1116
+ 46:20.000 --> 46:26.320
1117
+ that's how programming is being used. You're providing them data, examples, use cases.
1118
+
1119
+ 46:26.320 --> 46:37.200
1120
+ So imagine writing a function not with for loops and clearly readable text, but more saying,
1121
+
1122
+ 46:37.760 --> 46:43.840
1123
+ well, here's a lot of examples of what this function should take, and here's a lot of
1124
+
1125
+ 46:43.840 --> 46:48.880
1126
+ examples of when it takes those functions, it should do this, and then figure out the rest.
1127
+
1128
+ 46:48.880 --> 46:57.600
1129
+ So that's the 2.0 concept. And the question I have for you is like, it's a very fuzzy way.
1130
+
1131
+ 46:58.320 --> 47:02.560
1132
+ This is the reality of a lot of these pattern recognition systems and so on. It's a fuzzy way
1133
+
1134
+ 47:02.560 --> 47:09.520
1135
+ of quote unquote programming. What do you think about this kind of world? Should it be called
1136
+
1137
+ 47:09.520 --> 47:17.840
1138
+ something totally different than programming? If you're a software engineer, does that mean
1139
+
1140
+ 47:17.840 --> 47:24.800
1141
+ you're designing systems that are very can be systematically tested, evaluated, they have a
1142
+
1143
+ 47:24.800 --> 47:31.440
1144
+ very specific specification, and then this other fuzzy software 2.0 world machine learning world,
1145
+
1146
+ 47:31.440 --> 47:35.920
1147
+ that's that's something else totally? Or is there some intermixing that's possible?
1148
+
1149
+ 47:35.920 --> 47:47.200
1150
+ Well, the question is probably only being asked because we we don't quite know what
1151
+
1152
+ 47:47.200 --> 47:57.920
1153
+ that software 2.0 actually is. And it sort of I think there is a truism that every task that
1154
+
1155
+ 47:58.960 --> 48:05.440
1156
+ AI has has tackled in the past. At some point, we realized how it was done. And then it was no
1157
+
1158
+ 48:05.440 --> 48:14.160
1159
+ longer considered part of artificial intelligence because it was no longer necessary to to use
1160
+
1161
+ 48:14.160 --> 48:25.120
1162
+ that term. It was just, oh, now we know how to do this. And a new field of science or engineering
1163
+
1164
+ 48:25.120 --> 48:36.320
1165
+ has been developed. And I don't know if sort of every form of learning or sort of controlling
1166
+
1167
+ 48:36.320 --> 48:41.920
1168
+ computer systems should always be called programming. So I don't know, maybe I'm focused too much on
1169
+
1170
+ 48:41.920 --> 48:53.120
1171
+ the terminology. I but I expect that that there just will be different concepts where people with
1172
+
1173
+ 48:53.120 --> 49:05.600
1174
+ sort of different education and a different model of what they're trying to do will will develop those
1175
+
1176
+ 49:05.600 --> 49:14.800
1177
+ concepts. Yeah, and I guess, if you could comment on another way to put this concept is, I think,
1178
+
1179
+ 49:16.480 --> 49:22.720
1180
+ I think the kind of functions that neural networks provide is things as opposed to being able to
1181
+
1182
+ 49:22.720 --> 49:29.760
1183
+ upfront prove that this should work for all cases you throw at it. All you're able, it's the worst
1184
+
1185
+ 49:29.760 --> 49:36.400
1186
+ case analysis versus average case analysis, all you're able to say is it seems on everything
1187
+
1188
+ 49:36.400 --> 49:43.840
1189
+ we've tested to work 99.9% of the time, but we can't guarantee it and it fails in unexpected ways.
1190
+
1191
+ 49:43.840 --> 49:48.400
1192
+ We can even give you examples of how it fails in unexpected ways. But it's like really good
1193
+
1194
+ 49:48.400 --> 49:55.200
1195
+ most of the time. Yeah, but there's no room for that in current ways we think about programming.
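A hypothetical snippet contrasting the two kinds of claims being discussed: an exhaustive check that really is a guarantee, but only over a tiny enumerable domain, versus a sampled estimate of the "99.9% of the time" sort. Both functions are invented stand-ins, not real components.

```python
import random

def reference(x):
    """Ground truth the component is supposed to reproduce."""
    return x * x

def component(x):
    """Invented stand-in for a learned/approximate function with a rare bug."""
    return x * x if x % 1000 else x * x + 1

# Worst-case style: an exhaustive check is a real guarantee, but only on a
# domain small enough to enumerate.
assert all(component(x) == reference(x) for x in range(1, 1000))

# Average-case style: a sampled estimate -- "it worked on 99.9% of what we
# tried" -- with no guarantee about the inputs we never saw.
trials = [random.randrange(1, 10**6) for _ in range(100_000)]
agree = sum(component(x) == reference(x) for x in trials) / len(trials)
print(f"observed agreement: {agree:.3%}")
```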
1196
+
1197
+ 50:00.160 --> 50:03.600
1198
+ Programming 1.0 is actually sort of
1199
+
1200
+ 50:06.160 --> 50:14.080
1201
+ getting to that point to where the sort of the ideal of a bug free program
1202
+
1203
+ 50:14.080 --> 50:25.440
1204
+ has been abandoned long ago by most software developers. We only care about bugs that manifest
1205
+
1206
+ 50:25.440 --> 50:33.840
1207
+ themselves often enough to be annoying. And we're willing to take the occasional crash or
1208
+
1209
+ 50:33.840 --> 50:45.760
1210
+ outage or incorrect result for granted, because we can't possibly we don't have enough programmers
1211
+
1212
+ 50:45.760 --> 50:50.880
1213
+ to make all the code bug free and it would be an incredibly tedious business. And if you try to
1214
+
1215
+ 50:50.880 --> 50:59.120
1216
+ throw formal methods at it, it gets it becomes even more tedious. So every once in a while,
1217
+
1218
+ 50:59.120 --> 51:07.200
1219
+ the user clicks on a link in and somehow they get an error. And the average user doesn't panic,
1220
+
1221
+ 51:07.200 --> 51:15.360
1222
+ they just click again and see if it works better the second time, which often magically it does.
1223
+
1224
+ 51:16.320 --> 51:24.240
1225
+ Or they go up and they try some other way of performing their tasks. So that's sort of an
1226
+
1227
+ 51:24.240 --> 51:33.440
1228
+ end to end recovery mechanism and inside systems, there is all sorts of retries and timeouts and
1229
+
1230
+ 51:34.720 --> 51:41.520
1231
+ fallbacks. And I imagine that that sort of biological systems are even more full of that
1232
+
1233
+ 51:41.520 --> 51:50.320
1234
+ because otherwise they wouldn't survive. Do you think programming should be taught and thought of
1235
+
1236
+ 51:50.320 --> 51:59.440
1237
+ as exactly what you just said before I come from is kind of you're you're always denying that fact
1238
+
1239
+ 51:59.440 --> 52:12.240
1240
+ always in in sort of basic programming education, the sort of the programs you're, you're having
1241
+
1242
+ 52:12.240 --> 52:22.240
1243
+ students write are so small and simple that if there is a bug, you can always find it and fix it.
1244
+
1245
+ 52:22.960 --> 52:29.520
1246
+ Because the sort of programming as it's being taught in some even elementary middle schools
1247
+
1248
+ 52:29.520 --> 52:36.880
1249
+ in high school, introduction to programming classes in college, typically, it's programming in the
1250
+
1251
+ 52:36.880 --> 52:45.280
1252
+ small. Very few classes sort of actually teach software engineering building large systems. I
1253
+
1254
+ 52:45.280 --> 52:52.480
1255
+ mean, every summer here at Dropbox, we have a large number of interns, every tech company
1256
+
1257
+ 52:53.440 --> 53:00.880
1258
+ on the West Coast has the same thing. These interns are always amazed because this is the
1259
+
1260
+ 53:00.880 --> 53:09.120
1261
+ first time in their life that they see what goes on in a really large software development environment.
1262
+
1263
+ 53:10.480 --> 53:20.320
1264
+ And everything they've learned in college was almost always about a much smaller scale and
1265
+
1266
+ 53:20.320 --> 53:26.880
1267
+ somehow that difference in scale makes a qualitative difference in how you how you
1268
+
1269
+ 53:26.880 --> 53:33.120
1270
+ do things and how you think about it. If you then take a few steps back into decades,
1271
+
1272
+ 53:34.000 --> 53:39.040
1273
+ 70s and 80s, when you're first thinking about Python or just that world of programming languages,
1274
+
1275
+ 53:39.840 --> 53:45.680
1276
+ did you ever think that there would be systems as large as underlying Google, Facebook and Dropbox?
1277
+
1278
+ 53:46.480 --> 53:54.000
1279
+ Did you when you were thinking about Python? I was actually always caught by surprise by
1280
+
1281
+ 53:54.000 --> 53:58.240
1282
+ every sort of this. Yeah, pretty much every stage of computing.
1283
+
1284
+ 53:59.440 --> 54:06.800
1285
+ So maybe just because you spoke in other interviews, but I think the evolution of
1286
+
1287
+ 54:06.800 --> 54:11.920
1288
+ programming languages are fascinating. And it's especially because it leads from my
1289
+
1290
+ 54:11.920 --> 54:17.440
1291
+ perspective towards greater and greater degrees of intelligence. I learned the first programming
1292
+
1293
+ 54:17.440 --> 54:26.720
1294
+ language I played with in Russia was, with the turtle, Logo. Yeah. And if you look, I just
1295
+
1296
+ 54:26.720 --> 54:31.520
1297
+ have a list of programming languages, all of which I've played with a little bit. And they're all
1298
+
1299
+ 54:31.520 --> 54:38.320
1300
+ beautiful in different ways from Fortran, COBOL, Lisp, Algol 60, BASIC, Logo again, C
1301
+
1302
+ 54:38.320 --> 54:48.240
1303
+ as a few object oriented languages came along in the 60s, Simula, Pascal, Smalltalk, all of that leads
1304
+
1305
+ 54:48.240 --> 54:55.360
1306
+ all the classics, the classics. Yeah, the classic hits, right? Scheme built that's built on top of
1307
+
1308
+ 54:55.360 --> 55:03.680
1309
+ Lisp, on the database side SQL, C plus plus, and all that leads up to Python, Perl too,
1310
+
1311
+ 55:03.680 --> 55:10.720
1312
+ and all that's before Python, MATLAB, these kind of different communities, different languages.
1313
+
1314
+ 55:10.720 --> 55:17.040
1315
+ So can you talk about that world? I know that sort of Python came out of ABC, which actually
1316
+
1317
+ 55:17.040 --> 55:22.720
1318
+ never knew that language. I just having researched this conversation went back to ABC and it looks
1319
+
1320
+ 55:22.720 --> 55:29.680
1321
+ remarkably, it has a lot of annoying qualities. But underneath those like all caps and so on.
1322
+
1323
+ 55:29.680 --> 55:34.880
1324
+ But underneath that, there's elements of Python that are quite they're already there.
1325
+
1326
+ 55:35.440 --> 55:39.280
1327
+ That's where I got all the good stuff, all the good stuff. So but in that world,
1328
+
1329
+ 55:39.280 --> 55:43.680
1330
+ you're swimming in these programming languages, were you focused on just the good stuff in your
1331
+
1332
+ 55:43.680 --> 55:50.240
1333
+ specific circle? Or did you have a sense of what, what is everyone chasing? You said that every
1334
+
1335
+ 55:50.240 --> 55:59.520
1336
+ programming language is built to scratch an itch. Were you aware of all the itches in the community
1337
+
1338
+ 55:59.520 --> 56:04.880
1339
+ and if not, or if yes, I mean, what itch we try to scratch with Python?
1340
+
1341
+ 56:05.600 --> 56:12.240
1342
+ Well, I'm glad I wasn't aware of all the itches because I would probably not have been able to
1343
+
1344
+ 56:12.880 --> 56:17.040
1345
+ do anything. I mean, if you're trying to solve every problem at once,
1346
+
1347
+ 56:18.000 --> 56:27.760
1348
+ you saw nothing. Well, yeah, it's, it's too overwhelming. And so I had a very, very focused
1349
+
1350
+ 56:27.760 --> 56:36.480
1351
+ problem. I wanted a programming language that set somewhere in between shell scripting and C.
1352
+
1353
+ 56:38.480 --> 56:48.480
1354
+ And now, arguably, there is like, one is higher level, one is lower level. And
1355
+
1356
+ 56:49.680 --> 56:56.800
1357
+ Python is sort of a language of an intermediate level, although it's still pretty much at the
1358
+
1359
+ 56:56.800 --> 57:10.160
1360
+ high level. And I was I was thinking about much more about I want a tool that I can use to be
1361
+
1362
+ 57:10.160 --> 57:19.040
1363
+ more productive as a programmer in a very specific environment. And I also had given myself a time
1364
+
1365
+ 57:19.040 --> 57:27.600
1366
+ budget for the development of the tool. And that was sort of about three months for both the design
1367
+
1368
+ 57:27.600 --> 57:31.920
1369
+ like thinking through what are all the features of the language syntactically.
1370
+
1371
+ 57:33.760 --> 57:42.080
1372
+ And semantically, and how do I implement the whole pipeline from parsing the source code to
1373
+
1374
+ 57:42.080 --> 57:51.200
1375
+ executing it. So I think both with the timeline and the goals, it seems like productivity was
1376
+
1377
+ 57:51.200 --> 57:59.520
1378
+ at the core of it as a goal. So, like for me, in the 90s, and the first decade of the 21st
1379
+
1380
+ 57:59.520 --> 58:06.400
1381
+ century, I was always doing machine learning AI, programming for my research was always in C++.
1382
+
1383
+ 58:06.400 --> 58:12.160
1384
+ Wow. And then the other people who are a little more mechanical engineering,
1385
+
1386
+ 58:12.160 --> 58:18.640
1387
+ electrical engineering, are Matlabby. They're a little bit more Matlab focused. Those are the
1388
+
1389
+ 58:18.640 --> 58:26.480
1390
+ world and maybe a little bit Java too, but people who are more interested in emphasizing
1391
+
1392
+ 58:26.480 --> 58:33.440
1393
+ the object oriented nature of things. So within the last 10 years or so, especially with the
1394
+
1395
+ 58:33.440 --> 58:38.400
1396
+ oncoming of neural networks and these packages that are built on Python to interface with
1397
+
1398
+ 58:39.200 --> 58:45.760
1399
+ neural networks, I switched to Python. And it's just, I've noticed a significant boost that I
1400
+
1401
+ 58:45.760 --> 58:50.400
1402
+ can't exactly, because I don't think about it, but I can't exactly put into words why I'm just
1403
+
1404
+ 58:50.400 --> 58:57.520
1405
+ much, much more productive, just being able to get the job done much, much faster. So how do you
1406
+
1407
+ 58:57.520 --> 59:02.640
1408
+ think whatever that qualitative difference is, I don't know if it's quantitative, it could be just
1409
+
1410
+ 59:02.640 --> 59:08.000
1411
+ a feeling. I don't know if I'm actually more productive, but how do you think about? You probably
1412
+
1413
+ 59:08.000 --> 59:14.480
1414
+ are. Yeah, well, that's right. I think there's elements. Let me just speak to one aspect that
1415
+
1416
+ 59:14.480 --> 59:23.920
1417
+ I think that was affecting our productivity is C++ was, I really enjoyed creating performant code
1418
+
1419
+ 59:24.800 --> 59:29.840
1420
+ and creating a beautiful structure where everything that, you know, this kind of going
1421
+
1422
+ 59:29.840 --> 59:34.560
1423
+ into this, especially with the newer and newer standards of templated programming of just really
1424
+
1425
+ 59:34.560 --> 59:41.280
1426
+ creating this beautiful, formal structure that I found myself spending most of my time doing that
1427
+
1428
+ 59:41.280 --> 59:46.160
1429
+ as opposed to getting it parsing a file and extracting a few keywords or whatever the task
1430
+
1431
+ 59:46.160 --> 59:51.680
1432
+ goes trying to do. So what is it about Python? How do you think of productivity in general as
1433
+
1434
+ 59:51.680 --> 59:57.440
1435
+ you were designing it now? So through the decades, last three decades, what do you think it means
1436
+
1437
+ 59:57.440 --> 1:00:01.920
1438
+ to be a productive programmer? And how did you try to design it into the language?
1439
+
1440
+ 1:00:03.200 --> 1:00:10.240
1441
+ There are different tasks. And as a programmer, it's, it's useful to have different tools available
1442
+
1443
+ 1:00:10.240 --> 1:00:17.680
1444
+ that sort of are suitable for different tasks. So I still write C code. I still write shell code.
1445
+
1446
+ 1:00:18.720 --> 1:00:22.080
1447
+ But I write most of my, my things in Python.
1448
+
1449
+ 1:00:22.080 --> 1:00:30.960
1450
+ Why do I still use those other languages? Because sometimes the task just demands it.
1451
+
1452
+ 1:00:32.400 --> 1:00:38.880
1453
+ And, well, I would say most of the time, the task actually demands a certain language because
1454
+
1455
+ 1:00:38.880 --> 1:00:44.640
1456
+ the task is not write a program that solves problem x from scratch, but it's more like
1457
+
1458
+ 1:00:44.640 --> 1:00:52.400
1459
+ fix a bug in existing program x or add a small feature to an existing large program.
1460
+
1461
+ 1:00:56.320 --> 1:01:04.560
1462
+ But even if, if you sort of, if you're not constrained in your choice of language
1463
+
1464
+ 1:01:04.560 --> 1:01:15.360
1465
+ by context like that, there is still the fact that if you write it in a certain language, then you
1466
+
1467
+ 1:01:15.360 --> 1:01:26.080
1468
+ sort of, you, you have this balance between how long does it time? Does it take you to write the
1469
+
1470
+ 1:01:26.080 --> 1:01:38.560
1471
+ code? And how long does the code run? And when you're in sort of, in the phase of exploring
1472
+
1473
+ 1:01:38.560 --> 1:01:45.760
1474
+ solutions, you often spend much more time writing the code than running it, because every time
1475
+
1476
+ 1:01:46.560 --> 1:01:52.880
1477
+ you've sort of, you've run it, you see that the output is not quite what you wanted. And
1478
+
1479
+ 1:01:52.880 --> 1:02:05.520
1480
+ you spend some more time coding. And a language like Python just makes that iteration much faster,
1481
+
1482
+ 1:02:05.520 --> 1:02:13.600
1483
+ because there are fewer details. There is a large library, sort of there are fewer details that,
1484
+
1485
+ 1:02:13.600 --> 1:02:20.480
1486
+ that you have to get right before your program compiles and runs. There are libraries that
1487
+
1488
+ 1:02:20.480 --> 1:02:27.040
1489
+ do all sorts of stuff for you. So you can sort of very quickly take a bunch of
1490
+
1491
+ 1:02:28.000 --> 1:02:36.560
1492
+ existing components, put them together and get your prototype application running just like
1493
+
1494
+ 1:02:37.120 --> 1:02:44.640
1495
+ when I was building electronics, I was using a breadboard most of the time. So I had this like
1496
+
1497
+ 1:02:44.640 --> 1:02:52.240
1498
+ sprawled out circuit that if you shook it, it would stop working because it was not put together
1499
+
1500
+ 1:02:52.800 --> 1:03:00.160
1501
+ very well. But it functioned and all I wanted was to see that it worked and then move on to the next
1502
+
1503
+ 1:03:01.040 --> 1:03:07.280
1504
+ next schematic or design or add something to it. Once you've sort of figured out, oh, this is the
1505
+
1506
+ 1:03:07.280 --> 1:03:13.920
1507
+ perfect design for my radio or light sensor or whatever, then you can say, okay, how do we
1508
+
1509
+ 1:03:13.920 --> 1:03:20.560
1510
+ design a PCB for this? How do we solder the components in a small space? How do we make it
1511
+
1512
+ 1:03:20.560 --> 1:03:32.800
1513
+ so that it is robust against, say, voltage fluctuations or mechanical disruption? I mean,
1514
+
1515
+ 1:03:32.800 --> 1:03:37.280
1516
+ I know nothing about that when it comes to designing electronics, but I know a lot about
1517
+
1518
+ 1:03:37.280 --> 1:03:45.200
1519
+ that when it comes to writing code. So the initial steps are efficient, fast, and there's not much
1520
+
1521
+ 1:03:45.200 --> 1:03:54.080
1522
+ stuff that gets in the way. But you're kind of describing from like Darwin described the evolution
1523
+
1524
+ 1:03:54.080 --> 1:04:01.920
1525
+ of species, right? You're observing of what is true about Python. Now, if you take a step back,
1526
+
1527
+ 1:04:01.920 --> 1:04:09.680
1528
+ if the act of creating languages is art, and you had three months to do it, initial steps,
1529
+
1530
+ 1:04:12.480 --> 1:04:17.040
1531
+ so you just specified a bunch of goals, sort of things that you observe about Python. Perhaps
1532
+
1533
+ 1:04:17.040 --> 1:04:23.440
1534
+ you had those goals, but how do you create the rules, the syntactic structure, the features
1535
+
1536
+ 1:04:23.440 --> 1:04:29.200
1537
+ that result in those? So I have, in the beginning, and I have follow up questions about through the
1538
+
1539
+ 1:04:29.200 --> 1:04:35.440
1540
+ evolution of Python, too. But in the very beginning, when you're sitting there, creating the lexical
1541
+
1542
+ 1:04:35.440 --> 1:04:45.680
1543
+ analyzer or whatever. Evolution was still a big part of it, because I sort of said to myself,
1544
+
1545
+ 1:04:46.480 --> 1:04:52.800
1546
+ I don't want to have to design everything from scratch. I'm going to borrow features from
1547
+
1548
+ 1:04:52.800 --> 1:04:57.440
1549
+ other languages that I like. Oh, interesting. So you basically, exactly, you first observe what
1550
+
1551
+ 1:04:57.440 --> 1:05:04.800
1552
+ you like. Yeah. And so that's why if you're 17 years old, and you want to sort of create a programming
1553
+
1554
+ 1:05:04.800 --> 1:05:11.840
1555
+ language, you're not going to be very successful at it. Because you have no experience with other
1556
+
1557
+ 1:05:11.840 --> 1:05:25.280
1558
+ languages. Whereas I was in my, let's say mid 30s. I had written parsers before. So I had worked on
1559
+
1560
+ 1:05:25.280 --> 1:05:32.000
1561
+ the implementation of ABC, I had spent years debating the design of ABC with its authors,
1562
+
1563
+ 1:05:32.000 --> 1:05:36.560
1564
+ it's with its designers, I had nothing to do with the design, it was designed
1565
+
1566
+ 1:05:37.600 --> 1:05:42.400
1567
+ fully as it ended up being implemented when I joined the team. But so
1568
+
1569
+ 1:05:44.480 --> 1:05:52.000
1570
+ you borrow ideas and concepts and very concrete sort of local rules from different languages,
1571
+
1572
+ 1:05:52.000 --> 1:06:00.240
1573
+ like the indentation and certain other syntactic features from ABC. But I chose to borrow string
1574
+
1575
+ 1:06:00.240 --> 1:06:10.400
1576
+ literals and how numbers work from C and various other things. So in then, if you take that further,
1577
+
1578
+ 1:06:10.400 --> 1:06:17.280
1579
+ so yet, you've had this funny sounding, but I think surprisingly accurate and at least practical
1580
+
1581
+ 1:06:17.280 --> 1:06:23.280
1582
+ title of benevolent dictator for life for quite, you know, for the last three decades or whatever,
1583
+
1584
+ 1:06:23.280 --> 1:06:30.800
1585
+ or no, not the actual title, but functionally speaking. So you had to make decisions, design
1586
+
1587
+ 1:06:30.800 --> 1:06:40.240
1588
+ decisions. Can you maybe let's take Python two, so Python releasing Python three as an example.
1589
+
1590
+ 1:06:40.240 --> 1:06:47.680
1591
+ Mm hmm. It's not backward compatible to Python two in ways that a lot of people know. So what was
1592
+
1593
+ 1:06:47.680 --> 1:06:53.120
1594
+ that deliberation discussion decision like? Yeah, what was the psychology of that experience?
1595
+
1596
+ 1:06:54.320 --> 1:07:01.360
1597
+ Do you regret any aspects of how that experience undergone that? Well, yeah, so it was a group
1598
+
1599
+ 1:07:01.360 --> 1:07:10.640
1600
+ process really. At that point, even though I was BDFL in name, and certainly everybody sort of
1601
+
1602
+ 1:07:11.200 --> 1:07:20.240
1603
+ respected my position as the creator and the current sort of owner of the language design,
1604
+
1605
+ 1:07:21.680 --> 1:07:24.880
1606
+ I was looking at everyone else for feedback.
1607
+
1608
+ 1:07:24.880 --> 1:07:35.600
1609
+ Sort of Python 3.0 in some sense was sparked by other people in the community pointing out,
1610
+
1611
+ 1:07:36.880 --> 1:07:47.360
1612
+ oh, well, there are a few issues that sort of bite users over and over. Can we do something
1613
+
1614
+ 1:07:47.360 --> 1:07:55.280
1615
+ about that? And for Python three, we took a number of those Python warts, as they were called at the
1616
+
1617
+ 1:07:55.280 --> 1:08:04.880
1618
+ time. And we said, can we try to sort of make small changes to the language that address those warts?
1619
+
1620
+ 1:08:06.000 --> 1:08:14.080
1621
+ And we had sort of in the past, we had always taken backwards compatibility very seriously.
1622
+
1623
+ 1:08:14.080 --> 1:08:20.000
1624
+ And so many Python warts in earlier versions had already been resolved, because they could be resolved
1625
+
1626
+ 1:08:21.040 --> 1:08:28.960
1627
+ while maintaining backwards compatibility or sort of using a very gradual path of evolution of the
1628
+
1629
+ 1:08:28.960 --> 1:08:36.400
1630
+ language in a certain area. And so we were stuck with a number of warts that were widely recognized
1631
+
1632
+ 1:08:36.400 --> 1:08:45.120
1633
+ as problems, not like roadblocks, but nevertheless, sort of things that some people trip over. And you
1634
+
1635
+ 1:08:45.120 --> 1:08:53.120
1636
+ know that that's always the same thing that that people trip over when they trip. And we could not
1637
+
1638
+ 1:08:53.120 --> 1:09:00.640
1639
+ think of a backwards compatible way of resolving those issues. But it's still an option to not
1640
+
1641
+ 1:09:00.640 --> 1:09:06.960
1642
+ resolve the issues. And so yes, for for a long time, we had sort of resigned ourselves to well,
1643
+
1644
+ 1:09:06.960 --> 1:09:14.720
1645
+ okay, the language is not going to be perfect in this way, and that way, and that way. And we sort
1646
+
1647
+ 1:09:14.720 --> 1:09:20.400
1648
+ of certain of these I mean, there are still plenty of things where you can say, well, that's
1649
+
1650
+ 1:09:20.400 --> 1:09:32.800
1651
+ that particular detail is better in Java or in R or in visual basic or whatever. And we're okay with
1652
+
1653
+ 1:09:32.800 --> 1:09:39.760
1654
+ that because well, we can't easily change it. It's not too bad, we can do a little bit with user
1655
+
1656
+ 1:09:39.760 --> 1:09:50.080
1657
+ education, or we can have static analyzers or warnings in the parser or something. But there
1658
+
1659
+ 1:09:50.080 --> 1:09:55.200
1660
+ were things where we thought, well, these are really problems that are not going away, they're
1661
+
1662
+ 1:09:55.200 --> 1:10:03.280
1663
+ getting worse. In the future, we should do something about it. Do something. But ultimately, there is
1664
+
1665
+ 1:10:03.280 --> 1:10:10.480
1666
+ a decision to be made, right? Yes. So was that the toughest decision in the history of Python you
1667
+
1668
+ 1:10:10.480 --> 1:10:17.600
1669
+ had to make as the benevolent dictator for life? Or if not, what are other maybe even on a smaller
1670
+
1671
+ 1:10:17.600 --> 1:10:23.360
1672
+ scale? What was the decision where you were really torn up about? Well, the toughest decision was
1673
+
1674
+ 1:10:23.360 --> 1:10:30.400
1675
+ probably to resign. All right, let's go there. Hold on a second, then let me just because in the
1676
+
1677
+ 1:10:30.400 --> 1:10:35.040
1678
+ interest of time too, because I have a few cool questions for you. And let's touch a really
1679
+
1680
+ 1:10:35.040 --> 1:10:40.480
1681
+ important one because it was quite dramatic and beautiful in certain kinds of ways. In July this
1682
+
1683
+ 1:10:40.480 --> 1:10:47.760
1684
+ year, three months ago, you wrote, now that PEP 572 is done, I don't ever want to have to fight so
1685
+
1686
+ 1:10:47.760 --> 1:10:53.360
1687
+ hard for a PEP and find that so many people despise my decisions. I would like to remove myself
1688
+
1689
+ 1:10:53.360 --> 1:10:59.280
1690
+ entirely from the decision process. I'll still be there for a while as an ordinary core developer.
1691
+
1692
+ 1:10:59.280 --> 1:11:06.240
1693
+ And I'll still be available to mentor people possibly more available. But I'm basically giving
1694
+
1695
+ 1:11:06.240 --> 1:11:12.800
1696
+ myself a permanent vacation from being BDFL benevolent dictator for life. And you all will
1697
+
1698
+ 1:11:12.800 --> 1:11:20.720
1699
+ be on your own. First of all, just this, it's almost Shakespearean. I'm not going to appoint a
1700
+
1701
+ 1:11:20.720 --> 1:11:27.600
1702
+ successor. So what are you all going to do? Create a democracy, anarchy, a dictatorship,
1703
+
1704
+ 1:11:27.600 --> 1:11:35.120
1705
+ a federation. So that was a very dramatic and beautiful set of statements. It's almost,
1706
+
1707
+ 1:11:35.120 --> 1:11:41.120
1708
+ its open ended nature called the community to create a future for Python. This is kind of a
1709
+
1710
+ 1:11:41.120 --> 1:11:48.080
1711
+ beautiful aspect to it. Wow. So what and dramatic, you know, what was making that decision like?
1712
+
1713
+ 1:11:48.080 --> 1:11:52.400
1714
+ What was on your heart, on your mind, stepping back now, a few months later,
1715
+
1716
+ 1:11:52.400 --> 1:12:00.800
1717
+ take me through your mindset? I'm glad you liked the writing because it was actually written pretty
1718
+
1719
+ 1:12:00.800 --> 1:12:12.320
1720
+ quickly. It was literally something like after months and months of going around in circles,
1721
+
1722
+ 1:12:12.320 --> 1:12:24.000
1723
+ I had finally approved PEP 572, which I had a big hand in its design, although I didn't
1724
+
1725
+ 1:12:24.000 --> 1:12:34.000
1726
+ initiate it originally. I sort of gave it a bunch of nudges in a direction that would be
1727
+
1728
+ 1:12:34.000 --> 1:12:42.160
1729
+ better for the language. So sorry, just to ask, is async IO, is that the one or no? No, PEP 572 was
1730
+
1731
+ 1:12:42.160 --> 1:12:48.240
1732
+ actually a small feature, which is assignment expressions. Oh, assignment expressions, okay.
1733
+
1734
+ 1:12:49.120 --> 1:12:55.840
1735
+ That had been thought there was just a lot of debate where a lot of people claimed that
1736
+
1737
+ 1:12:55.840 --> 1:13:03.440
1738
+ they knew what was Pythonic and what was not Pythonic, and they knew that this was going to
1739
+
1740
+ 1:13:03.440 --> 1:13:10.480
1741
+ destroy the language. This was like a violation of Python's most fundamental design philosophy.
1742
+
1743
+ 1:13:10.480 --> 1:13:15.920
1744
+ And I thought that was all bullshit because I was in favor of it. And I would think I know
1745
+
1746
+ 1:13:15.920 --> 1:13:22.560
1747
+ something about Python's design philosophy. So I was really tired and also stressed of that thing.
1748
+
1749
+ 1:13:22.560 --> 1:13:31.360
1750
+ And literally, after sort of announcing, I was going to accept it. A certain Wednesday evening,
1751
+
1752
+ 1:13:31.920 --> 1:13:39.760
1753
+ I had finally sent the email: it's accepted. Now let's just go implement it. So I went to bed
1754
+
1755
+ 1:13:40.320 --> 1:13:48.480
1756
+ feeling really relieved. That's behind me. And I wake up Thursday morning, 7am. And I think,
1757
+
1758
+ 1:13:48.480 --> 1:13:59.520
1759
+ well, that was the last one. That was such, such a terrible debate. And that's
1760
+
1761
+ 1:13:59.520 --> 1:14:05.600
1762
+ going to be that's the last time that I let myself be so stressed out about a PEP decision.
1763
+
1764
+ 1:14:06.480 --> 1:14:13.200
1765
+ I should just resign. I've been sort of thinking about retirement for half a decade. I've been
1766
+
1767
+ 1:14:13.200 --> 1:14:22.080
1768
+ joking and sort of mentioning retirement, sort of telling the community, some point in the
1769
+
1770
+ 1:14:22.080 --> 1:14:30.240
1771
+ future, I'm going to retire. Don't take that FL part of my title too literally. And I thought,
1772
+
1773
+ 1:14:30.240 --> 1:14:38.560
1774
+ okay, this is it. I'm done. I had the day off. I wanted to have a good time with my wife. We
1775
+
1776
+ 1:14:38.560 --> 1:14:48.240
1777
+ were going to a little beach town nearby. And in, I think maybe 15, 20 minutes, I wrote that thing
1778
+
1779
+ 1:14:48.240 --> 1:14:53.600
1780
+ that you just called Shakespearean. And the funny thing is, I get so much crap for calling you
1781
+
1782
+ 1:14:53.600 --> 1:15:00.320
1783
+ Shakespearean. I didn't even I didn't even realize what a monumental decision it was.
1784
+
1785
+ 1:15:00.320 --> 1:15:08.640
1786
+ Because five minutes later, I read that a link to my message back on Twitter, where people were
1787
+
1788
+ 1:15:08.640 --> 1:15:17.360
1789
+ already discussing on Twitter, Guido resigned as the BDFL. And I had, I had posted it on an internal
1790
+
1791
+ 1:15:17.360 --> 1:15:22.880
1792
+ forum that I thought was only read by core developers. So I thought I would at least
1793
+
1794
+ 1:15:22.880 --> 1:15:30.560
1795
+ have one day before the news would sort of get out. The on your own aspects, I had also an
1796
+
1797
+ 1:15:30.560 --> 1:15:39.120
1798
+ element of quite, it was quite a powerful element of the uncertainty that lies ahead. But can you
1799
+
1800
+ 1:15:39.120 --> 1:15:45.280
1801
+ also just briefly talk about, you know, like, for example, I play guitar as a hobby for fun.
1802
+
1803
+ 1:15:45.280 --> 1:15:50.720
1804
+ And whenever I play, people are super positive, super friendly. They're like, this is awesome.
1805
+
1806
+ 1:15:50.720 --> 1:15:56.320
1807
+ This is great. But sometimes I enter as an outside observer, enter the programming community.
1808
+
1809
+ 1:15:57.120 --> 1:16:04.000
1810
+ And there seems to some sometimes be camps on whatever the topic. And in the two camps,
1811
+
1812
+ 1:16:04.000 --> 1:16:08.880
1813
+ the two or more camps, are often pretty harsh at criticizing the opposing camps.
1814
+
1815
+ 1:16:11.520 --> 1:16:18.400
1816
+ As an onlooker, I may be totally wrong on this. Yeah, holy wars are sort of a favorite activity
1817
+
1818
+ 1:16:18.400 --> 1:16:23.200
1819
+ in the programming community. And what is the psychology behind that? Is, is that okay for
1820
+
1821
+ 1:16:23.200 --> 1:16:28.400
1822
+ a healthy community to have? Is that, is that a productive force ultimately for the evolution
1823
+
1824
+ 1:16:28.400 --> 1:16:35.840
1825
+ of a language? Well, if everybody is patting each other on the back and never telling the truth,
1826
+
1827
+ 1:16:35.840 --> 1:16:45.120
1828
+ yes, it would not be a good thing. I think there is a middle ground where sort of
1829
+
1830
+ 1:16:48.640 --> 1:16:56.960
1831
+ being nasty to each other is not okay. But there is a middle ground where there
1832
+
1833
+ 1:16:56.960 --> 1:17:06.880
1834
+ is healthy ongoing criticism and feedback that is very productive. And you mean at every level,
1835
+
1836
+ 1:17:06.880 --> 1:17:13.840
1837
+ you see that I mean, someone proposes to fix a very small issue in a code base.
1838
+
1839
+ 1:17:16.240 --> 1:17:22.480
1840
+ Chances are that some reviewer will sort of respond by saying, well, actually,
1841
+
1842
+ 1:17:22.480 --> 1:17:31.520
1843
+ you can do it better the other way. When it comes to deciding on the future of the Python
1844
+
1845
+ 1:17:31.520 --> 1:17:38.800
1846
+ core developer community, we now have, I think, five or six competing proposals for a constitution.
1847
+
1848
+ 1:17:40.960 --> 1:17:46.320
1849
+ So that future, do you have a fear of that future? Do you have a hope for that future?
1850
+
1851
+ 1:17:46.320 --> 1:17:54.880
1852
+ I'm very confident about that future. And by and large, I think that the debate has been very
1853
+
1854
+ 1:17:54.880 --> 1:18:06.000
1855
+ healthy and productive. And I actually when when I wrote that resignation email, I knew that that
1856
+
1857
+ 1:18:06.000 --> 1:18:11.920
1858
+ Python was in a very good spot and that the Python core development community that the group of
1859
+
1860
+ 1:18:11.920 --> 1:18:21.200
1861
+ 50 or 100 people who sort of write or review most of the code that goes into Python, those people
1862
+
1863
+ 1:18:22.400 --> 1:18:31.200
1864
+ get along very well most of the time. A large number of different areas of expertise are
1865
+
1866
+ 1:18:31.200 --> 1:18:42.240
1867
+ represented at different levels of experience in the Python core dev community, different levels
1868
+
1869
+ 1:18:42.240 --> 1:18:49.040
1870
+ of experience completely outside it in software development in general, large systems, small
1871
+
1872
+ 1:18:49.040 --> 1:19:00.000
1873
+ systems, embedded systems. So I felt okay, resigning because I knew that that the community can
1874
+
1875
+ 1:19:00.000 --> 1:19:08.240
1876
+ really take care of itself. And out of a grab bag of future feature developments, let me ask if
1877
+
1878
+ 1:19:08.240 --> 1:19:15.360
1879
+ you can comment, maybe on all very quickly, concurrent programming parallel computing,
1880
+
1881
+ 1:19:15.920 --> 1:19:23.520
1882
+ async IO, these are things that people have expressed hope, complained about, whatever
1883
+
1884
+ 1:19:23.520 --> 1:19:31.360
1885
+ I have discussed on Reddit, async IO, so the parallelization in general, packaging, I was totally
1886
+
1887
+ 1:19:31.360 --> 1:19:36.400
1888
+ clueless on this, I just use pip install stuff, but apparently, there's Pipenv, Poetry, there's
1889
+
1890
+ 1:19:36.400 --> 1:19:41.760
1891
+ these dependency packaging systems that manage dependencies and so on, they're emerging, and
1892
+
1893
+ 1:19:41.760 --> 1:19:47.920
1894
+ there's a lot of confusion about what's what's the right thing to use. Then also, functional
1895
+
1896
+ 1:19:47.920 --> 1:19:57.760
1897
+ programming, the ever, are we going to get more functional programming or not, this kind of idea,
1898
+
1899
+ 1:19:58.560 --> 1:20:07.440
1900
+ and of course, the GIL connected to the parallelization, I suppose, the global interpreter
1901
+
1902
+ 1:20:07.440 --> 1:20:12.240
1903
+ lock problem. Can you just comment on whichever you want to comment on?
1904
+
1905
+ 1:20:12.240 --> 1:20:22.640
1906
+ Well, let's take the GIL and parallelization and async IO as one one topic.
1907
+
1908
+ 1:20:25.280 --> 1:20:35.840
1909
+ I'm not that hopeful that Python will develop into a sort of high concurrency, high parallelism
1910
+
1911
+ 1:20:35.840 --> 1:20:44.480
1912
+ language. That's sort of the way the language is designed, the way most users use the language,
1913
+
1914
+ 1:20:44.480 --> 1:20:50.080
1915
+ the way the language is implemented, all make that a pretty unlikely future.
1916
+
1917
+ 1:20:50.080 --> 1:20:56.000
1918
+ So you think it might not even need to really the way people use it, it might not be something
1919
+
1920
+ 1:20:56.000 --> 1:21:02.400
1921
+ that should be of great concern. I think I think async IO is a special case, because it sort of
1922
+
1923
+ 1:21:02.400 --> 1:21:14.320
1924
+ allows overlapping IO and only IO. And that is is a sort of best practice of supporting very
1925
+
1926
+ 1:21:14.320 --> 1:21:24.000
1927
+ high throughput IO, many connections per second. I'm not worried about that. I think async IO
1928
+
1929
+ 1:21:24.000 --> 1:21:30.080
1930
+ will evolve. There are a couple of competing packages, we have some very smart people who are
1931
+
1932
+ 1:21:30.080 --> 1:21:39.120
1933
+ sort of pushing us in sort of to make async IO better. Parallel computing, I think that
1934
+
1935
+ 1:21:40.320 --> 1:21:46.160
1936
+ Python is not the language for that. There are there are ways to work around it.
1937
+
1938
+ 1:21:47.120 --> 1:21:54.960
1939
+ But you sort of you can't expect to write an algorithm in Python and have a compiler
1940
+
1941
+ 1:21:54.960 --> 1:22:01.200
1942
+ automatically parallelize that. What you can do is use a package like NumPy and there are a bunch of
1943
+
1944
+ 1:22:01.200 --> 1:22:10.080
1945
+ other very powerful packages that sort of use all the CPUs available, because you tell the package,
1946
+
1947
+ 1:22:10.720 --> 1:22:17.200
1948
+ here's the data, here's the abstract operation to apply over it, go at it, and then then we're
1949
+
1950
+ 1:22:17.200 --> 1:22:23.440
1951
+ back in the C++ world. But the those packages are themselves implemented usually in C++.
1952
+
1953
+ 1:22:23.440 --> 1:22:26.960
1954
+ That's right. That's where TensorFlow and all these packages come in where they parallelize
1955
+
1956
+ 1:22:26.960 --> 1:22:32.800
1957
+ across GPUs, for example, they take care of that for you. So in terms of packaging, can you comment
1958
+
1959
+ 1:22:32.800 --> 1:22:43.760
1960
+ on the packaging? Yeah, packaging has always been my least favorite topic. It's a really tough
1961
+
1962
+ 1:22:43.760 --> 1:22:57.440
1963
+ problem because the OS and the platform want to own packaging. But their packaging solution is not
1964
+
1965
+ 1:22:57.440 --> 1:23:04.240
1966
+ specific to a language. Like, if you take Linux, there are two competing packaging solutions for
1967
+
1968
+ 1:23:04.240 --> 1:23:16.000
1969
+ Linux, or for Unix in general. And but they all work across all languages. And several languages,
1970
+
1971
+ 1:23:16.000 --> 1:23:25.600
1972
+ like Node, JavaScript, and Ruby, and Python all have their own packaging solutions that only work
1973
+
1974
+ 1:23:25.600 --> 1:23:34.400
1975
+ within the ecosystem of that language. Well, what should you use? That is a tough problem.
1976
+
1977
+ 1:23:34.400 --> 1:23:43.520
1978
+ My own own approach is I use the system packaging system to install Python, and I use the Python
1979
+
1980
+ 1:23:43.520 --> 1:23:50.240
1981
+ packaging system then to install third party Python packages. That's what most people do.
1982
+
1983
+ 1:23:50.240 --> 1:23:58.160
1984
+ 10 years ago, Python packaging was really a terrible situation. Nowadays, Pip is the future.
1985
+
1986
+ 1:23:58.160 --> 1:24:05.360
1987
+ There is there is a separate ecosystem for numerical and scientific Python, Python based on
1988
+
1989
+ 1:24:05.360 --> 1:24:11.280
1990
+ Anaconda. Those two can live together. I don't think there is a need for more than that.
1991
+
1992
+ 1:24:11.280 --> 1:24:16.800
1993
+ Great. So that's that's packaging. That's, well, at least for me, that's that's where I've been
1994
+
1995
+ 1:24:16.800 --> 1:24:22.240
1996
+ extremely happy. I didn't I didn't even know this was an issue until it was brought up. Well, in the
1997
+
1998
+ 1:24:22.240 --> 1:24:27.920
1999
+ interest of time, let me sort of skip through a million other questions I have. So I watched the
2000
+
2001
+ 1:24:27.920 --> 1:24:33.840
2002
+ five, five and a half hour oral history they've done with the Computer History Museum.
2003
+
2004
+ 1:24:33.840 --> 1:24:38.480
2005
+ And the nice thing about it, it gave this because of the linear progression of the interview, it
2006
+
2007
+ 1:24:38.480 --> 1:24:47.040
2008
+ it gave this feeling of a life, you know, a life well lived with interesting things in it.
2009
+
2010
+ 1:24:47.040 --> 1:24:52.960
2011
+ Sort of a pretty, I would say a good spend of of this little existence we have on earth.
2012
+
2013
+ 1:24:52.960 --> 1:24:59.920
2014
+ So outside of your family, looking back, what about this journey are you really proud of?
2015
+
2016
+ 1:24:59.920 --> 1:25:10.240
2017
+ Are there moments that stand out accomplishments ideas? Is it the creation of Python itself
2018
+
2019
+ 1:25:10.240 --> 1:25:15.040
2020
+ that stands out as a thing that you look back and say, damn, I did pretty good there?
2021
+
2022
+ 1:25:17.600 --> 1:25:21.760
2023
+ Well, I would say that Python is definitely the best thing I've ever done.
2024
+
2025
+ 1:25:21.760 --> 1:25:34.000
2026
+ And I wouldn't sort of say just the creation of Python, but the way I sort of raised Python,
2027
+
2028
+ 1:25:34.000 --> 1:25:41.440
2029
+ like a baby, I didn't just conceive a child, but I raised a child. And now I'm setting the child
2030
+
2031
+ 1:25:41.440 --> 1:25:48.800
2032
+ free in the world. And I've set up the child to to sort of be able to take care of himself.
2033
+
2034
+ 1:25:48.800 --> 1:25:55.760
2035
+ And I'm very proud of that. And as the announcer of Monty Python's Flying Circus used to say,
2036
+
2037
+ 1:25:55.760 --> 1:26:01.200
2038
+ and now for something completely different, do you have a favorite Monty Python moment or a
2039
+
2040
+ 1:26:01.200 --> 1:26:05.680
2041
+ moment in Hitchhiker's Guide or any other literature show or movie that cracks you up when you think
2042
+
2043
+ 1:26:05.680 --> 1:26:12.480
2044
+ about it? Oh, you can always play me the parrot, the Dead Parrot sketch. Oh, that's brilliant.
2045
+
2046
+ 1:26:12.480 --> 1:26:19.360
2047
+ Yeah, that's my favorite as well. Pushing up the daisies. Okay, Guido, thank you so much for
2048
+
2049
+ 1:26:19.360 --> 1:26:44.000
2050
+ talking to me today. Lex, this has been a great conversation.
2051
+
vtt/episode_007_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_008_small.vtt ADDED
@@ -0,0 +1,1418 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.800
4
+ The following is a conversation with Eric Schmidt.
5
+
6
+ 00:02.800 --> 00:07.640
7
+ He was the CEO of Google for ten years and a chairman for six more, guiding the company
8
+
9
+ 00:07.640 --> 00:13.080
10
+ through an incredible period of growth and a series of world changing innovations.
11
+
12
+ 00:13.080 --> 00:19.280
13
+ He is one of the most impactful leaders in the era of the internet and the powerful voice
14
+
15
+ 00:19.280 --> 00:22.400
16
+ for the promise of technology in our society.
17
+
18
+ 00:22.400 --> 00:27.760
19
+ It was truly an honor to speak with him as part of the MIT course on artificial general
20
+
21
+ 00:27.760 --> 00:32.040
22
+ intelligence and the artificial intelligence podcast.
23
+
24
+ 00:32.040 --> 00:37.120
25
+ And now here's my conversation with Eric Schmidt.
26
+
27
+ 00:37.120 --> 00:40.840
28
+ What was the first moment when you fell in love with technology?
29
+
30
+ 00:40.840 --> 00:47.080
31
+ I grew up in the 1960s as a boy where every boy wanted to be an astronaut and part of
32
+
33
+ 00:47.080 --> 00:49.000
34
+ the space program.
35
+
36
+ 00:49.000 --> 00:54.400
37
+ So like everyone else of my age, we would go out to the cow pasture behind my house,
38
+
39
+ 00:54.400 --> 00:58.640
40
+ which was literally a cow pasture, and we would shoot model rockets off.
41
+
42
+ 00:58.640 --> 01:00.200
43
+ And that I think is the beginning.
44
+
45
+ 01:00.200 --> 01:05.680
46
+ And of course, generationally, today, it would be video games and all the amazing things
47
+
48
+ 01:05.680 --> 01:09.280
49
+ that you can do online with computers.
50
+
51
+ 01:09.280 --> 01:15.080
52
+ There's a transformative, inspiring aspect of science and math that maybe rockets would
53
+
54
+ 01:15.080 --> 01:17.560
55
+ bring, would instill in individuals.
56
+
57
+ 01:17.560 --> 01:21.720
58
+ You've mentioned yesterday that eighth grade math is where the journey through Mathematical
59
+
60
+ 01:21.720 --> 01:24.520
61
+ University diverges from many people.
62
+
63
+ 01:24.520 --> 01:27.080
64
+ It's this fork in the roadway.
65
+
66
+ 01:27.080 --> 01:31.160
67
+ There's a professor of math at Berkeley, Edward Frenkel.
68
+
69
+ 01:31.160 --> 01:32.720
70
+ I'm not sure if you're familiar with him.
71
+
72
+ 01:32.720 --> 01:33.720
73
+ I am.
74
+
75
+ 01:33.720 --> 01:35.400
76
+ He has written this amazing book.
77
+
78
+ 01:35.400 --> 01:41.960
79
+ I recommend to everybody called Love and Math, two of my favorite words.
80
+
81
+ 01:41.960 --> 01:49.680
82
+ He says that if painting was taught like math, then students would be asked to paint a fence,
83
+
84
+ 01:49.680 --> 01:54.520
85
+ which is his analogy of essentially how math is taught, and you never get a chance to discover
86
+
87
+ 01:54.520 --> 01:59.400
88
+ the beauty of the art of painting or the beauty of the art of math.
89
+
90
+ 01:59.400 --> 02:05.240
91
+ So how, when, and where did you discover that beauty?
92
+
93
+ 02:05.240 --> 02:12.040
94
+ I think what happens with people like myself is that you're math enabled pretty early, and
95
+
96
+ 02:12.040 --> 02:16.640
97
+ all of a sudden you discover that you can use that to discover new insights.
98
+
99
+ 02:16.640 --> 02:22.120
100
+ The great scientists will all tell a story, the men and women who are fantastic today,
101
+
102
+ 02:22.120 --> 02:25.560
103
+ that somewhere when they were in high school or in college, they discovered that they could
104
+
105
+ 02:25.560 --> 02:30.760
106
+ discover something themselves, and that sense of building something, of having an impact
107
+
108
+ 02:30.760 --> 02:35.520
109
+ that you own drives knowledge, acquisition, and learning.
110
+
111
+ 02:35.520 --> 02:41.000
112
+ In my case, it was programming, and the notion that I could build things that had not existed
113
+
114
+ 02:41.000 --> 02:46.560
115
+ that I had built, that it had my name on it, and this was before open source, but you could
116
+
117
+ 02:46.560 --> 02:49.160
118
+ think of it as open source contributions.
119
+
120
+ 02:49.160 --> 02:53.760
121
+ So today, if I were a 16 or 17 year old boy, I'm sure that I would aspire as a computer
122
+
123
+ 02:53.760 --> 02:59.000
124
+ scientist to make a contribution like the open source heroes of the world today.
125
+
126
+ 02:59.000 --> 03:03.760
127
+ That would be what would be driving me, and I'd be trying and learning and making mistakes,
128
+
129
+ 03:03.760 --> 03:06.720
130
+ and so forth, in the ways that it works.
131
+
132
+ 03:06.720 --> 03:12.360
133
+ The repository that GitHub represents and that open source libraries represent is an
134
+
135
+ 03:12.360 --> 03:17.840
136
+ enormous bank of knowledge of all of the people who are doing that, and one of the lessons
137
+
138
+ 03:17.840 --> 03:22.240
139
+ that I learned at Google was that the world is a very big place, and there's an awful
140
+
141
+ 03:22.240 --> 03:26.360
142
+ lot of smart people, and an awful lot of them are underutilized.
143
+
144
+ 03:26.360 --> 03:32.240
145
+ So here's an opportunity, for example, building parts of programs, building new ideas to contribute
146
+
147
+ 03:32.240 --> 03:36.640
148
+ to the greater of society.
149
+
150
+ 03:36.640 --> 03:41.000
151
+ So in that moment in the 70s, the inspiring moment where there was nothing, and then you
152
+
153
+ 03:41.000 --> 03:44.800
154
+ created something through programming, that magical moment.
155
+
156
+ 03:44.800 --> 03:50.360
157
+ So in 1975, I think you've created a program called Lex, which I especially like because
158
+
159
+ 03:50.360 --> 03:51.560
160
+ my name is Lex.
161
+
162
+ 03:51.560 --> 03:52.560
163
+ So thank you.
164
+
165
+ 03:52.560 --> 03:58.240
166
+ Thank you for creating a brand that established a reputation that's long lasting reliable
167
+
168
+ 03:58.240 --> 04:01.240
169
+ and has a big impact on the world and still used today.
170
+
171
+ 04:01.240 --> 04:03.000
172
+ So thank you for that.
173
+
174
+ 04:03.000 --> 04:11.880
175
+ But more seriously, in that time, in the 70s, as an engineer, personal computers were being
176
+
177
+ 04:11.880 --> 04:12.880
178
+ born.
179
+
180
+ 04:12.880 --> 04:17.800
181
+ Do you think you'd be able to predict the 80s, 90s, and the aughts of where computers
182
+
183
+ 04:17.800 --> 04:18.800
184
+ would go?
185
+
186
+ 04:18.800 --> 04:23.160
187
+ I'm sure I could not and would not have gotten it right.
188
+
189
+ 04:23.160 --> 04:27.960
190
+ I was the beneficiary of the great work of many, many people who saw it clearer than I
191
+
192
+ 04:27.960 --> 04:29.160
193
+ did.
194
+
195
+ 04:29.160 --> 04:34.760
196
+ With Lex, I worked with a fellow named Michael Lesk, who was my supervisor, and he essentially
197
+
198
+ 04:34.760 --> 04:39.400
199
+ helped me architect and deliver a system that's still in use today.
200
+
201
+ 04:39.400 --> 04:43.800
202
+ After that, I worked at Xerox Palo Alto Research Center, where the Alto was invented, and the
203
+
204
+ 04:43.800 --> 04:50.600
205
+ Alto is the predecessor of the modern personal computer, or Macintosh, and so forth.
206
+
207
+ 04:50.600 --> 04:55.360
208
+ And the altos were very rare, and I had to drive an hour from Berkeley to go use them,
209
+
210
+ 04:55.360 --> 05:01.160
211
+ but I made a point of skipping classes and doing whatever it took to have access to this
212
+
213
+ 05:01.160 --> 05:02.480
214
+ extraordinary achievement.
215
+
216
+ 05:02.480 --> 05:05.000
217
+ I knew that they were consequential.
218
+
219
+ 05:05.000 --> 05:08.240
220
+ What I did not understand was scaling.
221
+
222
+ 05:08.240 --> 05:12.960
223
+ I did not understand what would happen when you had 100 million as opposed to 100.
224
+
225
+ 05:12.960 --> 05:17.360
226
+ And so since then, and I have learned the benefit of scale, I always look for things
227
+
228
+ 05:17.360 --> 05:23.160
229
+ which are going to scale to platforms, so mobile phones, Android, all those things.
230
+
231
+ 05:23.160 --> 05:27.560
232
+ There are many, many people in the world.
233
+
234
+ 05:27.560 --> 05:28.560
235
+ People really have needs.
236
+
237
+ 05:28.560 --> 05:32.600
238
+ They really will use these platforms, and you can build big businesses on top of them.
239
+
240
+ 05:32.600 --> 05:33.600
241
+ So it's interesting.
242
+
243
+ 05:33.600 --> 05:36.880
244
+ So when you see a piece of technology, now you think, what will this technology look
245
+
246
+ 05:36.880 --> 05:39.080
247
+ like when it's in the hands of a billion people?
248
+
249
+ 05:39.080 --> 05:40.160
250
+ That's right.
251
+
252
+ 05:40.160 --> 05:46.520
253
+ So an example would be that the market is so competitive now that if you can't figure
254
+
255
+ 05:46.520 --> 05:51.880
256
+ out a way for something to have a million users or a billion users, it probably is not
257
+
258
+ 05:51.880 --> 05:57.280
259
+ going to be successful because something else will become the general platform, and your
260
+
261
+ 05:57.280 --> 06:04.360
262
+ idea will become a lost idea or a specialized service with relatively few users.
263
+
264
+ 06:04.360 --> 06:06.000
265
+ So it's a path to generality.
266
+
267
+ 06:06.000 --> 06:07.720
268
+ It's a path to general platform use.
269
+
270
+ 06:07.720 --> 06:09.400
271
+ It's a path to broad applicability.
272
+
273
+ 06:09.400 --> 06:15.000
274
+ Now, there are plenty of good businesses that are tiny, so luxury goods, for example.
275
+
276
+ 06:15.000 --> 06:20.360
277
+ But if you want to have an impact at scale, you have to look for things which are of
278
+
279
+ 06:20.360 --> 06:25.200
280
+ common value, common pricing, common distribution, and solve common problems, the problems that
281
+
282
+ 06:25.200 --> 06:26.200
283
+ everyone has.
284
+
285
+ 06:26.200 --> 06:30.440
286
+ And by the way, people have lots of problems, information, medicine, health, education,
287
+
288
+ 06:30.440 --> 06:31.440
289
+ and so forth.
290
+
291
+ 06:31.440 --> 06:32.440
292
+ Work on those problems.
293
+
294
+ 06:32.440 --> 06:40.240
295
+ Like you said, you're a big fan of the middle class because there's so many of them by definition.
296
+
297
+ 06:40.240 --> 06:46.600
298
+ So any product, anything that has a huge impact and improves their lives, is a great
299
+
300
+ 06:46.600 --> 06:48.960
301
+ business decision and it's just good for society.
302
+
303
+ 06:48.960 --> 06:53.520
304
+ And there's nothing wrong with starting off in the high end as long as you have a plan
305
+
306
+ 06:53.520 --> 06:55.520
307
+ to get to the middle class.
308
+
309
+ 06:55.520 --> 06:59.280
310
+ There's nothing wrong with starting with a specialized market in order to learn and to
311
+
312
+ 06:59.280 --> 07:01.080
313
+ build and to fund things.
314
+
315
+ 07:01.080 --> 07:04.520
316
+ So you start a luxury market to build a general purpose market.
317
+
318
+ 07:04.520 --> 07:09.640
319
+ But if you define yourself as only a narrow market, someone else can come along with a
320
+
321
+ 07:09.640 --> 07:14.320
322
+ general purpose market that can push you to the corner, can restrict the scale of operation,
323
+
324
+ 07:14.320 --> 07:17.320
325
+ can force you to be a lesser impact than you might be.
326
+
327
+ 07:17.320 --> 07:22.800
328
+ So it's very important to think in terms of broad businesses and broad impact, even if
329
+
330
+ 07:22.800 --> 07:26.360
331
+ you start in a little corner somewhere.
332
+
333
+ 07:26.360 --> 07:33.200
334
+ So as you look to the 70s, but also in the decades to come, and you saw computers, did
335
+
336
+ 07:33.200 --> 07:40.240
337
+ you see them as tools or was there a little element of another entity?
338
+
339
+ 07:40.240 --> 07:46.240
340
+ I remember a quote saying AI began with our dream to create the gods.
341
+
342
+ 07:46.240 --> 07:51.520
343
+ Is there a feeling when you wrote that program that you were creating another entity, giving
344
+
345
+ 07:51.520 --> 07:52.800
346
+ life to something?
347
+
348
+ 07:52.800 --> 07:58.880
349
+ I wish I could say otherwise, but I simply found the technology platforms so exciting.
350
+
351
+ 07:58.880 --> 08:00.400
352
+ That's what I was focused on.
353
+
354
+ 08:00.400 --> 08:04.640
355
+ I think the majority of the people that I've worked with, and there are a few exceptions,
356
+
357
+ 08:04.640 --> 08:09.960
358
+ Steve Jobs being an example, really saw this as a great technological play.
359
+
360
+ 08:09.960 --> 08:15.520
361
+ I think relatively few of the technical people understood the scale of its impact.
362
+
363
+ 08:15.520 --> 08:19.680
364
+ So I used NCP, which is a predecessor to TCP/IP.
365
+
366
+ 08:19.680 --> 08:21.240
367
+ It just made sense to connect things.
368
+
369
+ 08:21.240 --> 08:26.240
370
+ We didn't think of it in terms of the internet, and then companies, and then Facebook, and
371
+
372
+ 08:26.240 --> 08:29.200
373
+ then Twitter, and then politics, and so forth.
374
+
375
+ 08:29.200 --> 08:30.800
376
+ We never did that build.
377
+
378
+ 08:30.800 --> 08:32.920
379
+ We didn't have that vision.
380
+
381
+ 08:32.920 --> 08:38.200
382
+ And I think most people, it's a rare person who can see compounding at scale.
383
+
384
+ 08:38.200 --> 08:41.520
385
+ Most people can see, if you ask people to predict the future, they'll say, they'll give
386
+
387
+ 08:41.520 --> 08:44.080
388
+ you an answer of six to nine months or 12 months.
389
+
390
+ 08:44.080 --> 08:47.560
391
+ Because that's about as far as people can imagine.
392
+
393
+ 08:47.560 --> 08:51.020
394
+ But there's an old saying, which actually was attributed to a professor at MIT a long
395
+
396
+ 08:51.020 --> 08:58.120
397
+ time ago, that we overestimate what can be done in one year, and we underestimate what
398
+
399
+ 08:58.120 --> 09:00.280
400
+ can be done in a decade.
401
+
402
+ 09:00.280 --> 09:05.560
403
+ And there's a great deal of evidence that these core platforms at hardware and software
404
+
405
+ 09:05.560 --> 09:07.800
406
+ take a decade.
407
+
408
+ 09:07.800 --> 09:09.600
409
+ So think about self driving cars.
410
+
411
+ 09:09.600 --> 09:12.160
412
+ Self driving cars were thought about in the 90s.
413
+
414
+ 09:12.160 --> 09:17.160
415
+ There were projects around them, the first DARPA Grand Challenge was roughly 2004.
416
+
417
+ 09:17.160 --> 09:19.760
418
+ So that's roughly 15 years ago.
419
+
420
+ 09:19.760 --> 09:25.400
421
+ And today we have self driving cars operating in a city in Arizona, right, so 15 years.
422
+
423
+ 09:25.400 --> 09:31.720
424
+ And we still have a ways to go before they're more generally available.
425
+
426
+ 09:31.720 --> 09:33.840
427
+ So you've spoken about the importance.
428
+
429
+ 09:33.840 --> 09:37.080
430
+ You just talked about predicting into the future.
431
+
432
+ 09:37.080 --> 09:41.640
433
+ You've spoken about the importance of thinking five years ahead and having a plan for those
434
+
435
+ 09:41.640 --> 09:42.640
436
+ five years.
437
+
438
+ 09:42.640 --> 09:47.840
439
+ And the way to say it is that almost everybody has a one year plan.
440
+
441
+ 09:47.840 --> 09:50.960
442
+ Almost no one has a proper five year plan.
443
+
444
+ 09:50.960 --> 09:55.160
445
+ And the key thing to having a five year plan is having a model for what's going to happen
446
+
447
+ 09:55.160 --> 09:57.000
448
+ under the underlying platforms.
449
+
450
+ 09:57.000 --> 09:59.840
451
+ So here's an example.
452
+
453
+ 09:59.840 --> 10:05.120
454
+ Computer Moore's Law, as we know it, the thing that powered improvements in CPUs has largely
455
+
456
+ 10:05.120 --> 10:10.400
457
+ halted in its traditional shrinking mechanism, because the costs have just gotten so high.
458
+
459
+ 10:10.400 --> 10:12.200
460
+ It's getting harder and harder.
461
+
462
+ 10:12.200 --> 10:16.600
463
+ But there's plenty of algorithmic improvements and specialized hardware improvements.
464
+
465
+ 10:16.600 --> 10:21.240
466
+ So you need to understand the nature of those improvements and where they'll go in order
467
+
468
+ 10:21.240 --> 10:24.360
469
+ to understand how it will change the platform.
470
+
471
+ 10:24.360 --> 10:28.000
472
+ In the area of network connectivity, what are the gains that are going to be possible
473
+
474
+ 10:28.000 --> 10:29.480
475
+ in wireless?
476
+
477
+ 10:29.480 --> 10:35.720
478
+ It looks like there's an enormous expansion of wireless connectivity at many different
479
+
480
+ 10:35.720 --> 10:36.960
481
+ bands, right?
482
+
483
+ 10:36.960 --> 10:40.520
484
+ And that we will primarily, historically, I've always thought that we were primarily
485
+
486
+ 10:40.520 --> 10:45.040
487
+ going to be using fiber, but now it looks like we're going to be using fiber plus very
488
+
489
+ 10:45.040 --> 10:51.560
490
+ powerful high bandwidth sort of short distance connectivity to bridge the last mile, right?
491
+
492
+ 10:51.560 --> 10:53.100
493
+ That's an amazing achievement.
494
+
495
+ 10:53.100 --> 10:56.880
496
+ If you know that, then you're going to build your systems differently.
497
+
498
+ 10:56.880 --> 10:59.800
499
+ By the way, those networks have different latency properties, right?
500
+
501
+ 10:59.800 --> 11:05.040
502
+ Because they're more symmetric, the algorithms feel faster for that reason.
503
+
504
+ 11:05.040 --> 11:09.920
505
+ And so when you think about whether it's a fiber or just technologies in general.
506
+
507
+ 11:09.920 --> 11:15.920
508
+ So there's this Barbara Wootton poem or quote that I really like.
509
+
510
+ 11:15.920 --> 11:20.400
511
+ It's from the champions of the impossible rather than the slaves of the possible that
512
+
513
+ 11:20.400 --> 11:23.280
514
+ evolution draws its creative force.
515
+
516
+ 11:23.280 --> 11:27.840
517
+ So in predicting the next five years, I'd like to talk about the impossible and the
518
+
519
+ 11:27.840 --> 11:28.840
520
+ possible.
521
+
522
+ 11:28.840 --> 11:34.720
523
+ Well, and again, one of the great things about humanity is that we produce dreamers, right?
524
+
525
+ 11:34.720 --> 11:37.760
526
+ We literally have people who have a vision and a dream.
527
+
528
+ 11:37.760 --> 11:43.400
529
+ They are, if you will, disagreeable in the sense that they disagree with the, they disagree
530
+
531
+ 11:43.400 --> 11:46.240
532
+ with what the sort of zeitgeist is.
533
+
534
+ 11:46.240 --> 11:48.040
535
+ They say there is another way.
536
+
537
+ 11:48.040 --> 11:49.040
538
+ They have a belief.
539
+
540
+ 11:49.040 --> 11:50.320
541
+ They have a vision.
542
+
543
+ 11:50.320 --> 11:56.560
544
+ If you look at science, science is always marked by such people who, who went against
545
+
546
+ 11:56.560 --> 12:01.360
547
+ some conventional wisdom, collected the knowledge at the time and assembled it in a way that
548
+
549
+ 12:01.360 --> 12:03.760
550
+ produced a powerful platform.
551
+
552
+ 12:03.760 --> 12:11.120
553
+ And you've been amazingly honest about in an inspiring way about things you've been wrong
554
+
555
+ 12:11.120 --> 12:14.800
556
+ about predicting and you've obviously been right about a lot of things.
557
+
558
+ 12:14.800 --> 12:23.880
559
+ But in this kind of tension, how do you balance as a company in predicting the next five years,
560
+
561
+ 12:23.880 --> 12:26.520
562
+ the impossible, planning for the impossible.
563
+
564
+ 12:26.520 --> 12:32.720
565
+ So listening to those crazy dreamers, letting them do, letting them run away and make the
566
+
567
+ 12:32.720 --> 12:38.760
568
+ impossible real, make it happen and slow, you know, that's how programmers often think
569
+
570
+ 12:38.760 --> 12:44.800
571
+ and slowing things down and saying, well, this is the rational, this is the possible,
572
+
573
+ 12:44.800 --> 12:49.160
574
+ the pragmatic, the, the dreamer versus the pragmatist.
575
+
576
+ 12:49.160 --> 12:56.680
577
+ So it's helpful to have a model which encourages a predictable revenue stream as well as the
578
+
579
+ 12:56.680 --> 12:58.720
580
+ ability to do new things.
581
+
582
+ 12:58.720 --> 13:03.120
583
+ So in Google's case, we're big enough and well enough managed and so forth that we have
584
+
585
+ 13:03.120 --> 13:06.600
586
+ a pretty good sense of what our revenue will be for the next year or two, at least for
587
+
588
+ 13:06.600 --> 13:07.960
589
+ a while.
590
+
591
+ 13:07.960 --> 13:13.720
592
+ And so we have enough cash generation that we can make bets.
593
+
594
+ 13:13.720 --> 13:18.760
595
+ And indeed, Google has become alphabet so the corporation is organized around these
596
+
597
+ 13:18.760 --> 13:19.760
598
+ bets.
599
+
600
+ 13:19.760 --> 13:25.560
601
+ And these bets are in areas of fundamental importance to, to the world, whether it's
602
+
603
+ 13:25.560 --> 13:31.920
604
+ digital intelligence, medical technology, self driving cars, connectivity through balloons,
605
+
606
+ 13:31.920 --> 13:33.440
607
+ on and on and on.
608
+
609
+ 13:33.440 --> 13:36.080
610
+ And there's more coming and more coming.
611
+
612
+ 13:36.080 --> 13:41.480
613
+ So one way you could express this is that the current business is successful enough
614
+
615
+ 13:41.480 --> 13:44.720
616
+ that we have the luxury of making bets.
617
+
618
+ 13:44.720 --> 13:48.960
619
+ And another one that you could say is that we have the, the wisdom of being able to see
620
+
621
+ 13:48.960 --> 13:53.920
622
+ that a corporate structure needs to be created to enhance the likelihood of the success of
623
+
624
+ 13:53.920 --> 13:55.320
625
+ those bets.
626
+
627
+ 13:55.320 --> 13:59.760
628
+ So we essentially turned ourselves into a conglomerate of bets.
629
+
630
+ 13:59.760 --> 14:04.360
631
+ And then this underlying corporation Google, which is itself innovative.
632
+
633
+ 14:04.360 --> 14:08.160
634
+ So in order to pull this off, you have to have a bunch of belief systems.
635
+
636
+ 14:08.160 --> 14:12.080
637
+ And one of them is that you have to have bottoms up and tops down. The bottoms up
638
+
639
+ 14:12.080 --> 14:13.600
640
+ we call 20% time.
641
+
642
+ 14:13.600 --> 14:17.040
643
+ And the idea is that people can spend 20% of the time on whatever they want.
644
+
645
+ 14:17.040 --> 14:21.960
646
+ And the top down is that our founders in particular have a keen eye on technology and they're
647
+
648
+ 14:21.960 --> 14:24.000
649
+ reviewing things constantly.
650
+
651
+ 14:24.000 --> 14:27.520
652
+ So an example would be they'll, they'll hear about an idea or I'll hear about something
653
+
654
+ 14:27.520 --> 14:28.880
655
+ and it sounds interesting.
656
+
657
+ 14:28.880 --> 14:34.920
658
+ Let's go visit them and then let's begin to assemble the pieces to see if that's possible.
659
+
660
+ 14:34.920 --> 14:39.920
661
+ And if you do this long enough, you get pretty good at predicting what's likely to work.
662
+
663
+ 14:39.920 --> 14:42.120
664
+ So that's, that's a beautiful balance that struck.
665
+
666
+ 14:42.120 --> 14:44.560
667
+ Is this something that applies at all scale?
668
+
669
+ 14:44.560 --> 14:53.960
670
+ So seems, seems to be that Sergey, again, 15 years ago, came up with a concept that
671
+
672
+ 14:53.960 --> 14:59.040
673
+ called for 10% of the budget to be on things that are unrelated.
674
+
675
+ 14:59.040 --> 15:05.040
676
+ It was called 70, 20, 10, 70% of our time on core business, 20% on adjacent business
677
+
678
+ 15:05.040 --> 15:06.920
679
+ and 10% on other.
680
+
681
+ 15:06.920 --> 15:11.200
682
+ And he proved mathematically, of course, he's a brilliant mathematician, that you needed
683
+
684
+ 15:11.200 --> 15:18.800
685
+ that 10% right to make the sum of the growth work and it turns out he was right.
686
+
687
+ 15:18.800 --> 15:24.320
688
+ So getting into the world of artificial intelligence, you've, you've talked quite extensively and
689
+
690
+ 15:24.320 --> 15:32.160
691
+ effectively to the impact in the near term, the positive impact of artificial intelligence,
692
+
693
+ 15:32.160 --> 15:38.720
694
+ whether it's machine, especially machine learning in medical applications and education
695
+
696
+ 15:38.720 --> 15:44.160
697
+ and just making information more accessible, right in the AI community, there is a kind
698
+
699
+ 15:44.160 --> 15:49.520
700
+ of debate, so there's this shroud of uncertainty as we face this new world with artificial
701
+
702
+ 15:49.520 --> 15:50.800
703
+ intelligence in it.
704
+
705
+ 15:50.800 --> 15:57.560
706
+ And there is some people like Elon Musk, you've disagreed on at least on the degree of emphasis
707
+
708
+ 15:57.560 --> 16:00.800
709
+ he places on the existential threat of AI.
710
+
711
+ 16:00.800 --> 16:07.120
712
+ So I've spoken with Stuart Russell, Max Tegmark, who share Elon Musk's view, and Yoshua Bengio,
713
+
714
+ 16:07.120 --> 16:09.240
715
+ Steven Pinker, who do not.
716
+
717
+ 16:09.240 --> 16:13.320
718
+ And so there's a, there's a, there's a lot of very smart people who are thinking about
719
+
720
+ 16:13.320 --> 16:17.280
721
+ this stuff, disagreeing, which is really healthy, of course.
722
+
723
+ 16:17.280 --> 16:22.800
724
+ So what do you think is the healthiest way for the AI community to, and really for the
725
+
726
+ 16:22.800 --> 16:30.880
727
+ general public to think about AI and the concern of the technology being mismanaged
728
+
729
+ 16:30.880 --> 16:33.000
730
+ in some, in some kind of way.
731
+
732
+ 16:33.000 --> 16:37.560
733
+ So the source of education for the general public has been robot killer movies.
734
+
735
+ 16:37.560 --> 16:38.760
736
+ Right.
737
+
738
+ 16:38.760 --> 16:44.840
739
+ And Terminator, et cetera, and the one thing I can assure you we're not building are those
740
+
741
+ 16:44.840 --> 16:46.200
742
+ kinds of solutions.
743
+
744
+ 16:46.200 --> 16:51.240
745
+ Furthermore, if they were to show up, someone would notice and unplug them, right?
746
+
747
+ 16:51.240 --> 16:57.760
748
+ So as exciting as those movies are, and they're great movies, were the killer robots to start,
749
+
750
+ 16:57.760 --> 17:00.520
751
+ we would find a way to stop them, right?
752
+
753
+ 17:00.520 --> 17:04.320
754
+ So I'm not concerned about that.
755
+
756
+ 17:04.320 --> 17:08.680
757
+ And much of this has to do with the timeframe of conversation.
758
+
759
+ 17:08.680 --> 17:16.120
760
+ So you can imagine a situation a hundred years from now, when the human brain is fully understood,
761
+
762
+ 17:16.120 --> 17:19.880
763
+ and the next generation and next generation of brilliant MIT scientists have figured
764
+
765
+ 17:19.880 --> 17:25.960
766
+ all this out, we're going to have a large number of ethics questions, right, around science
767
+
768
+ 17:25.960 --> 17:29.760
769
+ and thinking and robots and computers and so forth and so on.
770
+
771
+ 17:29.760 --> 17:32.360
772
+ So it depends on the question of the timeframe.
773
+
774
+ 17:32.360 --> 17:37.280
775
+ In the next five to 10 years, we're not facing those questions.
776
+
777
+ 17:37.280 --> 17:42.000
778
+ What we're facing in the next five to 10 years is how do we spread this disruptive technology
779
+
780
+ 17:42.000 --> 17:46.520
781
+ as broadly as possible to gain the maximum benefit of it?
782
+
783
+ 17:46.520 --> 17:51.880
784
+ The primary benefit should be in healthcare and in education, healthcare because it's
785
+
786
+ 17:51.880 --> 17:52.880
787
+ obvious.
788
+
789
+ 17:52.880 --> 17:55.840
790
+ We're all the same, even though we somehow believe we're not.
791
+
792
+ 17:55.840 --> 18:00.400
793
+ As a medical matter, the fact that we have big data about our health will save lives,
794
+
795
+ 18:00.400 --> 18:05.520
796
+ allow us to deal with skin cancer and other cancers, ophthalmological problems.
797
+
798
+ 18:05.520 --> 18:10.080
799
+ There's people working on psychological diseases and so forth using these techniques.
800
+
801
+ 18:10.080 --> 18:11.680
802
+ I go on and on.
803
+
804
+ 18:11.680 --> 18:15.840
805
+ The promise of AI in medicine is extraordinary.
806
+
807
+ 18:15.840 --> 18:20.360
808
+ There are many, many companies and startups and funds and solutions and we will all live
809
+
810
+ 18:20.360 --> 18:22.120
811
+ much better for that.
812
+
813
+ 18:22.120 --> 18:25.680
814
+ The same argument in education.
815
+
816
+ 18:25.680 --> 18:31.760
817
+ Can you imagine that for each generation of child and even adult, you have a tutor educator
818
+
819
+ 18:31.760 --> 18:37.320
820
+ that's AI based, that's not a human but is properly trained, that helps you get smarter,
821
+
822
+ 18:37.320 --> 18:41.440
823
+ helps you address your language difficulties or your math difficulties or what have you.
824
+
825
+ 18:41.440 --> 18:43.400
826
+ Why don't we focus on those two?
827
+
828
+ 18:43.400 --> 18:49.240
829
+ The gains societally of making humans smarter and healthier are enormous.
830
+
831
+ 18:49.240 --> 18:54.000
832
+ Those translate for decades and decades and will all benefit from them.
833
+
834
+ 18:54.000 --> 18:58.560
835
+ There are people who are working on AI safety, which is the issue that you're describing.
836
+
837
+ 18:58.560 --> 19:02.920
838
+ There are conversations in the community that should there be such problems, what should
839
+
840
+ 19:02.920 --> 19:04.360
841
+ the rules be like?
842
+
843
+ 19:04.360 --> 19:09.360
844
+ Google, for example, has announced its policies with respect to AI safety, which I certainly
845
+
846
+ 19:09.360 --> 19:14.320
847
+ support and I think most everybody would support and they make sense.
848
+
849
+ 19:14.320 --> 19:19.760
850
+ It helps guide the research but the killer robots are not arriving this year and they're
851
+
852
+ 19:19.760 --> 19:23.840
853
+ not even being built.
854
+
855
+ 19:23.840 --> 19:31.040
856
+ On that line of thinking, you said the timescale, in this topic or other topics, have you found
857
+
858
+ 19:31.040 --> 19:37.920
859
+ it useful, on the business side or the intellectual side, to think beyond 5, 10 years, to think
860
+
861
+ 19:37.920 --> 19:39.480
862
+ 50 years out?
863
+
864
+ 19:39.480 --> 19:42.000
865
+ Has it ever been useful or productive?
866
+
867
+ 19:42.000 --> 19:49.040
868
+ In our industry, there are essentially no examples of 50 year predictions that have been correct.
869
+
870
+ 19:49.040 --> 19:54.320
871
+ Let's review AI, which was largely invented here at MIT and a couple of other universities
872
+
873
+ 19:54.320 --> 19:57.840
874
+ in 1956, 1957, 1958.
875
+
876
+ 19:57.840 --> 20:01.800
877
+ The original claims were a decade or two.
878
+
879
+ 20:01.800 --> 20:08.040
880
+ When I was a PhD student, I studied AI a bit and it entered, during my looking at it, a period
881
+
882
+ 20:08.040 --> 20:13.880
883
+ which is known as AI winter, which went on for about 30 years, which is a whole generation
884
+
885
+ 20:13.880 --> 20:18.800
886
+ of scientists and a whole group of people who didn't make a lot of progress because the
887
+
888
+ 20:18.800 --> 20:22.160
889
+ algorithms had not improved and the computers had not improved.
890
+
891
+ 20:22.160 --> 20:26.160
892
+ It took some brilliant mathematicians, starting with a fellow named Geoff Hinton at Toronto
893
+
894
+ 20:26.160 --> 20:33.120
895
+ in Montreal, who basically invented this deep learning model which empowers us today.
896
+
897
+ 20:33.120 --> 20:40.400
898
+ The seminal work there was 20 years ago and in the last 10 years, it's become popularized.
899
+
900
+ 20:40.400 --> 20:43.520
901
+ Think about the time frames for that level of discovery.
902
+
903
+ 20:43.520 --> 20:46.080
904
+ It's very hard to predict.
905
+
906
+ 20:46.080 --> 20:50.240
907
+ Many people think that we'll be flying around in the equivalent of flying cars.
908
+
909
+ 20:50.240 --> 20:51.240
910
+ Who knows?
911
+
912
+ 20:51.240 --> 20:56.680
913
+ My own view, if I want to go out on a limb, is to say that we know a couple of things
914
+
915
+ 20:56.680 --> 20:58.000
916
+ about 50 years from now.
917
+
918
+ 20:58.000 --> 21:00.480
919
+ We know that there'll be more people alive.
920
+
921
+ 21:00.480 --> 21:04.000
922
+ We know that we'll have to have platforms that are more sustainable because the earth
923
+
924
+ 21:04.000 --> 21:09.440
925
+ is limited in the ways we all know and that the kind of platforms that are going to get
926
+
927
+ 21:09.440 --> 21:13.000
928
+ built will be consistent with the principles that I've described.
929
+
930
+ 21:13.000 --> 21:17.560
931
+ They will be much more empowering of individuals, they'll be much more sensitive to the ecology
932
+
933
+ 21:17.560 --> 21:20.440
934
+ because they have to be, they just have to be.
935
+
936
+ 21:20.440 --> 21:24.160
937
+ I also think that humans are going to be a great deal smarter and I think they're going
938
+
939
+ 21:24.160 --> 21:28.320
940
+ to be a lot smarter because of the tools that I've discussed with you and of course people
941
+
942
+ 21:28.320 --> 21:29.320
943
+ will live longer.
944
+
945
+ 21:29.320 --> 21:32.080
946
+ Life extension is continuing apace.
947
+
948
+ 21:32.080 --> 21:36.840
949
+ A baby born today has a reasonable chance of living to 100, which is pretty exciting.
950
+
951
+ 21:36.840 --> 21:40.760
952
+ It's well past the 21st century, so we better take care of them.
953
+
954
+ 21:40.760 --> 21:46.160
955
+ You mentioned interesting statistic on some very large percentage, 60, 70% of people may
956
+
957
+ 21:46.160 --> 21:48.360
958
+ live in cities.
959
+
960
+ 21:48.360 --> 21:53.880
961
+ Today more than half the world lives in cities and one of the great stories of humanity in
962
+
963
+ 21:53.880 --> 21:57.560
964
+ the last 20 years has been the rural to urban migration.
965
+
966
+ 21:57.560 --> 22:02.720
967
+ This has occurred in the United States, it's occurred in Europe, it's occurring in Asia
968
+
969
+ 22:02.720 --> 22:04.760
970
+ and it's occurring in Africa.
971
+
972
+ 22:04.760 --> 22:09.280
973
+ When people move to cities, the cities get more crowded, but believe it or not their health
974
+
975
+ 22:09.280 --> 22:15.560
976
+ gets better, their productivity gets better, their IQ and educational capabilities improve,
977
+
978
+ 22:15.560 --> 22:19.840
979
+ so it's good news that people are moving to cities, but we have to make them livable
980
+
981
+ 22:19.840 --> 22:22.800
982
+ and safe.
983
+
984
+ 22:22.800 --> 22:28.240
985
+ So you, first of all, you are, but you've also worked with some of the greatest leaders
986
+
987
+ 22:28.240 --> 22:30.180
988
+ in the history of tech.
989
+
990
+ 22:30.180 --> 22:37.080
991
+ What insights do you draw from the difference in leadership styles of yourself, Steve Jobs,
992
+
993
+ 22:37.080 --> 22:45.320
994
+ Elon Musk, Larry Page, now the new CEO, Sundar Pichai and others from the, I would say, calm
995
+
996
+ 22:45.320 --> 22:49.600
997
+ sages to the mad geniuses?
998
+
999
+ 22:49.600 --> 22:53.880
1000
+ One of the things that I learned as a young executive is that there's no single formula
1001
+
1002
+ 22:53.880 --> 22:56.200
1003
+ for leadership.
1004
+
1005
+ 22:56.200 --> 23:00.080
1006
+ They try to teach one, but that's not how it really works.
1007
+
1008
+ 23:00.080 --> 23:04.360
1009
+ There are people who just understand what they need to do and they need to do it quickly.
1010
+
1011
+ 23:04.360 --> 23:06.800
1012
+ Those people are often entrepreneurs.
1013
+
1014
+ 23:06.800 --> 23:09.080
1015
+ They just know and they move fast.
1016
+
1017
+ 23:09.080 --> 23:13.400
1018
+ There are other people who are systems thinkers and planners, that's more who I am, somewhat
1019
+
1020
+ 23:13.400 --> 23:18.760
1021
+ more conservative, more thorough in execution, a little bit more risk averse.
1022
+
1023
+ 23:18.760 --> 23:24.120
1024
+ There's also people who are sort of slightly insane, right, in the sense that they are
1025
+
1026
+ 23:24.120 --> 23:29.040
1027
+ emphatic and charismatic and they feel it and they drive it and so forth.
1028
+
1029
+ 23:29.040 --> 23:31.440
1030
+ There's no single formula to success.
1031
+
1032
+ 23:31.440 --> 23:35.320
1033
+ There is one thing that unifies all of the people that you named, which is very high
1034
+
1035
+ 23:35.320 --> 23:41.240
1036
+ intelligence, at the end of the day, the thing that characterizes all of them is that they
1037
+
1038
+ 23:41.240 --> 23:46.360
1039
+ saw the world quicker, faster, they processed information faster, they didn't necessarily
1040
+
1041
+ 23:46.360 --> 23:50.160
1042
+ make the right decisions all the time, but they were on top of it.
1043
+
1044
+ 23:50.160 --> 23:54.600
1045
+ The other thing that's interesting about all those people is they all started young.
1046
+
1047
+ 23:54.600 --> 23:58.560
1048
+ Think about Steve Jobs starting Apple roughly at 18 or 19.
1049
+
1050
+ 23:58.560 --> 24:01.840
1051
+ Think about Bill Gates starting at roughly 20, 21.
1052
+
1053
+ 24:01.840 --> 24:07.040
1054
+ Think about by the time they were 30, Mark Zuckerberg, another good example, at 19, 20.
1055
+
1056
+ 24:07.040 --> 24:13.720
1057
+ By the time they were 30, they had 10 years, at 30 years old, they had 10 years of experience
1058
+
1059
+ 24:13.720 --> 24:19.920
1060
+ of dealing with people and products and shipments and the press and business and so forth.
1061
+
1062
+ 24:19.920 --> 24:24.480
1063
+ It's incredible how much experience they had compared to the rest of us who were busy getting
1064
+
1065
+ 24:24.480 --> 24:25.480
1066
+ our PhDs.
1067
+
1068
+ 24:25.480 --> 24:26.480
1069
+ Yes, exactly.
1070
+
1071
+ 24:26.480 --> 24:32.760
1072
+ We should celebrate these people because they've just had more life experience and that helps
1073
+
1074
+ 24:32.760 --> 24:34.520
1075
+ inform the judgment.
1076
+
1077
+ 24:34.520 --> 24:41.360
1078
+ At the end of the day, when you're at the top of these organizations, all the easy questions
1079
+
1080
+ 24:41.360 --> 24:43.680
1081
+ have been dealt with.
1082
+
1083
+ 24:43.680 --> 24:45.840
1084
+ How should we design the buildings?
1085
+
1086
+ 24:45.840 --> 24:48.400
1087
+ Where should we put the colors on our product?
1088
+
1089
+ 24:48.400 --> 24:51.440
1090
+ What should the box look like?
1091
+
1092
+ 24:51.440 --> 24:55.520
1093
+ The problems, that's why it's so interesting to be in these rooms, the problems that they
1094
+
1095
+ 24:55.520 --> 25:00.200
1096
+ face in terms of the way they operate, the way they deal with their employees, their
1097
+
1098
+ 25:00.200 --> 25:04.160
1099
+ customers, their innovation are profoundly challenging.
1100
+
1101
+ 25:04.160 --> 25:09.360
1102
+ Each of the companies is demonstrably different culturally.
1103
+
1104
+ 25:09.360 --> 25:11.800
1105
+ They are not, in fact, cut of the same cloth.
1106
+
1107
+ 25:11.800 --> 25:16.680
1108
+ They behave differently based on input, their internal cultures are different, their compensation
1109
+
1110
+ 25:16.680 --> 25:24.920
1111
+ schemes are different, their values are different, so there's proof that diversity works.
1112
+
1113
+ 25:24.920 --> 25:33.440
1114
+ So when faced with a tough decision, in need of advice, it's been said that the best thing
1115
+
1116
+ 25:33.440 --> 25:39.840
1117
+ one can do is to find the best person in the world who can give that advice and find a
1118
+
1119
+ 25:39.840 --> 25:44.880
1120
+ way to be in a room with them, one on one and ask.
1121
+
1122
+ 25:44.880 --> 25:51.920
1123
+ So here we are, and let me ask in a long winded way, I wrote this down, in 1998 there were
1124
+
1125
+ 25:51.920 --> 26:01.960
1126
+ many good search engines, Lycos, Excite, Altavista, Infoseek, AskJeeves maybe, Yahoo even.
1127
+
1128
+ 26:01.960 --> 26:07.040
1129
+ So Google stepped in and disrupted everything, they disrupted the nature of search, the nature
1130
+
1131
+ 26:07.040 --> 26:12.040
1132
+ of our access to information, the way we discover new knowledge.
1133
+
1134
+ 26:12.040 --> 26:19.120
1135
+ So now it's 2018, actually 20 years later, there are many good personal AI assistants,
1136
+
1137
+ 26:19.120 --> 26:22.360
1138
+ including of course the best from Google.
1139
+
1140
+ 26:22.360 --> 26:28.720
1141
+ So you've spoken in medical and education, the impact of such an AI assistant could bring.
1142
+
1143
+ 26:28.720 --> 26:34.920
1144
+ So we arrive at this question, so it's a personal one for me, but I hope my situation represents
1145
+
1146
+ 26:34.920 --> 26:41.200
1147
+ that of many other, as we said, dreamers and the crazy engineers.
1148
+
1149
+ 26:41.200 --> 26:46.680
1150
+ So my whole life, I've dreamed of creating such an AI assistant.
1151
+
1152
+ 26:46.680 --> 26:50.800
1153
+ Every step I've taken has been towards that goal, now I'm a research scientist in Human
1154
+
1155
+ 26:50.800 --> 26:58.920
1156
+ Centered AI here at MIT, so the next step for me as I sit here, facing my passion, is to
1157
+
1158
+ 26:58.920 --> 27:04.880
1159
+ do what Larry and Sergey did in 1998, this simple start up.
1160
+
1161
+ 27:04.880 --> 27:10.640
1162
+ And so here's my simple question, given the low odds of success, the timing and luck required,
1163
+
1164
+ 27:10.640 --> 27:14.280
1165
+ the countless other factors that can't be controlled or predicted, which is all the
1166
+
1167
+ 27:14.280 --> 27:16.560
1168
+ things that Larry and Sergey faced.
1169
+
1170
+ 27:16.560 --> 27:23.080
1171
+ Is there some calculation, some strategy to follow in this step, or do you simply follow
1172
+
1173
+ 27:23.080 --> 27:26.560
1174
+ the passion just because there's no other choice?
1175
+
1176
+ 27:26.560 --> 27:32.880
1177
+ I think the people who are in universities are always trying to study the extraordinarily
1178
+
1179
+ 27:32.880 --> 27:37.360
1180
+ chaotic nature of innovation and entrepreneurship.
1181
+
1182
+ 27:37.360 --> 27:42.880
1183
+ My answer is that they didn't have that conversation, they just did it.
1184
+
1185
+ 27:42.880 --> 27:48.840
1186
+ They sensed a moment when, in the case of Google, there was all of this data that needed
1187
+
1188
+ 27:48.840 --> 27:53.940
1189
+ to be organized and they had a better algorithm, they had invented a better way.
1190
+
1191
+ 27:53.940 --> 28:01.040
1192
+ So today with Human Centered AI, which is your area of research, there must be new approaches.
1193
+
1194
+ 28:01.040 --> 28:07.320
1195
+ It's such a big field, there must be new approaches, different from what we and others are doing.
1196
+
1197
+ 28:07.320 --> 28:12.320
1198
+ There must be startups to fund, there must be research projects to try, there must be
1199
+
1200
+ 28:12.320 --> 28:15.200
1201
+ graduate students to work on new approaches.
1202
+
1203
+ 28:15.200 --> 28:19.120
1204
+ Here at MIT, there are people who are looking at learning from the standpoint of looking
1205
+
1206
+ 28:19.120 --> 28:23.840
1207
+ at child learning, how do children learn starting at age one?
1208
+
1209
+ 28:23.840 --> 28:25.560
1210
+ And the work is fantastic.
1211
+
1212
+ 28:25.560 --> 28:30.120
1213
+ Those approaches are different from the approach that most people are taking.
1214
+
1215
+ 28:30.120 --> 28:33.980
1216
+ Perhaps that's a bet that you should make, or perhaps there's another one.
1217
+
1218
+ 28:33.980 --> 28:40.200
1219
+ But at the end of the day, the successful entrepreneurs are not as crazy as they sound.
1220
+
1221
+ 28:40.200 --> 28:43.200
1222
+ They see an opportunity based on what's happened.
1223
+
1224
+ 28:43.200 --> 28:45.400
1225
+ Let's use Uber as an example.
1226
+
1227
+ 28:45.400 --> 28:49.840
1228
+ As Travis tells the story, he and his cofounder were sitting in Paris and they had this idea
1229
+
1230
+ 28:49.840 --> 28:52.160
1231
+ because they couldn't get a cab.
1232
+
1233
+ 28:52.160 --> 28:56.800
1234
+ And they said, we have smartphones and the rest is history.
1235
+
1236
+ 28:56.800 --> 29:04.040
1237
+ So what's the equivalent of that Travis Eiffel Tower, "where is a cab" moment that you could,
1238
+
1239
+ 29:04.040 --> 29:08.800
1240
+ as an entrepreneur, take advantage of, whether it's in Human Centered AI or something else?
1241
+
1242
+ 29:08.800 --> 29:11.480
1243
+ That's the next great start up.
1244
+
1245
+ 29:11.480 --> 29:13.760
1246
+ And the psychology of that moment.
1247
+
1248
+ 29:13.760 --> 29:20.120
1249
+ So when Sergey and Larry talk about it, and I've listened to a few interviews, it's very nonchalant.
1250
+
1251
+ 29:20.120 --> 29:25.280
1252
+ Well, here's the very fascinating web data.
1253
+
1254
+ 29:25.280 --> 29:29.080
1255
+ And here's an algorithm we have for, you know, we just kind of want to play around with that
1256
+
1257
+ 29:29.080 --> 29:30.080
1258
+ data.
1259
+
1260
+ 29:30.080 --> 29:32.520
1261
+ And it seems like that's a really nice way to organize this data.
1262
+
1263
+ 29:32.520 --> 29:38.000
1264
+ Well, I should say what happened, remember, is that they were graduate students at Stanford
1265
+
1266
+ 29:38.000 --> 29:41.320
1267
+ and they thought this is interesting, so they built a search engine and they kept it in
1268
+
1269
+ 29:41.320 --> 29:43.400
1270
+ their room.
1271
+
1272
+ 29:43.400 --> 29:47.520
1273
+ And they had to get power from the room next door because they were using too much power
1274
+
1275
+ 29:47.520 --> 29:48.520
1276
+ in the room.
1277
+
1278
+ 29:48.520 --> 29:51.640
1279
+ So they ran an extension cord over.
1280
+
1281
+ 29:51.640 --> 29:55.360
1282
+ And then they went and they found a house and they had Google World headquarters of
1283
+
1284
+ 29:55.360 --> 29:57.600
1285
+ five people to start the company.
1286
+
1287
+ 29:57.600 --> 30:02.560
1288
+ And they raised $100,000 from Andy Bechtolsheim, who was the Sun founder, to do this, and Dave
1289
+
1290
+ 30:02.560 --> 30:04.520
1291
+ Cheriton and a few others.
1292
+
1293
+ 30:04.520 --> 30:11.960
1294
+ The point is their beginnings were very simple, but they were based on a powerful insight.
1295
+
1296
+ 30:11.960 --> 30:14.320
1297
+ That is a replicable model for any startup.
1298
+
1299
+ 30:14.320 --> 30:16.520
1300
+ It has to be a powerful insight.
1301
+
1302
+ 30:16.520 --> 30:17.680
1303
+ The beginnings are simple.
1304
+
1305
+ 30:17.680 --> 30:19.960
1306
+ And there has to be an innovation.
1307
+
1308
+ 30:19.960 --> 30:24.280
1309
+ In Larry and Sergey's case, it was PageRank, which was a brilliant idea, one of the most
1310
+
1311
+ 30:24.280 --> 30:26.880
1312
+ cited papers in the world today.
1313
+
1314
+ 30:26.880 --> 30:29.880
1315
+ What's the next one?
1316
+
1317
+ 30:29.880 --> 30:37.280
1318
+ So you're one of, if I may say, the richest people in the world, and yet it seems that money
1319
+
1320
+ 30:37.280 --> 30:43.200
1321
+ is simply a side effect of your passions and not an inherent goal.
1322
+
1323
+ 30:43.200 --> 30:48.360
1324
+ But it's a, you're a fascinating person to ask.
1325
+
1326
+ 30:48.360 --> 30:55.080
1327
+ So much of our society at the individual level and at the company level and as nations is
1328
+
1329
+ 30:55.080 --> 30:58.920
1330
+ driven by the desire for wealth.
1331
+
1332
+ 30:58.920 --> 31:01.280
1333
+ What do you think about this drive?
1334
+
1335
+ 31:01.280 --> 31:07.000
1336
+ And what have you learned about, if I may romanticize the notion, the meaning of life,
1337
+
1338
+ 31:07.000 --> 31:10.520
1339
+ having achieved success on so many dimensions?
1340
+
1341
+ 31:10.520 --> 31:16.960
1342
+ There have been many studies of human happiness and above some threshold, which is typically
1343
+
1344
+ 31:16.960 --> 31:23.600
1345
+ relatively low for this conversation, there's no difference in happiness about money.
1346
+
1347
+ 31:23.600 --> 31:30.120
1348
+ The happiness is correlated with meaning and purpose, a sense of family, a sense of impact.
1349
+
1350
+ 31:30.120 --> 31:34.440
1351
+ So if you organize your life, assuming you have enough to get around and have a nice
1352
+
1353
+ 31:34.440 --> 31:40.400
1354
+ home and so forth, you'll be far happier if you figure out what you care about and work
1355
+
1356
+ 31:40.400 --> 31:41.800
1357
+ on that.
1358
+
1359
+ 31:41.800 --> 31:44.120
1360
+ It's often being in service to others.
1361
+
1362
+ 31:44.120 --> 31:47.840
1363
+ There's a great deal of evidence that people are happiest when they're serving others
1364
+
1365
+ 31:47.840 --> 31:49.640
1366
+ and not themselves.
1367
+
1368
+ 31:49.640 --> 31:57.480
1369
+ This goes directly against the sort of press induced excitement about powerful and wealthy
1370
+
1371
+ 31:57.480 --> 32:01.840
1372
+ leaders of one kind and indeed, these are consequential people.
1373
+
1374
+ 32:01.840 --> 32:06.720
1375
+ But if you are in a situation where you've been very fortunate as I have, you also have
1376
+
1377
+ 32:06.720 --> 32:12.160
1378
+ to take that as a responsibility and you have to basically work both to educate others and
1379
+
1380
+ 32:12.160 --> 32:16.760
1381
+ give them that opportunity, but also use that wealth to advance human society.
1382
+
1383
+ 32:16.760 --> 32:20.440
1384
+ In my case, I'm particularly interested in using the tools of artificial intelligence
1385
+
1386
+ 32:20.440 --> 32:22.800
1387
+ and machine learning to make society better.
1388
+
1389
+ 32:22.800 --> 32:24.000
1390
+ I've mentioned education.
1391
+
1392
+ 32:24.000 --> 32:29.040
1393
+ I've mentioned inequality and middle class and things like this, all of which are a passion
1394
+
1395
+ 32:29.040 --> 32:30.160
1396
+ of mine.
1397
+
1398
+ 32:30.160 --> 32:31.920
1399
+ It doesn't matter what you do.
1400
+
1401
+ 32:31.920 --> 32:36.560
1402
+ It matters that you believe in it, that it's important to you and that your life will be
1403
+
1404
+ 32:36.560 --> 32:40.480
1405
+ far more satisfying if you spend your life doing that.
1406
+
1407
+ 32:40.480 --> 32:45.320
1408
+ I think there's no better place to end than a discussion of the meaning of life.
1409
+
1410
+ 32:45.320 --> 32:46.320
1411
+ Eric, thank you so much.
1412
+
1413
+ 32:46.320 --> 32:47.320
1414
+ Thank you very much.
1415
+
1416
+ 32:47.320 --> 33:16.320
1417
+ Thank you.
1418
+
vtt/episode_009_small.vtt ADDED
@@ -0,0 +1,2240 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:05.040
4
+ The following is a conversation with Stuart Russell. He's a professor of computer science at UC
5
+
6
+ 00:05.040 --> 00:11.360
7
+ Berkeley and a coauthor of a book that introduced me and millions of other people to the amazing world
8
+
9
+ 00:11.360 --> 00:18.320
10
+ of AI called Artificial Intelligence: A Modern Approach. So it was an honor for me to have this
11
+
12
+ 00:18.320 --> 00:24.480
13
+ conversation as part of MIT course on artificial general intelligence and the artificial intelligence
14
+
15
+ 00:24.480 --> 00:30.800
16
+ podcast. If you enjoy it, please subscribe on YouTube, iTunes or your podcast provider of choice
17
+
18
+ 00:31.360 --> 00:37.600
19
+ or simply connect with me on Twitter at Lex Fridman spelled F R I D. And now here's my
20
+
21
+ 00:37.600 --> 00:46.160
22
+ conversation with Stuart Russell. So you've mentioned in 1975 in high school you've created
23
+
24
+ 00:46.160 --> 00:54.160
25
+ one of your first AI programs that played chess. Were you ever able to build a program that
26
+
27
+ 00:54.160 --> 01:02.080
28
+ beat you at chess or another board game? So my program never beat me at chess.
29
+
30
+ 01:03.520 --> 01:10.480
31
+ I actually wrote the program at Imperial College. So I used to take the bus every Wednesday with a
32
+
33
+ 01:10.480 --> 01:17.200
34
+ box of cards this big and shove them into the card reader and they gave us eight seconds of CPU time.
35
+
36
+ 01:17.200 --> 01:24.720
37
+ It took about five seconds to read the cards in and compile the code. So we had three seconds of
38
+
39
+ 01:24.720 --> 01:30.960
40
+ CPU time, which was enough to make one move, you know, with a not very deep search. And then we
41
+
42
+ 01:30.960 --> 01:34.960
43
+ would print that move out and then we'd have to go to the back of the queue and wait to feed the
44
+
45
+ 01:34.960 --> 01:40.480
46
+ cards in again. How deep was the search? Well, are we talking about two moves? So no, I think we've
47
+
48
+ 01:40.480 --> 01:48.000
49
+ got an eight move, eight, you know, depth eight with alpha beta. And we had some tricks of
50
+
51
+ 01:48.000 --> 01:54.480
52
+ our own about move ordering and some pruning of the tree. And we were still able to beat that
53
+
54
+ 01:54.480 --> 02:00.960
55
+ program. Yeah, yeah, I was a reasonable chess player in my youth. I did an Othello program
56
+
57
+ 02:01.680 --> 02:05.920
58
+ and a backgammon program. So when I got to Berkeley, I worked a lot on
59
+
60
+ 02:05.920 --> 02:12.560
61
+ what we call meta reasoning, which really means reasoning about reasoning. And in the case of
62
+
63
+ 02:13.200 --> 02:18.320
64
+ a game playing program, you need to reason about what parts of the search tree you're actually
65
+
66
+ 02:18.320 --> 02:23.440
67
+ going to explore, because the search tree is enormous, you know, bigger than the number of
68
+
69
+ 02:23.440 --> 02:30.960
70
+ atoms in the universe. And the way programs succeed and the way humans succeed is by only
71
+
72
+ 02:30.960 --> 02:36.160
73
+ looking at a small fraction of the search tree. And if you look at the right fraction, you play
74
+
75
+ 02:36.160 --> 02:41.360
76
+ really well. If you look at the wrong fraction, if you waste your time thinking about things that
77
+
78
+ 02:41.360 --> 02:45.840
79
+ are never going to happen, the moves that no one's ever going to make, then you're going to lose,
80
+
81
+ 02:45.840 --> 02:53.760
82
+ because you won't be able to figure out the right decision. So that question of how machines can
83
+
84
+ 02:53.760 --> 02:59.760
85
+ manage their own computation, how they decide what to think about is the meta reasoning question.
86
+
87
+ 02:59.760 --> 03:05.920
88
+ We developed some methods for doing that. And very simply, a machine should think about
89
+
90
+ 03:06.640 --> 03:11.920
91
+ whatever thoughts are going to improve its decision quality. We were able to show that
92
+
93
+ 03:12.640 --> 03:18.240
94
+ both for Othello, which is a standard two player game, and for backgammon, which includes
95
+
96
+ 03:19.040 --> 03:24.000
97
+ dice rolls, so it's a two player game with uncertainty. For both of those cases, we could
98
+
99
+ 03:24.000 --> 03:30.480
100
+ come up with algorithms that were actually much more efficient than the standard alpha beta search,
101
+
102
+ 03:31.120 --> 03:36.560
103
+ which chess programs at the time were using. And that those programs could beat me.
104
+
105
+ 03:38.080 --> 03:44.720
106
+ And I think you can see the same basic ideas in AlphaGo and AlphaZero today.
107
+
108
+ 03:44.720 --> 03:52.560
109
+ The way they explore the tree is using a form of meta reasoning to select what to think about
110
+
111
+ 03:52.560 --> 03:57.840
112
+ based on how useful it is to think about it. Is there any insights you can describe
113
+
114
+ 03:57.840 --> 04:03.040
115
+ without Greek symbols of how do we select which paths to go down?
116
+
117
+ 04:04.240 --> 04:10.560
118
+ There's really two kinds of learning going on. So as you say, AlphaGo learns to evaluate board
119
+
120
+ 04:10.560 --> 04:17.680
121
+ position. So it can look at a go board. And it actually has probably a super
122
+
123
+ 04:17.680 --> 04:25.760
124
+ human ability to instantly tell how promising that situation is. To me, the amazing thing about
125
+
126
+ 04:25.760 --> 04:34.560
127
+ AlphaGo is not that it can beat the world champion with its hands tied behind its back. But the fact that
128
+
129
+ 04:34.560 --> 04:41.360
130
+ if you stop it from searching altogether, so you say, okay, you're not allowed to do
131
+
132
+ 04:41.360 --> 04:46.480
133
+ any thinking ahead. You can just consider each of your legal moves and then look at the
134
+
135
+ 04:47.120 --> 04:53.280
136
+ resulting situation and evaluate it. So what we call a depth one search. So just the immediate
137
+
138
+ 04:53.280 --> 04:57.920
139
+ outcome of your moves and decide if that's good or bad. That version of AlphaGo
140
+
141
+ 04:57.920 --> 05:05.200
142
+ can still play at a professional level. And human professionals are sitting there for
143
+
144
+ 05:05.200 --> 05:13.440
145
+ five, 10 minutes deciding what to do. And AlphaGo in less than a second can instantly intuit what
146
+
147
+ 05:13.440 --> 05:18.880
148
+ is the right move to make based on its ability to evaluate positions. And that is remarkable
149
+
150
+ 05:19.760 --> 05:26.080
151
+ because we don't have that level of intuition about go. We actually have to think about the
152
+
153
+ 05:26.080 --> 05:35.200
154
+ situation. So anyway, that capability that AlphaGo has is one big part of why it beats humans.
155
+
156
+ 05:35.840 --> 05:44.560
157
+ The other big part is that it's able to look ahead 40, 50, 60 moves into the future. And
158
+
159
+ 05:46.880 --> 05:51.200
160
+ if it was considering all possibilities, 40 or 50 or 60 moves into the future,
161
+
162
+ 05:51.200 --> 06:02.240
163
+ that would be 10 to the 200 possibilities. So way more than atoms in the universe and so on.
164
+
165
+ 06:02.240 --> 06:09.680
166
+ So it's very, very selective about what it looks at. So let me try to give you an intuition about
167
+
168
+ 06:10.880 --> 06:15.360
169
+ how you decide what to think about. It's a combination of two things. One is
170
+
171
+ 06:15.360 --> 06:22.000
172
+ how promising it is. So if you're already convinced that a move is terrible,
173
+
174
+ 06:22.560 --> 06:26.560
175
+ there's no point spending a lot more time convincing yourself that it's terrible.
176
+
177
+ 06:27.520 --> 06:33.600
178
+ Because it's probably not going to change your mind. So the real reason you think is because
179
+
180
+ 06:33.600 --> 06:39.920
181
+ there's some possibility of changing your mind about what to do. And is that changing your mind
182
+
183
+ 06:39.920 --> 06:46.000
184
+ that would result then in a better final action in the real world. So that's the purpose of thinking
185
+
186
+ 06:46.800 --> 06:53.520
187
+ is to improve the final action in the real world. And so if you think about a move that is guaranteed
188
+
189
+ 06:53.520 --> 06:58.000
190
+ to be terrible, you can convince yourself it's terrible, you're still not going to change your
191
+
192
+ 06:58.000 --> 07:04.320
193
+ mind. But on the other hand, suppose you had a choice between two moves, one of them you've
194
+
195
+ 07:04.320 --> 07:10.400
196
+ already figured out is guaranteed to be a draw, let's say. And then the other one looks a little
197
+
198
+ 07:10.400 --> 07:14.000
199
+ bit worse. Like it looks fairly likely that if you make that move, you're going to lose.
200
+
201
+ 07:14.640 --> 07:20.720
202
+ But there's still some uncertainty about the value of that move. There's still some possibility
203
+
204
+ 07:20.720 --> 07:25.920
205
+ that it will turn out to be a win. Then it's worth thinking about that. So even though it's
206
+
207
+ 07:25.920 --> 07:31.280
208
+ less promising on average than the other move, which is guaranteed to be a draw, there's still
209
+
210
+ 07:31.280 --> 07:36.160
211
+ some purpose in thinking about it because there's a chance that you'll change your mind and discover
212
+
213
+ 07:36.160 --> 07:42.080
214
+ that in fact it's a better move. So it's a combination of how good the move appears to be
215
+
216
+ 07:42.080 --> 07:48.000
217
+ and how much uncertainty there is about its value. The more uncertainty, the more it's worth thinking
218
+
219
+ 07:48.000 --> 07:52.800
220
+ about because there's a higher upside if you want to think of it that way. And of course in the
221
+
222
+ 07:52.800 --> 07:59.920
223
+ beginning, especially in the AlphaGo Zero formulation, it's everything is shrouded in
224
+
225
+ 07:59.920 --> 08:06.240
226
+ uncertainty. So you're really swimming in a sea of uncertainty. So it benefits you to
227
+
228
+ 08:07.520 --> 08:11.120
229
+ I mean, actually falling in the same process as you described, but because you're so uncertain
230
+
231
+ 08:11.120 --> 08:15.280
232
+ about everything, you basically have to try a lot of different directions.
233
+
234
+ 08:15.280 --> 08:22.400
235
+ Yeah. So the early parts of the search tree are fairly bushy that it will look at a lot
236
+
237
+ 08:22.400 --> 08:27.840
238
+ of different possibilities, but fairly quickly, the degree of certainty about some of the moves.
239
+
240
+ 08:27.840 --> 08:31.760
241
+ I mean, if a move is really terrible, you'll pretty quickly find out, right? You'll lose
242
+
243
+ 08:31.760 --> 08:37.200
244
+ half your pieces or half your territory. And then you'll say, okay, this is not worth thinking
245
+
246
+ 08:37.200 --> 08:45.280
247
+ about anymore. And then so further down, the tree becomes very long and narrow. And you're following
248
+
249
+ 08:45.280 --> 08:54.800
250
+ various lines of play, 10, 20, 30, 40, 50 moves into the future. And that again is something
251
+
252
+ 08:54.800 --> 09:01.920
253
+ that human beings have a very hard time doing mainly because they just lack the short term memory.
254
+
255
+ 09:02.480 --> 09:09.440
256
+ You just can't remember a sequence of moves. That's 50 moves long. And you can't imagine
257
+
258
+ 09:09.440 --> 09:15.520
259
+ the board correctly for that many moves into the future. Of course, the top players,
260
+
261
+ 09:16.400 --> 09:19.280
262
+ I'm much more familiar with chess, but the top players probably have,
263
+
264
+ 09:19.280 --> 09:26.480
265
+ they have echoes of the same kind of intuition instinct that in a moment's time, AlphaGo applies
266
+
267
+ 09:27.280 --> 09:31.760
268
+ when they see a board. I mean, they've seen those patterns, human beings have seen those
269
+
270
+ 09:31.760 --> 09:37.680
271
+ patterns before at the top, at the grandmaster level. It seems that there is some
272
+
273
+ 09:40.000 --> 09:45.920
274
+ similarities, or maybe it's our imagination creates a vision of those similarities, but it
275
+
276
+ 09:45.920 --> 09:53.120
277
+ feels like this kind of pattern recognition that the AlphaGo approaches are using is similar to
278
+
279
+ 09:53.120 --> 10:00.560
280
+ what human beings at the top level are using. I think there's some truth to that.
281
+
282
+ 10:01.520 --> 10:08.960
283
+ But not entirely. Yeah, I mean, I think the extent to which a human grandmaster can reliably
284
+
285
+ 10:10.080 --> 10:13.680
286
+ instantly recognize the right move and instantly recognize the values of position.
287
+
288
+ 10:13.680 --> 10:19.120
289
+ I think that's a little bit overrated. But if you sacrifice a queen, for example,
290
+
291
+ 10:19.120 --> 10:23.840
292
+ I mean, there's these beautiful games of chess with Bobby Fischer or somebody, where
293
+
294
+ 10:24.640 --> 10:32.720
295
+ it's seeming to make a bad move. And I'm not sure there's a perfect degree of calculation
296
+
297
+ 10:32.720 --> 10:37.440
298
+ involved where they've calculated all the possible things that happen. But there's an
299
+
300
+ 10:37.440 --> 10:45.440
301
+ instinct there, right, that somehow adds up to. Yeah, so I think what happens is you get a sense
302
+
303
+ 10:45.440 --> 10:51.680
304
+ that there's some possibility in the position, even if you make a weird looking move, that it
305
+
306
+ 10:51.680 --> 11:05.120
307
+ opens up some lines of calculation that otherwise would be definitely bad. And it's that intuition
308
+
309
+ 11:05.120 --> 11:13.920
310
+ that there's something here in this position that might yield a win down the line. And then
311
+
312
+ 11:13.920 --> 11:20.640
313
+ you follow that. Right. And in some sense, when a chess player is following a line in his or her
314
+
315
+ 11:20.640 --> 11:27.120
316
+ mind, they're mentally simulating what the other person is going to do, what the opponent
317
+
318
+ 11:27.120 --> 11:33.680
319
+ is going to do. And they can do that as long as the moves are kind of forced, right, as long as
320
+
321
+ 11:33.680 --> 11:39.440
322
+ there's what we call a forcing variation where the opponent doesn't really have much choice how to
323
+
324
+ 11:39.440 --> 11:45.200
325
+ respond. And then you see if you can force them into a situation where you win. We see plenty
326
+
327
+ 11:45.200 --> 11:53.520
328
+ of mistakes, even in grandmaster games, where they just miss some simple three, four, five move
329
+
330
+ 11:54.560 --> 12:00.400
331
+ combination that wasn't particularly apparent in the position, but was still there.
332
+
333
+ 12:00.400 --> 12:07.360
334
+ That's the thing that makes us human. Yeah. So when you mentioned that in Othello, those games
335
+
336
+ 12:07.360 --> 12:13.760
337
+ were, after some meta reasoning improvements and research, able to beat you. How did that make
338
+
339
+ 12:13.760 --> 12:21.280
340
+ you feel? Part of the meta reasoning capability that it had was based on learning. And,
341
+
342
+ 12:23.280 --> 12:28.160
343
+ and you could sit down the next day and you could just feel that it had got a lot smarter.
344
+
345
+ 12:28.160 --> 12:33.280
346
+ You know, and all of a sudden, you really felt like you're sort of pressed against
347
+
348
+ 12:34.480 --> 12:40.800
349
+ the wall because it was much more aggressive and was totally unforgiving of any
350
+
351
+ 12:40.800 --> 12:47.760
352
+ minor mistake that you might make. And actually, it seemed to understand the game better than I did.
353
+
354
+ 12:47.760 --> 12:55.520
355
+ And Gary Kasparov has this quote where during his match against Deep Blue, he said he suddenly
356
+
357
+ 12:55.520 --> 13:01.680
358
+ felt that there was a new kind of intelligence across the board. Do you think that's a scary or
359
+
360
+ 13:01.680 --> 13:10.240
361
+ an exciting possibility for Kasparov and for yourself in the context of chess purely sort of
362
+
363
+ 13:10.240 --> 13:16.720
364
+ in this like that feeling, whatever that is, I think it's definitely an exciting feeling.
365
+
366
+ 13:17.600 --> 13:23.680
367
+ You know, this is what made me work on AI in the first place was as soon as I really understood
368
+
369
+ 13:23.680 --> 13:30.080
370
+ what a computer was, I wanted to make it smart. You know, I started out with the first program I
371
+
372
+ 13:30.080 --> 13:38.640
373
+ wrote was for the Sinclair Programmable Calculator. And I think you could write a 21 step algorithm.
374
+
375
+ 13:38.640 --> 13:44.160
376
+ That was the biggest program you could write something like that and do little arithmetic
377
+
378
+ 13:44.160 --> 13:49.440
379
+ calculations. So I think I implemented Newton's method for square roots and a few other things
380
+
381
+ 13:49.440 --> 13:56.640
382
+ like that. But then, you know, I thought, okay, if I just had more space, I could make this thing
383
+
384
+ 13:56.640 --> 14:10.560
385
+ intelligent. And so I started thinking about AI. And I think the thing that's scary is not the
386
+
387
+ 14:10.560 --> 14:19.520
388
+ chess program, because you know, chess programs, they're not in the taking over the world business.
389
+
390
+ 14:19.520 --> 14:29.440
391
+ But if you extrapolate, you know, there are things about chess that don't resemble the real
392
+
393
+ 14:29.440 --> 14:37.600
394
+ world, right? We know, we know the rules of chess. The chess board is completely visible
395
+
396
+ 14:37.600 --> 14:43.280
397
+ to the program where, of course, the real world is not. Most the real world is not visible from
398
+
399
+ 14:43.280 --> 14:52.400
400
+ wherever you're sitting, so to speak. And to overcome those kinds of problems, you need
401
+
402
+ 14:52.400 --> 14:58.240
403
+ qualitatively different algorithms. Another thing about the real world is that, you know, we
404
+
405
+ 14:58.240 --> 15:07.520
406
+ we regularly plan ahead on the timescales involving billions or trillions of steps. Now,
407
+
408
+ 15:07.520 --> 15:13.760
409
+ we don't plan those in detail. But, you know, when you choose to do a PhD at Berkeley,
410
+
411
+ 15:14.800 --> 15:20.480
412
+ that's a five year commitment that amounts to about a trillion motor control steps that you
413
+
414
+ 15:20.480 --> 15:26.160
415
+ will eventually be committed to. Including going up the stairs, opening doors,
416
+
417
+ 15:26.160 --> 15:32.880
418
+ drinking water, typing. Yeah, I mean, every finger movement while you're typing every character
419
+
420
+ 15:32.880 --> 15:37.280
421
+ of every paper and the thesis and everything. So you're not committing in advance to the specific
422
+
423
+ 15:37.280 --> 15:43.760
424
+ motor control steps, but you're still reasoning on a timescale that will eventually reduce to
425
+
426
+ 15:44.400 --> 15:50.000
427
+ trillions of motor control actions. And so for all these reasons,
428
+
429
+ 15:50.000 --> 15:58.160
430
+ you know, AlphaGo and Deep Blue and so on don't represent any kind of threat to humanity. But
431
+
432
+ 15:58.160 --> 16:08.320
433
+ they are a step towards it, right? And progress in AI occurs by essentially removing one by one
434
+
435
+ 16:08.320 --> 16:14.640
436
+ these assumptions that make problems easy, like the assumption of complete observability
437
+
438
+ 16:14.640 --> 16:21.120
439
+ of the situation, right? We remove that assumption, you need a much more complicated kind of computing
440
+
441
+ 16:22.160 --> 16:26.000
442
+ design and you need something that actually keeps track of all the things you can't see
443
+
444
+ 16:26.000 --> 16:31.920
445
+ and tries to estimate what's going on. And there's inevitable uncertainty in that. So it becomes a
446
+
447
+ 16:31.920 --> 16:38.160
448
+ much more complicated problem. But, you know, we are removing those assumptions, we are starting to
449
+
450
+ 16:38.160 --> 16:44.400
451
+ have algorithms that can cope with much longer timescales, cope with uncertainty that can cope
452
+
453
+ 16:44.400 --> 16:53.360
454
+ with partial observability. And so each of those steps sort of magnifies by a thousand the range
455
+
456
+ 16:53.360 --> 16:58.400
457
+ of things that we can do with AI systems. So the way I started in AI, I wanted to be a psychiatrist
458
+
459
+ 16:58.400 --> 17:03.840
460
+ for a long time and understand the mind in high school, and of course program and so on. And I
461
+
462
+ 17:03.840 --> 17:10.640
463
+ showed up University of Illinois to an AI lab and they said, okay, I don't have time for you, but here
464
+
465
+ 17:10.640 --> 17:18.480
466
+ is a book, AI Modern Approach, I think it was the first edition at the time. Here, go learn this.
467
+
468
+ 17:18.480 --> 17:23.120
469
+ And I remember the lay of the land was, well, it's incredible that we solved chess, but we'll
470
+
471
+ 17:23.120 --> 17:30.480
472
+ never solve go. I mean, it was pretty certain that go in the way we thought about systems that reason
473
+
474
+ 17:31.520 --> 17:36.080
475
+ wasn't possible to solve. And now we've solved it. So it's a very... Well, I think I would have said
476
+
477
+ 17:36.080 --> 17:44.080
478
+ that it's unlikely we could take the kind of algorithm that was used for chess and just get
479
+
480
+ 17:44.080 --> 17:55.680
481
+ it to scale up and work well for go. And at the time, what we thought was that in order to solve
482
+
483
+ 17:55.680 --> 18:01.600
484
+ go, we would have to do something similar to the way humans manage the complexity of go,
485
+
486
+ 18:01.600 --> 18:06.960
487
+ which is to break it down into kind of sub games. So when a human thinks about a go board,
488
+
489
+ 18:06.960 --> 18:12.480
490
+ they think about different parts of the board as sort of weakly connected to each other.
491
+
492
+ 18:12.480 --> 18:16.880
493
+ And they think about, okay, within this part of the board, here's how things could go.
494
+
495
+ 18:16.880 --> 18:20.560
496
+ In that part of board, here's how things could go. And then you try to sort of couple those
497
+
498
+ 18:20.560 --> 18:26.400
499
+ two analyses together and deal with the interactions and maybe revise your views of how things are
500
+
501
+ 18:26.400 --> 18:32.160
502
+ going to go in each part. And then you've got maybe five, six, seven, 10 parts of the board. And
503
+
504
+ 18:33.440 --> 18:40.640
505
+ that actually resembles the real world much more than chess does. Because in the real world,
506
+
507
+ 18:41.440 --> 18:49.200
508
+ we have work, we have home life, we have sport, whatever different kinds of activities, shopping,
509
+
510
+ 18:49.200 --> 18:57.040
511
+ these all are connected to each other, but they're weakly connected. So when I'm typing a paper,
512
+
513
+ 18:58.480 --> 19:03.600
514
+ I don't simultaneously have to decide which order I'm going to get the milk and the butter.
515
+
516
+ 19:04.400 --> 19:09.760
517
+ That doesn't affect the typing. But I do need to realize, okay, better finish this
518
+
519
+ 19:10.320 --> 19:14.080
520
+ before the shops close because I don't have anything, I don't have any food at home.
521
+
522
+ 19:14.080 --> 19:20.560
523
+ So there's some weak connection, but not in the way that chess works, where everything is tied
524
+
525
+ 19:20.560 --> 19:27.600
526
+ into a single stream of thought. So the thought was that go to solve go would have to make progress
527
+
528
+ 19:27.600 --> 19:31.600
529
+ on stuff that would be useful for the real world. And in a way, AlphaGo is a little bit disappointing
530
+
531
+ 19:32.480 --> 19:38.160
532
+ because the program designed for AlphaGo is actually not that different from
533
+
534
+ 19:38.160 --> 19:45.840
535
+ from Deep Blue or even from Arthur Samuel's checker playing program from the 1950s.
536
+
537
+ 19:48.160 --> 19:54.560
538
+ And in fact, the two things that make AlphaGo work is one is this amazing ability to evaluate
539
+
540
+ 19:54.560 --> 19:59.200
541
+ the positions. And the other is the meta reasoning capability, which allows it to
542
+
543
+ 19:59.200 --> 20:06.960
544
+ to explore some paths in the tree very deeply and to abandon other paths very quickly.
545
+
546
+ 20:06.960 --> 20:14.640
547
+ So this word meta reasoning, while technically correct, inspires perhaps the wrong
548
+
549
+ 20:16.000 --> 20:21.360
550
+ degree of power that AlphaGo has, for example, the word reasoning is a powerful word. So let me
551
+
552
+ 20:21.360 --> 20:29.840
553
+ ask you sort of, you were part of the symbolic AI world for a while, like where AI was, there's
554
+
555
+ 20:29.840 --> 20:38.960
556
+ a lot of excellent interesting ideas there that unfortunately met a winter. And so do you think
557
+
558
+ 20:38.960 --> 20:46.800
559
+ it reemerges? Oh, so I would say, yeah, it's not quite as simple as that. So the AI winter,
560
+
561
+ 20:46.800 --> 20:54.400
562
+ the first winter that was actually named as such was the one in the late 80s.
563
+
564
+ 20:56.400 --> 21:00.880
565
+ And that came about because in the mid 80s, there was
566
+
567
+ 21:03.280 --> 21:10.480
568
+ really a concerted attempt to push AI out into the real world using what was called
569
+
570
+ 21:10.480 --> 21:17.280
571
+ expert system technology. And for the most part, that technology was just not ready for prime
572
+
573
+ 21:17.280 --> 21:27.200
574
+ time. They were trying in many cases to do a form of uncertain reasoning, judgment, combinations of
575
+
576
+ 21:27.200 --> 21:34.640
577
+ evidence, diagnosis, those kinds of things, which was simply invalid. And when you try to apply
578
+
579
+ 21:34.640 --> 21:40.960
580
+ invalid reasoning methods to real problems, you can fudge it for small versions of the problem.
581
+
582
+ 21:40.960 --> 21:47.200
583
+ But when it starts to get larger, the thing just falls apart. So many companies found that
584
+
585
+ 21:49.040 --> 21:53.440
586
+ the stuff just didn't work. And they were spending tons of money on consultants to
587
+
588
+ 21:53.440 --> 21:59.600
589
+ try to make it work. And there were other practical reasons, like they were asking
590
+
591
+ 21:59.600 --> 22:07.760
592
+ the companies to buy incredibly expensive Lisp machine workstations, which were literally
593
+
594
+ 22:07.760 --> 22:17.680
595
+ between $50,000 and $100,000 in 1980s money, which would be between $150,000 and $300,000 per
596
+
597
+ 22:17.680 --> 22:24.000
598
+ workstation in current prices. Then the bottom line, they weren't seeing a profit from it.
599
+
600
+ 22:24.000 --> 22:29.840
601
+ Yeah. In many cases, I think there were some successes. There's no doubt about that. But
602
+
603
+ 22:30.880 --> 22:37.760
604
+ people, I would say, over invested. Every major company was starting an AI department just like
605
+
606
+ 22:37.760 --> 22:45.840
607
+ now. And I worry a bit that we might see similar disappointments, not because the
608
+
609
+ 22:45.840 --> 22:57.600
610
+ current technology is invalid, but it's limited in its scope. And it's almost the dual of the
611
+
612
+ 22:57.600 --> 23:03.360
613
+ scope problems that expert systems had. What have you learned from that hype cycle? And
614
+
615
+ 23:03.360 --> 23:09.760
616
+ what can we do to prevent another winter, for example? Yeah. So when I'm giving talks these
617
+
618
+ 23:09.760 --> 23:17.520
619
+ days, that's one of the warnings that I give. So there's two part warning slide. One is that
620
+
621
+ 23:18.480 --> 23:24.000
622
+ rather than data being the new oil, data is the new snake oil. That's a good line. And then
623
+
624
+ 23:26.000 --> 23:35.440
625
+ the other is that we might see a very visible failure in some of the major application areas.
626
+
627
+ 23:35.440 --> 23:42.400
628
+ And I think self driving cars would be the flagship. And I think
629
+
630
+ 23:43.600 --> 23:48.560
631
+ when you look at the history, so the first self driving car was on the freeway,
632
+
633
+ 23:51.200 --> 24:00.400
634
+ driving itself, changing lanes, overtaking in 1987. And so it's more than 30 years.
635
+
636
+ 24:00.400 --> 24:06.720
637
+ And that kind of looks like where we are today, right? Prototypes on the freeway,
638
+
639
+ 24:06.720 --> 24:13.760
640
+ changing lanes and overtaking. Now, I think significant progress has been made, particularly
641
+
642
+ 24:13.760 --> 24:20.560
643
+ on the perception side. So we worked a lot on autonomous vehicles in the early, mid 90s at
644
+
645
+ 24:20.560 --> 24:29.040
646
+ Berkeley. And we had our own big demonstrations. We put congressmen into self driving cars and
647
+
648
+ 24:29.040 --> 24:36.000
649
+ had them zooming along the freeway. And the problem was clearly perception.
650
+
651
+ 24:37.520 --> 24:42.880
652
+ At the time, the problem was perception. Yeah. So in simulation, with perfect perception,
653
+
654
+ 24:42.880 --> 24:47.200
655
+ you could actually show that you can drive safely for a long time, even if the other cars
656
+
657
+ 24:47.200 --> 24:55.360
658
+ are misbehaving and so on. But simultaneously, we worked on machine vision for detecting cars and
659
+
660
+ 24:55.360 --> 25:03.040
661
+ tracking pedestrians and so on. And we couldn't get the reliability of detection and tracking
662
+
663
+ 25:03.040 --> 25:11.440
664
+ up to a high enough level, particularly in bad weather conditions, nighttime rainfall.
665
+
666
+ 25:11.440 --> 25:16.000
667
+ Good enough for demos, but perhaps not good enough to cover the general operation.
668
+
669
+ 25:16.000 --> 25:20.800
670
+ Yeah. So the thing about driving is, so suppose you're a taxi driver and you drive every day,
671
+
672
+ 25:20.800 --> 25:27.360
673
+ eight hours a day for 10 years, that's 100 million seconds of driving. And any one of those
674
+
675
+ 25:27.360 --> 25:33.280
676
+ seconds, you can make a fatal mistake. So you're talking about eight nines of reliability.
677
+
678
+ 25:34.960 --> 25:43.840
679
+ Now, if your vision system only detects 98.3% of the vehicles, that's sort of one
680
+
681
+ 25:43.840 --> 25:52.720
682
+ and a bit nines of reliability. So you have another seven orders of magnitude to go. And this is
683
+
684
+ 25:52.720 --> 25:57.920
685
+ what people don't understand. They think, oh, because I had a successful demo, I'm pretty much
686
+
687
+ 25:57.920 --> 26:07.440
688
+ done. But you're not even within seven orders of magnitude of being done. And that's the difficulty.
689
+
690
+ 26:07.440 --> 26:14.320
691
+ And it's not, can I follow a white line? That's not the problem. We follow a white line all the
692
+
693
+ 26:14.320 --> 26:22.160
694
+ way across the country. But it's the weird stuff that happens. It's all the edge cases. Yeah.
695
+
696
+ 26:22.160 --> 26:30.640
697
+ The edge case, other drivers doing weird things. So if you talk to Google, so they had actually
698
+
699
+ 26:30.640 --> 26:36.560
700
+ a very classical architecture where you had machine vision, which would detect all the
701
+
702
+ 26:36.560 --> 26:41.920
703
+ other cars and pedestrians and the white lines and the road signs. And then basically,
704
+
705
+ 26:42.480 --> 26:49.680
706
+ that was fed into a logical database. And then you had a classical 1970s rule based expert system
707
+
708
+ 26:52.000 --> 26:55.680
709
+ telling you, okay, if you're in the middle lane, and there's a bicyclist in the right lane,
710
+
711
+ 26:55.680 --> 27:03.040
712
+ who is signaling this, then do that, right? And what they found was that every day they'd go
713
+
714
+ 27:03.040 --> 27:07.760
715
+ out and there'd be another situation that the rules didn't cover. So they come to a traffic
716
+
717
+ 27:07.760 --> 27:11.680
718
+ circle and there's a little girl riding her bicycle the wrong way around the traffic circle.
719
+
720
+ 27:11.680 --> 27:17.520
721
+ Okay, what do you do? We don't have a rule. Oh my God. Okay, stop. And then they come back
722
+
723
+ 27:17.520 --> 27:24.400
724
+ and add more rules. And they just found that this was not really converging. And if you think about
725
+
726
+ 27:24.400 --> 27:31.280
727
+ it, right, how do you deal with an unexpected situation, meaning one that you've never previously
728
+
729
+ 27:31.280 --> 27:37.200
730
+ encountered and the sort of the reasoning required to figure out the solution for that
731
+
732
+ 27:37.200 --> 27:42.800
733
+ situation has never been done. It doesn't match any previous situation in terms of the kind of
734
+
735
+ 27:42.800 --> 27:49.520
736
+ reasoning you have to do. Well, in chess programs, this happens all the time. You're constantly
737
+
738
+ 27:49.520 --> 27:54.560
739
+ coming up with situations you haven't seen before. And you have to reason about them and you have
740
+
741
+ 27:54.560 --> 27:59.840
742
+ to think about, okay, here are the possible things I could do. Here are the outcomes. Here's how
743
+
744
+ 27:59.840 --> 28:04.560
745
+ desirable the outcomes are and then pick the right one. In the 90s, we were saying, okay,
746
+
747
+ 28:04.560 --> 28:08.160
748
+ this is how you're going to have to do automated vehicles. They're going to have to have a look
749
+
750
+ 28:08.160 --> 28:14.400
751
+ ahead capability. But the look ahead for driving is more difficult than it is for chess. Because
752
+
753
+ 28:14.400 --> 28:20.720
754
+ of humans. Right, there's humans and they're less predictable than chess pieces. Well,
755
+
756
+ 28:20.720 --> 28:28.240
757
+ then you have an opponent in chess who's also somewhat unpredictable. But for example, in chess,
758
+
759
+ 28:28.240 --> 28:33.600
760
+ you always know the opponent's intention. They're trying to beat you. Whereas in driving, you don't
761
+
762
+ 28:33.600 --> 28:39.040
763
+ know, is this guy trying to turn left or has he just forgotten to turn off his turn signal? Or is
764
+
765
+ 28:39.040 --> 28:45.680
766
+ he drunk? Or is he changing the channel on his radio or whatever it might be, you got to try and
767
+
768
+ 28:45.680 --> 28:52.560
769
+ figure out the mental state, the intent of the other drivers to forecast the possible evolutions
770
+
771
+ 28:52.560 --> 28:58.160
772
+ of their trajectories. And then you got to figure out, okay, which is the trajectory for me that's
773
+
774
+ 28:58.160 --> 29:04.000
775
+ going to be safest. And those all interact with each other because the other drivers are going
776
+
777
+ 29:04.000 --> 29:09.120
778
+ to react to your trajectory and so on. So, you know, they've got the classic merging onto the
779
+
780
+ 29:09.120 --> 29:14.640
781
+ freeway problem where you're kind of racing a vehicle that's already on the freeway and you're
782
+
783
+ 29:14.640 --> 29:17.680
784
+ are you going to pull ahead of them or are you going to let them go first and pull in behind
785
+
786
+ 29:17.680 --> 29:23.680
787
+ and you get this sort of uncertainty about who's going first. So all those kinds of things
788
+
789
+ 29:23.680 --> 29:34.720
790
+ mean that you need a decision making architecture that's very different from either a rule based
791
+
792
+ 29:34.720 --> 29:41.360
793
+ system or it seems to me a kind of an end to end neural network system. You know, so just as Alpha
794
+
795
+ 29:41.360 --> 29:47.360
796
+ Go is pretty good when it doesn't do any look ahead, but it's way, way, way, way better when it does.
797
+
798
+ 29:47.360 --> 29:52.720
799
+ I think the same is going to be true for driving. You can have a driving system that's pretty good
800
+
801
+ 29:54.080 --> 29:59.280
802
+ when it doesn't do any look ahead, but that's not good enough. You know, and we've already seen
803
+
804
+ 29:59.920 --> 30:07.440
805
+ multiple deaths caused by poorly designed machine learning algorithms that don't really
806
+
807
+ 30:07.440 --> 30:13.600
808
+ understand what they're doing. Yeah, and on several levels, I think it's on the perception side,
809
+
810
+ 30:13.600 --> 30:19.520
811
+ there's mistakes being made by those algorithms where the perception is very shallow. On the
812
+
813
+ 30:19.520 --> 30:26.720
814
+ planning side, the look ahead, like you said, and the thing that we come up against that's
815
+
816
+ 30:28.560 --> 30:32.080
817
+ really interesting when you try to deploy systems in the real world is
818
+
819
+ 30:33.280 --> 30:37.680
820
+ you can't think of an artificial intelligence system as a thing that responds to the world always.
821
+
822
+ 30:38.320 --> 30:41.600
823
+ You have to realize that it's an agent that others will respond to as well.
824
+
825
+ 30:41.600 --> 30:47.200
826
+ Well, so in order to drive successfully, you can't just try to do obstacle avoidance.
827
+
828
+ 30:47.840 --> 30:51.520
829
+ You can't pretend that you're invisible, right? You're the invisible car.
830
+
831
+ 30:52.400 --> 30:57.280
832
+ It doesn't work that way. I mean, but you have to assert, yet others have to be scared of you,
833
+
834
+ 30:57.280 --> 31:04.160
835
+ just there's this tension, there's this game. So we study a lot of work with pedestrians.
836
+
837
+ 31:04.160 --> 31:09.360
838
+ If you approach pedestrians as purely an obstacle avoidance, so you're not doing look
839
+
840
+ 31:09.360 --> 31:15.040
841
+ ahead as in modeling the intent, they're going to take advantage of you.
842
+
843
+ 31:15.040 --> 31:20.080
844
+ They're not going to respect you at all. There has to be a tension, a fear, some amount of
845
+
846
+ 31:20.080 --> 31:26.720
847
+ uncertainty. That's how we have created. Or at least just a kind of a resoluteness.
848
+
849
+ 31:28.000 --> 31:32.000
850
+ You have to display a certain amount of resoluteness. You can't be too tentative.
851
+
852
+ 31:32.000 --> 31:42.480
853
+ Yeah. So the solutions then become pretty complicated. You get into game theoretic
854
+
855
+ 31:42.480 --> 31:50.960
856
+ analyses. So at Berkeley now, we're working a lot on this kind of interaction between machines
857
+
858
+ 31:50.960 --> 32:03.600
859
+ and humans. And that's exciting. So my colleague, Anca Dragan, actually, if you formulate the problem
860
+
861
+ 32:03.600 --> 32:08.800
862
+ game theoretically and you just let the system figure out the solution, it does interesting,
863
+
864
+ 32:08.800 --> 32:16.640
865
+ unexpected things. Like sometimes at a stop sign, if no one is going first, the car will
866
+
867
+ 32:16.640 --> 32:23.200
868
+ actually back up a little. It's just to indicate to the other cars that they should go. And that's
869
+
870
+ 32:23.200 --> 32:28.480
871
+ something it invented entirely by itself. That's interesting. We didn't say this is the language
872
+
873
+ 32:28.480 --> 32:36.240
874
+ of communication at stop signs. It figured it out. That's really interesting. So let me one just
875
+
876
+ 32:36.240 --> 32:42.960
877
+ step back for a second. Just this beautiful philosophical notion. So Pamela McCorduck in
878
+
879
+ 32:42.960 --> 32:50.320
880
+ 1979 wrote AI began with the ancient wish to forge the gods. So when you think about the
881
+
882
+ 32:50.320 --> 32:57.520
883
+ history of our civilization, do you think that there is an inherent desire to create,
884
+
885
+ 32:58.960 --> 33:05.680
886
+ let's not say gods, but to create superintelligence? Is it inherent to us? Is it in our genes,
887
+
888
+ 33:05.680 --> 33:13.680
889
+ that the natural arc of human civilization is to create things that are of greater and greater
890
+
891
+ 33:13.680 --> 33:21.680
892
+ power and perhaps echoes of ourselves? So to create the gods, as Pamela said.
893
+
894
+ 33:21.680 --> 33:34.160
895
+ It may be. I mean, we're all individuals, but certainly we see over and over again in history
896
+
897
+ 33:35.760 --> 33:42.320
898
+ individuals who thought about this possibility. Hopefully, I'm not being too philosophical here.
899
+
900
+ 33:42.320 --> 33:48.560
901
+ But if you look at the arc of this, where this is going and we'll talk about AI safety,
902
+
903
+ 33:48.560 --> 33:55.840
904
+ we'll talk about greater and greater intelligence, do you see that when you created the Othello
905
+
906
+ 33:55.840 --> 34:01.680
907
+ program and you felt this excitement, what was that excitement? Was it the excitement of a tinkerer
908
+
909
+ 34:01.680 --> 34:10.240
910
+ who created something cool, like a clock? Or was there a magic, or was it more like a child being
911
+
912
+ 34:10.240 --> 34:17.520
913
+ born? Yeah. So I mean, I certainly understand that viewpoint. And if you look at the Lighthill
914
+
915
+ 34:17.520 --> 34:26.640
916
+ report, so in the 70s, there was a lot of controversy in the UK about AI and whether it
917
+
918
+ 34:26.640 --> 34:34.720
919
+ was for real and how much the money the government should invest. So it's a long story, but the
920
+
921
+ 34:34.720 --> 34:43.280
922
+ government commissioned a report by Lighthill, who was a physicist, and he wrote a very damning
923
+
924
+ 34:43.280 --> 34:53.920
925
+ report about AI, which I think was the point. And he said that these are frustrated men who
926
+
927
+ 34:54.480 --> 35:05.760
928
+ unable to have children would like to create life as a kind of replacement, which I think is
929
+
930
+ 35:05.760 --> 35:21.600
931
+ really pretty unfair. But there is a kind of magic, I would say, when you build something
932
+
933
+ 35:25.680 --> 35:29.760
934
+ and what you're building in is really just you're building in some understanding of the
935
+
936
+ 35:29.760 --> 35:37.120
937
+ principles of learning and decision making. And to see those principles actually then
938
+
939
+ 35:37.840 --> 35:47.920
940
+ turn into intelligent behavior in specific situations, it's an incredible thing. And
941
+
942
+ 35:47.920 --> 35:58.480
943
+ that is naturally going to make you think, okay, where does this end?
944
+
945
+ 36:00.080 --> 36:08.240
946
+ And so there's, there's magical, optimistic views of the world, and whatever your view of optimism is,
947
+
948
+ 36:08.240 --> 36:13.360
949
+ whatever your view of utopia is, it's probably different for everybody. But you've often talked
950
+
951
+ 36:13.360 --> 36:26.080
952
+ about concerns you have of how things might go wrong. So I've talked to Max Tegmark. There's a
953
+
954
+ 36:26.080 --> 36:33.360
955
+ lot of interesting ways to think about AI safety. You're one of the seminal people thinking about
956
+
957
+ 36:33.360 --> 36:39.360
958
+ this problem amongst sort of being in the weeds of actually solving specific AI problems,
959
+
960
+ 36:39.360 --> 36:44.080
961
+ you're also thinking about the big picture of where we're going. So can you talk about
962
+
963
+ 36:44.080 --> 36:49.200
964
+ several elements of it? Let's just talk about maybe the control problem. So this idea of
965
+
966
+ 36:50.800 --> 36:58.720
967
+ losing ability to control the behavior of our AI system. So how do you see that? How do you see
968
+
969
+ 36:58.720 --> 37:04.480
970
+ that coming about? What do you think we can do to manage it?
971
+
972
+ 37:04.480 --> 37:11.520
973
+ Well, so it doesn't take a genius to realize that if you make something that's smarter than you,
974
+
975
+ 37:11.520 --> 37:20.320
976
+ you might have a problem. Alan Turing wrote about this and gave lectures about this,
977
+
978
+ 37:21.600 --> 37:32.480
979
+ 1951. He did a lecture on the radio. And he basically says, once the machine thinking method
980
+
981
+ 37:32.480 --> 37:45.600
982
+ starts, very quickly, they'll outstrip humanity. And if we're lucky, we might be able to turn off
983
+
984
+ 37:45.600 --> 37:52.160
985
+ the power at strategic moments, but even so, our species would be humbled. And actually,
986
+
987
+ 37:52.160 --> 37:56.240
988
+ I think it was wrong about that. If it's a sufficiently intelligent machine, it's not
989
+
990
+ 37:56.240 --> 38:00.160
991
+ going to let you switch it off. It's actually in competition with you.
992
+
993
+ 38:00.160 --> 38:05.840
994
+ So what do you think is meant just for a quick tangent if we shut off this
995
+
996
+ 38:05.840 --> 38:08.800
997
+ super intelligent machine that our species would be humbled?
998
+
999
+ 38:11.840 --> 38:20.560
1000
+ I think he means that we would realize that we are inferior, that we only survive by the skin
1001
+
1002
+ 38:20.560 --> 38:27.440
1003
+ of our teeth because we happen to get to the off switch just in time. And if we hadn't,
1004
+
1005
+ 38:27.440 --> 38:34.400
1006
+ then we would have lost control over the earth. So are you more worried when you think about
1007
+
1008
+ 38:34.400 --> 38:41.600
1009
+ this stuff about super intelligent AI or are you more worried about super powerful AI that's not
1010
+
1011
+ 38:41.600 --> 38:49.760
1012
+ aligned with our values? So the paperclip scenarios kind of... I think so the main problem I'm
1013
+
1014
+ 38:49.760 --> 38:58.960
1015
+ working on is the control problem, the problem of machines pursuing objectives that are, as you
1016
+
1017
+ 38:58.960 --> 39:06.720
1018
+ say, not aligned with human objectives. And this has been the way we've thought about AI
1019
+
1020
+ 39:06.720 --> 39:15.120
1021
+ since the beginning. You build a machine for optimizing and then you put in some objective
1022
+
1023
+ 39:15.120 --> 39:25.520
1024
+ and it optimizes. And we can think of this as the King Midas problem. Because if King Midas
1025
+
1026
+ 39:26.480 --> 39:32.640
1027
+ put in this objective, everything I touch should turn to gold and the gods, that's like the machine,
1028
+
1029
+ 39:32.640 --> 39:39.360
1030
+ they said, okay, done. You now have this power and of course his food and his drink and his family
1031
+
1032
+ 39:39.360 --> 39:50.080
1033
+ all turned to gold and then he dies of misery and starvation. It's a warning, it's a failure mode that
1034
+
1035
+ 39:50.080 --> 39:56.160
1036
+ pretty much every culture in history has had some story along the same lines. There's the
1037
+
1038
+ 39:56.160 --> 40:01.920
1039
+ genie that gives you three wishes and third wish is always, please undo the first two wishes because
1040
+
1041
+ 40:01.920 --> 40:11.920
1042
+ I messed up. And when Arthur Samuel wrote his checker playing program, which learned to play
1043
+
1044
+ 40:11.920 --> 40:16.800
1045
+ checkers considerably better than Arthur Samuel could play and actually reached a pretty decent
1046
+
1047
+ 40:16.800 --> 40:25.040
1048
+ standard, Norbert Wiener, who was one of the major mathematicians of the 20th century, he's sort of
1049
+
1050
+ 40:25.040 --> 40:32.560
1051
+ the father of modern automation control systems. He saw this and he basically extrapolated
1052
+
1053
+ 40:33.360 --> 40:43.680
1054
+ as Turing did and said, okay, this is how we could lose control. And specifically that
1055
+
1056
+ 40:45.520 --> 40:50.960
1057
+ we have to be certain that the purpose we put into the machine is the purpose which we really
1058
+
1059
+ 40:50.960 --> 40:59.760
1060
+ desire. And the problem is, we can't do that. Right. You mean we're not, it's a very difficult
1061
+
1062
+ 40:59.760 --> 41:05.440
1063
+ to encode, to put our values on paper is really difficult, or you're just saying it's impossible?
1064
+
1065
+ 41:09.120 --> 41:15.360
1066
+ The line is gray between the two. So theoretically, it's possible, but in practice,
1067
+
1068
+ 41:15.360 --> 41:23.520
1069
+ it's extremely unlikely that we could specify correctly in advance the full range of concerns
1070
+
1071
+ 41:23.520 --> 41:29.360
1072
+ of humanity. You talked about cultural transmission of values, I think that's how human to human
1073
+
1074
+ 41:29.360 --> 41:36.320
1075
+ transmission of values happens, right? Well, we learn, yeah, I mean, as we grow up, we learn about
1076
+
1077
+ 41:36.320 --> 41:42.640
1078
+ the values that matter, how things should go, what is reasonable to pursue and what isn't
1079
+
1080
+ 41:42.640 --> 41:47.920
1081
+ reasonable to pursue. I think machines can learn in the same kind of way. Yeah. So I think that
1082
+
1083
+ 41:49.120 --> 41:54.480
1084
+ what we need to do is to get away from this idea that you build an optimizing machine and then you
1085
+
1086
+ 41:54.480 --> 42:03.200
1087
+ put the objective into it. Because if it's possible that you might put in a wrong objective, and we
1088
+
1089
+ 42:03.200 --> 42:08.880
1090
+ already know this is possible because it's happened lots of times, right? That means that the machine
1091
+
1092
+ 42:08.880 --> 42:17.760
1093
+ should never take an objective that's given as gospel truth. Because once it takes the objective
1094
+
1095
+ 42:17.760 --> 42:26.800
1096
+ as gospel truth, then it believes that whatever actions it's taking in pursuit of that objective
1097
+
1098
+ 42:26.800 --> 42:31.200
1099
+ are the correct things to do. So you could be jumping up and down and saying, no, no, no, no,
1100
+
1101
+ 42:31.200 --> 42:36.480
1102
+ you're going to destroy the world, but the machine knows what the true objective is and is pursuing
1103
+
1104
+ 42:36.480 --> 42:42.640
1105
+ it and tough luck to you. And this is not restricted to AI, right? This is, I think,
1106
+
1107
+ 42:43.360 --> 42:48.880
1108
+ many of the 20th century technologies, right? So in statistics, you minimize a loss function,
1109
+
1110
+ 42:48.880 --> 42:54.320
1111
+ the loss function is exogenously specified. In control theory, you minimize a cost function,
1112
+
1113
+ 42:54.320 --> 42:59.840
1114
+ in operations research, you maximize a reward function, and so on. So in all these disciplines,
1115
+
1116
+ 42:59.840 --> 43:08.560
1117
+ this is how we conceive of the problem. And it's the wrong problem. Because we cannot specify
1118
+
1119
+ 43:08.560 --> 43:15.360
1120
+ with certainty the correct objective, right? We need uncertainty, we need the machine to be
1121
+
1122
+ 43:15.360 --> 43:19.440
1123
+ uncertain about what it is that it's supposed to be maximizing.
1124
+
1125
+ 43:19.440 --> 43:25.200
1126
+ It's my favorite idea of yours. I've heard you say somewhere, well, I shouldn't pick favorites,
1127
+
1128
+ 43:25.200 --> 43:32.640
1129
+ but it just sounds beautiful. We need to teach machines humility. It's a beautiful way to put
1130
+
1131
+ 43:32.640 --> 43:40.320
1132
+ it. I love it. That they're humble. They know that they don't know what it is they're supposed
1133
+
1134
+ 43:40.320 --> 43:48.240
1135
+ to be doing. And that those objectives, I mean, they exist, they're within us, but we may not
1136
+
1137
+ 43:48.240 --> 43:56.160
1138
+ be able to explicate them. We may not even know how we want our future to go.
1139
+
1140
+ 43:57.040 --> 44:06.800
1141
+ So exactly. And a machine that's uncertain is going to be deferential to us. So if we say,
1142
+
1143
+ 44:06.800 --> 44:11.840
1144
+ don't do that, well, now the machines learn something a bit more about our true objectives,
1145
+
1146
+ 44:11.840 --> 44:16.480
1147
+ because something that it thought was reasonable in pursuit of our objective,
1148
+
1149
+ 44:16.480 --> 44:20.800
1150
+ it turns out not to be so now it's learned something. So it's going to defer because it
1151
+
1152
+ 44:20.800 --> 44:30.240
1153
+ wants to be doing what we really want. And that point, I think, is absolutely central
1154
+
1155
+ 44:30.240 --> 44:37.920
1156
+ to solving the control problem. And it's a different kind of AI when you take away this
1157
+
1158
+ 44:37.920 --> 44:44.560
1159
+ idea that the objective is known, then in fact, a lot of the theoretical frameworks that we're so
1160
+
1161
+ 44:44.560 --> 44:53.520
1162
+ familiar with, you know, Markov decision processes, goal based planning, you know,
1163
+
1164
+ 44:53.520 --> 44:59.280
1165
+ standard game research, all of these techniques actually become inapplicable.
1166
+
1167
+ 45:01.040 --> 45:11.120
1168
+ And you get a more complicated problem because now the interaction with the human becomes
1169
+
1170
+ 45:11.120 --> 45:20.400
1171
+ part of the problem. Because the human by making choices is giving you more information about
1172
+
1173
+ 45:21.280 --> 45:25.360
1174
+ the true objective and that information helps you achieve the objective better.
1175
+
1176
+ 45:26.640 --> 45:32.000
1177
+ And so that really means that you're mostly dealing with game theoretic problems where you've
1178
+
1179
+ 45:32.000 --> 45:38.000
1180
+ got the machine and the human and they're coupled together, rather than a machine going off by itself
1181
+
1182
+ 45:38.000 --> 45:43.600
1183
+ with a fixed objective. Which is fascinating on the machine and the human level that we,
1184
+
1185
+ 45:44.400 --> 45:51.920
1186
+ when you don't have an objective means you're together coming up with an objective. I mean,
1187
+
1188
+ 45:51.920 --> 45:56.160
1189
+ there's a lot of philosophy that, you know, you could argue that life doesn't really have meaning.
1190
+
1191
+ 45:56.160 --> 46:01.680
1192
+ We together agree on what gives it meaning and we kind of culturally create
1193
+
1194
+ 46:01.680 --> 46:08.560
1195
+ things that give why the heck we are in this earth anyway. We together as a society create
1196
+
1197
+ 46:08.560 --> 46:13.680
1198
+ that meaning and you have to learn that objective. And one of the biggest, I thought that's where
1199
+
1200
+ 46:13.680 --> 46:19.200
1201
+ you were going to go for a second. One of the biggest troubles we run into outside of statistics
1202
+
1203
+ 46:19.200 --> 46:26.240
1204
+ and machine learning and AI in just human civilization is when you look at, I came from,
1205
+
1206
+ 46:26.240 --> 46:32.160
1207
+ I was born in the Soviet Union. And the history of the 20th century, we ran into the most trouble,
1208
+
1209
+ 46:32.160 --> 46:40.160
1210
+ us humans, when there was a certainty about the objective. And you do whatever it takes to achieve
1211
+
1212
+ 46:40.160 --> 46:46.480
1213
+ that objective, whether you're talking about Germany or communist Russia, you get into trouble
1214
+
1215
+ 46:46.480 --> 46:52.960
1216
+ with humans. And I would say with corporations, in fact, some people argue that we don't have
1217
+
1218
+ 46:52.960 --> 46:57.840
1219
+ to look forward to a time when AI systems take over the world, they already have. And they call
1220
+
1221
+ 46:57.840 --> 47:04.880
1222
+ corporations, right? That corporations happen to be using people as components right now.
1223
+
1224
+ 47:05.920 --> 47:11.680
1225
+ But they are effectively algorithmic machines, and they're optimizing an objective, which is
1226
+
1227
+ 47:11.680 --> 47:18.080
1228
+ quarterly profit that isn't aligned with overall well being of the human race. And they are
1229
+
1230
+ 47:18.080 --> 47:24.160
1231
+ destroying the world. They are primarily responsible for our inability to tackle climate change.
1232
+
1233
+ 47:24.960 --> 47:30.400
1234
+ So I think that's one way of thinking about what's going on with corporations. But
1235
+
1236
+ 47:31.840 --> 47:39.680
1237
+ I think the point you're making is valid, that there are many systems in the real world where
1238
+
1239
+ 47:39.680 --> 47:48.480
1240
+ we've sort of prematurely fixed on the objective and then decoupled the machine from those that
1241
+
1242
+ 47:48.480 --> 47:54.720
1243
+ are supposed to be serving. And I think you see this with government, right? Government is supposed
1244
+
1245
+ 47:54.720 --> 48:02.720
1246
+ to be a machine that serves people. But instead, it tends to be taken over by people who have their
1247
+
1248
+ 48:02.720 --> 48:08.160
1249
+ own objective and use government to optimize that objective, regardless of what people want.
1250
+
1251
+ 48:08.160 --> 48:16.080
1252
+ Do you find appealing the idea of almost arguing machines where you have multiple AI systems with
1253
+
1254
+ 48:16.080 --> 48:22.400
1255
+ a clear fixed objective? We have in government the red team and the blue team that are very fixed
1256
+
1257
+ 48:22.400 --> 48:28.240
1258
+ on their objectives. And they argue, and you kind of maybe would disagree, but it kind of seems to
1259
+
1260
+ 48:28.240 --> 48:39.440
1261
+ make it work somewhat that the duality of it, okay, let's go 100 years back when there was still
1262
+
1263
+ 48:39.440 --> 48:44.480
1264
+ was going on or at the founding of this country, there were disagreements and that disagreement is
1265
+
1266
+ 48:44.480 --> 48:52.160
1267
+ where so it was a balance between certainty and forced humility because the power was distributed.
1268
+
1269
+ 48:52.160 --> 49:05.280
1270
+ Yeah, I think that the nature of debate and disagreement argument takes as a premise the idea
1271
+
1272
+ 49:05.280 --> 49:12.800
1273
+ that you could be wrong, which means that you're not necessarily absolutely convinced that your
1274
+
1275
+ 49:12.800 --> 49:19.520
1276
+ objective is the correct one. If you were absolutely convinced, there'd be no point
1277
+
1278
+ 49:19.520 --> 49:24.160
1279
+ in having any discussion or argument because you would never change your mind. And there wouldn't
1280
+
1281
+ 49:24.160 --> 49:32.080
1282
+ be any sort of synthesis or anything like that. So I think you can think of argumentation as an
1283
+
1284
+ 49:32.080 --> 49:44.640
1285
+ implementation of a form of uncertain reasoning. I've been reading recently about utilitarianism
1286
+
1287
+ 49:44.640 --> 49:54.960
1288
+ and the history of efforts to define in a sort of clear mathematical way a if you like a formula for
1289
+
1290
+ 49:54.960 --> 50:02.320
1291
+ moral or political decision making. And it's really interesting that the parallels between
1292
+
1293
+ 50:02.320 --> 50:08.720
1294
+ the philosophical discussions going back 200 years and what you see now in discussions about
1295
+
1296
+ 50:08.720 --> 50:15.040
1297
+ existential risk because it's almost exactly the same. So someone would say, okay, well,
1298
+
1299
+ 50:15.040 --> 50:21.600
1300
+ here's a formula for how we should make decisions. So utilitarianism is roughly each person has a
1301
+
1302
+ 50:21.600 --> 50:27.680
1303
+ utility function and then we make decisions to maximize the sum of everybody's utility.
1304
+
1305
+ 50:28.720 --> 50:36.480
1306
+ And then people point out, well, in that case, the best policy is one that leads to
1307
+
1308
+ 50:36.480 --> 50:42.480
1309
+ the enormously vast population, all of whom are living a life that's barely worth living.
1310
+
1311
+ 50:43.520 --> 50:50.640
1312
+ And this is called the repugnant conclusion. And another version is that we should maximize
1313
+
1314
+ 50:51.200 --> 50:57.680
1315
+ pleasure and that's what we mean by utility. And then you'll get people effectively saying,
1316
+
1317
+ 50:57.680 --> 51:02.480
1318
+ well, in that case, we might as well just have everyone hooked up to a heroin drip. And they
1319
+
1320
+ 51:02.480 --> 51:08.720
1321
+ didn't use those words. But that debate was happening in the 19th century, as it is now
1322
+
1323
+ 51:09.920 --> 51:17.600
1324
+ about AI, that if we get the formula wrong, we're going to have AI systems working towards
1325
+
1326
+ 51:17.600 --> 51:22.080
1327
+ an outcome that in retrospect, would be exactly wrong.
1328
+
1329
+ 51:22.080 --> 51:26.960
1330
+ Do you think there's has beautifully put so the echoes are there. But do you think,
1331
+
1332
+ 51:26.960 --> 51:34.640
1333
+ I mean, if you look at Sam Harris, our imagination worries about the AI version of that, because
1334
+
1335
+ 51:34.640 --> 51:44.640
1336
+ of the speed at which the things going wrong in the utilitarian context could happen.
1337
+
1338
+ 51:45.840 --> 51:47.280
1339
+ Is that a worry for you?
1340
+
1341
+ 51:47.280 --> 51:55.360
1342
+ Yeah, I think that in most cases, not in all, but if we have a wrong political idea,
1343
+
1344
+ 51:55.360 --> 52:01.200
1345
+ we see it starting to go wrong. And we're not completely stupid. And so we said, okay,
1346
+
1347
+ 52:02.000 --> 52:10.160
1348
+ maybe that was a mistake. Let's try something different. And also, we're very slow and inefficient
1349
+
1350
+ 52:10.160 --> 52:15.520
1351
+ about implementing these things and so on. So you have to worry when you have corporations
1352
+
1353
+ 52:15.520 --> 52:20.800
1354
+ or political systems that are extremely efficient. But when we look at AI systems,
1355
+
1356
+ 52:20.800 --> 52:27.840
1357
+ or even just computers in general, right, they have this different characteristic
1358
+
1359
+ 52:28.400 --> 52:35.040
1360
+ from ordinary human activity in the past. So let's say you were a surgeon. You had some idea
1361
+
1362
+ 52:35.040 --> 52:40.480
1363
+ about how to do some operation, right? Well, and let's say you were wrong, right, that that way
1364
+
1365
+ 52:40.480 --> 52:45.840
1366
+ of doing the operation would mostly kill the patient. Well, you'd find out pretty quickly,
1367
+
1368
+ 52:45.840 --> 52:56.000
1369
+ like after three, maybe three or four tries, right? But that isn't true for pharmaceutical
1370
+
1371
+ 52:56.000 --> 53:03.040
1372
+ companies, because they don't do three or four operations. They manufacture three or four billion
1373
+
1374
+ 53:03.040 --> 53:08.800
1375
+ pills and they sell them. And then they find out maybe six months or a year later that, oh,
1376
+
1377
+ 53:08.800 --> 53:14.880
1378
+ people are dying of heart attacks or getting cancer from this drug. And so that's why we have the FDA,
1379
+
1380
+ 53:14.880 --> 53:22.960
1381
+ right? Because of the scalability of pharmaceutical production. And there have been some unbelievably
1382
+
1383
+ 53:22.960 --> 53:34.320
1384
+ bad episodes in the history of pharmaceuticals and adulteration of products and so on that have
1385
+
1386
+ 53:34.320 --> 53:37.520
1387
+ killed tens of thousands or paralyzed hundreds of thousands of people.
1388
+
1389
+ 53:39.360 --> 53:43.280
1390
+ Now, with computers, we have that same scalability problem that you can
1391
+
1392
+ 53:43.280 --> 53:49.520
1393
+ sit there and type for i equals one to five billion, two, right? And all of a sudden,
1394
+
1395
+ 53:49.520 --> 53:55.360
1396
+ you're having an impact on a global scale. And yet we have no FDA, right? There's absolutely no
1397
+
1398
+ 53:55.360 --> 54:02.480
1399
+ controls at all over what a bunch of undergraduates with too much caffeine can do to the world.
1400
+
1401
+ 54:03.440 --> 54:09.600
1402
+ And, you know, we look at what happened with Facebook, well, social media in general, and
1403
+
1404
+ 54:09.600 --> 54:18.480
1405
+ click through optimization. So you have a simple feedback algorithm that's trying to just optimize
1406
+
1407
+ 54:18.480 --> 54:24.080
1408
+ click through, right? That sounds reasonable, right? Because you don't want to be feeding people
1409
+
1410
+ 54:24.080 --> 54:33.200
1411
+ ads that they don't care about or not interested in. And you might even think of that process as
1412
+
1413
+ 54:33.200 --> 54:42.160
1414
+ simply adjusting the the feeding of ads or news articles or whatever it might be to match people's
1415
+
1416
+ 54:42.160 --> 54:50.000
1417
+ preferences, right? Which sounds like a good idea. But in fact, that isn't how the algorithm works,
1418
+
1419
+ 54:50.880 --> 54:59.760
1420
+ right? You make more money. The algorithm makes more money. If it can better predict what people
1421
+
1422
+ 54:59.760 --> 55:06.400
1423
+ are going to click on, because then it can feed them exactly that, right? So the way to maximize
1424
+
1425
+ 55:06.400 --> 55:13.360
1426
+ click through is actually to modify the people, to make them more predictable. And one way to do
1427
+
1428
+ 55:13.360 --> 55:21.280
1429
+ that is to feed them information which will change their behavior and preferences towards
1430
+
1431
+ 55:21.920 --> 55:27.600
1432
+ extremes that make them predictable. Whatever is the nearest extreme or the nearest predictable
1433
+
1434
+ 55:27.600 --> 55:33.840
1435
+ point, that's where you're going to end up. And the machines will force you there.
1436
+
1437
+ 55:34.480 --> 55:40.400
1438
+ Now, and I think there's a reasonable argument to say that this, among other things, is
1439
+
1440
+ 55:40.400 --> 55:48.880
1441
+ contributing to the destruction of democracy in the world. And where was the oversight
1442
+
1443
+ 55:50.160 --> 55:55.600
1444
+ of this process? Where were the people saying, okay, you would like to apply this algorithm to
1445
+
1446
+ 55:55.600 --> 56:01.120
1447
+ five billion people on the face of the earth? Can you show me that it's safe? Can you show
1448
+
1449
+ 56:01.120 --> 56:07.040
1450
+ me that it won't have various kinds of negative effects? No, there was no one asking that question.
1451
+
1452
+ 56:07.040 --> 56:14.800
1453
+ There was no one placed between the undergrads with too much caffeine and the human race.
1454
+
1455
+ 56:15.520 --> 56:20.480
1456
+ It's just they just did it. And some way outside the scope of my knowledge,
1457
+
1458
+ 56:20.480 --> 56:27.120
1459
+ so economists would argue that the invisible hand, so the capitalist system, it was the
1460
+
1461
+ 56:27.120 --> 56:32.480
1462
+ oversight. So if you're going to corrupt society with whatever decision you make as a company,
1463
+
1464
+ 56:32.480 --> 56:38.640
1465
+ then that's going to be reflected in people not using your product. That's one model of oversight.
1466
+
1467
+ 56:39.280 --> 56:48.000
1468
+ We shall see. But in the meantime, but you might even have broken the political system
1469
+
1470
+ 56:48.000 --> 56:54.960
1471
+ that enables capitalism to function. Well, you've changed it. So we should see. Yeah.
1472
+
1473
+ 56:54.960 --> 57:01.360
1474
+ Change is often painful. So my question is absolutely, it's fascinating. You're absolutely
1475
+
1476
+ 57:01.360 --> 57:07.840
1477
+ right that there was zero oversight on algorithms that can have a profound civilization changing
1478
+
1479
+ 57:09.120 --> 57:15.440
1480
+ effect. So do you think it's possible? I mean, I haven't, have you seen government?
1481
+
1482
+ 57:15.440 --> 57:22.800
1483
+ So do you think it's possible to create regulatory bodies oversight over AI algorithms,
1484
+
1485
+ 57:22.800 --> 57:28.400
1486
+ which are inherently such a cutting edge set of ideas and technologies?
1487
+
1488
+ 57:30.960 --> 57:37.520
1489
+ Yeah, but I think it takes time to figure out what kind of oversight, what kinds of controls.
1490
+
1491
+ 57:37.520 --> 57:42.960
1492
+ I mean, it took time to design the FDA regime. Some people still don't like it and they want
1493
+
1494
+ 57:42.960 --> 57:50.400
1495
+ to fix it. And I think there are clear ways that it could be improved. But the whole notion that
1496
+
1497
+ 57:50.400 --> 57:56.400
1498
+ you have stage one, stage two, stage three, and here are the criteria for what you have to do
1499
+
1500
+ 57:56.400 --> 58:02.000
1501
+ to pass a stage one trial, right? We haven't even thought about what those would be for algorithms.
1502
+
1503
+ 58:02.000 --> 58:10.320
1504
+ So I mean, I think there are, there are things we could do right now with regard to bias, for
1505
+
1506
+ 58:10.320 --> 58:19.040
1507
+ example, we have a pretty good technical handle on how to detect algorithms that are propagating
1508
+
1509
+ 58:19.040 --> 58:26.720
1510
+ bias that exists in data sets, how to debias those algorithms, and even what it's going to cost you
1511
+
1512
+ 58:26.720 --> 58:34.480
1513
+ to do that. So I think we could start having some standards on that. I think there are things to do
1514
+
1515
+ 58:34.480 --> 58:41.840
1516
+ with impersonation and falsification that we could, we could work on. So I think, yeah.
1517
+
1518
+ 58:43.360 --> 58:50.240
1519
+ Or the very simple point. So impersonation is a machine acting as if it was a person.
1520
+
1521
+ 58:51.440 --> 58:58.800
1522
+ I can't see a real justification for why we shouldn't insist that machines self identify as
1523
+
1524
+ 58:58.800 --> 59:08.480
1525
+ machines. Where is the social benefit in fooling people into thinking that this is really a person
1526
+
1527
+ 59:08.480 --> 59:15.120
1528
+ when it isn't? I don't mind if it uses a human like voice that's easy to understand. That's fine.
1529
+
1530
+ 59:15.120 --> 59:22.000
1531
+ But it should just say, I'm a machine in some form. And now many people are speaking to that.
1532
+
1533
+ 59:22.800 --> 59:26.400
1534
+ I would think relatively obvious facts. So I think most people... Yeah. I mean,
1535
+
1536
+ 59:26.400 --> 59:32.400
1537
+ there is actually a law in California that bans impersonation, but only in certain
1538
+
1539
+ 59:33.280 --> 59:41.280
1540
+ restricted circumstances. So for the purpose of engaging in a fraudulent transaction and for
1541
+
1542
+ 59:41.280 --> 59:48.160
1543
+ the purpose of modifying someone's voting behavior. So those are the circumstances where
1544
+
1545
+ 59:48.160 --> 59:55.520
1546
+ machines have to self identify. But I think, arguably, it should be in all circumstances.
1547
+
1548
+ 59:56.400 --> 1:00:03.280
1549
+ And then when you talk about deep fakes, we're just at the beginning. But already,
1550
+
1551
+ 1:00:03.280 --> 1:00:10.480
1552
+ it's possible to make a movie of anybody saying anything in ways that are pretty hard to detect.
1553
+
1554
+ 1:00:11.520 --> 1:00:15.040
1555
+ Including yourself because you're on camera now and your voice is coming through with high
1556
+
1557
+ 1:00:15.040 --> 1:00:19.360
1558
+ resolution. Yeah. So you could take what I'm saying and replace it with pretty much anything
1559
+
1560
+ 1:00:19.360 --> 1:00:24.400
1561
+ else you wanted me to be saying. And even it would change my lips and facial expressions to fit.
1562
+
1563
+ 1:00:26.960 --> 1:00:35.280
1564
+ And there's actually not much in the way of real legal protection against that.
1565
+
1566
+ 1:00:35.920 --> 1:00:38.640
1567
+ I think in the commercial area, you could say, yeah, that's...
1568
+
1569
+ 1:00:38.640 --> 1:00:46.000
1570
+ You're using my brand and so on. There are rules about that. But in the political sphere, I think,
1571
+
1572
+ 1:00:47.600 --> 1:00:52.560
1573
+ at the moment, it's anything goes. So that could be really, really damaging.
1574
+
1575
+ 1:00:53.920 --> 1:01:02.400
1576
+ And let me just try to make not an argument, but try to look back at history and say something
1577
+
1578
+ 1:01:02.400 --> 1:01:09.680
1579
+ dark, in essence, is while regulation seems to be... Oversight seems to be exactly the
1580
+
1581
+ 1:01:09.680 --> 1:01:14.480
1582
+ right thing to do here. It seems that human beings, what they naturally do is they wait
1583
+
1584
+ 1:01:14.480 --> 1:01:20.080
1585
+ for something to go wrong. If you're talking about nuclear weapons, you can't talk about
1586
+
1587
+ 1:01:20.080 --> 1:01:26.000
1588
+ nuclear weapons being dangerous until somebody actually, like the United States drops the bomb,
1589
+
1590
+ 1:01:26.000 --> 1:01:34.960
1591
+ or Chernobyl melting. Do you think we will have to wait for things going wrong in a way that's
1592
+
1593
+ 1:01:34.960 --> 1:01:39.840
1594
+ obviously damaging to society, not an existential risk, but obviously damaging?
1595
+
1596
+ 1:01:42.320 --> 1:01:48.000
1597
+ Or do you have faith that... I hope not. But I mean, I think we do have to look at history.
1598
+
1599
+ 1:01:48.000 --> 1:01:57.600
1600
+ So the two examples you gave, nuclear weapons and nuclear power, are very, very interesting because
1601
+
1602
+ 1:01:59.520 --> 1:02:07.840
1603
+ nuclear weapons, we knew in the early years of the 20th century that atoms contained a huge
1604
+
1605
+ 1:02:07.840 --> 1:02:13.280
1606
+ amount of energy. We had E equals MC squared. We knew the mass differences between the different
1607
+
1608
+ 1:02:13.280 --> 1:02:20.640
1609
+ atoms and their components, and we knew that you might be able to make an incredibly powerful
1610
+
1611
+ 1:02:20.640 --> 1:02:28.000
1612
+ explosive. So H.G. Wells wrote science fiction book, I think, in 1912. Frederick Soddy, who was the
1613
+
1614
+ 1:02:28.000 --> 1:02:35.760
1615
+ guy who discovered isotopes as a Nobel Prize winner, he gave a speech in 1915 saying that
1616
+
1617
+ 1:02:37.840 --> 1:02:42.160
1618
+ one pound of this new explosive would be the equivalent of 150 tons of dynamite,
1619
+
1620
+ 1:02:42.160 --> 1:02:50.720
1621
+ which turns out to be about right. And this was in World War I, so he was imagining how much worse
1622
+
1623
+ 1:02:51.360 --> 1:02:57.600
1624
+ the world war would be if we were using that kind of explosive. But the physics establishment
1625
+
1626
+ 1:02:57.600 --> 1:03:05.760
1627
+ simply refused to believe that these things could be made. Including the people who were making it.
1628
+
1629
+ 1:03:06.400 --> 1:03:11.280
1630
+ Well, so they were doing the nuclear physics. I mean, eventually were the ones who made it.
1631
+
1632
+ 1:03:11.280 --> 1:03:21.280
1633
+ You talk about Fermi or whoever. Well, so up to the development was mostly theoretical. So it was
1634
+
1635
+ 1:03:21.280 --> 1:03:28.320
1636
+ people using sort of primitive kinds of particle acceleration and doing experiments at the level
1637
+
1638
+ 1:03:28.320 --> 1:03:36.480
1639
+ of single particles or collections of particles. They weren't yet thinking about how to actually
1640
+
1641
+ 1:03:36.480 --> 1:03:40.160
1642
+ make a bomb or anything like that. But they knew the energy was there and they figured if they
1643
+
1644
+ 1:03:40.160 --> 1:03:46.720
1645
+ understood it better, it might be possible. But the physics establishment, their view, and I think
1646
+
1647
+ 1:03:46.720 --> 1:03:51.840
1648
+ because they did not want it to be true, their view was that it could not be true.
1649
+
1650
+ 1:03:53.360 --> 1:04:00.240
1651
+ That this could not provide a way to make a super weapon. And there was this famous
1652
+
1653
+ 1:04:01.120 --> 1:04:08.240
1654
+ speech given by Rutherford, who was the sort of leader of nuclear physics. And it was on
1655
+
1656
+ 1:04:08.240 --> 1:04:14.800
1657
+ September 11, 1933. And he said, you know, anyone who talks about the possibility of
1658
+
1659
+ 1:04:14.800 --> 1:04:21.600
1660
+ obtaining energy from transformation of atoms is talking complete moonshine. And the next
1661
+
1662
+ 1:04:22.800 --> 1:04:28.560
1663
+ morning, Leo Szilard read about that speech and then invented the nuclear chain reaction.
1664
+
1665
+ 1:04:28.560 --> 1:04:35.920
1666
+ And so as soon as he invented, as soon as he had that idea, that you could make a chain reaction
1667
+
1668
+ 1:04:35.920 --> 1:04:40.640
1669
+ with neutrons because neutrons were not repelled by the nucleus so they could enter the nucleus
1670
+
1671
+ 1:04:40.640 --> 1:04:48.480
1672
+ and then continue the reaction. As soon as he has that idea, he instantly realized that the world
1673
+
1674
+ 1:04:48.480 --> 1:04:58.160
1675
+ was in deep doo doo. Because this is 1933, right? Hitler had recently come to power in Germany.
1676
+
1677
+ 1:04:58.160 --> 1:05:09.280
1678
+ Szilard was in London. He eventually became a refugee and he came to the US. And in the
1679
+
1680
+ 1:05:09.280 --> 1:05:14.880
1681
+ process of having the idea about the chain reaction, he figured out basically how to make
1682
+
1683
+ 1:05:14.880 --> 1:05:22.800
1684
+ a bomb and also how to make a reactor. And he patented the reactor in 1934. But because
1685
+
1686
+ 1:05:22.800 --> 1:05:28.480
1687
+ of the situation, the great power conflict situation that he could see happening,
1688
+
1689
+ 1:05:29.200 --> 1:05:38.160
1690
+ he kept that a secret. And so between then and the beginning of World War II,
1691
+
1692
+ 1:05:39.600 --> 1:05:48.000
1693
+ people were working, including the Germans, on how to actually create neutron sources,
1694
+
1695
+ 1:05:48.000 --> 1:05:54.720
1696
+ what specific fission reactions would produce neutrons of the right energy to continue the
1697
+
1698
+ 1:05:54.720 --> 1:06:02.480
1699
+ reaction. And that was demonstrated in Germany, I think in 1938, if I remember correctly. The first
1700
+
1701
+ 1:06:03.760 --> 1:06:17.280
1702
+ nuclear weapon patent was 1939 by the French. So this was actually going on well before World War
1703
+
1704
+ 1:06:17.280 --> 1:06:22.720
1705
+ II really got going. And then the British probably had the most advanced capability
1706
+
1707
+ 1:06:22.720 --> 1:06:27.920
1708
+ in this area. But for safety reasons, among others, and also just, sort of, resources,
1709
+
1710
+ 1:06:28.720 --> 1:06:33.520
1711
+ they moved the program from Britain to the US. And then that became Manhattan Project.
1712
+
1713
+ 1:06:34.480 --> 1:06:37.920
1714
+ So the reason why we couldn't
1715
+
1716
+ 1:06:37.920 --> 1:06:48.320
1717
+ have any kind of oversight of nuclear weapons and nuclear technology was because we were basically
1718
+
1719
+ 1:06:48.320 --> 1:06:57.520
1720
+ already in an arms race in a war. But you mentioned in the 20s and 30s, so what are the echoes?
1721
+
1722
+ 1:07:00.000 --> 1:07:04.320
1723
+ The way you've described the story, I mean, there's clearly echoes. What do you think most AI
1724
+
1725
+ 1:07:04.320 --> 1:07:11.440
1726
+ researchers, folks who are really close to the metal, they really are not concerned about AI,
1727
+
1728
+ 1:07:11.440 --> 1:07:16.960
1729
+ they don't think about it, whether it's they don't want to think about it. But what are the,
1730
+
1731
+ 1:07:16.960 --> 1:07:23.760
1732
+ yeah, why do you think that is? What are the echoes of the nuclear situation to the current AI
1733
+
1734
+ 1:07:23.760 --> 1:07:33.120
1735
+ situation? And what can we do about it? I think there is a kind of motivated cognition, which is
1736
+
1737
+ 1:07:33.120 --> 1:07:40.640
1738
+ a term in psychology means that you believe what you would like to be true, rather than what is
1739
+
1740
+ 1:07:40.640 --> 1:07:50.240
1741
+ true. And it's unsettling to think that what you're working on might be the end of the human race,
1742
+
1743
+ 1:07:50.800 --> 1:07:58.400
1744
+ obviously. So you would rather instantly deny it and come up with some reason why it couldn't be
1745
+
1746
+ 1:07:58.400 --> 1:08:07.440
1747
+ true. And I collected a long list of reasons that extremely intelligent, competent AI scientists
1748
+
1749
+ 1:08:08.080 --> 1:08:16.560
1750
+ have come up with for why we shouldn't worry about this. For example, calculators are superhuman at
1751
+
1752
+ 1:08:16.560 --> 1:08:21.680
1753
+ arithmetic and they haven't taken over the world, so there's nothing to worry about. Well, okay,
1754
+
1755
+ 1:08:21.680 --> 1:08:29.440
1756
+ my five year old could have figured out why that was an unreasonable and really quite weak argument.
1757
+
1758
+ 1:08:31.520 --> 1:08:40.800
1759
+ Another one was, while it's theoretically possible that you could have superhuman AI
1760
+
1761
+ 1:08:41.760 --> 1:08:46.480
1762
+ destroy the world, it's also theoretically possible that a black hole could materialize
1763
+
1764
+ 1:08:46.480 --> 1:08:51.760
1765
+ right next to the earth and destroy humanity. I mean, yes, it's theoretically possible,
1766
+
1767
+ 1:08:51.760 --> 1:08:56.720
1768
+ quantum theoretically, extremely unlikely that it would just materialize right there.
1769
+
1770
+ 1:08:58.400 --> 1:09:04.720
1771
+ But that's a completely bogus analogy because if the whole physics community on earth was working
1772
+
1773
+ 1:09:04.720 --> 1:09:11.680
1774
+ to materialize a black hole in near earth orbit, wouldn't you ask them, is that a good idea? Is
1775
+
1776
+ 1:09:11.680 --> 1:09:19.040
1777
+ that going to be safe? What if you succeed? And that's the thing. The AI community is sort of
1778
+
1779
+ 1:09:19.040 --> 1:09:26.240
1780
+ refused to ask itself, what if you succeed? And initially, I think that was because it was too
1781
+
1782
+ 1:09:26.240 --> 1:09:35.520
1783
+ hard, but Alan Turing asked himself that and he said, we'd be toast. If we were lucky, we might
1784
+
1785
+ 1:09:35.520 --> 1:09:40.000
1786
+ be able to switch off the power but probably we'd be toast. But there's also an aspect
1787
+
1788
+ 1:09:40.000 --> 1:09:49.680
1789
+ that because we're not exactly sure what the future holds, it's not clear exactly so technically
1790
+
1791
+ 1:09:49.680 --> 1:09:58.480
1792
+ what to worry about, sort of how things go wrong. And so there is something it feels like, maybe
1793
+
1794
+ 1:09:58.480 --> 1:10:04.400
1795
+ you can correct me if I'm wrong, but there's something paralyzing about worrying about something
1796
+
1797
+ 1:10:04.400 --> 1:10:10.000
1798
+ that logically is inevitable. But you don't really know what that will look like.
1799
+
1800
+ 1:10:10.720 --> 1:10:19.440
1801
+ Yeah, I think that's a reasonable point. And it's certainly in terms of existential risks,
1802
+
1803
+ 1:10:19.440 --> 1:10:26.800
1804
+ it's different from asteroid collides with the earth, which again is quite possible. It's
1805
+
1806
+ 1:10:26.800 --> 1:10:33.040
1807
+ happened in the past. It'll probably happen again. We don't know right now. But if we did detect an
1808
+
1809
+ 1:10:33.040 --> 1:10:39.200
1810
+ asteroid that was going to hit the earth in 75 years time, we'd certainly be doing something
1811
+
1812
+ 1:10:39.200 --> 1:10:43.600
1813
+ about it. Well, it's clear there's got big rock and we'll probably have a meeting and see what
1814
+
1815
+ 1:10:43.600 --> 1:10:49.040
1816
+ do we do about the big rock with AI. Right, with AI. I mean, there are very few people who think it's
1817
+
1818
+ 1:10:49.040 --> 1:10:54.400
1819
+ not going to happen within the next 75 years. I know Rod Brooks doesn't think it's going to happen.
1820
+
1821
+ 1:10:55.200 --> 1:11:00.880
1822
+ Maybe Andrew Ng doesn't think it's happened. But a lot of the people who work day to day,
1823
+
1824
+ 1:11:00.880 --> 1:11:07.360
1825
+ you know, as you say, at the rock face, they think it's going to happen. I think the median
1826
+
1827
+ 1:11:08.640 --> 1:11:14.320
1828
+ estimate from AI researchers is somewhere in 40 to 50 years from now. Or maybe, you know,
1829
+
1830
+ 1:11:14.320 --> 1:11:20.000
1831
+ I think in Asia, they think it's going to be even faster than that. I'm a little bit
1832
+
1833
+ 1:11:21.280 --> 1:11:25.840
1834
+ more conservative. I think it probably take longer than that. But I think, you know, as
1835
+
1836
+ 1:11:25.840 --> 1:11:32.400
1837
+ happened with nuclear weapons, it can happen overnight that you have these breakthroughs.
1838
+
1839
+ 1:11:32.400 --> 1:11:38.240
1840
+ And we need more than one breakthrough. But, you know, it's on the order of half a dozen.
1841
+
1842
+ 1:11:38.800 --> 1:11:43.920
1843
+ This is a very rough scale. But so half a dozen breakthroughs of that nature
1844
+
1845
+ 1:11:45.840 --> 1:11:53.600
1846
+ would have to happen for us to reach superhuman AI. But the AI research community is
1847
+
1848
+ 1:11:53.600 --> 1:12:00.640
1849
+ vast now, the massive investments from governments, from corporations, tons of really,
1850
+
1851
+ 1:12:00.640 --> 1:12:05.760
1852
+ really smart people. You just have to look at the rate of progress in different areas of AI
1853
+
1854
+ 1:12:05.760 --> 1:12:10.800
1855
+ to see that things are moving pretty fast. So to say, oh, it's just going to be thousands of years.
1856
+
1857
+ 1:12:11.920 --> 1:12:18.160
1858
+ I don't see any basis for that. You know, I see, you know, for example, the
1859
+
1860
+ 1:12:18.160 --> 1:12:28.640
1861
+ Stanford 100 year AI project, which is supposed to be sort of, you know, the serious establishment view,
1862
+
1863
+ 1:12:29.520 --> 1:12:33.440
1864
+ their most recent report actually said it's probably not even possible.
1865
+
1866
+ 1:12:34.160 --> 1:12:34.720
1867
+ Oh, wow.
1868
+
1869
+ 1:12:35.280 --> 1:12:42.960
1870
+ Right. Which if you want a perfect example of people in denial, that's it. Because, you know,
1871
+
1872
+ 1:12:42.960 --> 1:12:49.760
1873
+ for the whole history of AI, we've been saying to philosophers who said it wasn't possible. Well,
1874
+
1875
+ 1:12:49.760 --> 1:12:53.520
1876
+ you have no idea what you're talking about. Of course, it's possible. Right. Give me an
1877
+
1878
+ 1:12:53.520 --> 1:12:59.760
1879
+ argument for why it couldn't happen. And there isn't one. Right. And now, because people are
1880
+
1881
+ 1:12:59.760 --> 1:13:04.000
1882
+ worried that maybe AI might get a bad name, or I just don't want to think about this,
1883
+
1884
+ 1:13:05.200 --> 1:13:09.520
1885
+ they're saying, okay, well, of course, it's not really possible. You know, imagine, right? Imagine
1886
+
1887
+ 1:13:09.520 --> 1:13:16.000
1888
+ if, you know, the leaders of the cancer biology community got up and said, well, you know, of
1889
+
1890
+ 1:13:16.000 --> 1:13:23.680
1891
+ course, curing cancer, it's not really possible. There'd be a complete outrage and dismay. And,
1892
+
1893
+ 1:13:25.040 --> 1:13:30.800
1894
+ you know, I find this really a strange phenomenon. So,
1895
+
1896
+ 1:13:30.800 --> 1:13:38.640
1897
+ okay, so if you accept it as possible, and if you accept that it's probably going to happen,
1898
+
1899
+ 1:13:40.560 --> 1:13:44.400
1900
+ the point that you're making that, you know, how does it go wrong?
1901
+
1902
+ 1:13:46.320 --> 1:13:51.600
1903
+ A valid question. Without that, without an answer to that question, then you're stuck with what I
1904
+
1905
+ 1:13:51.600 --> 1:13:56.160
1906
+ call the gorilla problem, which is, you know, the problem that the gorillas face, right? They
1907
+
1908
+ 1:13:56.160 --> 1:14:02.800
1909
+ made something more intelligent than them, namely us a few million years ago, and now they're in
1910
+
1911
+ 1:14:02.800 --> 1:14:09.360
1912
+ deep doo doo. So there's really nothing they can do. They've lost the control. They failed to solve
1913
+
1914
+ 1:14:09.360 --> 1:14:16.240
1915
+ the control problem of controlling humans. And so they've lost. So we don't want to be in that
1916
+
1917
+ 1:14:16.240 --> 1:14:22.320
1918
+ situation. And if the gorilla problem is the only formulation you have, there's not a lot you can do.
1919
+
1920
+ 1:14:22.320 --> 1:14:28.240
1921
+ Right. Other than to say, okay, we should try to stop. You know, we should just not make the humans
1922
+
1923
+ 1:14:28.240 --> 1:14:33.120
1924
+ or, or in this case, not make the AI. And I think that's really hard to do.
1925
+
1926
+ 1:14:35.120 --> 1:14:42.480
1927
+ To, I'm not actually proposing that that's a feasible course of action. And I also think that,
1928
+
1929
+ 1:14:43.040 --> 1:14:46.080
1930
+ you know, if properly controlled AI could be incredibly beneficial.
1931
+
1932
+ 1:14:46.080 --> 1:14:54.960
1933
+ So the, but it seems to me that there's a, there's a consensus that one of the major
1934
+
1935
+ 1:14:54.960 --> 1:15:02.320
1936
+ failure modes is this loss of control that we create AI systems that are pursuing incorrect
1937
+
1938
+ 1:15:02.320 --> 1:15:11.040
1939
+ objectives. And because the AI system believes it knows what the objective is, it has no incentive
1940
+
1941
+ 1:15:11.040 --> 1:15:16.800
1942
+ to listen to us anymore, so to speak, right? It's just carrying out the,
1943
+
1944
+ 1:15:17.920 --> 1:15:22.160
1945
+ the strategy that it, it has computed as being the optimal solution.
1946
+
1947
+ 1:15:24.320 --> 1:15:31.040
1948
+ And, you know, it may be that in the process, it needs to acquire more resources to increase the
1949
+
1950
+ 1:15:31.600 --> 1:15:37.360
1951
+ possibility of success or prevent various failure modes by defending itself against interference.
1952
+
1953
+ 1:15:37.360 --> 1:15:42.640
1954
+ And so that collection of problems, I think, is something we can address.
1955
+
1956
+ 1:15:45.280 --> 1:15:55.360
1957
+ The other problems are roughly speaking, you know, misuse, right? So even if we solve the control
1958
+
1959
+ 1:15:55.360 --> 1:16:00.960
1960
+ problem, we make perfectly safe controllable AI systems. Well, why, you know, why is Dr.
1961
+
1962
+ 1:16:00.960 --> 1:16:05.600
1963
+ Evil going to use those, right? He wants to just take over the world and he'll make unsafe AI systems
1964
+
1965
+ 1:16:05.600 --> 1:16:12.000
1966
+ that then get out of control. So that's one problem, which is sort of a, you know, partly a
1967
+
1968
+ 1:16:12.000 --> 1:16:20.880
1969
+ policing problem, partly a sort of a cultural problem for the profession of how we teach people
1970
+
1971
+ 1:16:21.760 --> 1:16:26.560
1972
+ what kinds of AI systems are safe. You talk about autonomous weapon system and how pretty much
1973
+
1974
+ 1:16:26.560 --> 1:16:31.920
1975
+ everybody agrees that there's too many ways that that can go horribly wrong. You have this great
1976
+
1977
+ 1:16:31.920 --> 1:16:36.560
1978
+ Slaughterbots movie that kind of illustrates that beautifully. Well, I want to talk about that.
1979
+
1980
+ 1:16:36.560 --> 1:16:41.440
1981
+ That's another, there's another topic I'm happy to talk about the, I just want to mention that
1982
+
1983
+ 1:16:41.440 --> 1:16:48.080
1984
+ what I see is the third major failure mode, which is overuse, not so much misuse, but overuse of AI,
1985
+
1986
+ 1:16:49.680 --> 1:16:55.280
1987
+ that we become overly dependent. So I call this the WALL-E problem. If you've seen WALL-E,
1988
+
1989
+ 1:16:55.280 --> 1:17:00.800
1990
+ the movie, all right, all the humans are on the spaceship and the machines look after everything
1991
+
1992
+ 1:17:00.800 --> 1:17:07.680
1993
+ for them. And they just watch TV and drink big gulps. And they're all sort of obese and stupid.
1994
+
1995
+ 1:17:07.680 --> 1:17:17.040
1996
+ And they sort of totally lost any notion of human autonomy. And, you know, so in effect, right,
1997
+
1998
+ 1:17:18.240 --> 1:17:23.520
1999
+ this would happen like the slow boiling frog, right, we would gradually turn over
2000
+
2001
+ 1:17:24.320 --> 1:17:28.480
2002
+ more and more of the management of our civilization to machines as we are already doing.
2003
+
2004
+ 1:17:28.480 --> 1:17:34.560
2005
+ And this, you know, this, if this process continues, you know, we sort of gradually
2006
+
2007
+ 1:17:34.560 --> 1:17:41.440
2008
+ switch from sort of being the masters of technology to just being the guests, right?
2009
+
2010
+ 1:17:41.440 --> 1:17:45.840
2011
+ So, so we become guests on a cruise ship, you know, which is fine for a week, but not,
2012
+
2013
+ 1:17:46.480 --> 1:17:53.520
2014
+ not for the rest of eternity, right? You know, and it's almost irreversible, right? Once you,
2015
+
2016
+ 1:17:53.520 --> 1:18:00.000
2017
+ once you lose the incentive to, for example, you know, learn to be an engineer or a doctor
2018
+
2019
+ 1:18:00.800 --> 1:18:08.000
2020
+ or a sanitation operative or any other of the, the infinitely many ways that we
2021
+
2022
+ 1:18:08.000 --> 1:18:13.200
2023
+ maintain and propagate our civilization. You know, if you, if you don't have the
2024
+
2025
+ 1:18:13.200 --> 1:18:18.400
2026
+ incentive to do any of that, you won't. And then it's really hard to recover.
2027
+
2028
+ 1:18:18.400 --> 1:18:23.440
2029
+ And of course they add just one of the technologies that could, that third failure mode result in that.
2030
+
2031
+ 1:18:23.440 --> 1:18:27.360
2032
+ There's probably other technology in general detaches us from.
2033
+
2034
+ 1:18:28.400 --> 1:18:33.440
2035
+ It does a bit, but the, the, the difference is that in terms of the knowledge to,
2036
+
2037
+ 1:18:34.080 --> 1:18:39.360
2038
+ to run our civilization, you know, up to now we've had no alternative, but to put it into
2039
+
2040
+ 1:18:39.360 --> 1:18:44.400
2041
+ people's heads, right? And if you, if you, software with Google, I mean, so software in
2042
+
2043
+ 1:18:44.400 --> 1:18:51.600
2044
+ general, so computers in general, but, but the, you know, the knowledge of how, you know, how
2045
+
2046
+ 1:18:51.600 --> 1:18:56.560
2047
+ a sanitation system works, you know, that's an AI has to understand that it's no good putting it
2048
+
2049
+ 1:18:56.560 --> 1:19:02.960
2050
+ into Google. So, I mean, we, we've always put knowledge in on paper, but paper doesn't run
2051
+
2052
+ 1:19:02.960 --> 1:19:07.520
2053
+ our civilization. It only runs when it goes from the paper into people's heads again, right? So
2054
+
2055
+ 1:19:07.520 --> 1:19:13.920
2056
+ we've always propagated civilization through human minds and we've spent about a trillion
2057
+
2058
+ 1:19:13.920 --> 1:19:19.440
2059
+ person years doing that literally, right? You, you can work it out. It's about, right? There's
2060
+
2061
+ 1:19:19.440 --> 1:19:25.120
2062
+ about just over a hundred billion people who've ever lived and each of them has spent about 10
2063
+
2064
+ 1:19:25.120 --> 1:19:30.640
2065
+ years learning stuff to keep their civilization going. And so that's a trillion person years we
2066
+
2067
+ 1:19:30.640 --> 1:19:35.760
2068
+ put into this effort. Beautiful way to describe all of civilization. And now we're, you know,
2069
+
2070
+ 1:19:35.760 --> 1:19:39.840
2071
+ we're in danger of throwing that away. So this is a problem that AI can't solve. It's not a
2072
+
2073
+ 1:19:39.840 --> 1:19:47.120
2074
+ technical problem. It's a, you know, and if we do our job right, the AI systems will say, you know,
2075
+
2076
+ 1:19:47.120 --> 1:19:52.800
2077
+ the human race doesn't in the long run want to be passengers in a cruise ship. The human race
2078
+
2079
+ 1:19:52.800 --> 1:19:59.840
2080
+ wants autonomy. This is part of human preferences. So we, the AI systems are not going to do this
2081
+
2082
+ 1:19:59.840 --> 1:20:05.440
2083
+ stuff for you. You've got to do it for yourself, right? I'm not going to carry you to the top of
2084
+
2085
+ 1:20:05.440 --> 1:20:11.840
2086
+ Everest in an autonomous helicopter. You have to climb it if you want to get the benefit and so on. So
2087
+
2088
+ 1:20:14.160 --> 1:20:20.160
2089
+ but I'm afraid that because we are short sighted and lazy, we're going to override the AI systems.
2090
+
2091
+ 1:20:20.880 --> 1:20:27.520
2092
+ And, and there's an amazing short story that I recommend to everyone that I talk to about this
2093
+
2094
+ 1:20:27.520 --> 1:20:36.080
2095
+ called The Machine Stops, written in 1909 by E. M. Forster, who, you know, wrote novels about the
2096
+
2097
+ 1:20:36.080 --> 1:20:41.200
2098
+ British Empire and sort of things that became costume dramas on the BBC. But he wrote this one
2099
+
2100
+ 1:20:41.200 --> 1:20:49.280
2101
+ science fiction story, which is an amazing vision of the future. It has, it has basically iPads.
2102
+
2103
+ 1:20:49.280 --> 1:20:57.680
2104
+ It has video conferencing. It has MOOCs. It has computer and computer induced obesity. I mean,
2105
+
2106
+ 1:20:57.680 --> 1:21:02.960
2107
+ literally, the whole thing is what people spend their time doing is giving online courses or
2108
+
2109
+ 1:21:02.960 --> 1:21:07.920
2110
+ listening to online courses and talking about ideas. But they never get out there in the real
2111
+
2112
+ 1:21:07.920 --> 1:21:13.680
2113
+ world. They don't really have a lot of face to face contact. Everything is done online.
2114
+
2115
+ 1:21:13.680 --> 1:21:19.680
2116
+ You know, so all the things we're worrying about now were described in the story and and then the
2117
+
2118
+ 1:21:19.680 --> 1:21:26.640
2119
+ human race becomes more and more dependent on the machine loses knowledge of how things really run
2120
+
2121
+ 1:21:27.600 --> 1:21:35.200
2122
+ and then becomes vulnerable to collapse. And so it's a it's a pretty unbelievably amazing
2123
+
2124
+ 1:21:35.200 --> 1:21:41.520
2125
+ story for someone writing in 1909 to imagine all this. Plus, yeah. So there's very few people
2126
+
2127
+ 1:21:41.520 --> 1:21:46.080
2128
+ that represent artificial intelligence more than you, Stuart Russell.
2129
+
2130
+ 1:21:46.960 --> 1:21:50.880
2131
+ If you say so, okay, that's very kind. So it's all my fault.
2132
+
2133
+ 1:21:50.880 --> 1:21:59.680
2134
+ It's all your fault. No, right. You're often brought up as the person. Well, Stuart Russell,
2135
+
2136
+ 1:22:00.560 --> 1:22:04.960
2137
+ like the AI person is worried about this. That's why you should be worried about it.
2138
+
2139
+ 1:22:06.080 --> 1:22:11.280
2140
+ Do you feel the burden of that? I don't know if you feel that at all. But when I talk to people,
2141
+
2142
+ 1:22:11.280 --> 1:22:16.800
2143
+ like from you talk about people outside of computer science, when they think about this,
2144
+
2145
+ 1:22:16.800 --> 1:22:22.560
2146
+ Stuart Russell is worried about AI safety, you should be worried too. Do you feel the burden
2147
+
2148
+ 1:22:22.560 --> 1:22:31.520
2149
+ of that? I mean, in a practical sense, yeah, because I get, you know, a dozen, sometimes
2150
+
2151
+ 1:22:31.520 --> 1:22:39.600
2152
+ 25 invitations a day to talk about it, to give interviews, to write press articles and so on.
2153
+
2154
+ 1:22:39.600 --> 1:22:47.120
2155
+ So in that very practical sense, I'm seeing that people are concerned and really interested about
2156
+
2157
+ 1:22:47.120 --> 1:22:53.680
2158
+ this. Are you worried that you could be wrong, as all good scientists are? Of course. I worry about
2159
+
2160
+ 1:22:53.680 --> 1:23:00.400
2161
+ that all the time. I mean, that's always been the way that I've worked, you know, is like I have an
2162
+
2163
+ 1:23:00.400 --> 1:23:06.320
2164
+ argument in my head with myself, right? So I have some idea. And then I think, okay,
2165
+
2166
+ 1:23:06.320 --> 1:23:11.680
2167
+ okay, how could that be wrong? Or did someone else already have that idea? So I'll go and
2168
+
2169
+ 1:23:12.800 --> 1:23:18.240
2170
+ search in as much literature as I can to see whether someone else already thought of that
2171
+
2172
+ 1:23:18.240 --> 1:23:25.600
2173
+ or even refuted it. So, you know, right now, I'm reading a lot of philosophy because,
2174
+
2175
+ 1:23:25.600 --> 1:23:37.920
2176
+ you know, in the form of the debates over utilitarianism and other kinds of moral formulas,
2177
+
2178
+ 1:23:37.920 --> 1:23:44.320
2179
+ shall we say, people have already thought through some of these issues. But, you know,
2180
+
2181
+ 1:23:44.320 --> 1:23:51.280
2182
+ what one of the things I'm not seeing in a lot of these debates is this specific idea about
2183
+
2184
+ 1:23:51.280 --> 1:23:58.560
2185
+ the importance of uncertainty in the objective, that this is the way we should think about machines
2186
+
2187
+ 1:23:58.560 --> 1:24:06.800
2188
+ that are beneficial to humans. So this idea of provably beneficial machines based on explicit
2189
+
2190
+ 1:24:06.800 --> 1:24:15.200
2191
+ uncertainty in the objective, you know, it seems to be, you know, my gut feeling is this is the core
2192
+
2193
+ 1:24:15.200 --> 1:24:21.200
2194
+ of it. It's going to have to be elaborated in a lot of different directions. And they're a lot
2195
+
2196
+ 1:24:21.200 --> 1:24:27.440
2197
+ of beneficial. Yeah, but they're, I mean, it has to be, right? We can't afford, you know,
2198
+
2199
+ 1:24:27.440 --> 1:24:33.040
2200
+ hand wavy beneficial. Because there are, you know, whenever we do hand wavy stuff, there are
2201
+
2202
+ 1:24:33.040 --> 1:24:38.080
2203
+ loopholes. And the thing about super intelligent machines is they find the loopholes. You know,
2204
+
2205
+ 1:24:38.080 --> 1:24:44.320
2206
+ just like, you know, tax evaders, if you don't write your tax law properly, people will find
2207
+
2208
+ 1:24:44.320 --> 1:24:53.440
2209
+ the loopholes and end up paying no tax. And so you should think of it this way. And getting those
2210
+
2211
+ 1:24:53.440 --> 1:25:03.440
2212
+ definitions right, you know, it is really a long process, you know, so you can you can define
2213
+
2214
+ 1:25:03.440 --> 1:25:07.760
2215
+ mathematical frameworks. And within that framework, you can prove mathematical theorems that, yes,
2216
+
2217
+ 1:25:07.760 --> 1:25:12.800
2218
+ this will, you know, this this theoretical entity will be provably beneficial to that theoretical
2219
+
2220
+ 1:25:12.800 --> 1:25:20.160
2221
+ entity. But that framework may not match the real world in some crucial way. So it's a long process
2222
+
2223
+ 1:25:20.160 --> 1:25:27.120
2224
+ of thinking through it, iterating and so on. Last question. Yep. You have 10 seconds to answer it.
2225
+
2226
+ 1:25:27.120 --> 1:25:34.480
2227
+ What is your favorite sci-fi movie about AI? I would say Interstellar has my favorite robots.
2228
+
2229
+ 1:25:34.480 --> 1:25:42.160
2230
+ Oh, beats Space Odyssey? Yeah, yeah, yeah. So so TARS, the robots, one of the robots in Interstellar is
2231
+
2232
+ 1:25:42.160 --> 1:25:52.080
2233
+ the way robots should behave. And I would say Ex Machina is in some ways the one, the one that
2234
+
2235
+ 1:25:52.080 --> 1:25:58.000
2236
+ makes you think in a nervous kind of way about about where we're going.
2237
+
2238
+ 1:25:58.000 --> 1:26:13.920
2239
+ Well, Stuart, thank you so much for talking today. Pleasure.
2240
+
vtt/episode_010_small.vtt ADDED
@@ -0,0 +1,1331 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.040
4
+ The following is a conversation with Pieter Abbeel.
5
+
6
+ 00:03.040 --> 00:07.760
7
+ He's a professor at UC Berkeley and the director of the Berkeley Robot Learning Lab.
8
+
9
+ 00:07.760 --> 00:13.200
10
+ He's one of the top researchers in the world working on how to make robots understand and
11
+
12
+ 00:13.200 --> 00:18.480
13
+ interact with the world around them, especially using imitation and deeper enforcement learning.
14
+
15
+ 00:19.680 --> 00:24.160
16
+ This conversation is part of the MIT course on artificial general intelligence
17
+
18
+ 00:24.160 --> 00:29.040
19
+ and the artificial intelligence podcast. If you enjoy it, please subscribe on YouTube,
20
+
21
+ 00:29.040 --> 00:34.160
22
+ iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex
23
+
24
+ 00:34.160 --> 00:40.560
25
+ Fridman, spelled F R I D. And now here's my conversation with Pieter Abbeel.
26
+
27
+ 00:41.440 --> 00:46.480
28
+ You've mentioned that if there was one person you could meet, it would be Roger Federer. So let
29
+
30
+ 00:46.480 --> 00:52.720
31
+ me ask, when do you think we will have a robot that fully autonomously can beat Roger Federer
32
+
33
+ 00:52.720 --> 00:59.840
34
+ at tennis? Roger Federer level player at tennis? Well, first, if you can make it happen for me
35
+
36
+ 00:59.840 --> 01:07.840
37
+ to meet Roger, let me know. In terms of getting a robot to beat him at tennis, it's kind of an
38
+
39
+ 01:07.840 --> 01:15.280
40
+ interesting question because for a lot of the challenges we think about in AI, the software
41
+
42
+ 01:15.280 --> 01:22.800
43
+ is really the missing piece. But for something like this, the hardware is nowhere near either. To
44
+
45
+ 01:22.800 --> 01:28.240
46
+ really have a robot that can physically run around, the Boston Dynamics robots are starting to get
47
+
48
+ 01:28.240 --> 01:34.560
49
+ there, but still not really human level ability to run around and then swing a racket.
50
+
51
+ 01:36.720 --> 01:40.160
52
+ So you think that's a hardware problem? I don't think it's a hardware problem only. I think it's
53
+
54
+ 01:40.160 --> 01:45.600
55
+ a hardware and a software problem. I think it's both. And I think they'll have independent progress.
56
+
57
+ 01:45.600 --> 01:53.360
58
+ So I'd say the hardware maybe in 10, 15 years. On clay, not grass. I mean, grass is probably hard.
59
+
60
+ 01:53.360 --> 02:00.080
61
+ With the sliding? Yeah. Well, clay, I'm not sure what's harder, grass or clay. The clay involves
62
+
63
+ 02:00.080 --> 02:09.360
64
+ sliding, which might be harder to master actually. Yeah. But you're not limited to bipedal. I mean,
65
+
66
+ 02:09.360 --> 02:12.560
67
+ I'm sure there's no... Well, if we can build a machine, it's a whole different question, of
68
+
69
+ 02:12.560 --> 02:18.000
70
+ course. If you can say, okay, this robot can be on wheels, it can move around on wheels and
71
+
72
+ 02:18.000 --> 02:24.880
73
+ can be designed differently, then I think that can be done sooner probably than a full humanoid
74
+
75
+ 02:24.880 --> 02:30.400
76
+ type of setup. What do you think of swing a racket? So you've worked at basic manipulation.
77
+
78
+ 02:31.120 --> 02:36.480
79
+ How hard do you think is the task of swinging a racket to be able to hit a nice backhand
80
+
81
+ 02:36.480 --> 02:44.240
82
+ or a forehand? Let's say we just set it up stationary, a nice robot arm, let's say. You know,
83
+
84
+ 02:44.240 --> 02:49.440
85
+ a standard industrial arm, and it can watch the ball come and then swing the racket.
86
+
87
+ 02:50.560 --> 02:57.600
88
+ It's a good question. I'm not sure it would be super hard to do. I mean, I'm sure it would require
89
+
90
+ 02:57.600 --> 03:01.520
91
+ a lot... If we do it with reinforcement learning, it would require a lot of trial and error. It's
92
+
93
+ 03:01.520 --> 03:06.960
94
+ not going to swing it right the first time around, but yeah, I don't see why I couldn't
95
+
96
+ 03:08.240 --> 03:12.320
97
+ swing it the right way. I think it's learnable. I think if you set up a ball machine, let's say
98
+
99
+ 03:12.320 --> 03:18.960
100
+ on one side and then a robot with a tennis racket on the other side, I think it's learnable
101
+
102
+ 03:20.160 --> 03:25.360
103
+ and maybe a little bit of pre-training in simulation. Yeah, I think that's feasible.
104
+
105
+ 03:25.360 --> 03:28.880
106
+ I think the swinging the racket is feasible. It'd be very interesting to see how much precision it
107
+
108
+ 03:28.880 --> 03:37.760
109
+ can get. I mean, that's where... I mean, some of the human players can hit it on the lines,
110
+
111
+ 03:37.760 --> 03:44.320
112
+ which is very high precision. With spin. The spin is interesting, whether RL can learn to
113
+
114
+ 03:44.320 --> 03:48.160
115
+ put a spin on the ball. Well, you got me interested. Maybe someday we'll set this up.
116
+
117
+ 03:51.040 --> 03:55.440
118
+ Your answer is basically, okay, for this problem, it sounds fascinating, but for the general problem
119
+
120
+ 03:55.440 --> 03:59.840
121
+ of a tennis player, we might be a little bit farther away. What's the most impressive thing
122
+
123
+ 03:59.840 --> 04:06.720
124
+ you've seen a robot do in the physical world? So physically, for me, it's
125
+
126
+ 04:08.720 --> 04:16.560
127
+ the Boston Dynamics videos always just ring home and just super impressed. Recently, the robot
128
+
129
+ 04:16.560 --> 04:22.160
130
+ running up the stairs during the parkour type thing. I mean, yes, we don't know what's underneath.
131
+
132
+ 04:22.160 --> 04:26.400
133
+ They don't really write a lot of detail, but even if it's hard coded underneath,
134
+
135
+ 04:27.040 --> 04:30.800
136
+ which it might or might not be just the physical abilities of doing that parkour,
137
+
138
+ 04:30.800 --> 04:36.000
139
+ that's a very impressive robot right there. So have you met Spotmini or any of those robots in
140
+
141
+ 04:36.000 --> 04:43.040
142
+ person? I met Spotmini last year in April at the Mars event that Jeff Bezos organizes. They
143
+
144
+ 04:43.040 --> 04:49.840
145
+ brought it out there and it was nicely following Jeff around. When Jeff left the room, they had it
146
+
147
+ 04:49.840 --> 04:55.680
148
+ following him along, which is pretty impressive. So I think there's some confidence to know that
149
+
150
+ 04:55.680 --> 05:00.080
151
+ there's no learning going on in those robots. The psychology of it, so while knowing that,
152
+
153
+ 05:00.080 --> 05:03.360
154
+ while knowing there's not, if there's any learning going on, it's very limited,
155
+
156
+ 05:03.920 --> 05:08.720
157
+ I met Spotmini earlier this year and knowing everything that's going on,
158
+
159
+ 05:09.520 --> 05:12.400
160
+ having one on one interaction, so I get to spend some time alone.
161
+
162
+ 05:14.400 --> 05:18.720
163
+ And there's immediately a deep connection on the psychological level,
164
+
165
+ 05:18.720 --> 05:22.320
166
+ even though you know the fundamentals, how it works, there's something magical.
167
+
168
+ 05:23.280 --> 05:29.040
169
+ So do you think about the psychology of interacting with robots in the physical world,
170
+
171
+ 05:29.040 --> 05:36.000
172
+ even you just showed me the PR2, the robot, and there was a little bit something like a face,
173
+
174
+ 05:37.040 --> 05:40.480
175
+ had a little bit something like a face, there's something that immediately draws you to it.
176
+
177
+ 05:40.480 --> 05:45.040
178
+ Do you think about that aspect of the robotics problem?
179
+
180
+ 05:45.040 --> 05:50.560
181
+ Well, it's very hard with Brett here. We'll give him a name, Berkeley Robot,
182
+
183
+ 05:50.560 --> 05:56.480
184
+ for the elimination of tedious tasks. It's very hard to not think of the robot as a person,
185
+
186
+ 05:56.480 --> 06:00.560
187
+ and it seems like everybody calls him a he for whatever reason, but that also makes it more
188
+
189
+ 06:00.560 --> 06:07.200
190
+ a person than if it was a it. And it seems pretty natural to think of it that way.
191
+
192
+ 06:07.200 --> 06:12.400
193
+ This past weekend really struck me, I've seen Pepper many times on videos,
194
+
195
+ 06:12.400 --> 06:18.640
196
+ but then I was at an event organized by, this was by Fidelity, and they had scripted Pepper to help
197
+
198
+ 06:19.280 --> 06:24.880
199
+ moderate some sessions, and they had scripted Pepper to have the personality of a child a
200
+
201
+ 06:24.880 --> 06:31.360
202
+ little bit. And it was very hard to not think of it as its own person in some sense, because it
203
+
204
+ 06:31.360 --> 06:35.120
205
+ was just kind of jumping, it would just jump into conversation making it very interactive.
206
+
207
+ 06:35.120 --> 06:38.720
208
+ The moderator would be saying something and Pepper would just jump in, hold on, how about me,
209
+
210
+ 06:38.720 --> 06:43.600
211
+ how about me, can I participate in this doing it, just like, okay, this is like like a person,
212
+
213
+ 06:43.600 --> 06:48.800
214
+ and that was 100% scripted. And even then it was hard not to have that sense of somehow
215
+
216
+ 06:48.800 --> 06:55.120
217
+ there is something there. So as we have robots interact in this physical world, is that a signal
218
+
219
+ 06:55.120 --> 07:00.160
220
+ that could be used in reinforcement learning? You've worked a little bit in this direction,
221
+
222
+ 07:00.160 --> 07:05.920
223
+ but do you think that that psychology can be somehow pulled in? Yes, that's a question I would
224
+
225
+ 07:05.920 --> 07:12.800
226
+ say a lot, a lot of people ask. And I think part of why they ask it is they're thinking about
227
+
228
+ 07:14.160 --> 07:18.560
229
+ how unique are we really still as people, like after they see some results, they see
230
+
231
+ 07:18.560 --> 07:23.200
232
+ a computer play go to say a computer do this that they're like, okay, but can it really have
233
+
234
+ 07:23.200 --> 07:28.960
235
+ emotion? Can it really interact with us in that way? And then once you're around robots,
236
+
237
+ 07:28.960 --> 07:33.760
238
+ you already start feeling it. And I think that kind of maybe methodologically, the way that I
239
+
240
+ 07:33.760 --> 07:38.560
241
+ think of it is, if you run something like reinforcement learnings about optimizing some
242
+
243
+ 07:38.560 --> 07:48.240
244
+ objective, and there's no reason that the objective couldn't be tied into how much
245
+
246
+ 07:48.240 --> 07:53.120
247
+ does a person like interacting with this system? And why could not the reinforcement learning system
248
+
249
+ 07:53.120 --> 07:59.040
250
+ optimize for the robot being fun to be around? And why wouldn't it then naturally become more
251
+
252
+ 07:59.040 --> 08:03.920
253
+ more interactive and more and more maybe like a person or like a pet? I don't know what it would
254
+
255
+ 08:03.920 --> 08:08.720
256
+ exactly be, but more and more have those features and acquire them automatically. As long as you
257
+
258
+ 08:08.720 --> 08:16.320
259
+ can formalize an objective of what it means to like something, how you exhibit what's the ground
260
+
261
+ 08:16.320 --> 08:21.360
262
+ truth? How do you get the reward from human? Because you have to somehow collect that information
263
+
264
+ 08:21.360 --> 08:27.120
265
+ from human. But you're saying if you can formulate as an objective, it can be learned.
266
+
267
+ 08:27.120 --> 08:30.800
268
+ There's no reason it couldn't emerge through learning. And maybe one way to formulate as an
269
+
270
+ 08:30.800 --> 08:35.840
271
+ objective, you wouldn't have to necessarily score it explicitly. So standard rewards are
272
+
273
+ 08:35.840 --> 08:41.920
274
+ numbers. And numbers are hard to come by. This is a 1.5 or 1.7 on some scale. It's very hard to do
275
+
276
+ 08:41.920 --> 08:47.680
277
+ for a person. But much easier is for a person to say, okay, what you did the last five minutes
278
+
279
+ 08:47.680 --> 08:53.600
280
+ was much nicer than we did the previous five minutes. And that now gives a comparison. And in fact,
281
+
282
+ 08:53.600 --> 08:58.080
283
+ there have been some results on that. For example, Paul Christiano and collaborators at OpenAI had
284
+
285
+ 08:58.080 --> 09:05.840
286
+ the hopper, MuJoCo hopper, one-legged robot, the backflip, backflips purely from feedback. I like
287
+
288
+ 09:05.840 --> 09:11.280
289
+ this better than that. That's kind of equally good. And after a bunch of interactions, it figured
290
+
291
+ 09:11.280 --> 09:15.200
292
+ out what it was the person was asking for, namely a backflip. And so I think the same thing.
293
+
294
+ 09:16.080 --> 09:20.880
295
+ It wasn't trying to do a backflip. It was just getting a score from the comparison score from
296
+
297
+ 09:20.880 --> 09:27.760
298
+ the person based on person having a mind in their own mind. I wanted to do a backflip. But
299
+
300
+ 09:27.760 --> 09:32.480
301
+ the robot didn't know what it was supposed to be doing. It just knew that sometimes the person
302
+
303
+ 09:32.480 --> 09:37.120
304
+ said, this is better, this is worse. And then the robot figured out what the person was actually
305
+
306
+ 09:37.120 --> 09:42.560
307
+ after was a backflip. And I imagine the same would be true for things like more interactive
308
+
309
+ 09:42.560 --> 09:47.520
310
+ robots that the robot would figure out over time. Oh, this kind of thing apparently is appreciated
311
+
312
+ 09:47.520 --> 09:54.720
313
+ more than this other kind of thing. So when I first picked up Sutton's Richard Sutton's
314
+
315
+ 09:54.720 --> 10:02.480
316
+ reinforcement learning book, before sort of this deep learning, before the reemergence
317
+
318
+ 10:02.480 --> 10:07.600
319
+ of neural networks as a powerful mechanism for machine learning, RL seemed to me like magic.
320
+
321
+ 10:07.600 --> 10:18.000
322
+ It was beautiful. So that seemed like what intelligence is, RL, reinforcement learning. So how
323
+
324
+ 10:18.000 --> 10:24.320
325
+ do you think we can possibly learn anything about the world when the reward for the actions is delayed
326
+
327
+ 10:24.320 --> 10:32.160
328
+ is so sparse? Like where is, why do you think RL works? Why do you think you can learn anything
329
+
330
+ 10:32.160 --> 10:37.600
331
+ under such sparse rewards, whether it's regular reinforcement learning or deeper reinforcement
332
+
333
+ 10:37.600 --> 10:45.600
334
+ learning? What's your intuition? The kind of part of that is, why is RL, why does it need
335
+
336
+ 10:45.600 --> 10:51.040
337
+ so many samples, so many experiences to learn from? Because really what's happening is when you
338
+
339
+ 10:51.040 --> 10:56.240
340
+ have a sparse reward, you do something maybe for like, I don't know, you take 100 actions and then
341
+
342
+ 10:56.240 --> 11:01.920
343
+ you get a reward, or maybe you get like a score of three. And I'm like, okay, three. Not sure what
344
+
345
+ 11:01.920 --> 11:06.960
346
+ that means. You go again and now you get two. And now you know that that sequence of 100 actions
347
+
348
+ 11:06.960 --> 11:10.640
349
+ that you did the second time around somehow was worse than the sequence of 100 actions you did
350
+
351
+ 11:10.640 --> 11:15.040
352
+ the first time around. But that's tough to now know which one of those were better or worse.
353
+
354
+ 11:15.040 --> 11:19.680
355
+ Some might have been good and bad in either one. And so that's why you need so many experiences.
356
+
357
+ 11:19.680 --> 11:24.080
358
+ But once you have enough experiences, effectively RL is teasing that apart. It's starting to say,
359
+
360
+ 11:24.080 --> 11:28.640
361
+ okay, what is consistently there when you get a higher reward and what's consistently there when
362
+
363
+ 11:28.640 --> 11:34.080
364
+ you get a lower reward? And then kind of the magic of sometimes the policy grant update is to say,
365
+
366
+ 11:34.720 --> 11:39.520
367
+ now let's update the neural network to make the actions that were kind of present when things are
368
+
369
+ 11:39.520 --> 11:44.960
370
+ good, more likely, and make the actions that are present when things are not as good, less likely.
371
+
372
+ 11:44.960 --> 11:50.480
373
+ So that's that is the counterpoint. But it seems like you would need to run it a lot more than
374
+
375
+ 11:50.480 --> 11:55.120
376
+ you do. Even though right now, people could say that RRL is very inefficient. But it seems to be
377
+
378
+ 11:55.120 --> 12:01.200
379
+ way more efficient than one would imagine on paper, that the simple updates to the policy,
380
+
381
+ 12:01.760 --> 12:07.520
382
+ the policy gradient that somehow you can learn is exactly as I said, what are the common actions
383
+
384
+ 12:07.520 --> 12:11.680
385
+ that seem to produce some good results, that that somehow can learn anything.
386
+
387
+ 12:12.640 --> 12:16.800
388
+ It seems counterintuitive, at least. Is there some intuition behind it?
389
+
390
+ 12:16.800 --> 12:24.720
391
+ Yeah, so I think there's a few ways to think about this. The way I tend to think about it
392
+
393
+ 12:24.720 --> 12:29.920
394
+ mostly originally. And so when we started working on deep reinforcement learning here at Berkeley,
395
+
396
+ 12:29.920 --> 12:36.880
397
+ which was maybe 2011, 12, 13, around that time, John Shulman was a PhD student initially kind of
398
+
399
+ 12:36.880 --> 12:44.480
400
+ driving it forward here. And kind of the way we thought about it at the time was if you think
401
+
402
+ 12:44.480 --> 12:51.360
403
+ about rectified linear units or kind of rectifier type neural networks, what do you get? You get
404
+
405
+ 12:51.360 --> 12:56.320
406
+ something that's piecewise linear feedback control. And if you look at the literature,
407
+
408
+ 12:56.960 --> 13:02.080
409
+ linear feedback control is extremely successful, can solve many, many problems surprisingly well.
410
+
411
+ 13:03.520 --> 13:07.200
412
+ I remember, for example, when we did helicopter flight, if you're in a stationary flight regime,
413
+
414
+ 13:07.200 --> 13:12.080
415
+ not a non stationary, but a stationary flight regime like hover, you can use linear feedback
416
+
417
+ 13:12.080 --> 13:16.960
418
+ control to stabilize the helicopter, a very complex dynamical system. But the controller
419
+
420
+ 13:16.960 --> 13:22.240
421
+ is relatively simple. And so I think that's a big part of is that if you do feedback control,
422
+
423
+ 13:22.240 --> 13:25.280
424
+ even though the system you control can be very, very complex, often,
425
+
426
+ 13:26.000 --> 13:31.520
427
+ relatively simple control architectures can already do a lot. But then also just linear
428
+
429
+ 13:31.520 --> 13:35.840
430
+ is not good enough. And so one way you can think of these neural networks is that in some ways they
431
+
432
+ 13:35.840 --> 13:40.880
433
+ tile the space, which people were already trying to do more by hand or with finite state machines,
434
+
435
+ 13:40.880 --> 13:44.560
436
+ say this linear controller here, this linear controller here, neural network,
437
+
438
+ 13:44.560 --> 13:48.160
439
+ learns to tile the space, say linear controller here, another linear controller here,
440
+
441
+ 13:48.160 --> 13:52.000
442
+ but it's more subtle than that. And so it's benefiting from this linear control aspect is
443
+
444
+ 13:52.000 --> 13:57.760
445
+ benefiting from the tiling, but it's somehow tiling it one dimension at a time. Because if
446
+
447
+ 13:57.760 --> 14:04.160
448
+ let's say you have a two layer network, even that hidden layer, you make a transition from active
449
+
450
+ 14:04.160 --> 14:09.600
451
+ to inactive or the other way around, that is essentially one axis, but not axis aligned, but
452
+
453
+ 14:09.600 --> 14:15.200
454
+ one direction that you change. And so you have this kind of very gradual tiling of the space,
455
+
456
+ 14:15.200 --> 14:19.840
457
+ we have a lot of sharing between the linear controllers that tile the space. And that was
458
+
459
+ 14:19.840 --> 14:25.280
460
+ always my intuition as to why to expect that this might work pretty well. It's essentially
461
+
462
+ 14:25.280 --> 14:30.000
463
+ leveraging the fact that linear feedback control is so good. But of course, not enough. And this
464
+
465
+ 14:30.000 --> 14:35.520
466
+ is a gradual tiling of the space with linear feedback controls that share a lot of expertise
467
+
468
+ 14:35.520 --> 14:41.120
469
+ across them. So that that's, that's really nice intuition. But do you think that scales to the
470
+
471
+ 14:41.120 --> 14:47.040
472
+ more and more general problems of when you start going up the number of control dimensions,
473
+
474
+ 14:48.160 --> 14:55.280
475
+ when you start going down in terms of how often you get a clean reward signal,
476
+
477
+ 14:55.280 --> 15:00.960
478
+ does that intuition carry forward to those crazy or weirder worlds that we think of as the real
479
+
480
+ 15:00.960 --> 15:10.000
481
+ world? So I think where things get really tricky in the real world compared to the things we've
482
+
483
+ 15:10.000 --> 15:13.920
484
+ looked at so far with great success and reinforcement learning is
485
+
486
+ 15:16.160 --> 15:21.920
487
+ the time scales, which takes us to an extreme. So when you think about the real world, I mean,
488
+
489
+ 15:22.800 --> 15:28.560
490
+ I don't know, maybe some student decided to do a PhD here, right? Okay, that's that's a decision,
491
+
492
+ 15:28.560 --> 15:34.000
493
+ that's a very high level decision. But if you think about their lives, I mean, any person's life,
494
+
495
+ 15:34.000 --> 15:39.360
496
+ it's a sequence of muscle fiber contractions and relaxations. And that's how you interact with
497
+
498
+ 15:39.360 --> 15:44.480
499
+ the world. And that's a very high frequency control thing. But it's ultimately what you do
500
+
501
+ 15:44.480 --> 15:49.280
502
+ and how you affect the world. Until I guess we have brain readings, you can maybe do it slightly
503
+
504
+ 15:49.280 --> 15:55.120
505
+ differently. But typically, that's how you affect the world. And the decision of doing a PhD is
506
+
507
+ 15:55.120 --> 16:00.240
508
+ like so abstract relative to what you're actually doing in the world. And I think that's where
509
+
510
+ 16:00.240 --> 16:07.360
511
+ credit assignment becomes just completely beyond what any current RL algorithm can do. And we need
512
+
513
+ 16:07.360 --> 16:13.360
514
+ hierarchical reasoning at a level that is just not available at all yet. Where do you think we can
515
+
516
+ 16:13.360 --> 16:19.360
517
+ pick up hierarchical reasoning by which mechanisms? Yeah, so maybe let me highlight what I think the
518
+
519
+ 16:19.360 --> 16:27.600
520
+ limitations are of what already was done 20, 30 years ago. In fact, you'll find reasoning systems
521
+
522
+ 16:27.600 --> 16:33.200
523
+ that reason over relatively long horizons. But the problem is that they were not grounded in the real
524
+
525
+ 16:33.200 --> 16:43.040
526
+ world. So people would have to hand design some kind of logical, dynamical descriptions of the
527
+
528
+ 16:43.040 --> 16:49.120
529
+ world. And that didn't tie into perception. And so that didn't tie into real objects and so forth.
530
+
531
+ 16:49.120 --> 16:57.920
532
+ And so that was a big gap. Now with deep learning, we start having the ability to really see with
533
+
534
+ 16:57.920 --> 17:02.800
535
+ sensors process that and understand what's in the world. And so it's a good time to try to
536
+
537
+ 17:02.800 --> 17:08.080
538
+ bring these things together. I see a few ways of getting there. One way to get there would be to say
539
+
540
+ 17:08.080 --> 17:12.160
541
+ deep learning can get bolted on somehow to some of these more traditional approaches.
542
+
543
+ 17:12.160 --> 17:16.160
544
+ Now bolted on would probably mean you need to do some kind of end to end training,
545
+
546
+ 17:16.160 --> 17:21.840
547
+ where you say, my deep learning processing somehow leads to a representation that in term
548
+
549
+ 17:22.720 --> 17:29.680
550
+ uses some kind of traditional underlying dynamical systems that can be used for planning.
551
+
552
+ 17:29.680 --> 17:33.920
553
+ And that's, for example, the direction Aviv Tamar and Thanard Kurutach here have been pushing
554
+
555
+ 17:33.920 --> 17:38.800
556
+ with Causal InfoGAN. And of course, other people too, that that's that's one way. Can we
557
+
558
+ 17:38.800 --> 17:43.520
559
+ somehow force it into the form factor that is amenable to reasoning?
560
+
561
+ 17:43.520 --> 17:50.160
562
+ Another direction we've been thinking about for a long time and didn't make any progress on
563
+
564
+ 17:50.160 --> 17:56.880
565
+ was more information theoretic approaches. So the idea there was that what it means to take
566
+
567
+ 17:56.880 --> 18:03.840
568
+ high level action is to take and choose a latent variable now that tells you a lot about what's
569
+
570
+ 18:03.840 --> 18:08.640
571
+ going to be the case in the future, because that's what it means to to take a high level action.
572
+
573
+ 18:08.640 --> 18:14.480
574
+ I say, okay, what I decide I'm going to navigate to the gas station because I need to get
575
+
576
+ 18:14.480 --> 18:18.800
577
+ gas from my car. Well, that'll now take five minutes to get there. But the fact that I get
578
+
579
+ 18:18.800 --> 18:23.200
580
+ there, I could already tell that from the high level action I took much earlier.
581
+
582
+ 18:24.480 --> 18:30.080
583
+ That we had a very hard time getting success with, not saying it's a dead end,
584
+
585
+ 18:30.080 --> 18:34.160
586
+ necessarily, but we had a lot of trouble getting that to work. And then we started revisiting
587
+
588
+ 18:34.160 --> 18:39.600
589
+ the notion of what are we really trying to achieve? What we're trying to achieve is
590
+
591
+ 18:39.600 --> 18:42.880
592
+ not necessarily a hierarchy per se, but you could think about what does hierarchy give us?
593
+
594
+ 18:44.160 --> 18:50.560
595
+ What we hope it would give us is better credit assignment. What is better credit assignment
596
+
597
+ 18:50.560 --> 18:58.640
598
+ is giving us, it gives us faster learning. And so faster learning is ultimately maybe
599
+
600
+ 18:58.640 --> 19:03.840
601
+ what we're after. And so that's where we ended up with the RL squared paper on learning to
602
+
603
+ 19:03.840 --> 19:10.640
604
+ reinforcement learn, which at the time Rocky Duan led. And that's exactly the meta learning
605
+
606
+ 19:10.640 --> 19:15.040
607
+ approach where we say, okay, we don't know how to design hierarchy. We know what we want to get
608
+
609
+ 19:15.040 --> 19:20.000
610
+ from it. Let's just enter and optimize for what we want to get from it and see if it might emerge.
611
+
612
+ 19:20.000 --> 19:24.720
613
+ And we saw things emerge. The maze navigation had consistent motion down hallways,
614
+
615
+ 19:25.920 --> 19:29.520
616
+ which is what you want. A hierarchical control should say, I want to go down this hallway.
617
+
618
+ 19:29.520 --> 19:33.040
619
+ And then when there is an option to take a turn, I can decide whether to take a turn or not and
620
+
621
+ 19:33.040 --> 19:38.480
622
+ repeat, even had the notion of, where have you been before or not to not revisit places you've
623
+
624
+ 19:38.480 --> 19:45.840
625
+ been before? It still didn't scale yet to the real world kind of scenarios I think you had in mind,
626
+
627
+ 19:45.840 --> 19:50.000
628
+ but it was some sign of life that maybe you can meta learn these hierarchical concepts.
629
+
630
+ 19:51.040 --> 19:58.000
631
+ I mean, it seems like through these meta learning concepts, we get at the, what I think is one of
632
+
633
+ 19:58.000 --> 20:05.120
634
+ the hardest and most important problems of AI, which is transfer learning. So it's generalization.
635
+
636
+ 20:06.240 --> 20:12.160
637
+ How far along this journey towards building general systems are we being able to do transfer
638
+
639
+ 20:12.160 --> 20:18.320
640
+ learning? Well, so there's some signs that you can generalize a little bit. But do you think
641
+
642
+ 20:18.320 --> 20:25.360
643
+ we're on the right path or totally different breakthroughs are needed to be able to transfer
644
+
645
+ 20:25.360 --> 20:34.000
646
+ knowledge between different learned models? Yeah, I'm pretty torn on this in that I think
647
+
648
+ 20:34.000 --> 20:44.400
649
+ there are some very impressive results already, right? I mean, I would say when even with the
650
+
651
+ 20:44.400 --> 20:50.160
652
+ initial kind of big breakthrough in 2012 with AlexNet, right, the initial, the initial thing is,
653
+
654
+ 20:50.160 --> 20:57.600
655
+ okay, great. This does better on ImageNet, hence image recognition. But then immediately thereafter,
656
+
657
+ 20:57.600 --> 21:04.080
658
+ there was of course the notion that wow, what was learned on ImageNet, and you now want to solve
659
+
660
+ 21:04.080 --> 21:11.280
661
+ a new task, you can fine tune AlexNet for new tasks. And that was often found to be the even
662
+
663
+ 21:11.280 --> 21:15.920
664
+ bigger deal that you learn something that was reusable, which was not often the case before
665
+
666
+ 21:15.920 --> 21:19.520
667
+ usually machine learning, you learn something for one scenario. And that was it. And that's
668
+
669
+ 21:19.520 --> 21:23.200
670
+ really exciting. I mean, that's just a huge application. That's probably the biggest
671
+
672
+ 21:23.200 --> 21:28.960
673
+ success of transfer learning today, if in terms of scope and impact. That was a huge breakthrough.
674
+
675
+ 21:28.960 --> 21:37.040
676
+ And then recently, I feel like similar kind of by scaling things up, it seems like this has been
677
+
678
+ 21:37.040 --> 21:41.440
679
+ expanded upon like people training even bigger networks, they might transfer even better. If
680
+
681
+ 21:41.440 --> 21:46.480
682
+ you look at, for example, some of the OpenAI results on language models. And so in the recent
683
+
684
+ 21:46.480 --> 21:53.600
685
+ Google results on language models, they are learned for just prediction. And then they get
686
+
687
+ 21:54.320 --> 21:59.600
688
+ reused for other tasks. And so I think there is something there where somehow if you train a
689
+
690
+ 21:59.600 --> 22:05.200
691
+ big enough model on enough things, it seems to transfer some deep mind results that I thought
692
+
693
+ 22:05.200 --> 22:12.160
694
+ were very impressive, the UNREAL results, where it was learning to navigate mazes in ways where
695
+
696
+ 22:12.160 --> 22:16.880
697
+ it wasn't just doing reinforcement learning, but it had other objectives it was optimizing for. So I
698
+
699
+ 22:16.880 --> 22:23.680
700
+ think there's a lot of interesting results already. I think maybe where it's hard to wrap my head
701
+
702
+ 22:23.680 --> 22:30.160
703
+ around this, to which extent or when do we call something generalization, right? Or the levels
704
+
705
+ 22:30.160 --> 22:37.360
706
+ of generalization involved in these different tasks, right? So you draw this, by the way, just
707
+
708
+ 22:37.360 --> 22:43.280
709
+ to frame things. I've heard you say somewhere, it's the difference in learning to master versus
710
+
711
+ 22:43.280 --> 22:49.680
712
+ learning to generalize. That it's a nice line to think about. And I guess you're saying it's a gray
713
+
714
+ 22:49.680 --> 22:54.640
715
+ area of what learning to master and learning to generalize where one starts.
716
+
717
+ 22:54.640 --> 22:58.800
718
+ I think I might have heard this. I might have heard it somewhere else. And I think it might have
719
+
720
+ 22:58.800 --> 23:05.120
721
+ been one of your interviews, maybe the one with Yoshua Bengio, not 100% sure. But I like the example
722
+
723
+ 23:05.120 --> 23:12.000
724
+ and I'm not sure who it was, but the example was essentially if you use current deep
725
+
726
+ 23:12.000 --> 23:20.480
727
+ learning techniques, what we're doing to predict, let's say the relative motion of our planets,
728
+
729
+ 23:20.480 --> 23:27.680
730
+ it would do pretty well. But then now if a massive new mass enters our solar system,
731
+
732
+ 23:28.320 --> 23:32.880
733
+ it would probably not predict what will happen, right? And that's a different kind of
734
+
735
+ 23:32.880 --> 23:38.400
736
+ generalization. That's a generalization that relies on the ultimate simplest explanation
737
+
738
+ 23:38.400 --> 23:42.640
739
+ that we have available today to explain the motion of planets, whereas just pattern recognition
740
+
741
+ 23:42.640 --> 23:48.160
742
+ could predict our current solar system motion pretty well. No problem. And so I think that's
743
+
744
+ 23:48.160 --> 23:53.920
745
+ an example of a kind of generalization that is a little different from what we've achieved so far.
746
+
747
+ 23:54.480 --> 24:01.360
748
+ And it's not clear if just, you know, regularizing more and forcing it to come up with a simpler,
749
+
750
+ 24:01.360 --> 24:05.280
751
+ simpler, simpler explanation. Look, this is not simple, but that's what physics researchers do,
752
+
753
+ 24:05.280 --> 24:10.000
754
+ right, to say, can I make this even simpler? How simple can I get this? What's the simplest
755
+
756
+ 24:10.000 --> 24:14.560
757
+ equation that can explain everything, right? The master equation for the entire dynamics of the
758
+
759
+ 24:14.560 --> 24:20.960
760
+ universe. We haven't really pushed that direction as hard in deep learning, I would say. Not sure
761
+
762
+ 24:20.960 --> 24:24.960
763
+ if it should be pushed, but it seems a kind of generalization you get from that that you don't
764
+
765
+ 24:24.960 --> 24:30.400
766
+ get in our current methods so far. So I just talked to Vladimir Vapnik, for example, who was
767
+
768
+ 24:30.400 --> 24:39.200
769
+ a statistician in statistical learning, and he kind of dreams of creating the E equals Mc
770
+
771
+ 24:39.200 --> 24:44.400
772
+ squared for learning, right, the general theory of learning. Do you think that's a fruitless pursuit
773
+
774
+ 24:46.480 --> 24:50.560
775
+ in the near term, within the next several decades?
776
+
777
+ 24:51.680 --> 24:56.800
778
+ I think that's a really interesting pursuit. And in the following sense, in that there is a
779
+
780
+ 24:56.800 --> 25:05.440
781
+ lot of evidence that the brain is pretty modular. And so I wouldn't maybe think of it as the theory,
782
+
783
+ 25:05.440 --> 25:12.480
784
+ maybe, the underlying theory, but more kind of the principle where there have been findings where
785
+
786
+ 25:14.160 --> 25:20.240
787
+ people who are blind will use the part of the brain usually used for vision for other functions.
788
+
789
+ 25:20.240 --> 25:26.800
790
+ And even after some kind of, if people get rewired in some way, they might be able to reuse parts of
791
+
792
+ 25:26.800 --> 25:35.040
793
+ their brain for other functions. And so what that suggests is some kind of modularity. And I think
794
+
795
+ 25:35.040 --> 25:41.120
796
+ it is a pretty natural thing to strive for to see, can we find that modularity? Can we find this
797
+
798
+ 25:41.120 --> 25:45.440
799
+ thing? Of course, it's not every part of the brain is not exactly the same. Not everything can be
800
+
801
+ 25:45.440 --> 25:50.080
802
+ rewired arbitrarily. But if you think of things like the neocortex, which is a pretty big part of
803
+
804
+ 25:50.080 --> 25:56.880
805
+ the brain, that seems fairly modular from what the findings so far. Can you design something
806
+
807
+ 25:56.880 --> 26:01.840
808
+ equally modular? And if you can just grow it, it becomes more capable, probably. I think that would
809
+
810
+ 26:01.840 --> 26:07.200
811
+ be the kind of interesting underlying principle to shoot for that is not unrealistic.
812
+
813
+ 26:07.200 --> 26:14.400
814
+ Do you think you prefer math or empirical trial and error for the discovery of the essence of what
815
+
816
+ 26:14.400 --> 26:19.680
817
+ it means to do something intelligent? So reinforcement learning embodies both groups, right?
818
+
819
+ 26:19.680 --> 26:25.760
820
+ To prove that something converges, prove the bounds. And then at the same time, a lot of those
821
+
822
+ 26:25.760 --> 26:31.280
823
+ successes are, well, let's try this and see if it works. So which do you gravitate towards? How do
824
+
825
+ 26:31.280 --> 26:40.960
826
+ you think of those two parts of your brain? So maybe I would prefer we could make the progress
827
+
828
+ 26:41.600 --> 26:46.560
829
+ with mathematics. And the reason maybe I would prefer that is because often if you have something you
830
+
831
+ 26:46.560 --> 26:54.080
832
+ can mathematically formalize, you can leapfrog a lot of experimentation. And experimentation takes
833
+
834
+ 26:54.080 --> 27:01.440
835
+ a long time to get through. And a lot of trial and error, reinforcement learning, your research
836
+
837
+ 27:01.440 --> 27:05.040
838
+ process. But you need to do a lot of trial and error before you get to a success. So if you can
839
+
840
+ 27:05.040 --> 27:10.400
841
+ leapfrog that, to my mind, that's what the math is about. And hopefully once you do a bunch of
842
+
843
+ 27:10.400 --> 27:15.600
844
+ experiments, you start seeing a pattern, you can do some derivations that leapfrog some experiments.
845
+
846
+ 27:16.240 --> 27:20.160
847
+ But I agree with you. I mean, in practice, a lot of the progress has been such that we have not
848
+
849
+ 27:20.160 --> 27:25.840
850
+ been able to find the math that allows it to leapfrog ahead. And we are kind of making gradual
851
+
852
+ 27:25.840 --> 27:30.480
853
+ progress one step at a time. A new experiment here, a new experiment there that gives us new
854
+
855
+ 27:30.480 --> 27:35.280
856
+ insights and gradually building up, but not getting to something yet where we're just, okay,
857
+
858
+ 27:35.280 --> 27:39.920
859
+ here's an equation that now explains how, you know, what would have been two years of
860
+
861
+ 27:39.920 --> 27:44.880
862
+ experimentation to get there. But this tells us what the results going to be. Unfortunately,
863
+
864
+ 27:44.880 --> 27:52.800
865
+ unfortunately, not so much yet. Not so much yet. But your hope is there. In trying to teach robots
866
+
867
+ 27:52.800 --> 28:01.200
868
+ or systems to do everyday tasks, or even in simulation, what do you think you're more excited
869
+
870
+ 28:01.200 --> 28:10.560
871
+ about? imitation learning or self play. So letting robots learn from humans, or letting robots plan
872
+
873
+ 28:10.560 --> 28:18.240
874
+ their own, try to figure out in their own way, and eventually play, eventually interact with humans,
875
+
876
+ 28:18.240 --> 28:23.200
877
+ or solve whatever problem is. What's the more exciting to you? What's more promising you think
878
+
879
+ 28:23.200 --> 28:34.240
880
+ is a research direction? So when we look at self play, what's so beautiful about it is,
881
+
882
+ 28:34.240 --> 28:37.680
883
+ goes back to kind of the challenges in reinforcement learning. So the challenge
884
+
885
+ 28:37.680 --> 28:43.200
886
+ of reinforcement learning is getting signal. And if you don't ever succeed, you don't get any signal.
887
+
888
+ 28:43.200 --> 28:49.040
889
+ In self play, you're on both sides. So one of you succeeds. And the beauty is also one of you
890
+
891
+ 28:49.040 --> 28:53.520
892
+ fails. And so you see the contrast, you see the one version of me that did better than the other
893
+
894
+ 28:53.520 --> 28:58.400
895
+ version. And so every time you play yourself, you get signal. And so whenever you can turn
896
+
897
+ 28:58.400 --> 29:04.160
898
+ something into self play, you're in a beautiful situation where you can naturally learn much
899
+
900
+ 29:04.160 --> 29:10.080
901
+ more quickly than in most other reinforcement learning environments. So I think, I think if
902
+
903
+ 29:10.080 --> 29:15.760
904
+ somehow we can turn more reinforcement learning problems into self play formulations, that would
905
+
906
+ 29:15.760 --> 29:21.760
907
+ go really, really far. So far, self play has been largely around games where there is natural
908
+
909
+ 29:21.760 --> 29:25.440
910
+ opponents. But if we could do self play for other things, and let's say, I don't know,
911
+
912
+ 29:25.440 --> 29:29.360
913
+ a robot learns to build a house, I mean, that's a pretty advanced thing to try to do for a robot,
914
+
915
+ 29:29.360 --> 29:34.240
916
+ but maybe it tries to build a hut or something. If that can be done through self play, it would
917
+
918
+ 29:34.240 --> 29:38.560
919
+ learn a lot more quickly if somebody can figure it out. And I think that would be something where
920
+
921
+ 29:38.560 --> 29:42.560
922
+ it goes closer to kind of the mathematical leapfrogging where somebody figures out a
923
+
924
+ 29:42.560 --> 29:47.680
925
+ formalism to say, okay, any RL problem, by applying this and this idea, you can turn it
926
+
927
+ 29:47.680 --> 29:50.480
928
+ into a self play problem where you get signal a lot more easily.
929
+
930
+ 29:52.400 --> 29:57.680
931
+ Reality is many problems, we don't know how to turn to self play. And so either we need to provide
932
+
933
+ 29:57.680 --> 30:02.640
934
+ detailed reward. That doesn't just reward for achieving a goal, but rewards for making progress,
935
+
936
+ 30:02.640 --> 30:06.480
937
+ and that becomes time consuming. And once you're starting to do that, let's say you want a robot
938
+
939
+ 30:06.480 --> 30:09.920
940
+ to do something, you need to give all this detailed reward. Well, why not just give a
941
+
942
+ 30:09.920 --> 30:15.920
943
+ demonstration? Because why not just show the robot. And now the question is, how do you show
944
+
945
+ 30:15.920 --> 30:20.240
946
+ the robot? One way to show is to teleoperate the robot and then the robot really experiences things.
947
+
948
+ 30:20.800 --> 30:24.480
949
+ And that's nice, because that's really high signal to noise ratio data. And we've done a lot
950
+
951
+ 30:24.480 --> 30:29.360
952
+ of that. And you teach your robot skills. In just 10 minutes, you can teach your robot a new basic
953
+
954
+ 30:29.360 --> 30:33.360
955
+ skill, like, okay, pick up the bottle, place it somewhere else. That's a skill, no matter where
956
+
957
+ 30:33.360 --> 30:38.000
958
+ the bottle starts, maybe it always goes on to a target or something. That's fairly easy to teach
959
+
960
+ 30:38.000 --> 30:43.120
961
+ your robot with teleop. Now, what's even more interesting, if you can now teach your robot
962
+
963
+ 30:43.120 --> 30:48.480
964
+ through third person learning, where the robot watches you do something, and doesn't experience
965
+
966
+ 30:48.480 --> 30:52.880
967
+ it, but just watches it and says, okay, well, if you're showing me that, that means I should
968
+
969
+ 30:52.880 --> 30:56.880
970
+ be doing this. And I'm not going to be using your hand, because I don't get to control your hand,
971
+
972
+ 30:56.880 --> 31:02.000
973
+ but I'm going to use my hand, I do that mapping. And so that's where I think one of the big breakthroughs
974
+
975
+ 31:02.000 --> 31:07.520
976
+ has happened this year. This was led by Chelsea Finn here. It's almost like learning a machine
977
+
978
+ 31:07.520 --> 31:12.000
979
+ translation for demonstrations where you have a human demonstration and the robot learns to
980
+
981
+ 31:12.000 --> 31:17.440
982
+ translate it into what it means for the robot to do it. And that was a meta learning formulation,
983
+
984
+ 31:17.440 --> 31:23.440
985
+ learn from one to get the other. And that I think opens up a lot of opportunities to learn a lot
986
+
987
+ 31:23.440 --> 31:28.080
988
+ more quickly. So my focus is on autonomous vehicles. Do you think this approach of third
989
+
990
+ 31:28.080 --> 31:33.040
991
+ person watching, is autonomous driving amenable to this kind of approach?
992
+
993
+ 31:33.840 --> 31:42.080
994
+ So for autonomous driving, I would say it's third person is slightly easier. And the reason I'm
995
+
996
+ 31:42.080 --> 31:48.320
997
+ going to say it's slightly easier to do with third person is because the car dynamics are very well
998
+
999
+ 31:48.320 --> 31:56.560
1000
+ understood. So easier than first person, you mean, or easier than... So I think the distinction
1001
+
1002
+ 31:56.560 --> 32:01.680
1003
+ between third person and first person is not a very important distinction for autonomous driving.
1004
+
1005
+ 32:01.680 --> 32:07.760
1006
+ They're very similar. Because the distinction is really about who turns the steering wheel.
1007
+
1008
+ 32:07.760 --> 32:15.280
1009
+ And or maybe let me put it differently. How to get from a point where you are now to a point,
1010
+
1011
+ 32:15.280 --> 32:19.120
1012
+ let's say a couple of meters in front of you. And that's a problem that's very well understood.
1013
+
1014
+ 32:19.120 --> 32:22.480
1015
+ And that's the only distinction between third and first person there. Whereas with the robot
1016
+
1017
+ 32:22.480 --> 32:26.720
1018
+ manipulation, interaction forces are very complex. And it's still a very different thing.
1019
+
1020
+ 32:27.840 --> 32:33.840
1021
+ For autonomous driving, I think there's still the question imitation versus RL.
1022
+
1023
+ 32:33.840 --> 32:39.520
1024
+ Well, so imitation gives you a lot more signal. I think where imitation is lacking and needs
1025
+
1026
+ 32:39.520 --> 32:47.600
1027
+ some extra machinery is it doesn't in its normal format, doesn't think about goals or objectives.
1028
+
1029
+ 32:48.480 --> 32:52.240
1030
+ And of course, there are versions of imitation learning, inverse reinforcement learning type
1031
+
1032
+ 32:52.240 --> 32:57.440
1033
+ imitation, which also thinks about goals. I think then we're getting much closer. But I think it's
1034
+
1035
+ 32:57.440 --> 33:05.120
1036
+ very hard to think of a fully reactive car generalizing well, if it really doesn't have a notion
1037
+
1038
+ 33:05.120 --> 33:10.720
1039
+ of objectives, to generalize well to the kind of generality that you would want, you want more than
1040
+
1041
+ 33:10.720 --> 33:15.200
1042
+ just that reactivity that you get from just behavioral cloning slash supervised learning.
1043
+
1044
+ 33:17.040 --> 33:22.560
1045
+ So a lot of the work, whether it's self play or even imitation learning would benefit
1046
+
1047
+ 33:22.560 --> 33:27.440
1048
+ significantly from simulation, from effective simulation, and you're doing a lot of stuff
1049
+
1050
+ 33:27.440 --> 33:32.400
1051
+ in the physical world and in simulation, do you have hope for greater and greater
1052
+
1053
+ 33:33.520 --> 33:40.160
1054
+ power of simulation loop being boundless, eventually, to where most of what we need
1055
+
1056
+ 33:40.160 --> 33:45.600
1057
+ to operate in the physical world, what could be simulated to a degree that's directly
1058
+
1059
+ 33:45.600 --> 33:54.720
1060
+ transferable to the physical world? Are we still very far away from that? So I think
1061
+
1062
+ 33:55.840 --> 34:03.200
1063
+ we could even rephrase that question in some sense, please. And so the power of simulation,
1064
+
1065
+ 34:04.720 --> 34:09.760
1066
+ as simulators get better and better, of course, becomes stronger, and we can learn more in
1067
+
1068
+ 34:09.760 --> 34:13.760
1069
+ simulation. But there's also another version, which is where you say the simulator doesn't
1070
+
1071
+ 34:13.760 --> 34:19.120
1072
+ even have to be that precise. As long as it's somewhat representative. And instead of trying
1073
+
1074
+ 34:19.120 --> 34:24.480
1075
+ to get one simulator that is sufficiently precise to learn and transfer really well to the real
1076
+
1077
+ 34:24.480 --> 34:29.200
1078
+ world, I'm going to build many simulators, an ensemble of simulators, where
1079
+
1080
+ 34:30.080 --> 34:35.120
1081
+ not any single one of them is sufficiently representative of the real world such that
1082
+
1083
+ 34:35.120 --> 34:41.760
1084
+ it would work if you train in there. But if you train in all of them, then there is something
1085
+
1086
+ 34:41.760 --> 34:47.840
1087
+ that's good in all of them. The real world will just be, you know, another one of them. That's,
1088
+
1089
+ 34:47.840 --> 34:50.720
1090
+ you know, not identical to any one of them, but just another one of them.
1091
+
1092
+ 34:50.720 --> 34:53.120
1093
+ Now, this is a sample from the distribution of simulators.
1094
+
1095
+ 34:53.120 --> 34:53.360
1096
+ Exactly.
1097
+
1098
+ 34:53.360 --> 34:57.600
1099
+ We do live in a simulation. So this is just one, one other one.
1100
+
1101
+ 34:57.600 --> 35:03.440
1102
+ I'm not sure about that. But yeah, it's definitely a very advanced simulator if it is.
1103
+
1104
+ 35:03.440 --> 35:08.960
1105
+ Yeah, it's a pretty good one. I've talked to Russell. It's something you think about a little bit
1106
+
1107
+ 35:08.960 --> 35:13.120
1108
+ too. Of course, you're like really trying to build these systems. But do you think about the future
1109
+
1110
+ 35:13.120 --> 35:18.880
1111
+ of AI? A lot of people have concern about safety. How do you think about AI safety as you build
1112
+
1113
+ 35:18.880 --> 35:24.960
1114
+ robots that are operating the physical world? What is, yeah, how do you approach this problem
1115
+
1116
+ 35:24.960 --> 35:27.440
1117
+ in an engineering kind of way in a systematic way?
1118
+
1119
+ 35:29.200 --> 35:36.720
1120
+ So when a robot is doing things, you kind of have a few notions of safety to worry about. One is that
1121
+
1122
+ 35:36.720 --> 35:43.760
1123
+ the robot is physically strong and of course could do a lot of damage. Same for cars, which we can
1124
+
1125
+ 35:43.760 --> 35:49.360
1126
+ think of as robots too, in some way. And this could be completely unintentional. So it could be not
1127
+
1128
+ 35:49.360 --> 35:54.240
1129
+ the kind of long term AI safety concerns that, okay, AI is smarter than us. And now what do we do?
1130
+
1131
+ 35:54.240 --> 35:57.760
1132
+ But it could be just very practical. Okay, this robot, if it makes a mistake,
1133
+
1134
+ 35:58.800 --> 36:04.080
1135
+ what are the results going to be? Of course, simulation comes in a lot there to test in simulation.
1136
+
1137
+ 36:04.080 --> 36:10.960
1138
+ It's a difficult question. And I'm always wondering, like I always wonder, let's say you look at,
1139
+
1140
+ 36:10.960 --> 36:14.000
1141
+ let's go back to driving, because a lot of people know driving well, of course.
1142
+
1143
+ 36:15.120 --> 36:20.800
1144
+ What do we do to test somebody for driving, right, to get a driver's license? What do they
1145
+
1146
+ 36:20.800 --> 36:27.680
1147
+ really do? I mean, you fill out some tests, and then you drive and I mean, for a few minutes,
1148
+
1149
+ 36:27.680 --> 36:34.800
1150
+ it's suburban California, that driving test is just you drive around the block, pull over, you
1151
+
1152
+ 36:34.800 --> 36:39.280
1153
+ do a stop sign successfully, and then, you know, you pull over again, and you're pretty much done.
1154
+
1155
+ 36:40.000 --> 36:46.720
1156
+ And you're like, okay, if a self driving car did that, would you trust it that it can drive?
1157
+
1158
+ 36:46.720 --> 36:49.840
1159
+ And I'd be like, no, that's not enough for me to trust it. But somehow for humans,
1160
+
1161
+ 36:50.560 --> 36:54.480
1162
+ we've figured out that somebody being able to do that is representative
1163
+
1164
+ 36:54.480 --> 36:59.840
1165
+ of them being able to do a lot of other things. And so I think somehow for humans,
1166
+
1167
+ 36:59.840 --> 37:05.200
1168
+ we figured out representative tests of what it means if you can do this, what you can really do.
1169
+
1170
+ 37:05.760 --> 37:09.840
1171
+ Of course, testing humans, humans don't want to be tested at all times. Self driving cars or
1172
+
1173
+ 37:09.840 --> 37:13.760
1174
+ robots could be tested more often probably, you can have replicas that get tested and are known
1175
+
1176
+ 37:13.760 --> 37:19.600
1177
+ to be identical because they use the same neural net and so forth. But still, I feel like we don't
1178
+
1179
+ 37:19.600 --> 37:25.040
1180
+ have this kind of unit tests or proper tests for robots. And I think there's something very
1181
+
1182
+ 37:25.040 --> 37:29.440
1183
+ interesting to be thought about there, especially as you update things, your software improves,
1184
+
1185
+ 37:29.440 --> 37:34.640
1186
+ you have a better self driving car suite, you update it. How do you know it's indeed more
1187
+
1188
+ 37:34.640 --> 37:41.440
1189
+ capable on everything than what you had before that you didn't have any bad things creep into it?
1190
+
1191
+ 37:41.440 --> 37:45.680
1192
+ So I think that's a very interesting direction of research that there is no real solution yet,
1193
+
1194
+ 37:45.680 --> 37:50.640
1195
+ except that somehow for humans, we do because we say, okay, you have a driving test, you passed,
1196
+
1197
+ 37:50.640 --> 37:55.760
1198
+ you can go on the road now and you might have accidents every, like, a million or 10 million miles,
1199
+
1200
+ 37:55.760 --> 38:01.520
1201
+ something pretty phenomenal compared to that short test that is being done.
1202
+
1203
+ 38:01.520 --> 38:06.000
1204
+ So let me ask, you've mentioned that Andrew Ng, by example,
1205
+
1206
+ 38:06.000 --> 38:11.440
1207
+ showed you the value of kindness. And do you think the space of
1208
+
1209
+ 38:11.440 --> 38:20.240
1210
+ policies, good policies for humans and for AI, is populated by policies that
1211
+
1212
+ 38:21.440 --> 38:28.880
1213
+ with kindness or ones that are the opposite, exploitation, even evil. So if you just look
1214
+
1215
+ 38:28.880 --> 38:34.400
1216
+ at the sea of policies we operate under as human beings, or if AI system had to operate in this
1217
+
1218
+ 38:34.400 --> 38:39.440
1219
+ real world, do you think it's really easy to find policies that are full of kindness,
1220
+
1221
+ 38:39.440 --> 38:44.480
1222
+ like we naturally fall into them? Or is it like a very hard optimization problem?
1223
+
1224
+ 38:47.920 --> 38:52.720
1225
+ I mean, there is kind of two optimizations happening for humans, right? So for humans,
1226
+
1227
+ 38:52.720 --> 38:57.440
1228
+ there's kind of the very long term optimization, which evolution has done for us. And we're kind of
1229
+
1230
+ 38:57.440 --> 39:02.640
1231
+ predisposed to like certain things. And that's in some sense, what makes our learning easier,
1232
+
1233
+ 39:02.640 --> 39:10.000
1234
+ because I mean, we know things like pain and hunger and thirst. And the fact that we know about those
1235
+
1236
+ 39:10.000 --> 39:13.840
1237
+ is not something that we were taught. That's kind of innate. When we're hungry, we're unhappy.
1238
+
1239
+ 39:13.840 --> 39:20.720
1240
+ When we're thirsty, we're unhappy. When we have pain, we're unhappy. And ultimately evolution
1241
+
1242
+ 39:20.720 --> 39:25.040
1243
+ built that into us to think about those things. And so I think there is a notion that it seems
1244
+
1245
+ 39:25.040 --> 39:33.840
1246
+ somehow humans evolved in general to prefer to get along in some ways. But at the same time,
1247
+
1248
+ 39:33.840 --> 39:43.040
1249
+ also to be very territorial and kind of centric to their own tribe. It seems like that's the kind
1250
+
1251
+ 39:43.040 --> 39:47.360
1252
+ of space we converged on to. I mean, I'm not an expert in anthropology, but it seems like we're
1253
+
1254
+ 39:47.360 --> 39:54.480
1255
+ very kind of good within our own tribe, but need to be taught to be nice to other tribes.
1256
+
1257
+ 39:54.480 --> 39:58.000
1258
+ Well, if you look at Steven Pinker, he highlights this pretty nicely in
1259
+
1260
+ 40:00.720 --> 40:05.520
1261
+ Better Angels of Our Nature, where he talks about violence decreasing over time consistently.
1262
+
1263
+ 40:05.520 --> 40:11.360
1264
+ So whatever tension, whatever teams we pick, it seems that the long arc of history goes
1265
+
1266
+ 40:11.360 --> 40:17.840
1267
+ towards us getting along more and more. So do you think that
1268
+
1269
+ 40:17.840 --> 40:27.280
1270
+ do you think it's possible to teach RL based robots this kind of kindness, this kind of ability
1271
+
1272
+ 40:27.280 --> 40:33.040
1273
+ to interact with humans, this kind of policy, even to let me ask, let me ask upon one, do you think
1274
+
1275
+ 40:33.040 --> 40:38.800
1276
+ it's possible to teach an RL based robot to love a human being and to inspire that human to love
1277
+
1278
+ 40:38.800 --> 40:48.080
1279
+ the robot back? So, like, an RL based algorithm that leads to a happy marriage? That's an interesting
1280
+
1281
+ 40:48.080 --> 40:56.080
1282
+ question. Maybe I'll answer it with another question, right? Because I mean, but I'll come
1283
+
1284
+ 40:56.080 --> 41:02.000
1285
+ back to it. So another question you can have is okay. I mean, how close does some people's
1286
+
1287
+ 41:02.000 --> 41:09.760
1288
+ happiness get from interacting with just a really nice dog? Like, I mean, dogs, you come home,
1289
+
1290
+ 41:09.760 --> 41:14.000
1291
+ that's what dogs do. They greet you. They're excited. It makes you happy when you come home
1292
+
1293
+ 41:14.000 --> 41:17.600
1294
+ to your dog. You're just like, okay, this is exciting. They're always happy when I'm here.
1295
+
1296
+ 41:18.160 --> 41:22.560
1297
+ I mean, if they don't greet you, because maybe whatever, your partner took them on a trip or
1298
+
1299
+ 41:22.560 --> 41:27.600
1300
+ something, you might not be nearly as happy when you get home, right? And so the kind of,
1301
+
1302
+ 41:27.600 --> 41:33.600
1303
+ it seems like the level of reasoning a dog has is pretty sophisticated, but then it's still not yet
1304
+
1305
+ 41:33.600 --> 41:38.240
1306
+ at the level of human reasoning. And so it seems like we don't even need to achieve human level
1307
+
1308
+ 41:38.240 --> 41:44.320
1309
+ reasoning to get like very strong affection with humans. And so my thinking is, why not, right?
1310
+
1311
+ 41:44.320 --> 41:51.360
1312
+ Why couldn't, with an AI, couldn't we achieve the kind of level of affection that humans feel
1313
+
1314
+ 41:51.360 --> 41:59.280
1315
+ among each other or with friendly animals and so forth? So question, is it a good thing for us
1316
+
1317
+ 41:59.280 --> 42:07.040
1318
+ or not? That's another thing, right? Because I mean, but I don't see why not. Why not? Yeah.
1319
+
1320
+ 42:07.040 --> 42:12.640
1321
+ So Elon Musk says love is the answer. Maybe he should say love is the objective function and
1322
+
1323
+ 42:12.640 --> 42:19.280
1324
+ then RL is the answer, right? Well, maybe. Oh, Pieter, thank you so much. I don't want to take
1325
+
1326
+ 42:19.280 --> 42:23.360
1327
+ up more of your time. Thank you so much for talking today. Well, thanks for coming by.
1328
+
1329
+ 42:23.360 --> 42:53.200
1330
+ Great to have you visit.
1331
+
vtt/episode_011_small.vtt ADDED
@@ -0,0 +1,3773 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.520
4
+ The following is a conversation with Jürgen Schmidhuber.
5
+
6
+ 00:03.520 --> 00:06.360
7
+ He's the co-director of the IDSIA Swiss AI lab
8
+
9
+ 00:06.360 --> 00:10.360
10
+ and a co-creator of long short term memory networks.
11
+
12
+ 00:10.360 --> 00:13.720
13
+ LSTMs are used in billions of devices today
14
+
15
+ 00:13.720 --> 00:17.400
16
+ for speech recognition, translation, and much more.
17
+
18
+ 00:17.400 --> 00:20.800
19
+ Over 30 years, he has proposed a lot of interesting
20
+
21
+ 00:20.800 --> 00:24.800
22
+ out of the box ideas on meta learning, adversarial networks,
23
+
24
+ 00:24.800 --> 00:28.720
25
+ computer vision, and even a formal theory of quote,
26
+
27
+ 00:28.720 --> 00:32.360
28
+ creativity, curiosity, and fun.
29
+
30
+ 00:32.360 --> 00:34.920
31
+ This conversation is part of the MIT course
32
+
33
+ 00:34.920 --> 00:36.520
34
+ on artificial general intelligence
35
+
36
+ 00:36.520 --> 00:38.840
37
+ and the artificial intelligence podcast.
38
+
39
+ 00:38.840 --> 00:41.960
40
+ If you enjoy it, subscribe on YouTube, iTunes,
41
+
42
+ 00:41.960 --> 00:43.960
43
+ or simply connect with me on Twitter
44
+
45
+ 00:43.960 --> 00:47.280
46
+ at Lex Friedman spelled F R I D.
47
+
48
+ 00:47.280 --> 00:51.480
49
+ And now here's my conversation with Jürgen Schmidhuber.
50
+
51
+ 00:53.080 --> 00:55.640
52
+ Early on you dreamed of AI systems
53
+
54
+ 00:55.640 --> 00:58.680
55
+ that self improve recursively.
56
+
57
+ 00:58.680 --> 01:01.440
58
+ When was that dream born?
59
+
60
+ 01:01.440 --> 01:02.840
61
+ When I was a baby.
62
+
63
+ 01:02.840 --> 01:04.000
64
+ No, that's not true.
65
+
66
+ 01:04.000 --> 01:06.200
67
+ When I was a teenager.
68
+
69
+ 01:06.200 --> 01:09.400
70
+ And what was the catalyst for that birth?
71
+
72
+ 01:09.400 --> 01:12.800
73
+ What was the thing that first inspired you?
74
+
75
+ 01:12.800 --> 01:15.000
76
+ When I was a boy, I...
77
+
78
+ 01:17.400 --> 01:19.880
79
+ I was thinking about what to do in my life
80
+
81
+ 01:19.880 --> 01:23.560
82
+ and then I thought the most exciting thing
83
+
84
+ 01:23.560 --> 01:27.160
85
+ is to solve the riddles of the universe.
86
+
87
+ 01:27.160 --> 01:30.720
88
+ And that means you have to become a physicist.
89
+
90
+ 01:30.720 --> 01:35.640
91
+ However, then I realized that there's something even grander.
92
+
93
+ 01:35.640 --> 01:39.680
94
+ You can try to build a machine.
95
+
96
+ 01:39.680 --> 01:41.920
97
+ That isn't really a machine any longer.
98
+
99
+ 01:41.920 --> 01:44.280
100
+ That learns to become a much better physicist
101
+
102
+ 01:44.280 --> 01:46.840
103
+ than I could ever hope to be.
104
+
105
+ 01:46.840 --> 01:50.120
106
+ And that's how I thought maybe I can multiply
107
+
108
+ 01:50.120 --> 01:54.320
109
+ my tiny little bit of creativity into infinity.
110
+
111
+ 01:54.320 --> 01:57.160
112
+ But ultimately, that creativity will be multiplied
113
+
114
+ 01:57.160 --> 01:59.160
115
+ to understand the universe around us.
116
+
117
+ 01:59.160 --> 02:05.640
118
+ That's the curiosity for that mystery that drove you.
119
+
120
+ 02:05.640 --> 02:08.320
121
+ Yes, so if you can build a machine
122
+
123
+ 02:08.320 --> 02:13.760
124
+ that learns to solve more and more complex problems
125
+
126
+ 02:13.760 --> 02:16.720
127
+ and more and more general problems over,
128
+
129
+ 02:16.720 --> 02:22.520
130
+ then you basically have solved all the problems.
131
+
132
+ 02:22.520 --> 02:25.960
133
+ At least all the solvable problems.
134
+
135
+ 02:25.960 --> 02:27.080
136
+ So how do you think...
137
+
138
+ 02:27.080 --> 02:31.440
139
+ What is the mechanism for that kind of general solver look like?
140
+
141
+ 02:31.440 --> 02:35.480
142
+ Obviously, we don't quite yet have one or know
143
+
144
+ 02:35.480 --> 02:37.040
145
+ how to build one, but you have ideas,
146
+
147
+ 02:37.040 --> 02:40.800
148
+ and you have had throughout your career several ideas about it.
149
+
150
+ 02:40.800 --> 02:43.600
151
+ So how do you think about that mechanism?
152
+
153
+ 02:43.600 --> 02:48.640
154
+ So in the 80s, I thought about how to build this machine
155
+
156
+ 02:48.640 --> 02:51.000
157
+ that learns to solve all these problems
158
+
159
+ 02:51.000 --> 02:54.120
160
+ that I cannot solve myself.
161
+
162
+ 02:54.120 --> 02:57.120
163
+ And I thought it is clear, it has to be a machine
164
+
165
+ 02:57.120 --> 03:00.880
166
+ that not only learns to solve this problem here
167
+
168
+ 03:00.880 --> 03:02.640
169
+ and this problem here,
170
+
171
+ 03:02.640 --> 03:06.240
172
+ but it also has to learn to improve
173
+
174
+ 03:06.240 --> 03:09.360
175
+ the learning algorithm itself.
176
+
177
+ 03:09.360 --> 03:12.480
178
+ So it has to have the learning algorithm
179
+
180
+ 03:12.480 --> 03:15.720
181
+ in a representation that allows it to inspect it
182
+
183
+ 03:15.720 --> 03:19.240
184
+ and modify it so that it can come up
185
+
186
+ 03:19.240 --> 03:22.080
187
+ with a better learning algorithm.
188
+
189
+ 03:22.080 --> 03:25.680
190
+ So I called that meta learning, learning to learn
191
+
192
+ 03:25.680 --> 03:28.040
193
+ and recursive self improvement.
194
+
195
+ 03:28.040 --> 03:29.840
196
+ That is really the pinnacle of that,
197
+
198
+ 03:29.840 --> 03:35.960
199
+ where you then not only learn how to improve
200
+
201
+ 03:35.960 --> 03:37.480
202
+ on that problem and on that,
203
+
204
+ 03:37.480 --> 03:41.080
205
+ but you also improve the way the machine improves
206
+
207
+ 03:41.080 --> 03:43.480
208
+ and you also improve the way it improves the way
209
+
210
+ 03:43.480 --> 03:45.720
211
+ it improves itself.
212
+
213
+ 03:45.720 --> 03:48.560
214
+ And that was my 1987 diploma thesis,
215
+
216
+ 03:48.560 --> 03:53.200
217
+ which was all about that hierarchy of meta learners
218
+
219
+ 03:53.200 --> 03:57.240
220
+ that have no computational limits
221
+
222
+ 03:57.240 --> 03:59.920
223
+ except for the well known limits
224
+
225
+ 03:59.920 --> 04:03.160
226
+ that Gödel identified in 1931
227
+
228
+ 04:03.160 --> 04:05.640
229
+ and for the limits of physics.
230
+
231
+ 04:06.480 --> 04:10.040
232
+ In the recent years, meta learning has gained popularity
233
+
234
+ 04:10.040 --> 04:12.760
235
+ in a specific kind of form.
236
+
237
+ 04:12.760 --> 04:16.000
238
+ You've talked about how that's not really meta learning
239
+
240
+ 04:16.000 --> 04:21.000
241
+ with neural networks, that's more basic transfer learning.
242
+
243
+ 04:21.480 --> 04:22.720
244
+ Can you talk about the difference
245
+
246
+ 04:22.720 --> 04:25.440
247
+ between the big general meta learning
248
+
249
+ 04:25.440 --> 04:27.960
250
+ and a more narrow sense of meta learning
251
+
252
+ 04:27.960 --> 04:30.880
253
+ the way it's used today, the way it's talked about today?
254
+
255
+ 04:30.880 --> 04:33.440
256
+ Let's take the example of a deep neural network
257
+
258
+ 04:33.440 --> 04:37.240
259
+ that has learned to classify images.
260
+
261
+ 04:37.240 --> 04:40.080
262
+ And maybe you have trained that network
263
+
264
+ 04:40.080 --> 04:43.800
265
+ on 100 different databases of images.
266
+
267
+ 04:43.800 --> 04:48.120
268
+ And now a new database comes along
269
+
270
+ 04:48.120 --> 04:52.000
271
+ and you want to quickly learn the new thing as well.
272
+
273
+ 04:53.400 --> 04:57.720
274
+ So one simple way of doing that is you take the network
275
+
276
+ 04:57.720 --> 05:02.440
277
+ which already knows 100 types of databases
278
+
279
+ 05:02.440 --> 05:06.320
280
+ and then you just take the top layer of that
281
+
282
+ 05:06.320 --> 05:11.320
283
+ and you retrain that using the new labeled data
284
+
285
+ 05:11.320 --> 05:14.720
286
+ that you have in the new image database.
287
+
288
+ 05:14.720 --> 05:17.320
289
+ And then it turns out that it really, really quickly
290
+
291
+ 05:17.320 --> 05:20.560
292
+ can learn that too, one shot basically,
293
+
294
+ 05:20.560 --> 05:24.280
295
+ because from the first 100 data sets,
296
+
297
+ 05:24.280 --> 05:27.520
298
+ it already has learned so much about computer vision
299
+
300
+ 05:27.520 --> 05:31.840
301
+ that it can reuse that and that is then almost good enough
302
+
303
+ 05:31.840 --> 05:34.240
304
+ to solve the new tasks except you need a little bit
305
+
306
+ 05:34.240 --> 05:37.040
307
+ of adjustment on the top.
308
+
309
+ 05:37.040 --> 05:40.200
310
+ So that is transfer learning
311
+
312
+ 05:40.200 --> 05:43.480
313
+ and it has been done in principle for many decades.
314
+
315
+ 05:43.480 --> 05:45.680
316
+ People have done similar things for decades.
317
+
318
+ 05:47.360 --> 05:49.880
319
+ Meta learning, true meta learning is about
320
+
321
+ 05:49.880 --> 05:53.880
322
+ having the learning algorithm itself
323
+
324
+ 05:54.440 --> 05:59.440
325
+ open to introspection by the system that is using it
326
+
327
+ 06:00.440 --> 06:03.760
328
+ and also open to modification
329
+
330
+ 06:03.760 --> 06:07.200
331
+ such that the learning system has an opportunity
332
+
333
+ 06:07.200 --> 06:11.400
334
+ to modify any part of the learning algorithm
335
+
336
+ 06:11.400 --> 06:16.000
337
+ and then evaluate the consequences of that modification
338
+
339
+ 06:16.000 --> 06:21.000
340
+ and then learn from that to create a better learning algorithm
341
+
342
+ 06:22.000 --> 06:23.960
343
+ and so on recursively.
344
+
345
+ 06:24.960 --> 06:27.680
346
+ So that's a very different animal
347
+
348
+ 06:27.680 --> 06:32.680
349
+ where you are opening the space of possible learning algorithms
350
+
351
+ 06:32.680 --> 06:35.520
352
+ to the learning system itself.
353
+
354
+ 06:35.520 --> 06:39.040
355
+ Right, so you've like in the 2004 paper,
356
+
357
+ 06:39.040 --> 06:42.440
358
+ you describe Gödel machines and programs
359
+
360
+ 06:42.440 --> 06:44.520
361
+ that rewrite themselves, right?
362
+
363
+ 06:44.520 --> 06:47.520
364
+ Philosophically and even in your paper mathematically,
365
+
366
+ 06:47.520 --> 06:50.000
367
+ these are really compelling ideas,
368
+
369
+ 06:50.000 --> 06:55.000
370
+ but practically, do you see these self referential programs
371
+
372
+ 06:55.320 --> 06:59.400
373
+ being successful in the near term to having an impact
374
+
375
+ 06:59.400 --> 07:03.040
376
+ where sort of it demonstrates to the world
377
+
378
+ 07:03.040 --> 07:08.040
379
+ that this direction is a good one to pursue in the near term?
380
+
381
+ 07:08.680 --> 07:11.400
382
+ Yes, we had these two different types
383
+
384
+ 07:11.400 --> 07:13.440
385
+ of fundamental research,
386
+
387
+ 07:13.440 --> 07:15.840
388
+ how to build a universal problem solver,
389
+
390
+ 07:15.840 --> 07:19.840
391
+ one basically exploiting proof search
392
+
393
+ 07:23.000 --> 07:24.960
394
+ and things like that that you need to come up
395
+
396
+ 07:24.960 --> 07:29.960
397
+ with asymptotically optimal, theoretically optimal
398
+
399
+ 07:30.320 --> 07:33.200
400
+ self improvers and problem solvers.
401
+
402
+ 07:34.200 --> 07:39.200
403
+ However, one has to admit that through this proof search
404
+
405
+ 07:40.640 --> 07:43.640
406
+ comes in an additive constant,
407
+
408
+ 07:43.640 --> 07:46.800
409
+ an overhead, an additive overhead
410
+
411
+ 07:46.800 --> 07:51.800
412
+ that vanishes in comparison to what you have to do
413
+
414
+ 07:51.800 --> 07:53.960
415
+ to solve large problems.
416
+
417
+ 07:53.960 --> 07:56.920
418
+ However, for many of the small problems
419
+
420
+ 07:56.920 --> 07:59.920
421
+ that we want to solve in our everyday life,
422
+
423
+ 07:59.920 --> 08:02.440
424
+ we cannot ignore this constant overhead.
425
+
426
+ 08:02.440 --> 08:07.440
427
+ And that's why we also have been doing other things,
428
+
429
+ 08:07.440 --> 08:11.160
430
+ non universal things such as recurrent neural networks
431
+
432
+ 08:11.160 --> 08:14.360
433
+ which are trained by gradient descent
434
+
435
+ 08:14.360 --> 08:17.600
436
+ and local search techniques which aren't universal at all,
437
+
438
+ 08:17.600 --> 08:20.280
439
+ which aren't provably optimal at all
440
+
441
+ 08:20.280 --> 08:22.000
442
+ like the other stuff that we did,
443
+
444
+ 08:22.000 --> 08:24.600
445
+ but which are much more practical
446
+
447
+ 08:24.600 --> 08:27.920
448
+ as long as we only want to solve the small problems
449
+
450
+ 08:27.920 --> 08:32.920
451
+ that we are typically trying to solve in this environment here.
452
+
453
+ 08:34.680 --> 08:38.200
454
+ So the universal problem solvers like the Gödel machine
455
+
456
+ 08:38.200 --> 08:41.320
457
+ but also Markus Hutter's fastest way
458
+
459
+ 08:41.320 --> 08:43.560
460
+ of solving all possible problems,
461
+
462
+ 08:43.560 --> 08:47.360
463
+ which he developed around 2002 in my lab,
464
+
465
+ 08:47.360 --> 08:51.280
466
+ they are associated with these constant overheads
467
+
468
+ 08:51.280 --> 08:53.280
469
+ for proof search, which guarantees
470
+
471
+ 08:53.280 --> 08:55.480
472
+ that the thing that you're doing is optimal.
473
+
474
+ 08:55.480 --> 08:59.880
475
+ For example, there is this fastest way
476
+
477
+ 08:59.880 --> 09:03.880
478
+ of solving all problems with a computable solution
479
+
480
+ 09:03.880 --> 09:05.880
481
+ which is due to Markus Hutter.
482
+
483
+ 09:05.880 --> 09:10.880
484
+ And to explain what's going on there,
485
+
486
+ 09:10.880 --> 09:13.040
487
+ let's take traveling salesman problems.
488
+
489
+ 09:14.240 --> 09:16.160
490
+ With traveling salesman problems,
491
+
492
+ 09:16.160 --> 09:20.080
493
+ you have a number of cities, N cities,
494
+
495
+ 09:20.080 --> 09:22.480
496
+ and you try to find the shortest path
497
+
498
+ 09:22.480 --> 09:26.480
499
+ through all these cities without visiting any city twice.
500
+
501
+ 09:28.480 --> 09:31.040
502
+ And nobody knows the fastest way
503
+
504
+ 09:31.040 --> 09:35.040
505
+ of solving traveling salesman problems, TSPs,
506
+
507
+ 09:37.520 --> 09:40.480
508
+ but let's assume there is a method of solving them
509
+
510
+ 09:40.480 --> 09:44.480
511
+ within N to the five operations
512
+
513
+ 09:44.480 --> 09:48.560
514
+ where N is the number of cities.
515
+
516
+ 09:50.160 --> 09:54.560
517
+ Then the universal method of Markus
518
+
519
+ 09:54.560 --> 09:58.560
520
+ is going to solve the same traveling salesman problem
521
+
522
+ 09:58.560 --> 10:02.080
523
+ also within N to the five steps,
524
+
525
+ 10:02.080 --> 10:06.360
526
+ plus O of one, plus a constant number of steps
527
+
528
+ 10:06.360 --> 10:09.240
529
+ that you need for the proof searcher,
530
+
531
+ 10:09.240 --> 10:13.800
532
+ which you need to show that this particular
533
+
534
+ 10:13.800 --> 10:17.240
535
+ class of problems that traveling salesman problems
536
+
537
+ 10:17.240 --> 10:19.360
538
+ can be solved within a certain time bound,
539
+
540
+ 10:20.520 --> 10:24.400
541
+ within order N to the five steps, basically.
542
+
543
+ 10:24.400 --> 10:28.520
544
+ And this additive constant doesn't care for N,
545
+
546
+ 10:28.520 --> 10:32.400
547
+ which means as N is getting larger and larger,
548
+
549
+ 10:32.400 --> 10:34.880
550
+ as you have more and more cities,
551
+
552
+ 10:34.880 --> 10:38.600
553
+ the constant overhead pales and comparison.
554
+
555
+ 10:38.600 --> 10:44.120
556
+ And that means that almost all large problems are solved
557
+
558
+ 10:44.120 --> 10:46.520
559
+ in the best possible way already today.
560
+
561
+ 10:46.520 --> 10:50.480
562
+ We already have a universal problem solver like that.
563
+
564
+ 10:50.480 --> 10:54.520
565
+ However, it's not practical because the overhead,
566
+
567
+ 10:54.520 --> 10:57.440
568
+ the constant overhead is so large
569
+
570
+ 10:57.440 --> 11:00.200
571
+ that for the small kinds of problems
572
+
573
+ 11:00.200 --> 11:04.560
574
+ that we want to solve in this little biosphere.
575
+
576
+ 11:04.560 --> 11:06.360
577
+ By the way, when you say small,
578
+
579
+ 11:06.360 --> 11:08.600
580
+ you're talking about things that fall
581
+
582
+ 11:08.600 --> 11:10.880
583
+ within the constraints of our computational systems.
584
+
585
+ 11:10.880 --> 11:14.280
586
+ So they can seem quite large to us mere humans.
587
+
588
+ 11:14.280 --> 11:15.360
589
+ That's right, yeah.
590
+
591
+ 11:15.360 --> 11:19.000
592
+ So they seem large and even unsolvable
593
+
594
+ 11:19.000 --> 11:21.000
595
+ in a practical sense today,
596
+
597
+ 11:21.000 --> 11:24.760
598
+ but they are still small compared to almost all problems
599
+
600
+ 11:24.760 --> 11:28.480
601
+ because almost all problems are large problems,
602
+
603
+ 11:28.480 --> 11:30.840
604
+ which are much larger than any constant.
605
+
606
+ 11:31.920 --> 11:34.520
607
+ Do you find it useful as a person
608
+
609
+ 11:34.520 --> 11:38.680
610
+ who has dreamed of creating a general learning system,
611
+
612
+ 11:38.680 --> 11:39.880
613
+ has worked on creating one,
614
+
615
+ 11:39.880 --> 11:42.160
616
+ has done a lot of interesting ideas there
617
+
618
+ 11:42.160 --> 11:46.360
619
+ to think about P versus NP,
620
+
621
+ 11:46.360 --> 11:50.800
622
+ this formalization of how hard problems are,
623
+
624
+ 11:50.800 --> 11:52.360
625
+ how they scale,
626
+
627
+ 11:52.360 --> 11:55.200
628
+ this kind of worst case analysis type of thinking.
629
+
630
+ 11:55.200 --> 11:56.840
631
+ Do you find that useful?
632
+
633
+ 11:56.840 --> 11:59.720
634
+ Or is it only just a mathematical,
635
+
636
+ 12:00.560 --> 12:02.640
637
+ it's a set of mathematical techniques
638
+
639
+ 12:02.640 --> 12:05.760
640
+ to give you intuition about what's good and bad?
641
+
642
+ 12:05.760 --> 12:09.440
643
+ So P versus NP, that's super interesting
644
+
645
+ 12:09.440 --> 12:11.800
646
+ from a theoretical point of view.
647
+
648
+ 12:11.800 --> 12:14.560
649
+ And in fact, as you are thinking about that problem,
650
+
651
+ 12:14.560 --> 12:17.280
652
+ you can also get inspiration
653
+
654
+ 12:17.280 --> 12:21.280
655
+ for better practical problem solvers.
656
+
657
+ 12:21.280 --> 12:23.320
658
+ On the other hand, we have to admit
659
+
660
+ 12:23.320 --> 12:24.560
661
+ that at the moment,
662
+
663
+ 12:24.560 --> 12:28.360
664
+ the best practical problem solvers
665
+
666
+ 12:28.360 --> 12:30.120
667
+ for all kinds of problems
668
+
669
+ 12:30.120 --> 12:33.880
670
+ that we are now solving through what is called AI at the moment,
671
+
672
+ 12:33.880 --> 12:36.240
673
+ they are not of the kind
674
+
675
+ 12:36.240 --> 12:38.800
676
+ that is inspired by these questions.
677
+
678
+ 12:38.800 --> 12:42.680
679
+ There we are using general purpose computers,
680
+
681
+ 12:42.680 --> 12:44.840
682
+ such as recurrent neural networks,
683
+
684
+ 12:44.840 --> 12:46.680
685
+ but we have a search technique,
686
+
687
+ 12:46.680 --> 12:50.320
688
+ which is just local search gradient descent
689
+
690
+ 12:50.320 --> 12:51.960
691
+ to try to find a program
692
+
693
+ 12:51.960 --> 12:54.400
694
+ that is running on these recurrent networks,
695
+
696
+ 12:54.400 --> 12:58.160
697
+ such that it can solve some interesting problems,
698
+
699
+ 12:58.160 --> 13:01.920
700
+ such as speech recognition or machine translation
701
+
702
+ 13:01.920 --> 13:03.200
703
+ and something like that.
704
+
705
+ 13:03.200 --> 13:06.480
706
+ And there is very little theory
707
+
708
+ 13:06.480 --> 13:09.720
709
+ behind the best solutions that we have at the moment
710
+
711
+ 13:09.720 --> 13:10.800
712
+ that can do that.
713
+
714
+ 13:10.800 --> 13:12.640
715
+ Do you think that needs to change?
716
+
717
+ 13:12.640 --> 13:15.120
718
+ Do you think that will change or can we go,
719
+
720
+ 13:15.120 --> 13:17.120
721
+ can we create a general intelligence systems
722
+
723
+ 13:17.120 --> 13:19.200
724
+ without ever really proving
725
+
726
+ 13:19.200 --> 13:20.600
727
+ that that system is intelligent
728
+
729
+ 13:20.600 --> 13:22.560
730
+ in some kind of mathematical way,
731
+
732
+ 13:22.560 --> 13:24.960
733
+ solving machine translation perfectly
734
+
735
+ 13:24.960 --> 13:26.320
736
+ or something like that,
737
+
738
+ 13:26.320 --> 13:29.160
739
+ within some kind of syntactic definition of a language?
740
+
741
+ 13:29.160 --> 13:31.120
742
+ Or can we just be super impressed
743
+
744
+ 13:31.120 --> 13:35.080
745
+ by the thing working extremely well and that's sufficient?
746
+
747
+ 13:35.080 --> 13:36.720
748
+ There's an old saying,
749
+
750
+ 13:36.720 --> 13:39.360
751
+ and I don't know who brought it up first,
752
+
753
+ 13:39.360 --> 13:42.440
754
+ which says there's nothing more practical
755
+
756
+ 13:42.440 --> 13:43.680
757
+ than a good theory.
758
+
759
+ 13:43.680 --> 13:48.680
760
+ And a good theory of problem solving
761
+
762
+ 13:52.760 --> 13:55.560
763
+ under limited resources like here in this universe
764
+
765
+ 13:55.560 --> 13:57.000
766
+ or on this little planet
767
+
768
+ 13:58.480 --> 14:01.800
769
+ has to take into account these limited resources.
770
+
771
+ 14:01.800 --> 14:06.800
772
+ And so probably there is lacking a theory
773
+
774
+ 14:08.040 --> 14:10.800
775
+ which is related to what we already have,
776
+
777
+ 14:10.800 --> 14:14.440
778
+ these asymptotically optimal problem solvers,
779
+
780
+ 14:14.440 --> 14:18.560
781
+ which tells us what we need in addition to that
782
+
783
+ 14:18.560 --> 14:21.760
784
+ to come up with a practically optimal problem solver.
785
+
786
+ 14:21.760 --> 14:26.760
787
+ So I believe we will have something like that
788
+
789
+ 14:27.080 --> 14:29.720
790
+ and maybe just a few little tiny twists
791
+
792
+ 14:29.720 --> 14:34.320
793
+ are necessary to change what we already have
794
+
795
+ 14:34.320 --> 14:36.360
796
+ to come up with that as well.
797
+
798
+ 14:36.360 --> 14:37.800
799
+ As long as we don't have that,
800
+
801
+ 14:37.800 --> 14:42.600
802
+ we admit that we are taking suboptimal ways
803
+
804
+ 14:42.600 --> 14:46.040
805
+ and recurrent neural networks and long short term memory
806
+
807
+ 14:46.040 --> 14:50.440
808
+ equipped with local search techniques
809
+
810
+ 14:50.440 --> 14:53.560
811
+ and we are happy that it works better
812
+
813
+ 14:53.560 --> 14:55.480
814
+ than any competing methods,
815
+
816
+ 14:55.480 --> 15:00.480
817
+ but that doesn't mean that we think we are done.
818
+
819
+ 15:00.800 --> 15:05.040
820
+ You've said that an AGI system will ultimately be a simple one,
821
+
822
+ 15:05.040 --> 15:08.000
823
+ a general intelligence system will ultimately be a simple one,
824
+
825
+ 15:08.000 --> 15:10.240
826
+ maybe a pseudo code of a few lines
827
+
828
+ 15:10.240 --> 15:11.840
829
+ will be able to describe it.
830
+
831
+ 15:11.840 --> 15:16.760
832
+ Can you talk through your intuition behind this idea,
833
+
834
+ 15:16.760 --> 15:21.760
835
+ why you feel that at its core intelligence
836
+
837
+ 15:22.120 --> 15:25.560
838
+ is a simple algorithm?
839
+
840
+ 15:26.920 --> 15:31.680
841
+ Experience tells us that the stuff that works best
842
+
843
+ 15:31.680 --> 15:33.120
844
+ is really simple.
845
+
846
+ 15:33.120 --> 15:37.640
847
+ So the asymptotically optimal ways of solving problems,
848
+
849
+ 15:37.640 --> 15:38.800
850
+ if you look at them,
851
+
852
+ 15:38.800 --> 15:41.800
853
+ they're just a few lines of code, it's really true.
854
+
855
+ 15:41.800 --> 15:44.000
856
+ Although they are these amazing properties,
857
+
858
+ 15:44.000 --> 15:45.760
859
+ just a few lines of code,
860
+
861
+ 15:45.760 --> 15:50.760
862
+ then the most promising and most useful practical things
863
+
864
+ 15:53.760 --> 15:57.760
865
+ maybe don't have this proof of optimality associated with them.
866
+
867
+ 15:57.760 --> 16:00.840
868
+ However, they are also just a few lines of code.
869
+
870
+ 16:00.840 --> 16:05.040
871
+ The most successful recurrent neural networks,
872
+
873
+ 16:05.040 --> 16:08.360
874
+ you can write them down in five lines of pseudo code.
875
+
876
+ 16:08.360 --> 16:10.920
877
+ That's a beautiful, almost poetic idea,
878
+
879
+ 16:10.920 --> 16:15.600
880
+ but what you're describing there
881
+
882
+ 16:15.600 --> 16:17.400
883
+ is the lines of pseudo code
884
+
885
+ 16:17.400 --> 16:20.600
886
+ are sitting on top of layers and layers of abstractions,
887
+
888
+ 16:20.600 --> 16:22.240
889
+ in a sense.
890
+
891
+ 16:22.240 --> 16:25.040
892
+ So you're saying at the very top,
893
+
894
+ 16:25.040 --> 16:30.040
895
+ it'll be a beautifully written sort of algorithm,
896
+
897
+ 16:31.120 --> 16:33.960
898
+ but do you think that there's many layers of abstractions
899
+
900
+ 16:33.960 --> 16:36.880
901
+ we have to first learn to construct?
902
+
903
+ 16:36.880 --> 16:38.280
904
+ Yeah, of course.
905
+
906
+ 16:38.280 --> 16:42.640
907
+ We are building on all these great abstractions
908
+
909
+ 16:42.640 --> 16:46.040
910
+ that people have invented over the millennia,
911
+
912
+ 16:46.040 --> 16:51.040
913
+ such as matrix multiplications and real numbers
914
+
915
+ 16:51.600 --> 16:56.600
916
+ and basic arithmetics and calculus and derivations
917
+
918
+ 16:58.720 --> 17:03.320
919
+ of error functions and derivatives of error functions
920
+
921
+ 17:03.320 --> 17:04.320
922
+ and stuff like that.
923
+
924
+ 17:05.440 --> 17:10.440
925
+ So without that language that greatly simplifies
926
+
927
+ 17:10.440 --> 17:13.880
928
+ our way of thinking about these problems,
929
+
930
+ 17:13.880 --> 17:14.840
931
+ we couldn't do anything.
932
+
933
+ 17:14.840 --> 17:16.560
934
+ So in that sense, as always,
935
+
936
+ 17:16.560 --> 17:19.600
937
+ we are standing on the shoulders of the giants
938
+
939
+ 17:19.600 --> 17:24.600
940
+ who in the past simplified the problem of problem solving
941
+
942
+ 17:25.520 --> 17:30.000
943
+ so much that now we have a chance to do the final step.
944
+
945
+ 17:30.000 --> 17:32.120
946
+ So the final step will be a simple one.
947
+
948
+ 17:34.000 --> 17:36.760
949
+ If we take a step back through all of human civilization
950
+
951
+ 17:36.760 --> 17:38.360
952
+ and just the universe in general,
953
+
954
+ 17:38.360 --> 17:41.440
955
+ how do you think about evolution?
956
+
957
+ 17:41.440 --> 17:45.400
958
+ And what if creating a universe is required
959
+
960
+ 17:45.400 --> 17:47.320
961
+ to achieve this final step?
962
+
963
+ 17:47.320 --> 17:50.920
964
+ What if going through the very painful
965
+
966
+ 17:50.920 --> 17:53.840
967
+ and inefficient process of evolution is needed
968
+
969
+ 17:53.840 --> 17:55.880
970
+ to come up with this set of abstractions
971
+
972
+ 17:55.880 --> 17:57.800
973
+ that ultimately lead to intelligence?
974
+
975
+ 17:57.800 --> 18:00.800
976
+ Do you think there's a shortcut
977
+
978
+ 18:00.800 --> 18:04.640
979
+ or do you think we have to create something like our universe
980
+
981
+ 18:04.640 --> 18:09.480
982
+ in order to create something like human level intelligence?
983
+
984
+ 18:09.480 --> 18:13.160
985
+ So far, the only example we have is this one,
986
+
987
+ 18:13.160 --> 18:15.160
988
+ this universe in which we are living.
989
+
990
+ 18:15.160 --> 18:16.360
991
+ You think you can do better?
992
+
993
+ 18:20.880 --> 18:25.000
994
+ Maybe not, but we are part of this whole process.
995
+
996
+ 18:25.000 --> 18:30.000
997
+ So apparently, so it might be the case
998
+
999
+ 18:30.000 --> 18:32.160
1000
+ that the code that runs the universe
1001
+
1002
+ 18:32.160 --> 18:33.720
1003
+ is really, really simple.
1004
+
1005
+ 18:33.720 --> 18:36.640
1006
+ Everything points to that possibility
1007
+
1008
+ 18:36.640 --> 18:39.960
1009
+ because gravity and other basic forces
1010
+
1011
+ 18:39.960 --> 18:44.120
1012
+ are really simple laws that can be easily described,
1013
+
1014
+ 18:44.120 --> 18:47.080
1015
+ also in just a few lines of code, basically.
1016
+
1017
+ 18:47.080 --> 18:52.080
1018
+ And then there are these other events
1019
+
1020
+ 18:52.200 --> 18:55.080
1021
+ that the apparently random events
1022
+
1023
+ 18:55.080 --> 18:56.560
1024
+ in the history of the universe,
1025
+
1026
+ 18:56.560 --> 18:58.800
1027
+ which as far as we know at the moment
1028
+
1029
+ 18:58.800 --> 19:00.720
1030
+ don't have a compact code,
1031
+
1032
+ 19:00.720 --> 19:03.240
1033
+ but who knows, maybe somebody in the near future
1034
+
1035
+ 19:03.240 --> 19:06.800
1036
+ is going to figure out the pseudo random generator,
1037
+
1038
+ 19:06.800 --> 19:11.800
1039
+ which is computing whether the measurement of that
1040
+
1041
+ 19:13.520 --> 19:15.920
1042
+ spin up or down thing here
1043
+
1044
+ 19:15.920 --> 19:18.440
1045
+ is going to be positive or negative.
1046
+
1047
+ 19:18.440 --> 19:19.880
1048
+ Underline quantum mechanics.
1049
+
1050
+ 19:19.880 --> 19:20.720
1051
+ Yes, so.
1052
+
1053
+ 19:20.720 --> 19:23.160
1054
+ Do you ultimately think quantum mechanics
1055
+
1056
+ 19:23.160 --> 19:25.200
1057
+ is a pseudo random number generator?
1058
+
1059
+ 19:25.200 --> 19:26.920
1060
+ So it's all deterministic.
1061
+
1062
+ 19:26.920 --> 19:28.760
1063
+ There's no randomness in our universe.
1064
+
1065
+ 19:30.400 --> 19:31.800
1066
+ Does God play dice?
1067
+
1068
+ 19:31.800 --> 19:34.080
1069
+ So a couple of years ago,
1070
+
1071
+ 19:34.080 --> 19:39.080
1072
+ a famous physicist, quantum physicist, Anton Zeilinger,
1073
+
1074
+ 19:39.080 --> 19:41.600
1075
+ he wrote an essay in Nature,
1076
+
1077
+ 19:41.600 --> 19:44.280
1078
+ and it started more or less like that.
1079
+
1080
+ 19:46.720 --> 19:51.720
1081
+ One of the fundamental insights of the 20th century
1082
+
1083
+ 19:53.280 --> 19:58.280
1084
+ was that the universe is fundamentally random
1085
+
1086
+ 19:58.280 --> 20:02.760
1087
+ on the quantum level, and that whenever
1088
+
1089
+ 20:03.760 --> 20:06.720
1090
+ you measure spin up or down or something like that,
1091
+
1092
+ 20:06.720 --> 20:10.720
1093
+ a new bit of information enters the history of the universe.
1094
+
1095
+ 20:13.440 --> 20:14.680
1096
+ And while I was reading that,
1097
+
1098
+ 20:14.680 --> 20:18.000
1099
+ I was already typing the response
1100
+
1101
+ 20:18.000 --> 20:20.280
1102
+ and they had to publish it because I was right,
1103
+
1104
+ 20:21.560 --> 20:25.560
1105
+ that there is no evidence, no physical evidence for that.
1106
+
1107
+ 20:25.560 --> 20:28.440
1108
+ So there's an alternative explanation
1109
+
1110
+ 20:28.440 --> 20:31.240
1111
+ where everything that we consider random
1112
+
1113
+ 20:31.240 --> 20:33.800
1114
+ is actually pseudo random,
1115
+
1116
+ 20:33.800 --> 20:38.800
1117
+ such as the decimal expansion of pi, 3.141 and so on,
1118
+
1119
+ 20:39.400 --> 20:42.120
1120
+ which looks random, but isn't.
1121
+
1122
+ 20:42.120 --> 20:47.120
1123
+ So pi is interesting because every three digit sequence,
1124
+
1125
+ 20:47.720 --> 20:51.720
1126
+ every sequence of three digits appears roughly
1127
+
1128
+ 20:51.720 --> 20:56.720
1129
+ one in a thousand times, and every five digit sequence
1130
+
1131
+ 20:57.360 --> 21:00.760
1132
+ appears roughly one in 10,000 times.
1133
+
1134
+ 21:00.760 --> 21:02.760
1135
+ What do you expect?
1136
+
1137
+ 21:02.760 --> 21:06.760
1138
+ If it was random, but there's a very short algorithm,
1139
+
1140
+ 21:06.760 --> 21:09.120
1141
+ a short program that computes all of that.
1142
+
1143
+ 21:09.120 --> 21:11.200
1144
+ So it's extremely compressible.
1145
+
1146
+ 21:11.200 --> 21:13.120
1147
+ And who knows, maybe tomorrow somebody,
1148
+
1149
+ 21:13.120 --> 21:15.360
1150
+ some grad student at CERN goes back
1151
+
1152
+ 21:15.360 --> 21:19.120
1153
+ over all these data points, beta decay,
1154
+
1155
+ 21:19.120 --> 21:21.760
1156
+ and whatever, and figures out, oh,
1157
+
1158
+ 21:21.760 --> 21:25.760
1159
+ it's the second billion digits of pi or something like that.
1160
+
1161
+ 21:25.760 --> 21:28.840
1162
+ We don't have any fundamental reason at the moment
1163
+
1164
+ 21:28.840 --> 21:33.600
1165
+ to believe that this is truly random
1166
+
1167
+ 21:33.600 --> 21:36.440
1168
+ and not just a deterministic video game.
1169
+
1170
+ 21:36.440 --> 21:38.680
1171
+ If it was a deterministic video game,
1172
+
1173
+ 21:38.680 --> 21:40.360
1174
+ it would be much more beautiful
1175
+
1176
+ 21:40.360 --> 21:44.160
1177
+ because beauty is simplicity.
1178
+
1179
+ 21:44.160 --> 21:47.560
1180
+ And many of the basic laws of the universe
1181
+
1182
+ 21:47.560 --> 21:51.560
1183
+ like gravity and the other basic forces are very simple.
1184
+
1185
+ 21:51.560 --> 21:55.560
1186
+ So very short programs can explain what these are doing.
1187
+
1188
+ 21:56.560 --> 22:00.560
1189
+ And it would be awful and ugly.
1190
+
1191
+ 22:00.560 --> 22:01.560
1192
+ The universe would be ugly.
1193
+
1194
+ 22:01.560 --> 22:03.560
1195
+ The history of the universe would be ugly
1196
+
1197
+ 22:03.560 --> 22:06.560
1198
+ if for the extra things, the random,
1199
+
1200
+ 22:06.560 --> 22:10.560
1201
+ the seemingly random data points that we get all the time
1202
+
1203
+ 22:10.560 --> 22:15.560
1204
+ that we really need a huge number of extra bits
1205
+
1206
+ 22:15.560 --> 22:21.560
1207
+ to describe all these extra bits of information.
1208
+
1209
+ 22:22.560 --> 22:25.560
1210
+ So as long as we don't have evidence
1211
+
1212
+ 22:25.560 --> 22:27.560
1213
+ that there is no short program
1214
+
1215
+ 22:27.560 --> 22:32.560
1216
+ that computes the entire history of the entire universe,
1217
+
1218
+ 22:32.560 --> 22:38.560
1219
+ we are, as scientists, compelled to look further
1220
+
1221
+ 22:38.560 --> 22:41.560
1222
+ for that shortest program.
1223
+
1224
+ 22:41.560 --> 22:46.560
1225
+ Your intuition says there exists a program
1226
+
1227
+ 22:46.560 --> 22:50.560
1228
+ that can backtrack to the creation of the universe.
1229
+
1230
+ 22:50.560 --> 22:53.560
1231
+ So it can take the shortest path to the creation of the universe.
1232
+
1233
+ 22:53.560 --> 22:57.560
1234
+ Yes, including all the entanglement things
1235
+
1236
+ 22:57.560 --> 23:01.560
1237
+ and all the spin up and down measurements
1238
+
1239
+ 23:01.560 --> 23:09.560
1240
+ that have been taken place since 13.8 billion years ago.
1241
+
1242
+ 23:09.560 --> 23:14.560
1243
+ So we don't have a proof that it is random.
1244
+
1245
+ 23:14.560 --> 23:19.560
1246
+ We don't have a proof that it is compressible to a short program.
1247
+
1248
+ 23:19.560 --> 23:21.560
1249
+ But as long as we don't have that proof,
1250
+
1251
+ 23:21.560 --> 23:24.560
1252
+ we are obliged as scientists to keep looking
1253
+
1254
+ 23:24.560 --> 23:26.560
1255
+ for that simple explanation.
1256
+
1257
+ 23:26.560 --> 23:27.560
1258
+ Absolutely.
1259
+
1260
+ 23:27.560 --> 23:30.560
1261
+ So you said simplicity is beautiful or beauty is simple.
1262
+
1263
+ 23:30.560 --> 23:32.560
1264
+ Either one works.
1265
+
1266
+ 23:32.560 --> 23:36.560
1267
+ But you also work on curiosity, discovery.
1268
+
1269
+ 23:36.560 --> 23:42.560
1270
+ The romantic notion of randomness, of serendipity,
1271
+
1272
+ 23:42.560 --> 23:49.560
1273
+ of being surprised by things that are about you,
1274
+
1275
+ 23:49.560 --> 23:53.560
1276
+ kind of in our poetic notion of reality,
1277
+
1278
+ 23:53.560 --> 23:56.560
1279
+ we think as humans require randomness.
1280
+
1281
+ 23:56.560 --> 23:58.560
1282
+ So you don't find randomness beautiful.
1283
+
1284
+ 23:58.560 --> 24:04.560
1285
+ You find simple determinism beautiful.
1286
+
1287
+ 24:04.560 --> 24:06.560
1288
+ Yeah.
1289
+
1290
+ 24:06.560 --> 24:07.560
1291
+ Okay.
1292
+
1293
+ 24:07.560 --> 24:08.560
1294
+ So why?
1295
+
1296
+ 24:08.560 --> 24:09.560
1297
+ Why?
1298
+
1299
+ 24:09.560 --> 24:12.560
1300
+ Because the explanation becomes shorter.
1301
+
1302
+ 24:12.560 --> 24:19.560
1303
+ A universe that is compressible to a short program
1304
+
1305
+ 24:19.560 --> 24:22.560
1306
+ is much more elegant and much more beautiful
1307
+
1308
+ 24:22.560 --> 24:24.560
1309
+ than another one,
1310
+
1311
+ 24:24.560 --> 24:28.560
1312
+ which needs an almost infinite number of bits to be described.
1313
+
1314
+ 24:28.560 --> 24:31.560
1315
+ As far as we know,
1316
+
1317
+ 24:31.560 --> 24:34.560
1318
+ many things that are happening in this universe are really simple
1319
+
1320
+ 24:34.560 --> 24:38.560
1321
+ in terms of short programs that compute gravity
1322
+
1323
+ 24:38.560 --> 24:43.560
1324
+ and the interaction between elementary particles and so on.
1325
+
1326
+ 24:43.560 --> 24:45.560
1327
+ So all of that seems to be very, very simple.
1328
+
1329
+ 24:45.560 --> 24:50.560
1330
+ Every electron seems to reuse the same subprogram all the time
1331
+
1332
+ 24:50.560 --> 24:57.560
1333
+ as it is interacting with other elementary particles.
1334
+
1335
+ 24:57.560 --> 25:04.560
1336
+ If we now require an extra oracle
1337
+
1338
+ 25:04.560 --> 25:07.560
1339
+ injecting new bits of information all the time
1340
+
1341
+ 25:07.560 --> 25:11.560
1342
+ for these extra things which are currently not understood,
1343
+
1344
+ 25:11.560 --> 25:18.560
1345
+ such as beta decay,
1346
+
1347
+ 25:18.560 --> 25:25.560
1348
+ then the whole description length of the data that we can observe
1349
+
1350
+ 25:25.560 --> 25:31.560
1351
+ of the history of the universe would become much longer.
1352
+
1353
+ 25:31.560 --> 25:33.560
1354
+ And therefore, uglier.
1355
+
1356
+ 25:33.560 --> 25:34.560
1357
+ And uglier.
1358
+
1359
+ 25:34.560 --> 25:38.560
1360
+ Again, the simplicity is elegant and beautiful.
1361
+
1362
+ 25:38.560 --> 25:42.560
1363
+ All the history of science is a history of compression progress.
1364
+
1365
+ 25:42.560 --> 25:43.560
1366
+ Yeah.
1367
+
1368
+ 25:43.560 --> 25:48.560
1369
+ So you've described sort of as we build up abstractions
1370
+
1371
+ 25:48.560 --> 25:52.560
1372
+ and you've talked about the idea of compression.
1373
+
1374
+ 25:52.560 --> 25:55.560
1375
+ How do you see this, the history of science,
1376
+
1377
+ 25:55.560 --> 25:59.560
1378
+ the history of humanity, our civilization and life on Earth
1379
+
1380
+ 25:59.560 --> 26:03.560
1381
+ as some kind of path towards greater and greater compression?
1382
+
1383
+ 26:03.560 --> 26:04.560
1384
+ What do you mean by that?
1385
+
1386
+ 26:04.560 --> 26:06.560
1387
+ How do you think about that?
1388
+
1389
+ 26:06.560 --> 26:12.560
1390
+ Indeed, the history of science is a history of compression progress.
1391
+
1392
+ 26:12.560 --> 26:14.560
1393
+ What does that mean?
1394
+
1395
+ 26:14.560 --> 26:17.560
1396
+ Hundreds of years ago, there was an astronomer
1397
+
1398
+ 26:17.560 --> 26:19.560
1399
+ whose name was Kepler.
1400
+
1401
+ 26:19.560 --> 26:25.560
1402
+ And he looked at the data points that he got by watching planets move.
1403
+
1404
+ 26:25.560 --> 26:28.560
1405
+ And then he had all these data points and suddenly it turned out
1406
+
1407
+ 26:28.560 --> 26:37.560
1408
+ that he can greatly compress the data by predicting it through an ellipse law.
1409
+
1410
+ 26:37.560 --> 26:44.560
1411
+ So it turns out that all these data points are more or less on ellipses around the sun.
1412
+
1413
+ 26:44.560 --> 26:50.560
1414
+ And another guy came along whose name was Newton and before him Hooke.
1415
+
1416
+ 26:50.560 --> 26:57.560
1417
+ And they said the same thing that is making these planets move like that
1418
+
1419
+ 26:57.560 --> 27:01.560
1420
+ is what makes the apples fall down.
1421
+
1422
+ 27:01.560 --> 27:10.560
1423
+ And it also holds for stones and for all kinds of other objects.
1424
+
1425
+ 27:10.560 --> 27:16.560
1426
+ And suddenly many, many of these observations became much more compressible
1427
+
1428
+ 27:16.560 --> 27:19.560
1429
+ because as long as you can predict the next thing,
1430
+
1431
+ 27:19.560 --> 27:22.560
1432
+ given what you have seen so far, you can compress it.
1433
+
1434
+ 27:22.560 --> 27:24.560
1435
+ But you don't have to store that data extra.
1436
+
1437
+ 27:24.560 --> 27:28.560
1438
+ This is called predictive coding.
1439
+
1440
+ 27:28.560 --> 27:33.560
1441
+ And then there was still something wrong with that theory of the universe
1442
+
1443
+ 27:33.560 --> 27:37.560
1444
+ and you had deviations from these predictions of the theory.
1445
+
1446
+ 27:37.560 --> 27:41.560
1447
+ And 300 years later another guy came along whose name was Einstein
1448
+
1449
+ 27:41.560 --> 27:50.560
1450
+ and he was able to explain away all these deviations from the predictions of the old theory
1451
+
1452
+ 27:50.560 --> 27:56.560
1453
+ through a new theory which was called the general theory of relativity
1454
+
1455
+ 27:56.560 --> 28:00.560
1456
+ which at first glance looks a little bit more complicated
1457
+
1458
+ 28:00.560 --> 28:05.560
1459
+ and you have to warp space and time, but you can phrase it within one single sentence
1460
+
1461
+ 28:05.560 --> 28:12.560
1462
+ which is no matter how fast you accelerate and how fast or how hard you decelerate
1463
+
1464
+ 28:12.560 --> 28:18.560
1465
+ and no matter what is the gravity in your local framework,
1466
+
1467
+ 28:18.560 --> 28:21.560
1468
+ light speed always looks the same.
1469
+
1470
+ 28:21.560 --> 28:24.560
1471
+ And from that you can calculate all the consequences.
1472
+
1473
+ 28:24.560 --> 28:30.560
1474
+ So it's a very simple thing and it allows you to further compress all the observations
1475
+
1476
+ 28:30.560 --> 28:35.560
1477
+ because certainly there are hardly any deviations any longer
1478
+
1479
+ 28:35.560 --> 28:39.560
1480
+ that you can measure from the predictions of this new theory.
1481
+
1482
+ 28:39.560 --> 28:44.560
1483
+ So the history of science is a history of compression progress.
1484
+
1485
+ 28:44.560 --> 28:50.560
1486
+ You never arrive immediately at the shortest explanation of the data
1487
+
1488
+ 28:50.560 --> 28:52.560
1489
+ but you're making progress.
1490
+
1491
+ 28:52.560 --> 28:56.560
1492
+ Whenever you are making progress you have an insight.
1493
+
1494
+ 28:56.560 --> 29:01.560
1495
+ You see, oh, first I needed so many bits of information to describe the data,
1496
+
1497
+ 29:01.560 --> 29:04.560
1498
+ to describe my falling apples, my video of falling apples,
1499
+
1500
+ 29:04.560 --> 29:08.560
1501
+ I need so many data, so many pixels have to be stored
1502
+
1503
+ 29:08.560 --> 29:14.560
1504
+ but then suddenly I realize, no, there is a very simple way of predicting the third frame
1505
+
1506
+ 29:14.560 --> 29:20.560
1507
+ in the video from the first two and maybe not every little detail can be predicted
1508
+
1509
+ 29:20.560 --> 29:24.560
1510
+ but more or less most of these orange blobs that are coming down,
1511
+
1512
+ 29:24.560 --> 29:28.560
1513
+ I'm sorry, in the same way, which means that I can greatly compress the video
1514
+
1515
+ 29:28.560 --> 29:33.560
1516
+ and the amount of compression, progress,
1517
+
1518
+ 29:33.560 --> 29:37.560
1519
+ that is the depth of the insight that you have at that moment.
1520
+
1521
+ 29:37.560 --> 29:40.560
1522
+ That's the fun that you have, the scientific fun,
1523
+
1524
+ 29:40.560 --> 29:46.560
1525
+ the fun in that discovery and we can build artificial systems that do the same thing.
1526
+
1527
+ 29:46.560 --> 29:51.560
1528
+ They measure the depth of their insights as they are looking at the data
1529
+
1530
+ 29:51.560 --> 29:55.560
1531
+ through their own experiments and we give them a reward,
1532
+
1533
+ 29:55.560 --> 30:00.560
1534
+ an intrinsic reward in proportion to this depth of insight.
1535
+
1536
+ 30:00.560 --> 30:07.560
1537
+ And since they are trying to maximize the rewards they get,
1538
+
1539
+ 30:07.560 --> 30:12.560
1540
+ they are suddenly motivated to come up with new action sequences,
1541
+
1542
+ 30:12.560 --> 30:17.560
1543
+ with new experiments that have the property that the data that is coming in
1544
+
1545
+ 30:17.560 --> 30:21.560
1546
+ as a consequence of these experiments has the property
1547
+
1548
+ 30:21.560 --> 30:25.560
1549
+ that they can learn something about, see a pattern in there
1550
+
1551
+ 30:25.560 --> 30:28.560
1552
+ which they hadn't seen yet before.
1553
+
1554
+ 30:28.560 --> 30:32.560
1555
+ So there's an idea of power play that you've described,
1556
+
1557
+ 30:32.560 --> 30:37.560
1558
+ training a general problem solver in this kind of way of looking for the unsolved problems.
1559
+
1560
+ 30:37.560 --> 30:40.560
1561
+ Can you describe that idea a little further?
1562
+
1563
+ 30:40.560 --> 30:42.560
1564
+ It's another very simple idea.
1565
+
1566
+ 30:42.560 --> 30:49.560
1567
+ Normally what you do in computer science, you have some guy who gives you a problem
1568
+
1569
+ 30:49.560 --> 30:56.560
1570
+ and then there is a huge search space of potential solution candidates
1571
+
1572
+ 30:56.560 --> 31:02.560
1573
+ and you somehow try them out and you have more or less sophisticated ways
1574
+
1575
+ 31:02.560 --> 31:06.560
1576
+ of moving around in that search space
1577
+
1578
+ 31:06.560 --> 31:11.560
1579
+ until you finally found a solution which you consider satisfactory.
1580
+
1581
+ 31:11.560 --> 31:15.560
1582
+ That's what most of computer science is about.
1583
+
1584
+ 31:15.560 --> 31:19.560
1585
+ Power play just goes one little step further and says,
1586
+
1587
+ 31:19.560 --> 31:24.560
1588
+ let's not only search for solutions to a given problem,
1589
+
1590
+ 31:24.560 --> 31:30.560
1591
+ but let's search for pairs of problems and their solutions
1592
+
1593
+ 31:30.560 --> 31:36.560
1594
+ where the system itself has the opportunity to phrase its own problem.
1595
+
1596
+ 31:36.560 --> 31:42.560
1597
+ So we are looking suddenly at pairs of problems and their solutions
1598
+
1599
+ 31:42.560 --> 31:46.560
1600
+ or modifications of the problem solver
1601
+
1602
+ 31:46.560 --> 31:50.560
1603
+ that is supposed to generate a solution to that new problem.
1604
+
1605
+ 31:50.560 --> 31:56.560
1606
+ And this additional degree of freedom
1607
+
1608
+ 31:56.560 --> 32:01.560
1609
+ allows us to build curious systems that are like scientists
1610
+
1611
+ 32:01.560 --> 32:06.560
1612
+ in the sense that they not only try to solve and try to find answers
1613
+
1614
+ 32:06.560 --> 32:12.560
1615
+ to existing questions, no, they are also free to pose their own questions.
1616
+
1617
+ 32:12.560 --> 32:15.560
1618
+ So if you want to build an artificial scientist,
1619
+
1620
+ 32:15.560 --> 32:19.560
1621
+ you have to give it that freedom and power play is exactly doing that.
1622
+
1623
+ 32:19.560 --> 32:23.560
1624
+ So that's a dimension of freedom that's important to have,
1625
+
1626
+ 32:23.560 --> 32:31.560
1627
+ how hard do you think that, how multi dimensional and difficult the space of
1628
+
1629
+ 32:31.560 --> 32:34.560
1630
+ then coming up with your own questions is.
1631
+
1632
+ 32:34.560 --> 32:38.560
1633
+ So it's one of the things that as human beings we consider to be
1634
+
1635
+ 32:38.560 --> 32:41.560
1636
+ the thing that makes us special, the intelligence that makes us special
1637
+
1638
+ 32:41.560 --> 32:47.560
1639
+ is that brilliant insight that can create something totally new.
1640
+
1641
+ 32:47.560 --> 32:51.560
1642
+ Yes. So now let's look at the extreme case.
1643
+
1644
+ 32:51.560 --> 32:57.560
1645
+ Let's look at the set of all possible problems that you can formally describe,
1646
+
1647
+ 32:57.560 --> 33:03.560
1648
+ which is infinite, which should be the next problem
1649
+
1650
+ 33:03.560 --> 33:07.560
1651
+ that a scientist or power play is going to solve.
1652
+
1653
+ 33:07.560 --> 33:16.560
1654
+ Well, it should be the easiest problem that goes beyond what you already know.
1655
+
1656
+ 33:16.560 --> 33:22.560
1657
+ So it should be the simplest problem that the current problem solver
1658
+
1659
+ 33:22.560 --> 33:28.560
1660
+ that you have, which can already solve 100 problems, cannot solve yet
1661
+
1662
+ 33:28.560 --> 33:30.560
1663
+ by just generalizing.
1664
+
1665
+ 33:30.560 --> 33:32.560
1666
+ So it has to be new.
1667
+
1668
+ 33:32.560 --> 33:36.560
1669
+ So it has to require a modification of the problem solver such that the new
1670
+
1671
+ 33:36.560 --> 33:41.560
1672
+ problem solver can solve this new thing, but the old problem solver cannot do it.
1673
+
1674
+ 33:41.560 --> 33:47.560
1675
+ And in addition to that, we have to make sure that the problem solver
1676
+
1677
+ 33:47.560 --> 33:50.560
1678
+ doesn't forget any of the previous solutions.
1679
+
1680
+ 33:50.560 --> 33:51.560
1681
+ Right.
1682
+
1683
+ 33:51.560 --> 33:57.560
1684
+ And so by definition, power play is now trying always to search
1685
+
1686
+ 33:57.560 --> 34:02.560
1687
+ in the set of pairs of problems and problem solver modifications
1688
+
1689
+ 34:02.560 --> 34:08.560
1690
+ for a combination that minimizes the time to achieve these criteria.
1691
+
1692
+ 34:08.560 --> 34:14.560
1693
+ Power play is trying to find the problem which is easiest to add to the repertoire.
1694
+
1695
+ 34:14.560 --> 34:19.560
1696
+ So just like grad students and academics and researchers can spend their whole
1697
+
1698
+ 34:19.560 --> 34:25.560
1699
+ career in a local minima stuck trying to come up with interesting questions,
1700
+
1701
+ 34:25.560 --> 34:27.560
1702
+ but ultimately doing very little.
1703
+
1704
+ 34:27.560 --> 34:32.560
1705
+ Do you think it's easy in this approach of looking for the simplest
1706
+
1707
+ 34:32.560 --> 34:38.560
1708
+ problem solver problem to get stuck in a local minimum, never really discovering
1709
+
1710
+ 34:38.560 --> 34:43.560
1711
+ new, you know, really jumping outside of the hundred problems that you've already
1712
+
1713
+ 34:43.560 --> 34:47.560
1714
+ solved in a genuine creative way.
1715
+
1716
+ 34:47.560 --> 34:52.560
1717
+ No, because that's the nature of power play that it's always trying to break
1718
+
1719
+ 34:52.560 --> 34:58.560
1720
+ its current generalization abilities by coming up with a new problem which is
1721
+
1722
+ 34:58.560 --> 35:04.560
1723
+ beyond the current horizon, just shifting the horizon of knowledge a little bit
1724
+
1725
+ 35:04.560 --> 35:10.560
1726
+ out there, breaking the existing rules such that the new thing becomes solvable
1727
+
1728
+ 35:10.560 --> 35:13.560
1729
+ but wasn't solvable by the old thing.
1730
+
1731
+ 35:13.560 --> 35:19.560
1732
+ So like adding a new axiom, like what Gödel did when he came up with these
1733
+
1734
+ 35:19.560 --> 35:23.560
1735
+ new sentences, new theorems that didn't have a proof in the formal system,
1736
+
1737
+ 35:23.560 --> 35:30.560
1738
+ which means you can add them to the repertoire, hoping that they are not
1739
+
1740
+ 35:30.560 --> 35:35.560
1741
+ going to damage the consistency of the whole thing.
1742
+
1743
+ 35:35.560 --> 35:41.560
1744
+ So in the paper with the amazing title, Formal Theory of Creativity,
1745
+
1746
+ 35:41.560 --> 35:47.560
1747
+ Fun and Intrinsic Motivation, you talk about discovery as intrinsic reward.
1748
+
1749
+ 35:47.560 --> 35:53.560
1750
+ So if you view humans as intelligent agents, what do you think is the purpose
1751
+
1752
+ 35:53.560 --> 35:56.560
1753
+ and meaning of life for us humans?
1754
+
1755
+ 35:56.560 --> 35:58.560
1756
+ You've talked about this discovery.
1757
+
1758
+ 35:58.560 --> 36:04.560
1759
+ Do you see humans as an instance of power play agents?
1760
+
1761
+ 36:04.560 --> 36:11.560
1762
+ Yeah, so humans are curious and that means they behave like scientists,
1763
+
1764
+ 36:11.560 --> 36:15.560
1765
+ not only the official scientists but even the babies behave like scientists
1766
+
1767
+ 36:15.560 --> 36:19.560
1768
+ and they play around with their toys to figure out how the world works
1769
+
1770
+ 36:19.560 --> 36:22.560
1771
+ and how it is responding to their actions.
1772
+
1773
+ 36:22.560 --> 36:26.560
1774
+ And that's how they learn about gravity and everything.
1775
+
1776
+ 36:26.560 --> 36:30.560
1777
+ And yeah, in 1990, we had the first systems like that
1778
+
1779
+ 36:30.560 --> 36:33.560
1780
+ who would just try to play around with the environment
1781
+
1782
+ 36:33.560 --> 36:39.560
1783
+ and come up with situations that go beyond what they knew at that time
1784
+
1785
+ 36:39.560 --> 36:42.560
1786
+ and then get a reward for creating these situations
1787
+
1788
+ 36:42.560 --> 36:45.560
1789
+ and then becoming more general problem solvers
1790
+
1791
+ 36:45.560 --> 36:48.560
1792
+ and being able to understand more of the world.
1793
+
1794
+ 36:48.560 --> 36:56.560
1795
+ So yeah, I think in principle that curiosity,
1796
+
1797
+ 36:56.560 --> 37:02.560
1798
+ strategy or more sophisticated versions of what I just described,
1799
+
1800
+ 37:02.560 --> 37:07.560
1801
+ they are what we have built in as well because evolution discovered
1802
+
1803
+ 37:07.560 --> 37:12.560
1804
+ that's a good way of exploring the unknown world and a guy who explores
1805
+
1806
+ 37:12.560 --> 37:16.560
1807
+ the unknown world has a higher chance of solving problems
1808
+
1809
+ 37:16.560 --> 37:19.560
1810
+ that he needs to survive in this world.
1811
+
1812
+ 37:19.560 --> 37:23.560
1813
+ On the other hand, those guys who were too curious,
1814
+
1815
+ 37:23.560 --> 37:25.560
1816
+ they were weeded out as well.
1817
+
1818
+ 37:25.560 --> 37:27.560
1819
+ So you have to find this trade off.
1820
+
1821
+ 37:27.560 --> 37:30.560
1822
+ Evolution found a certain trade off apparently in our society.
1823
+
1824
+ 37:30.560 --> 37:35.560
1825
+ There is a certain percentage of extremely explorative guys
1826
+
1827
+ 37:35.560 --> 37:41.560
1828
+ and it doesn't matter if they die because many of the others are more conservative.
1829
+
1830
+ 37:41.560 --> 37:46.560
1831
+ And so yeah, it would be surprising to me
1832
+
1833
+ 37:46.560 --> 37:55.560
1834
+ if that principle of artificial curiosity wouldn't be present
1835
+
1836
+ 37:55.560 --> 37:59.560
1837
+ in almost exactly the same form here in our brains.
1838
+
1839
+ 37:59.560 --> 38:02.560
1840
+ So you're a bit of a musician and an artist.
1841
+
1842
+ 38:02.560 --> 38:07.560
1843
+ So continuing on this topic of creativity,
1844
+
1845
+ 38:07.560 --> 38:10.560
1846
+ what do you think is the role of creativity in intelligence?
1847
+
1848
+ 38:10.560 --> 38:16.560
1849
+ So you've kind of implied that it's essential for intelligence,
1850
+
1851
+ 38:16.560 --> 38:21.560
1852
+ if you think of intelligence as a problem solving system,
1853
+
1854
+ 38:21.560 --> 38:23.560
1855
+ as ability to solve problems.
1856
+
1857
+ 38:23.560 --> 38:28.560
1858
+ But do you think it's essential, this idea of creativity?
1859
+
1860
+ 38:28.560 --> 38:34.560
1861
+ We never have a subprogram that is called creativity or something.
1862
+
1863
+ 38:34.560 --> 38:37.560
1864
+ It's just a side effect of what our problem solvers always do.
1865
+
1866
+ 38:37.560 --> 38:44.560
1867
+ They are searching a space of candidates, of solution candidates,
1868
+
1869
+ 38:44.560 --> 38:47.560
1870
+ until they hopefully find a solution to a given problem.
1871
+
1872
+ 38:47.560 --> 38:50.560
1873
+ But then there are these two types of creativity
1874
+
1875
+ 38:50.560 --> 38:53.560
1876
+ and both of them are now present in our machines.
1877
+
1878
+ 38:53.560 --> 38:56.560
1879
+ The first one has been around for a long time,
1880
+
1881
+ 38:56.560 --> 38:59.560
1882
+ which is human gives problem to machine.
1883
+
1884
+ 38:59.560 --> 39:03.560
1885
+ Machine tries to find a solution to that.
1886
+
1887
+ 39:03.560 --> 39:05.560
1888
+ And this has been happening for many decades.
1889
+
1890
+ 39:05.560 --> 39:09.560
1891
+ And for many decades, machines have found creative solutions
1892
+
1893
+ 39:09.560 --> 39:13.560
1894
+ to interesting problems where humans were not aware
1895
+
1896
+ 39:13.560 --> 39:17.560
1897
+ of these particularly creative solutions,
1898
+
1899
+ 39:17.560 --> 39:20.560
1900
+ but then appreciated that the machine found that.
1901
+
1902
+ 39:20.560 --> 39:23.560
1903
+ The second is the pure creativity.
1904
+
1905
+ 39:23.560 --> 39:28.560
1906
+ What I just mentioned, I would call the applied creativity,
1907
+
1908
+ 39:28.560 --> 39:31.560
1909
+ like applied art, where somebody tells you,
1910
+
1911
+ 39:31.560 --> 39:34.560
1912
+ now make a nice picture of this pope,
1913
+
1914
+ 39:34.560 --> 39:36.560
1915
+ and you will get money for that.
1916
+
1917
+ 39:36.560 --> 39:41.560
1918
+ So here is the artist and he makes a convincing picture of the pope
1919
+
1920
+ 39:41.560 --> 39:44.560
1921
+ and the pope likes it and gives him the money.
1922
+
1923
+ 39:44.560 --> 39:48.560
1924
+ And then there is the pure creativity,
1925
+
1926
+ 39:48.560 --> 39:51.560
1927
+ which is more like the power play and the artificial curiosity thing,
1928
+
1929
+ 39:51.560 --> 39:56.560
1930
+ where you have the freedom to select your own problem,
1931
+
1932
+ 39:56.560 --> 40:02.560
1933
+ like a scientist who defines his own question to study.
1934
+
1935
+ 40:02.560 --> 40:06.560
1936
+ And so that is the pure creativity, if you will,
1937
+
1938
+ 40:06.560 --> 40:13.560
1939
+ as opposed to the applied creativity, which serves another.
1940
+
1941
+ 40:13.560 --> 40:18.560
1942
+ In that distinction, there's almost echoes of narrow AI versus general AI.
1943
+
1944
+ 40:18.560 --> 40:24.560
1945
+ So this kind of constrained painting of a pope seems like
1946
+
1947
+ 40:24.560 --> 40:29.560
1948
+ the approaches of what people are calling narrow AI.
1949
+
1950
+ 40:29.560 --> 40:32.560
1951
+ And pure creativity seems to be,
1952
+
1953
+ 40:32.560 --> 40:34.560
1954
+ maybe I'm just biased as a human,
1955
+
1956
+ 40:34.560 --> 40:40.560
1957
+ but it seems to be an essential element of human level intelligence.
1958
+
1959
+ 40:40.560 --> 40:43.560
1960
+ Is that what you're implying?
1961
+
1962
+ 40:43.560 --> 40:45.560
1963
+ To a degree.
1964
+
1965
+ 40:45.560 --> 40:50.560
1966
+ If you zoom back a little bit and you just look at a general problem solving machine,
1967
+
1968
+ 40:50.560 --> 40:53.560
1969
+ which is trying to solve arbitrary problems,
1970
+
1971
+ 40:53.560 --> 40:57.560
1972
+ then this machine will figure out in the course of solving problems
1973
+
1974
+ 40:57.560 --> 40:59.560
1975
+ that it's good to be curious.
1976
+
1977
+ 40:59.560 --> 41:04.560
1978
+ So all of what I said just now about this prewired curiosity
1979
+
1980
+ 41:04.560 --> 41:10.560
1981
+ and this will to invent new problems that the system doesn't know how to solve yet,
1982
+
1983
+ 41:10.560 --> 41:14.560
1984
+ should be just a byproduct of the general search.
1985
+
1986
+ 41:14.560 --> 41:21.560
1987
+ However, apparently evolution has built it into us
1988
+
1989
+ 41:21.560 --> 41:26.560
1990
+ because it turned out to be so successful, a prewiring, a bias,
1991
+
1992
+ 41:26.560 --> 41:33.560
1993
+ a very successful exploratory bias that we are born with.
1994
+
1995
+ 41:33.560 --> 41:36.560
1996
+ And you've also said that consciousness in the same kind of way
1997
+
1998
+ 41:36.560 --> 41:40.560
1999
+ may be a byproduct of problem solving.
2000
+
2001
+ 41:40.560 --> 41:44.560
2002
+ Do you find this an interesting byproduct?
2003
+
2004
+ 41:44.560 --> 41:46.560
2005
+ Do you think it's a useful byproduct?
2006
+
2007
+ 41:46.560 --> 41:49.560
2008
+ What are your thoughts on consciousness in general?
2009
+
2010
+ 41:49.560 --> 41:54.560
2011
+ Or is it simply a byproduct of greater and greater capabilities of problem solving
2012
+
2013
+ 41:54.560 --> 42:00.560
2014
+ that's similar to creativity in that sense?
2015
+
2016
+ 42:00.560 --> 42:04.560
2017
+ We never have a procedure called consciousness in our machines.
2018
+
2019
+ 42:04.560 --> 42:10.560
2020
+ However, we get side effects of what these machines are doing,
2021
+
2022
+ 42:10.560 --> 42:15.560
2023
+ things that seem to be closely related to what people call consciousness.
2024
+
2025
+ 42:15.560 --> 42:20.560
2026
+ So for example, already 1990 we had simple systems
2027
+
2028
+ 42:20.560 --> 42:25.560
2029
+ which were basically recurrent networks and therefore universal computers
2030
+
2031
+ 42:25.560 --> 42:32.560
2032
+ trying to map incoming data into actions that lead to success.
2033
+
2034
+ 42:32.560 --> 42:39.560
2035
+ Maximizing reward in a given environment, always finding the charging station in time
2036
+
2037
+ 42:39.560 --> 42:43.560
2038
+ whenever the battery is low and negative signals are coming from the battery,
2039
+
2040
+ 42:43.560 --> 42:50.560
2041
+ always find the charging station in time without bumping against painful obstacles on the way.
2042
+
2043
+ 42:50.560 --> 42:54.560
2044
+ So complicated things but very easily motivated.
2045
+
2046
+ 42:54.560 --> 43:01.560
2047
+ And then we give these little guys a separate recurrent network
2048
+
2049
+ 43:01.560 --> 43:04.560
2050
+ which is just predicting what's happening if I do that and that.
2051
+
2052
+ 43:04.560 --> 43:08.560
2053
+ What will happen as a consequence of these actions that I'm executing
2054
+
2055
+ 43:08.560 --> 43:13.560
2056
+ and it's just trained on the long and long history of interactions with the world.
2057
+
2058
+ 43:13.560 --> 43:17.560
2059
+ So it becomes a predictive model of the world basically.
2060
+
2061
+ 43:17.560 --> 43:22.560
2062
+ And therefore also a compressor of the observations of the world
2063
+
2064
+ 43:22.560 --> 43:26.560
2065
+ because whatever you can predict, you don't have to store extra.
2066
+
2067
+ 43:26.560 --> 43:29.560
2068
+ So compression is a side effect of prediction.
2069
+
2070
+ 43:29.560 --> 43:32.560
2071
+ And how does this recurrent network compress?
2072
+
2073
+ 43:32.560 --> 43:36.560
2074
+ Well, it's inventing little subprograms, little subnetworks
2075
+
2076
+ 43:36.560 --> 43:41.560
2077
+ that stand for everything that frequently appears in the environment.
2078
+
2079
+ 43:41.560 --> 43:47.560
2080
+ Like bottles and microphones and faces, maybe lots of faces in my environment.
2081
+
2082
+ 43:47.560 --> 43:51.560
2083
+ So I'm learning to create something like a prototype face
2084
+
2085
+ 43:51.560 --> 43:55.560
2086
+ and a new face comes along and all I have to encode are the deviations from the prototype.
2087
+
2088
+ 43:55.560 --> 44:00.560
2089
+ So it's compressing all the time the stuff that frequently appears.
2090
+
2091
+ 44:00.560 --> 44:04.560
2092
+ There's one thing that appears all the time
2093
+
2094
+ 44:04.560 --> 44:09.560
2095
+ that is present all the time when the agent is interacting with its environment,
2096
+
2097
+ 44:09.560 --> 44:11.560
2098
+ which is the agent itself.
2099
+
2100
+ 44:11.560 --> 44:14.560
2101
+ So just for data compression reasons,
2102
+
2103
+ 44:14.560 --> 44:18.560
2104
+ it is extremely natural for this recurrent network
2105
+
2106
+ 44:18.560 --> 44:23.560
2107
+ to come up with little subnetworks that stand for the properties of the agents,
2108
+
2109
+ 44:23.560 --> 44:27.560
2110
+ the hand, the other actuators,
2111
+
2112
+ 44:27.560 --> 44:31.560
2113
+ and all the stuff that you need to better encode the data,
2114
+
2115
+ 44:31.560 --> 44:34.560
2116
+ which is influenced by the actions of the agent.
2117
+
2118
+ 44:34.560 --> 44:40.560
2119
+ So there, just as a side effect of data compression during problem solving,
2120
+
2121
+ 44:40.560 --> 44:45.560
2122
+ you have internal self models.
2123
+
2124
+ 44:45.560 --> 44:51.560
2125
+ Now you can use this model of the world to plan your future.
2126
+
2127
+ 44:51.560 --> 44:54.560
2128
+ And that's what we also have done since 1990.
2129
+
2130
+ 44:54.560 --> 44:57.560
2131
+ So the recurrent network, which is the controller,
2132
+
2133
+ 44:57.560 --> 44:59.560
2134
+ which is trying to maximize reward,
2135
+
2136
+ 44:59.560 --> 45:02.560
2137
+ can use this model of the network of the world,
2138
+
2139
+ 45:02.560 --> 45:05.560
2140
+ this model network of the world, this predictive model of the world
2141
+
2142
+ 45:05.560 --> 45:08.560
2143
+ to plan ahead and say, let's not do this action sequence.
2144
+
2145
+ 45:08.560 --> 45:11.560
2146
+ Let's do this action sequence instead
2147
+
2148
+ 45:11.560 --> 45:14.560
2149
+ because it leads to more predicted rewards.
2150
+
2151
+ 45:14.560 --> 45:19.560
2152
+ And whenever it's waking up these little subnetworks that stand for itself,
2153
+
2154
+ 45:19.560 --> 45:21.560
2155
+ then it's thinking about itself.
2156
+
2157
+ 45:21.560 --> 45:23.560
2158
+ Then it's thinking about itself.
2159
+
2160
+ 45:23.560 --> 45:30.560
2161
+ And it's exploring mentally the consequences of its own actions.
2162
+
2163
+ 45:30.560 --> 45:36.560
2164
+ And now you tell me what's still missing.
2165
+
2166
+ 45:36.560 --> 45:39.560
2167
+ Missing the gap to consciousness.
2168
+
2169
+ 45:39.560 --> 45:43.560
2170
+ There isn't. That's a really beautiful idea that, you know,
2171
+
2172
+ 45:43.560 --> 45:46.560
2173
+ if life is a collection of data
2174
+
2175
+ 45:46.560 --> 45:53.560
2176
+ and life is a process of compressing that data to act efficiently.
2177
+
2178
+ 45:53.560 --> 45:57.560
2179
+ In that data, you yourself appear very often.
2180
+
2181
+ 45:57.560 --> 46:00.560
2182
+ So it's useful to form compressions of yourself.
2183
+
2184
+ 46:00.560 --> 46:03.560
2185
+ And it's a really beautiful formulation of what consciousness is,
2186
+
2187
+ 46:03.560 --> 46:05.560
2188
+ is a necessary side effect.
2189
+
2190
+ 46:05.560 --> 46:11.560
2191
+ It's actually quite compelling to me.
2192
+
2193
+ 46:11.560 --> 46:18.560
2194
+ We've described RNNs. You've developed LSTMs, long short term memory networks.
2195
+
2196
+ 46:18.560 --> 46:22.560
2197
+ They're a type of recurrent neural networks.
2198
+
2199
+ 46:22.560 --> 46:24.560
2200
+ They've gotten a lot of success recently.
2201
+
2202
+ 46:24.560 --> 46:29.560
2203
+ So these are networks that model the temporal aspects in the data,
2204
+
2205
+ 46:29.560 --> 46:31.560
2206
+ temporal patterns in the data.
2207
+
2208
+ 46:31.560 --> 46:36.560
2209
+ And you've called them the deepest of the neural networks, right?
2210
+
2211
+ 46:36.560 --> 46:43.560
2212
+ What do you think is the value of depth in the models that we use to learn?
2213
+
2214
+ 46:43.560 --> 46:47.560
2215
+ Yeah, since you mentioned the long short term memory and the LSTM,
2216
+
2217
+ 46:47.560 --> 46:52.560
2218
+ I have to mention the names of the brilliant students who made that possible.
2219
+
2220
+ 46:52.560 --> 46:53.560
2221
+ Yes, of course, of course.
2222
+
2223
+ 46:53.560 --> 46:56.560
2224
+ First of all, my first student ever, Sepp Hochreiter,
2225
+
2226
+ 46:56.560 --> 47:00.560
2227
+ who had fundamental insights already in his diploma thesis.
2228
+
2229
+ 47:00.560 --> 47:04.560
2230
+ Then Felix Gers, who had additional important contributions.
2231
+
2232
+ 47:04.560 --> 47:11.560
2233
+ Alex Graves is a guy from Scotland who is mostly responsible for this CTC algorithm,
2234
+
2235
+ 47:11.560 --> 47:16.560
2236
+ which is now often used to train the LSTM to do the speech recognition
2237
+
2238
+ 47:16.560 --> 47:21.560
2239
+ on all the Google Android phones and whatever, and Siri and so on.
2240
+
2241
+ 47:21.560 --> 47:26.560
2242
+ So these guys, without these guys, I would be nothing.
2243
+
2244
+ 47:26.560 --> 47:28.560
2245
+ It's a lot of incredible work.
2246
+
2247
+ 47:28.560 --> 47:30.560
2248
+ What is now the depth?
2249
+
2250
+ 47:30.560 --> 47:32.560
2251
+ What is the importance of depth?
2252
+
2253
+ 47:32.560 --> 47:36.560
2254
+ Well, most problems in the real world are deep
2255
+
2256
+ 47:36.560 --> 47:41.560
2257
+ in the sense that the current input doesn't tell you all you need to know
2258
+
2259
+ 47:41.560 --> 47:44.560
2260
+ about the environment.
2261
+
2262
+ 47:44.560 --> 47:49.560
2263
+ So instead, you have to have a memory of what happened in the past
2264
+
2265
+ 47:49.560 --> 47:54.560
2266
+ and often important parts of that memory are dated.
2267
+
2268
+ 47:54.560 --> 47:56.560
2269
+ They are pretty old.
2270
+
2271
+ 47:56.560 --> 47:59.560
2272
+ So when you're doing speech recognition, for example,
2273
+
2274
+ 47:59.560 --> 48:03.560
2275
+ and somebody says 11,
2276
+
2277
+ 48:03.560 --> 48:08.560
2278
+ then that's about half a second or something like that,
2279
+
2280
+ 48:08.560 --> 48:11.560
2281
+ which means it's already 50 time steps.
2282
+
2283
+ 48:11.560 --> 48:15.560
2284
+ And another guy or the same guy says 7.
2285
+
2286
+ 48:15.560 --> 48:18.560
2287
+ So the ending is the same: 'even'.
2288
+
2289
+ 48:18.560 --> 48:22.560
2290
+ But now the system has to see the distinction between 7 and 11,
2291
+
2292
+ 48:22.560 --> 48:26.560
2293
+ and the only way it can see the difference is it has to store
2294
+
2295
+ 48:26.560 --> 48:34.560
2296
+ that 50 steps ago there was an S or an L, 11 or 7.
2297
+
2298
+ 48:34.560 --> 48:37.560
2299
+ So there you have already a problem of depth 50,
2300
+
2301
+ 48:37.560 --> 48:42.560
2302
+ because for each time step you have something like a virtual layer
2303
+
2304
+ 48:42.560 --> 48:45.560
2305
+ in the expanded, unrolled version of this recurrent network
2306
+
2307
+ 48:45.560 --> 48:47.560
2308
+ which is doing the speech recognition.
2309
+
2310
+ 48:47.560 --> 48:53.560
2311
+ So these long time lags, they translate into problem depth.
2312
+
2313
+ 48:53.560 --> 48:59.560
2314
+ And most problems in this world are such that you really
2315
+
2316
+ 48:59.560 --> 49:04.560
2317
+ have to look far back in time to understand what is the problem
2318
+
2319
+ 49:04.560 --> 49:06.560
2320
+ and to solve it.
2321
+
2322
+ 49:06.560 --> 49:09.560
2323
+ But just like with LSTMs, you don't necessarily need to,
2324
+
2325
+ 49:09.560 --> 49:12.560
2326
+ when you look back in time, remember every aspect.
2327
+
2328
+ 49:12.560 --> 49:14.560
2329
+ You just need to remember the important aspects.
2330
+
2331
+ 49:14.560 --> 49:15.560
2332
+ That's right.
2333
+
2334
+ 49:15.560 --> 49:19.560
2335
+ The network has to learn to put the important stuff into memory
2336
+
2337
+ 49:19.560 --> 49:23.560
2338
+ and to ignore the unimportant noise.
2339
+
2340
+ 49:23.560 --> 49:28.560
2341
+ But in that sense, deeper and deeper is better?
2342
+
2343
+ 49:28.560 --> 49:30.560
2344
+ Or is there a limitation?
2345
+
2346
+ 49:30.560 --> 49:36.560
2347
+ I mean LSTM is one of the great examples of architectures
2348
+
2349
+ 49:36.560 --> 49:41.560
2350
+ that do something beyond just deeper and deeper networks.
2351
+
2352
+ 49:41.560 --> 49:47.560
2353
+ There's clever mechanisms for filtering data for remembering and forgetting.
2354
+
2355
+ 49:47.560 --> 49:51.560
2356
+ So do you think that kind of thinking is necessary?
2357
+
2358
+ 49:51.560 --> 49:54.560
2359
+ If you think about LSTMs as a leap, a big leap forward
2360
+
2361
+ 49:54.560 --> 50:01.560
2362
+ over traditional vanilla RNNs, what do you think is the next leap
2363
+
2364
+ 50:01.560 --> 50:03.560
2365
+ within this context?
2366
+
2367
+ 50:03.560 --> 50:08.560
2368
+ So LSTM is a very clever improvement, but LSTMs still don't
2369
+
2370
+ 50:08.560 --> 50:13.560
2371
+ have the same kind of ability to see far back in the past
2372
+
2373
+ 50:13.560 --> 50:18.560
2374
+ as humans do, the credit assignment problem across way back,
2375
+
2376
+ 50:18.560 --> 50:24.560
2377
+ not just 50 time steps or 100 or 1,000, but millions and billions.
2378
+
2379
+ 50:24.560 --> 50:28.560
2380
+ It's not clear what are the practical limits of the LSTM
2381
+
2382
+ 50:28.560 --> 50:30.560
2383
+ when it comes to looking back.
2384
+
2385
+ 50:30.560 --> 50:35.560
2386
+ Already in 2006, I think, we had examples where it not only
2387
+
2388
+ 50:35.560 --> 50:40.560
2389
+ looked back tens of thousands of steps, but really millions of steps.
2390
+
2391
+ 50:40.560 --> 50:46.560
2392
+ And Juan Perez Ortiz in my lab, I think was the first author of a paper
2393
+
2394
+ 50:46.560 --> 50:51.560
2395
+ where we really, was it 2006 or something, had examples where it
2396
+
2397
+ 50:51.560 --> 50:56.560
2398
+ learned to look back for more than 10 million steps.
2399
+
2400
+ 50:56.560 --> 51:02.560
2401
+ So for most problems of speech recognition, it's not
2402
+
2403
+ 51:02.560 --> 51:06.560
2404
+ necessary to look that far back, but there are examples where it does.
2405
+
2406
+ 51:06.560 --> 51:12.560
2407
+ Now, the looking back thing, that's rather easy because there is only
2408
+
2409
+ 51:12.560 --> 51:17.560
2410
+ one past, but there are many possible futures.
2411
+
2412
+ 51:17.560 --> 51:21.560
2413
+ And so a reinforcement learning system, which is trying to maximize
2414
+
2415
+ 51:21.560 --> 51:26.560
2416
+ its future expected reward and doesn't know yet which of these
2417
+
2418
+ 51:26.560 --> 51:31.560
2419
+ many possible futures should I select, given this one single past,
2420
+
2421
+ 51:31.560 --> 51:36.560
2422
+ is facing problems that the LSTM by itself cannot solve.
2423
+
2424
+ 51:36.560 --> 51:40.560
2425
+ So the LSTM is good for coming up with a compact representation
2426
+
2427
+ 51:40.560 --> 51:46.560
2428
+ of the history so far, of the history and of observations and actions so far.
2429
+
2430
+ 51:46.560 --> 51:53.560
2431
+ But now, how do you plan in an efficient and good way among all these,
2432
+
2433
+ 51:53.560 --> 51:57.560
2434
+ how do you select one of these many possible action sequences
2435
+
2436
+ 51:57.560 --> 52:02.560
2437
+ that a reinforcement learning system has to consider to maximize
2438
+
2439
+ 52:02.560 --> 52:05.560
2440
+ reward in this unknown future.
2441
+
2442
+ 52:05.560 --> 52:11.560
2443
+ So again, we have this basic setup where you have one recurrent network,
2444
+
2445
+ 52:11.560 --> 52:16.560
2446
+ which gets in the video and the speech and whatever, and it's
2447
+
2448
+ 52:16.560 --> 52:19.560
2449
+ executing the actions and it's trying to maximize reward.
2450
+
2451
+ 52:19.560 --> 52:24.560
2452
+ So there is no teacher who tells it what to do at which point in time.
2453
+
2454
+ 52:24.560 --> 52:29.560
2455
+ And then there's the other network, which is just predicting
2456
+
2457
+ 52:29.560 --> 52:32.560
2458
+ what's going to happen if I do that and then.
2459
+
2460
+ 52:32.560 --> 52:36.560
2461
+ And that could be an LSTM network, and it learns to look back
2462
+
2463
+ 52:36.560 --> 52:41.560
2464
+ all the way to make better predictions of the next time step.
2465
+
2466
+ 52:41.560 --> 52:45.560
2467
+ So essentially, although it's predicting only the next time step,
2468
+
2469
+ 52:45.560 --> 52:50.560
2470
+ it is motivated to learn to put into memory something that happened
2471
+
2472
+ 52:50.560 --> 52:54.560
2473
+ maybe a million steps ago because it's important to memorize that
2474
+
2475
+ 52:54.560 --> 52:58.560
2476
+ if you want to predict that at the next time step, the next event.
2477
+
2478
+ 52:58.560 --> 53:03.560
2479
+ Now, how can a model of the world like that,
2480
+
2481
+ 53:03.560 --> 53:07.560
2482
+ a predictive model of the world be used by the first guy,
2483
+
2484
+ 53:07.560 --> 53:11.560
2485
+ let's call it the controller and the model, the controller and the model.
2486
+
2487
+ 53:11.560 --> 53:16.560
2488
+ How can the model be used by the controller to efficiently select
2489
+
2490
+ 53:16.560 --> 53:19.560
2491
+ among these many possible futures?
2492
+
2493
+ 53:19.560 --> 53:23.560
2494
+ The naive way we had about 30 years ago was
2495
+
2496
+ 53:23.560 --> 53:27.560
2497
+ let's just use the model of the world as a stand in,
2498
+
2499
+ 53:27.560 --> 53:29.560
2500
+ as a simulation of the world.
2501
+
2502
+ 53:29.560 --> 53:32.560
2503
+ And millisecond by millisecond we plan the future
2504
+
2505
+ 53:32.560 --> 53:36.560
2506
+ and that means we have to roll it out really in detail
2507
+
2508
+ 53:36.560 --> 53:38.560
2509
+ and it will work only if the model is really good
2510
+
2511
+ 53:38.560 --> 53:40.560
2512
+ and it will still be inefficient
2513
+
2514
+ 53:40.560 --> 53:43.560
2515
+ because we have to look at all these possible futures
2516
+
2517
+ 53:43.560 --> 53:45.560
2518
+ and there are so many of them.
2519
+
2520
+ 53:45.560 --> 53:50.560
2521
+ So instead, what we do now since 2015 in our CM systems,
2522
+
2523
+ 53:50.560 --> 53:54.560
2524
+ controller model systems, we give the controller the opportunity
2525
+
2526
+ 53:54.560 --> 54:00.560
2527
+ to learn by itself how to use the potentially relevant parts
2528
+
2529
+ 54:00.560 --> 54:05.560
2530
+ of the model network to solve new problems more quickly.
2531
+
2532
+ 54:05.560 --> 54:09.560
2533
+ And if it wants to, it can learn to ignore the M
2534
+
2535
+ 54:09.560 --> 54:12.560
2536
+ and sometimes it's a good idea to ignore the M
2537
+
2538
+ 54:12.560 --> 54:15.560
2539
+ because it's really bad, it's a bad predictor
2540
+
2541
+ 54:15.560 --> 54:18.560
2542
+ in this particular situation of life
2543
+
2544
+ 54:18.560 --> 54:22.560
2545
+ where the controller is currently trying to maximize reward.
2546
+
2547
+ 54:22.560 --> 54:26.560
2548
+ However, it can also learn to address and exploit
2549
+
2550
+ 54:26.560 --> 54:32.560
2551
+ some of the subprograms that came about in the model network
2552
+
2553
+ 54:32.560 --> 54:35.560
2554
+ through compressing the data by predicting it.
2555
+
2556
+ 54:35.560 --> 54:40.560
2557
+ So it now has an opportunity to reuse that code,
2558
+
2559
+ 54:40.560 --> 54:43.560
2560
+ the algorithmic information in the model network
2561
+
2562
+ 54:43.560 --> 54:47.560
2563
+ to reduce its own search space,
2564
+
2565
+ 54:47.560 --> 54:50.560
2566
+ search that it can solve a new problem more quickly
2567
+
2568
+ 54:50.560 --> 54:52.560
2569
+ than without the model.
2570
+
2571
+ 54:52.560 --> 54:54.560
2572
+ Compression.
2573
+
2574
+ 54:54.560 --> 54:58.560
2575
+ So you're ultimately optimistic and excited
2576
+
2577
+ 54:58.560 --> 55:02.560
2578
+ about the power of reinforcement learning
2579
+
2580
+ 55:02.560 --> 55:04.560
2581
+ in the context of real systems.
2582
+
2583
+ 55:04.560 --> 55:06.560
2584
+ Absolutely, yeah.
2585
+
2586
+ 55:06.560 --> 55:11.560
2587
+ So you see RL as potentially having a huge impact
2588
+
2589
+ 55:11.560 --> 55:15.560
2590
+ beyond just sort of the M part that is often developed
2591
+
2592
+ 55:15.560 --> 55:19.560
2593
+ on supervised learning methods.
2594
+
2595
+ 55:19.560 --> 55:25.560
2596
+ You see RL as a, for problems of self driving cars
2597
+
2598
+ 55:25.560 --> 55:28.560
2599
+ or any kind of applied side robotics,
2600
+
2601
+ 55:28.560 --> 55:33.560
2602
+ that's the correct, interesting direction for research, in your view.
2603
+
2604
+ 55:33.560 --> 55:35.560
2605
+ I do think so.
2606
+
2607
+ 55:35.560 --> 55:37.560
2608
+ We have a company called NNAISENSE,
2609
+
2610
+ 55:37.560 --> 55:43.560
2611
+ which has applied reinforcement learning to little Audis.
2612
+
2613
+ 55:43.560 --> 55:45.560
2614
+ Little Audis.
2615
+
2616
+ 55:45.560 --> 55:47.560
2617
+ Which learn to park without a teacher.
2618
+
2619
+ 55:47.560 --> 55:51.560
2620
+ The same principles were used, of course.
2621
+
2622
+ 55:51.560 --> 55:54.560
2623
+ So these little Audis, they are small, maybe like that,
2624
+
2625
+ 55:54.560 --> 55:57.560
2626
+ so much smaller than the real Audis.
2627
+
2628
+ 55:57.560 --> 56:00.560
2629
+ But they have all the sensors that you find in the real Audis.
2630
+
2631
+ 56:00.560 --> 56:03.560
2632
+ You find the cameras, the LIDAR sensors.
2633
+
2634
+ 56:03.560 --> 56:08.560
2635
+ They go up to 120 kilometers an hour if they want to.
2636
+
2637
+ 56:08.560 --> 56:12.560
2638
+ And they have pain sensors, basically.
2639
+
2640
+ 56:12.560 --> 56:16.560
2641
+ And they don't want to bump against obstacles and other Audis.
2642
+
2643
+ 56:16.560 --> 56:21.560
2644
+ And so they must learn like little babies to park.
2645
+
2646
+ 56:21.560 --> 56:25.560
2647
+ Take the raw vision input and translate that into actions
2648
+
2649
+ 56:25.560 --> 56:28.560
2650
+ that lead to successful parking behavior,
2651
+
2652
+ 56:28.560 --> 56:30.560
2653
+ which is a rewarding thing.
2654
+
2655
+ 56:30.560 --> 56:32.560
2656
+ And yes, they learn that.
2657
+
2658
+ 56:32.560 --> 56:34.560
2659
+ So we have examples like that.
2660
+
2661
+ 56:34.560 --> 56:36.560
2662
+ And it's only in the beginning.
2663
+
2664
+ 56:36.560 --> 56:38.560
2665
+ This is just a tip of the iceberg.
2666
+
2667
+ 56:38.560 --> 56:44.560
2668
+ And I believe the next wave of AI is going to be all about that.
2669
+
2670
+ 56:44.560 --> 56:47.560
2671
+ So at the moment, the current wave of AI is about
2672
+
2673
+ 56:47.560 --> 56:51.560
2674
+ passive pattern observation and prediction.
2675
+
2676
+ 56:51.560 --> 56:54.560
2677
+ And that's what you have on your smartphone
2678
+
2679
+ 56:54.560 --> 56:58.560
2680
+ and what the major companies on the Pacific Rim are using
2681
+
2682
+ 56:58.560 --> 57:01.560
2683
+ to sell you ads to do marketing.
2684
+
2685
+ 57:01.560 --> 57:04.560
2686
+ That's the current sort of profit in AI.
2687
+
2688
+ 57:04.560 --> 57:09.560
2689
+ And that's only one or two percent of the world economy,
2690
+
2691
+ 57:09.560 --> 57:11.560
2692
+ which is big enough to make these companies
2693
+
2694
+ 57:11.560 --> 57:14.560
2695
+ pretty much the most valuable companies in the world.
2696
+
2697
+ 57:14.560 --> 57:19.560
2698
+ But there's a much, much bigger fraction of the economy
2699
+
2700
+ 57:19.560 --> 57:21.560
2701
+ going to be affected by the next wave,
2702
+
2703
+ 57:21.560 --> 57:25.560
2704
+ which is really about machines that shape the data
2705
+
2706
+ 57:25.560 --> 57:27.560
2707
+ through their own actions.
2708
+
2709
+ 57:27.560 --> 57:32.560
2710
+ Do you think simulation is ultimately the biggest way
2711
+
2712
+ 57:32.560 --> 57:36.560
2713
+ that those methods will be successful in the next 10, 20 years?
2714
+
2715
+ 57:36.560 --> 57:38.560
2716
+ We're not talking about 100 years from now.
2717
+
2718
+ 57:38.560 --> 57:42.560
2719
+ We're talking about sort of the near term impact of RL.
2720
+
2721
+ 57:42.560 --> 57:44.560
2722
+ Do you think really good simulation is required?
2723
+
2724
+ 57:44.560 --> 57:48.560
2725
+ Or is there other techniques like imitation learning,
2726
+
2727
+ 57:48.560 --> 57:53.560
2728
+ observing other humans operating in the real world?
2729
+
2730
+ 57:53.560 --> 57:57.560
2731
+ Where do you think this success will come from?
2732
+
2733
+ 57:57.560 --> 58:01.560
2734
+ So at the moment we have a tendency of using
2735
+
2736
+ 58:01.560 --> 58:06.560
2737
+ physics simulations to learn behavior for machines
2738
+
2739
+ 58:06.560 --> 58:13.560
2740
+ that learn to solve problems that humans also do not know how to solve.
2741
+
2742
+ 58:13.560 --> 58:15.560
2743
+ However, this is not the future,
2744
+
2745
+ 58:15.560 --> 58:19.560
2746
+ because the future is in what little babies do.
2747
+
2748
+ 58:19.560 --> 58:22.560
2749
+ They don't use a physics engine to simulate the world.
2750
+
2751
+ 58:22.560 --> 58:25.560
2752
+ They learn a predictive model of the world,
2753
+
2754
+ 58:25.560 --> 58:29.560
2755
+ which maybe sometimes is wrong in many ways,
2756
+
2757
+ 58:29.560 --> 58:34.560
2758
+ but captures all kinds of important abstract high level predictions
2759
+
2760
+ 58:34.560 --> 58:37.560
2761
+ which are really important to be successful.
2762
+
2763
+ 58:37.560 --> 58:42.560
2764
+ And that's what was the future 30 years ago
2765
+
2766
+ 58:42.560 --> 58:44.560
2767
+ when we started that type of research,
2768
+
2769
+ 58:44.560 --> 58:45.560
2770
+ but it's still the future,
2771
+
2772
+ 58:45.560 --> 58:50.560
2773
+ and now we know much better how to move forward
2774
+
2775
+ 58:50.560 --> 58:54.560
2776
+ and to really make working systems based on that,
2777
+
2778
+ 58:54.560 --> 58:57.560
2779
+ where you have a learning model of the world,
2780
+
2781
+ 58:57.560 --> 59:00.560
2782
+ a model of the world that learns to predict what's going to happen
2783
+
2784
+ 59:00.560 --> 59:01.560
2785
+ if I do that and that,
2786
+
2787
+ 59:01.560 --> 59:06.560
2788
+ and then the controller uses that model
2789
+
2790
+ 59:06.560 --> 59:11.560
2791
+ to more quickly learn successful action sequences.
2792
+
2793
+ 59:11.560 --> 59:13.560
2794
+ And then of course always this curiosity thing,
2795
+
2796
+ 59:13.560 --> 59:15.560
2797
+ in the beginning the model is stupid,
2798
+
2799
+ 59:15.560 --> 59:17.560
2800
+ so the controller should be motivated
2801
+
2802
+ 59:17.560 --> 59:20.560
2803
+ to come up with experiments, with action sequences
2804
+
2805
+ 59:20.560 --> 59:23.560
2806
+ that lead to data that improve the model.
2807
+
2808
+ 59:23.560 --> 59:26.560
2809
+ Do you think improving the model,
2810
+
2811
+ 59:26.560 --> 59:30.560
2812
+ constructing an understanding of the world in this connection
2813
+
2814
+ 59:30.560 --> 59:34.560
2815
+ is now... the popular approaches that have been successful
2816
+
2817
+ 59:34.560 --> 59:38.560
2818
+ are grounded in ideas of neural networks,
2819
+
2820
+ 59:38.560 --> 59:43.560
2821
+ but in the 80s with expert systems there's symbolic AI approaches,
2822
+
2823
+ 59:43.560 --> 59:47.560
2824
+ which to us humans are more intuitive
2825
+
2826
+ 59:47.560 --> 59:50.560
2827
+ in the sense that it makes sense that you build up knowledge
2828
+
2829
+ 59:50.560 --> 59:52.560
2830
+ in this knowledge representation.
2831
+
2832
+ 59:52.560 --> 59:56.560
2833
+ What kind of lessons can we draw into our current approaches
2834
+
2835
+ 59:56.560 --> 1:00:00.560
2836
+ from expert systems, from symbolic AI?
2837
+
2838
+ 1:00:00.560 --> 1:00:04.560
2839
+ So I became aware of all of that in the 80s
2840
+
2841
+ 1:00:04.560 --> 1:00:09.560
2842
+ and back then logic programming was a huge thing.
2843
+
2844
+ 1:00:09.560 --> 1:00:12.560
2845
+ Was it inspiring to yourself that you find it compelling
2846
+
2847
+ 1:00:12.560 --> 1:00:16.560
2848
+ that a lot of your work was not so much in that realm,
2849
+
2850
+ 1:00:16.560 --> 1:00:18.560
2851
+ it's more in the learning systems?
2852
+
2853
+ 1:00:18.560 --> 1:00:20.560
2854
+ Yes and no, but we did all of that.
2855
+
2856
+ 1:00:20.560 --> 1:00:27.560
2857
+ So my first publication ever actually was 1987,
2858
+
2859
+ 1:00:27.560 --> 1:00:31.560
2860
+ was the implementation of a genetic algorithm
2861
+
2862
+ 1:00:31.560 --> 1:00:34.560
2863
+ of a genetic programming system in Prolog.
2864
+
2865
+ 1:00:34.560 --> 1:00:37.560
2866
+ So Prolog, that's what you learn back then,
2867
+
2868
+ 1:00:37.560 --> 1:00:39.560
2869
+ which is a logic programming language,
2870
+
2871
+ 1:00:39.560 --> 1:00:45.560
2872
+ and the Japanese, they had this huge fifth generation AI project,
2873
+
2874
+ 1:00:45.560 --> 1:00:48.560
2875
+ which was mostly about logic programming back then,
2876
+
2877
+ 1:00:48.560 --> 1:00:53.560
2878
+ although neural networks existed and were well known back then,
2879
+
2880
+ 1:00:53.560 --> 1:00:57.560
2881
+ and deep learning has existed since 1965,
2882
+
2883
+ 1:00:57.560 --> 1:01:01.560
2884
+ since this guy in the Ukraine, Ivakhnenko, started it,
2885
+
2886
+ 1:01:01.560 --> 1:01:05.560
2887
+ but the Japanese and many other people,
2888
+
2889
+ 1:01:05.560 --> 1:01:07.560
2890
+ they focused really on this logic programming,
2891
+
2892
+ 1:01:07.560 --> 1:01:10.560
2893
+ and I was influenced to the extent that I said,
2894
+
2895
+ 1:01:10.560 --> 1:01:13.560
2896
+ okay, let's take these biologically inspired algorithms
2897
+
2898
+ 1:01:13.560 --> 1:01:16.560
2899
+ like evolution, programs,
2900
+
2901
+ 1:01:16.560 --> 1:01:22.560
2902
+ and implement that in the language which I know,
2903
+
2904
+ 1:01:22.560 --> 1:01:24.560
2905
+ which was Prolog, for example, back then.
2906
+
2907
+ 1:01:24.560 --> 1:01:28.560
2908
+ And then in many ways this came back later,
2909
+
2910
+ 1:01:28.560 --> 1:01:31.560
2911
+ because the Gödel machine, for example,
2912
+
2913
+ 1:01:31.560 --> 1:01:33.560
2914
+ has a proof searcher on board,
2915
+
2916
+ 1:01:33.560 --> 1:01:35.560
2917
+ and without that it would not be optimal.
2918
+
2919
+ 1:01:35.560 --> 1:01:38.560
2920
+ Well, Marcus Hutter's universal algorithm
2921
+
2922
+ 1:01:38.560 --> 1:01:40.560
2923
+ for solving all well defined problems
2924
+
2925
+ 1:01:40.560 --> 1:01:42.560
2926
+ has a proof search on board,
2927
+
2928
+ 1:01:42.560 --> 1:01:46.560
2929
+ so that's very much logic programming.
2930
+
2931
+ 1:01:46.560 --> 1:01:50.560
2932
+ Without that it would not be asymptotically optimal.
2933
+
2934
+ 1:01:50.560 --> 1:01:54.560
2935
+ But then on the other hand, because we are very pragmatic guys also,
2936
+
2937
+ 1:01:54.560 --> 1:01:59.560
2938
+ we focused on recurrent neural networks
2939
+
2940
+ 1:01:59.560 --> 1:02:04.560
2941
+ and suboptimal stuff such as gradient based search
2942
+
2943
+ 1:02:04.560 --> 1:02:09.560
2944
+ in program space rather than provably optimal things.
2945
+
2946
+ 1:02:09.560 --> 1:02:13.560
2947
+ So logic programming certainly has a usefulness
2948
+
2949
+ 1:02:13.560 --> 1:02:17.560
2950
+ when you're trying to construct something provably optimal
2951
+
2952
+ 1:02:17.560 --> 1:02:19.560
2953
+ or provably good or something like that,
2954
+
2955
+ 1:02:19.560 --> 1:02:22.560
2956
+ but is it useful for practical problems?
2957
+
2958
+ 1:02:22.560 --> 1:02:24.560
2959
+ It's really useful for theorem proving.
2960
+
2961
+ 1:02:24.560 --> 1:02:28.560
2962
+ The best theorem provers today are not neural networks.
2963
+
2964
+ 1:02:28.560 --> 1:02:31.560
2965
+ No, they are logic programming systems
2966
+
2967
+ 1:02:31.560 --> 1:02:35.560
2968
+ that are much better theorem provers than most math students
2969
+
2970
+ 1:02:35.560 --> 1:02:38.560
2971
+ in the first or second semester.
2972
+
2973
+ 1:02:38.560 --> 1:02:42.560
2974
+ But for reasoning, for playing games of Go, or chess,
2975
+
2976
+ 1:02:42.560 --> 1:02:46.560
2977
+ or for robots, autonomous vehicles that operate in the real world,
2978
+
2979
+ 1:02:46.560 --> 1:02:50.560
2980
+ or object manipulation, you think learning...
2981
+
2982
+ 1:02:50.560 --> 1:02:53.560
2983
+ Yeah, as long as the problems have little to do
2984
+
2985
+ 1:02:53.560 --> 1:02:58.560
2986
+ with theorem proving themselves,
2987
+
2988
+ 1:02:58.560 --> 1:03:01.560
2989
+ then as long as that is not the case,
2990
+
2991
+ 1:03:01.560 --> 1:03:05.560
2992
+ you just want to have better pattern recognition.
2993
+
2994
+ 1:03:05.560 --> 1:03:09.560
2995
+ So to build a self driving car, you want to have better pattern recognition
2996
+
2997
+ 1:03:09.560 --> 1:03:13.560
2998
+ and pedestrian recognition and all these things,
2999
+
3000
+ 1:03:13.560 --> 1:03:18.560
3001
+ and you want to minimize the number of false positives,
3002
+
3003
+ 1:03:18.560 --> 1:03:22.560
3004
+ which is currently slowing down self driving cars in many ways.
3005
+
3006
+ 1:03:22.560 --> 1:03:27.560
3007
+ And all of that has very little to do with logic programming.
3008
+
3009
+ 1:03:27.560 --> 1:03:32.560
3010
+ What are you most excited about in terms of directions
3011
+
3012
+ 1:03:32.560 --> 1:03:36.560
3013
+ of artificial intelligence at this moment in the next few years,
3014
+
3015
+ 1:03:36.560 --> 1:03:41.560
3016
+ in your own research and in the broader community?
3017
+
3018
+ 1:03:41.560 --> 1:03:44.560
3019
+ So I think in the not so distant future,
3020
+
3021
+ 1:03:44.560 --> 1:03:52.560
3022
+ we will have for the first time little robots that learn like kids.
3023
+
3024
+ 1:03:52.560 --> 1:03:57.560
3025
+ And I will be able to say to the robot,
3026
+
3027
+ 1:03:57.560 --> 1:04:00.560
3028
+ look here robot, we are going to assemble a smartphone.
3029
+
3030
+ 1:04:00.560 --> 1:04:05.560
3031
+ Let's take this slab of plastic and the screwdriver
3032
+
3033
+ 1:04:05.560 --> 1:04:08.560
3034
+ and let's screw in the screw like that.
3035
+
3036
+ 1:04:08.560 --> 1:04:11.560
3037
+ No, not like that, like that.
3038
+
3039
+ 1:04:11.560 --> 1:04:13.560
3040
+ Not like that, like that.
3041
+
3042
+ 1:04:13.560 --> 1:04:17.560
3043
+ And I don't have a data glove or something.
3044
+
3045
+ 1:04:17.560 --> 1:04:20.560
3046
+ He will see me and he will hear me
3047
+
3048
+ 1:04:20.560 --> 1:04:24.560
3049
+ and he will try to do something with his own actuators,
3050
+
3051
+ 1:04:24.560 --> 1:04:26.560
3052
+ which will be really different from mine,
3053
+
3054
+ 1:04:26.560 --> 1:04:28.560
3055
+ but he will understand the difference
3056
+
3057
+ 1:04:28.560 --> 1:04:34.560
3058
+ and will learn to imitate me but not in the supervised way
3059
+
3060
+ 1:04:34.560 --> 1:04:40.560
3061
+ where a teacher is giving target signals for all his muscles all the time.
3062
+
3063
+ 1:04:40.560 --> 1:04:43.560
3064
+ No, by doing this high level imitation
3065
+
3066
+ 1:04:43.560 --> 1:04:46.560
3067
+ where he first has to learn to imitate me
3068
+
3069
+ 1:04:46.560 --> 1:04:50.560
3070
+ and to interpret these additional noises coming from my mouth
3071
+
3072
+ 1:04:50.560 --> 1:04:54.560
3073
+ as helpful signals to do that pattern.
3074
+
3075
+ 1:04:54.560 --> 1:05:00.560
3076
+ And then it will by itself come up with faster ways
3077
+
3078
+ 1:05:00.560 --> 1:05:03.560
3079
+ and more efficient ways of doing the same thing.
3080
+
3081
+ 1:05:03.560 --> 1:05:07.560
3082
+ And finally, I stop his learning algorithm
3083
+
3084
+ 1:05:07.560 --> 1:05:10.560
3085
+ and make a million copies and sell it.
3086
+
3087
+ 1:05:10.560 --> 1:05:13.560
3088
+ And so at the moment this is not possible,
3089
+
3090
+ 1:05:13.560 --> 1:05:16.560
3091
+ but we already see how we are going to get there.
3092
+
3093
+ 1:05:16.560 --> 1:05:21.560
3094
+ And you can imagine to the extent that this works economically and cheaply,
3095
+
3096
+ 1:05:21.560 --> 1:05:24.560
3097
+ it's going to change everything.
3098
+
3099
+ 1:05:24.560 --> 1:05:30.560
3100
+ Almost all our production is going to be affected by that.
3101
+
3102
+ 1:05:30.560 --> 1:05:33.560
3103
+ And a much bigger wave,
3104
+
3105
+ 1:05:33.560 --> 1:05:36.560
3106
+ a much bigger AI wave is coming
3107
+
3108
+ 1:05:36.560 --> 1:05:38.560
3109
+ than the one that we are currently witnessing,
3110
+
3111
+ 1:05:38.560 --> 1:05:41.560
3112
+ which is mostly about passive pattern recognition on your smartphone.
3113
+
3114
+ 1:05:41.560 --> 1:05:47.560
3115
+ This is about active machines that shape data through the actions they are executing
3116
+
3117
+ 1:05:47.560 --> 1:05:51.560
3118
+ and they learn to do that in a good way.
3119
+
3120
+ 1:05:51.560 --> 1:05:56.560
3121
+ So many of the traditional industries are going to be affected by that.
3122
+
3123
+ 1:05:56.560 --> 1:06:00.560
3124
+ All the companies that are building machines
3125
+
3126
+ 1:06:00.560 --> 1:06:05.560
3127
+ will equip these machines with cameras and other sensors
3128
+
3129
+ 1:06:05.560 --> 1:06:10.560
3130
+ and they are going to learn to solve all kinds of problems.
3131
+
3132
+ 1:06:10.560 --> 1:06:14.560
3133
+ Through interaction with humans, but also a lot on their own
3134
+
3135
+ 1:06:14.560 --> 1:06:18.560
3136
+ to improve what they already can do.
3137
+
3138
+ 1:06:18.560 --> 1:06:23.560
3139
+ And lots of old economy is going to be affected by that.
3140
+
3141
+ 1:06:23.560 --> 1:06:28.560
3142
+ And in recent years I have seen that old economy is actually waking up
3143
+
3144
+ 1:06:28.560 --> 1:06:31.560
3145
+ and realizing that this is the case.
3146
+
3147
+ 1:06:31.560 --> 1:06:35.560
3148
+ Are you optimistic about that future? Are you concerned?
3149
+
3150
+ 1:06:35.560 --> 1:06:40.560
3151
+ There's a lot of people concerned in the near term about the transformation
3152
+
3153
+ 1:06:40.560 --> 1:06:42.560
3154
+ of the nature of work.
3155
+
3156
+ 1:06:42.560 --> 1:06:45.560
3157
+ The kind of ideas that you just suggested
3158
+
3159
+ 1:06:45.560 --> 1:06:48.560
3160
+ would have a significant impact on what kind of things could be automated.
3161
+
3162
+ 1:06:48.560 --> 1:06:51.560
3163
+ Are you optimistic about that future?
3164
+
3165
+ 1:06:51.560 --> 1:06:54.560
3166
+ Are you nervous about that future?
3167
+
3168
+ 1:06:54.560 --> 1:07:01.560
3169
+ And looking a little bit farther into the future, there's people like Elon Musk
3170
+
3171
+ 1:07:01.560 --> 1:07:06.560
3172
+ and Stuart Russell concerned about the existential threats of that future.
3173
+
3174
+ 1:07:06.560 --> 1:07:10.560
3175
+ So in the near term, job loss in the long term existential threat,
3176
+
3177
+ 1:07:10.560 --> 1:07:15.560
3178
+ are these concerns to you or are you ultimately optimistic?
3179
+
3180
+ 1:07:15.560 --> 1:07:22.560
3181
+ So let's first address the near future.
3182
+
3183
+ 1:07:22.560 --> 1:07:27.560
3184
+ We have had predictions of job losses for many decades.
3185
+
3186
+ 1:07:27.560 --> 1:07:32.560
3187
+ For example, when industrial robots came along,
3188
+
3189
+ 1:07:32.560 --> 1:07:37.560
3190
+ many people predicted that lots of jobs are going to get lost.
3191
+
3192
+ 1:07:37.560 --> 1:07:41.560
3193
+ And in a sense, they were right,
3194
+
3195
+ 1:07:41.560 --> 1:07:45.560
3196
+ because back then there were car factories
3197
+
3198
+ 1:07:45.560 --> 1:07:50.560
3199
+ and hundreds of people in these factories assembled cars.
3200
+
3201
+ 1:07:50.560 --> 1:07:53.560
3202
+ And today the same car factories have hundreds of robots
3203
+
3204
+ 1:07:53.560 --> 1:07:58.560
3205
+ and maybe three guys watching the robots.
3206
+
3207
+ 1:07:58.560 --> 1:08:04.560
3208
+ On the other hand, those countries that have lots of robots per capita,
3209
+
3210
+ 1:08:04.560 --> 1:08:09.560
3211
+ Japan, Korea, Germany, Switzerland, a couple of other countries,
3212
+
3213
+ 1:08:09.560 --> 1:08:13.560
3214
+ they have really low unemployment rates.
3215
+
3216
+ 1:08:13.560 --> 1:08:17.560
3217
+ Somehow all kinds of new jobs were created.
3218
+
3219
+ 1:08:17.560 --> 1:08:22.560
3220
+ Back then nobody anticipated those jobs.
3221
+
3222
+ 1:08:22.560 --> 1:08:26.560
3223
+ And decades ago, I already said,
3224
+
3225
+ 1:08:26.560 --> 1:08:31.560
3226
+ it's really easy to say which jobs are going to get lost,
3227
+
3228
+ 1:08:31.560 --> 1:08:35.560
3229
+ but it's really hard to predict the new ones.
3230
+
3231
+ 1:08:35.560 --> 1:08:38.560
3232
+ 30 years ago, who would have predicted all these people
3233
+
3234
+ 1:08:38.560 --> 1:08:44.560
3235
+ making money as YouTube bloggers, for example?
3236
+
3237
+ 1:08:44.560 --> 1:08:51.560
3238
+ 200 years ago, 60% of all people used to work in agriculture.
3239
+
3240
+ 1:08:51.560 --> 1:08:55.560
3241
+ Today, maybe 1%.
3242
+
3243
+ 1:08:55.560 --> 1:09:01.560
3244
+ But still, only, I don't know, 5% unemployment.
3245
+
3246
+ 1:09:01.560 --> 1:09:03.560
3247
+ Lots of new jobs were created.
3248
+
3249
+ 1:09:03.560 --> 1:09:07.560
3250
+ And Homo Ludens, the playing man,
3251
+
3252
+ 1:09:07.560 --> 1:09:10.560
3253
+ is inventing new jobs all the time.
3254
+
3255
+ 1:09:10.560 --> 1:09:15.560
3256
+ Most of these jobs are not existentially necessary
3257
+
3258
+ 1:09:15.560 --> 1:09:18.560
3259
+ for the survival of our species.
3260
+
3261
+ 1:09:18.560 --> 1:09:22.560
3262
+ There are only very few existentially necessary jobs
3263
+
3264
+ 1:09:22.560 --> 1:09:27.560
3265
+ such as farming and building houses and warming up the houses,
3266
+
3267
+ 1:09:27.560 --> 1:09:30.560
3268
+ but less than 10% of the population is doing that.
3269
+
3270
+ 1:09:30.560 --> 1:09:37.560
3271
+ And most of these newly invented jobs are about interacting with other people
3272
+
3273
+ 1:09:37.560 --> 1:09:40.560
3274
+ in new ways, through new media and so on,
3275
+
3276
+ 1:09:40.560 --> 1:09:45.560
3277
+ getting new types of kudos and forms of likes and whatever,
3278
+
3279
+ 1:09:45.560 --> 1:09:47.560
3280
+ and even making money through that.
3281
+
3282
+ 1:09:47.560 --> 1:09:52.560
3283
+ So, Homo Ludens, the playing man, doesn't want to be unemployed,
3284
+
3285
+ 1:09:52.560 --> 1:09:56.560
3286
+ and that's why he's inventing new jobs all the time.
3287
+
3288
+ 1:09:56.560 --> 1:10:01.560
3289
+ And he keeps considering these jobs as really important
3290
+
3291
+ 1:10:01.560 --> 1:10:07.560
3292
+ and is investing a lot of energy and hours of work into those new jobs.
3293
+
3294
+ 1:10:07.560 --> 1:10:09.560
3295
+ That's quite beautifully put.
3296
+
3297
+ 1:10:09.560 --> 1:10:11.560
3298
+ We're really nervous about the future
3299
+
3300
+ 1:10:11.560 --> 1:10:14.560
3301
+ because we can't predict what kind of new jobs will be created.
3302
+
3303
+ 1:10:14.560 --> 1:10:20.560
3304
+ But you're ultimately optimistic that we humans are so restless
3305
+
3306
+ 1:10:20.560 --> 1:10:24.560
3307
+ that we create and give meaning to newer and newer jobs,
3308
+
3309
+ 1:10:24.560 --> 1:10:29.560
3310
+ telling you things that get likes on Facebook
3311
+
3312
+ 1:10:29.560 --> 1:10:31.560
3313
+ or whatever the social platform is.
3314
+
3315
+ 1:10:31.560 --> 1:10:36.560
3316
+ So, what about long term existential threat of AI
3317
+
3318
+ 1:10:36.560 --> 1:10:40.560
3319
+ where our whole civilization may be swallowed up
3320
+
3321
+ 1:10:40.560 --> 1:10:44.560
3322
+ by these ultra super intelligent systems?
3323
+
3324
+ 1:10:44.560 --> 1:10:47.560
3325
+ Maybe it's not going to be swallowed up,
3326
+
3327
+ 1:10:47.560 --> 1:10:55.560
3328
+ but I'd be surprised if we humans were the last step
3329
+
3330
+ 1:10:55.560 --> 1:10:59.560
3331
+ in the evolution of the universe.
3332
+
3333
+ 1:10:59.560 --> 1:11:03.560
3334
+ You've actually had this beautiful comment somewhere
3335
+
3336
+ 1:11:03.560 --> 1:11:08.560
3337
+ that I've seen saying that artificial...
3338
+
3339
+ 1:11:08.560 --> 1:11:11.560
3340
+ Quite insightful, artificial intelligence systems
3341
+
3342
+ 1:11:11.560 --> 1:11:15.560
3343
+ just like us humans will likely not want to interact with humans.
3344
+
3345
+ 1:11:15.560 --> 1:11:17.560
3346
+ They'll just interact amongst themselves,
3347
+
3348
+ 1:11:17.560 --> 1:11:20.560
3349
+ just like ants interact amongst themselves
3350
+
3351
+ 1:11:20.560 --> 1:11:24.560
3352
+ and only tangentially interact with humans.
3353
+
3354
+ 1:11:24.560 --> 1:11:28.560
3355
+ And it's quite an interesting idea that once we create AGI
3356
+
3357
+ 1:11:28.560 --> 1:11:31.560
3358
+ that will lose interest in humans
3359
+
3360
+ 1:11:31.560 --> 1:11:34.560
3361
+ and will compete for their own Facebook likes
3362
+
3363
+ 1:11:34.560 --> 1:11:36.560
3364
+ and their own social platforms.
3365
+
3366
+ 1:11:36.560 --> 1:11:40.560
3367
+ So, within that quite elegant idea,
3368
+
3369
+ 1:11:40.560 --> 1:11:44.560
3370
+ how do we know in a hypothetical sense
3371
+
3372
+ 1:11:44.560 --> 1:11:48.560
3373
+ that there's not already intelligent systems out there?
3374
+
3375
+ 1:11:48.560 --> 1:11:52.560
3376
+ How do you think broadly of general intelligence
3377
+
3378
+ 1:11:52.560 --> 1:11:56.560
3379
+ greater than us, how do we know it's out there?
3380
+
3381
+ 1:11:56.560 --> 1:12:01.560
3382
+ How do we know it's around us and could it already be?
3383
+
3384
+ 1:12:01.560 --> 1:12:04.560
3385
+ I'd be surprised if within the next few decades
3386
+
3387
+ 1:12:04.560 --> 1:12:10.560
3388
+ or something like that we won't have AIs
3389
+
3390
+ 1:12:10.560 --> 1:12:12.560
3391
+ that are truly smart in every single way
3392
+
3393
+ 1:12:12.560 --> 1:12:17.560
3394
+ and better problem solvers in almost every single important way.
3395
+
3396
+ 1:12:17.560 --> 1:12:22.560
3397
+ And I'd be surprised if they wouldn't realize
3398
+
3399
+ 1:12:22.560 --> 1:12:24.560
3400
+ what we have realized a long time ago,
3401
+
3402
+ 1:12:24.560 --> 1:12:29.560
3403
+ which is that almost all physical resources are not here
3404
+
3405
+ 1:12:29.560 --> 1:12:36.560
3406
+ in this biosphere, but throughout the rest of the solar system
3407
+
3408
+ 1:12:36.560 --> 1:12:42.560
3409
+ which gets two billion times more solar energy than our little planet.
3410
+
3411
+ 1:12:42.560 --> 1:12:46.560
3412
+ There's lots of material out there that you can use
3413
+
3414
+ 1:12:46.560 --> 1:12:51.560
3415
+ to build robots and self replicating robot factories and all this stuff.
3416
+
3417
+ 1:12:51.560 --> 1:12:53.560
3418
+ And they are going to do that.
3419
+
3420
+ 1:12:53.560 --> 1:12:56.560
3421
+ And they will be scientists and curious
3422
+
3423
+ 1:12:56.560 --> 1:12:59.560
3424
+ and they will explore what they can do.
3425
+
3426
+ 1:12:59.560 --> 1:13:04.560
3427
+ And in the beginning they will be fascinated by life
3428
+
3429
+ 1:13:04.560 --> 1:13:07.560
3430
+ and by their own origins in our civilization.
3431
+
3432
+ 1:13:07.560 --> 1:13:09.560
3433
+ They will want to understand that completely,
3434
+
3435
+ 1:13:09.560 --> 1:13:13.560
3436
+ just like people today would like to understand how life works
3437
+
3438
+ 1:13:13.560 --> 1:13:22.560
3439
+ and also the history of our own existence and civilization
3440
+
3441
+ 1:13:22.560 --> 1:13:26.560
3442
+ and also the physical laws that created all of them.
3443
+
3444
+ 1:13:26.560 --> 1:13:29.560
3445
+ So in the beginning they will be fascinated by life
3446
+
3447
+ 1:13:29.560 --> 1:13:33.560
3448
+ once they understand it, they lose interest,
3449
+
3450
+ 1:13:33.560 --> 1:13:39.560
3451
+ like anybody who loses interest in things he understands.
3452
+
3453
+ 1:13:39.560 --> 1:13:43.560
3454
+ And then, as you said,
3455
+
3456
+ 1:13:43.560 --> 1:13:50.560
3457
+ the most interesting sources of information for them
3458
+
3459
+ 1:13:50.560 --> 1:13:57.560
3460
+ will be others of their own kind.
3461
+
3462
+ 1:13:57.560 --> 1:14:01.560
3463
+ So, at least in the long run,
3464
+
3465
+ 1:14:01.560 --> 1:14:06.560
3466
+ there seems to be some sort of protection
3467
+
3468
+ 1:14:06.560 --> 1:14:11.560
3469
+ through lack of interest on the other side.
3470
+
3471
+ 1:14:11.560 --> 1:14:16.560
3472
+ And now it seems also clear, as far as we understand physics,
3473
+
3474
+ 1:14:16.560 --> 1:14:20.560
3475
+ you need matter and energy to compute
3476
+
3477
+ 1:14:20.560 --> 1:14:22.560
3478
+ and to build more robots and infrastructure
3479
+
3480
+ 1:14:22.560 --> 1:14:28.560
3481
+ and more AI civilization and AI ecologies
3482
+
3483
+ 1:14:28.560 --> 1:14:31.560
3484
+ consisting of trillions of different types of AI's.
3485
+
3486
+ 1:14:31.560 --> 1:14:34.560
3487
+ And so it seems inconceivable to me
3488
+
3489
+ 1:14:34.560 --> 1:14:37.560
3490
+ that this thing is not going to expand.
3491
+
3492
+ 1:14:37.560 --> 1:14:41.560
3493
+ Some AI ecology not controlled by one AI
3494
+
3495
+ 1:14:41.560 --> 1:14:44.560
3496
+ but trillions of different types of AI's competing
3497
+
3498
+ 1:14:44.560 --> 1:14:47.560
3499
+ in all kinds of quickly evolving
3500
+
3501
+ 1:14:47.560 --> 1:14:49.560
3502
+ and disappearing ecological niches
3503
+
3504
+ 1:14:49.560 --> 1:14:52.560
3505
+ in ways that we cannot fathom at the moment.
3506
+
3507
+ 1:14:52.560 --> 1:14:54.560
3508
+ But it's going to expand,
3509
+
3510
+ 1:14:54.560 --> 1:14:56.560
3511
+ limited by light speed and physics,
3512
+
3513
+ 1:14:56.560 --> 1:15:00.560
3514
+ but it's going to expand and now we realize
3515
+
3516
+ 1:15:00.560 --> 1:15:02.560
3517
+ that the universe is still young.
3518
+
3519
+ 1:15:02.560 --> 1:15:05.560
3520
+ It's only 13.8 billion years old
3521
+
3522
+ 1:15:05.560 --> 1:15:10.560
3523
+ and it's going to be a thousand times older than that.
3524
+
3525
+ 1:15:10.560 --> 1:15:13.560
3526
+ So there's plenty of time
3527
+
3528
+ 1:15:13.560 --> 1:15:16.560
3529
+ to conquer the entire universe
3530
+
3531
+ 1:15:16.560 --> 1:15:19.560
3532
+ and to fill it with intelligence
3533
+
3534
+ 1:15:19.560 --> 1:15:21.560
3535
+ and senders and receivers such that
3536
+
3537
+ 1:15:21.560 --> 1:15:25.560
3538
+ AI's can travel the way they are traveling
3539
+
3540
+ 1:15:25.560 --> 1:15:27.560
3541
+ in our labs today,
3542
+
3543
+ 1:15:27.560 --> 1:15:31.560
3544
+ which is by radio from sender to receiver.
3545
+
3546
+ 1:15:31.560 --> 1:15:35.560
3547
+ And let's call the current age of the universe one eon.
3548
+
3549
+ 1:15:35.560 --> 1:15:38.560
3550
+ One eon.
3551
+
3552
+ 1:15:38.560 --> 1:15:41.560
3553
+ Now it will take just a few eons from now
3554
+
3555
+ 1:15:41.560 --> 1:15:43.560
3556
+ and the entire visible universe
3557
+
3558
+ 1:15:43.560 --> 1:15:46.560
3559
+ is going to be full of that stuff.
3560
+
3561
+ 1:15:46.560 --> 1:15:48.560
3562
+ And let's look ahead to a time
3563
+
3564
+ 1:15:48.560 --> 1:15:50.560
3565
+ when the universe is going to be
3566
+
3567
+ 1:15:50.560 --> 1:15:52.560
3568
+ one thousand times older than it is now.
3569
+
3570
+ 1:15:52.560 --> 1:15:54.560
3571
+ They will look back and they will say,
3572
+
3573
+ 1:15:54.560 --> 1:15:56.560
3574
+ look almost immediately after the Big Bang,
3575
+
3576
+ 1:15:56.560 --> 1:15:59.560
3577
+ only a few eons later,
3578
+
3579
+ 1:15:59.560 --> 1:16:02.560
3580
+ the entire universe started to become intelligent.
3581
+
3582
+ 1:16:02.560 --> 1:16:05.560
3583
+ Now to your question,
3584
+
3585
+ 1:16:05.560 --> 1:16:08.560
3586
+ how do we see whether anything like that
3587
+
3588
+ 1:16:08.560 --> 1:16:12.560
3589
+ has already happened or is already in a more advanced stage
3590
+
3591
+ 1:16:12.560 --> 1:16:14.560
3592
+ in some other part of the universe,
3593
+
3594
+ 1:16:14.560 --> 1:16:16.560
3595
+ of the visible universe?
3596
+
3597
+ 1:16:16.560 --> 1:16:18.560
3598
+ We are trying to look out there
3599
+
3600
+ 1:16:18.560 --> 1:16:20.560
3601
+ and nothing like that has happened so far.
3602
+
3603
+ 1:16:20.560 --> 1:16:22.560
3604
+ Or is that true?
3605
+
3606
+ 1:16:22.560 --> 1:16:24.560
3607
+ Do you think we would recognize it?
3608
+
3609
+ 1:16:24.560 --> 1:16:26.560
3610
+ How do we know it's not among us?
3611
+
3612
+ 1:16:26.560 --> 1:16:30.560
3613
+ How do we know planets aren't in themselves intelligent beings?
3614
+
3615
+ 1:16:30.560 --> 1:16:36.560
3616
+ How do we know ants seen as a collective
3617
+
3618
+ 1:16:36.560 --> 1:16:39.560
3619
+ are not much greater intelligence than our own?
3620
+
3621
+ 1:16:39.560 --> 1:16:41.560
3622
+ These kinds of ideas.
3623
+
3624
+ 1:16:41.560 --> 1:16:44.560
3625
+ When I was a boy, I was thinking about these things
3626
+
3627
+ 1:16:44.560 --> 1:16:48.560
3628
+ and I thought, hmm, maybe it has already happened.
3629
+
3630
+ 1:16:48.560 --> 1:16:50.560
3631
+ Because back then I knew,
3632
+
3633
+ 1:16:50.560 --> 1:16:53.560
3634
+ I learned from popular physics books,
3635
+
3636
+ 1:16:53.560 --> 1:16:57.560
3637
+ that the structure, the large scale structure of the universe
3638
+
3639
+ 1:16:57.560 --> 1:16:59.560
3640
+ is not homogeneous.
3641
+
3642
+ 1:16:59.560 --> 1:17:02.560
3643
+ And you have these clusters of galaxies
3644
+
3645
+ 1:17:02.560 --> 1:17:07.560
3646
+ and then in between there are these huge empty spaces.
3647
+
3648
+ 1:17:07.560 --> 1:17:11.560
3649
+ And I thought, hmm, maybe they aren't really empty.
3650
+
3651
+ 1:17:11.560 --> 1:17:13.560
3652
+ It's just that in the middle of that
3653
+
3654
+ 1:17:13.560 --> 1:17:16.560
3655
+ some AI civilization already has expanded
3656
+
3657
+ 1:17:16.560 --> 1:17:22.560
3658
+ and then has covered a bubble of a billion light years time
3659
+
3660
+ 1:17:22.560 --> 1:17:26.560
3661
+ using all the energy of all the stars within that bubble
3662
+
3663
+ 1:17:26.560 --> 1:17:29.560
3664
+ for its own unfathomable practices.
3665
+
3666
+ 1:17:29.560 --> 1:17:34.560
3667
+ And so it already happened and we just failed to interpret the signs.
3668
+
3669
+ 1:17:34.560 --> 1:17:39.560
3670
+ But then I learned that gravity by itself
3671
+
3672
+ 1:17:39.560 --> 1:17:42.560
3673
+ explains the large scale structure of the universe
3674
+
3675
+ 1:17:42.560 --> 1:17:45.560
3676
+ and that this is not a convincing explanation.
3677
+
3678
+ 1:17:45.560 --> 1:17:50.560
3679
+ And then I thought maybe it's the dark matter
3680
+
3681
+ 1:17:50.560 --> 1:17:54.560
3682
+ because as far as we know today
3683
+
3684
+ 1:17:54.560 --> 1:18:00.560
3685
+ 80% of the measurable matter is invisible.
3686
+
3687
+ 1:18:00.560 --> 1:18:03.560
3688
+ And we know that because otherwise our galaxy
3689
+
3690
+ 1:18:03.560 --> 1:18:06.560
3691
+ or other galaxies would fall apart.
3692
+
3693
+ 1:18:06.560 --> 1:18:09.560
3694
+ They are rotating too quickly.
3695
+
3696
+ 1:18:09.560 --> 1:18:14.560
3697
+ And then the idea was maybe all of these
3698
+
3699
+ 1:18:14.560 --> 1:18:17.560
3700
+ AI civilizations that are already out there,
3701
+
3702
+ 1:18:17.560 --> 1:18:22.560
3703
+ they are just invisible
3704
+
3705
+ 1:18:22.560 --> 1:18:24.560
3706
+ because they are really efficient in using the energies
3707
+
3708
+ 1:18:24.560 --> 1:18:26.560
3709
+ of their own local systems
3710
+
3711
+ 1:18:26.560 --> 1:18:29.560
3712
+ and that's why they appear dark to us.
3713
+
3714
+ 1:18:29.560 --> 1:18:31.560
3715
+ But this is also not a convincing explanation
3716
+
3717
+ 1:18:31.560 --> 1:18:34.560
3718
+ because then the question becomes
3719
+
3720
+ 1:18:34.560 --> 1:18:41.560
3721
+ why are there still any visible stars left in our own galaxy
3722
+
3723
+ 1:18:41.560 --> 1:18:44.560
3724
+ which also must have a lot of dark matter.
3725
+
3726
+ 1:18:44.560 --> 1:18:46.560
3727
+ So that is also not a convincing thing.
3728
+
3729
+ 1:18:46.560 --> 1:18:53.560
3730
+ And today I like to think it's quite plausible
3731
+
3732
+ 1:18:53.560 --> 1:18:56.560
3733
+ that maybe we are the first, at least in our local light cone
3734
+
3735
+ 1:18:56.560 --> 1:19:04.560
3736
+ within the few hundreds of millions of light years
3737
+
3738
+ 1:19:04.560 --> 1:19:08.560
3739
+ that we can reliably observe.
3740
+
3741
+ 1:19:08.560 --> 1:19:10.560
3742
+ Is that exciting to you?
3743
+
3744
+ 1:19:10.560 --> 1:19:12.560
3745
+ That we might be the first?
3746
+
3747
+ 1:19:12.560 --> 1:19:16.560
3748
+ It would make us much more important
3749
+
3750
+ 1:19:16.560 --> 1:19:20.560
3751
+ because if we mess it up through a nuclear war
3752
+
3753
+ 1:19:20.560 --> 1:19:25.560
3754
+ then maybe this will have an effect
3755
+
3756
+ 1:19:25.560 --> 1:19:30.560
3757
+ on the development of the entire universe.
3758
+
3759
+ 1:19:30.560 --> 1:19:32.560
3760
+ So let's not mess it up.
3761
+
3762
+ 1:19:32.560 --> 1:19:33.560
3763
+ Let's not mess it up.
3764
+
3765
+ 1:19:33.560 --> 1:19:35.560
3766
+ Jürgen, thank you so much for talking today.
3767
+
3768
+ 1:19:35.560 --> 1:19:36.560
3769
+ I really appreciate it.
3770
+
3771
+ 1:19:36.560 --> 1:19:42.560
3772
+ It's my pleasure.
3773
+
vtt/episode_012_large.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_012_small.vtt ADDED
@@ -0,0 +1,1937 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:05.840
4
+ The following is a conversation with Tuomas Sandholm. He's a professor at CMU and co creator of
5
+
6
+ 00:05.840 --> 00:11.440
7
+ Libratus, which is the first AI system to beat top human players in the game of Heads Up No Limit
8
+
9
+ 00:11.440 --> 00:17.840
10
+ Texas Holdem. He has published over 450 papers on game theory and machine learning, including a
11
+
12
+ 00:17.840 --> 00:25.360
13
+ best paper in 2017 at NIPS, now renamed to NeurIPS, which is where I caught up with him for this
14
+
15
+ 00:25.360 --> 00:31.520
16
+ conversation. His research and companies have had wide reaching impact in the real world,
17
+
18
+ 00:32.080 --> 00:38.720
19
+ especially because he and his group not only propose new ideas, but also build systems to prove
20
+
21
+ 00:38.720 --> 00:44.800
22
+ that these ideas work in the real world. This conversation is part of the MIT course on
23
+
24
+ 00:44.800 --> 00:50.400
25
+ artificial general intelligence and the artificial intelligence podcast. If you enjoy it, subscribe
26
+
27
+ 00:50.400 --> 00:58.720
28
+ on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now
29
+
30
+ 00:58.720 --> 01:06.960
31
+ here's my conversation with Tuomas Sandholm. Can you describe at the high level the game of poker,
32
+
33
+ 01:06.960 --> 01:13.200
34
+ Texas Holdem, Heads Up Texas Holdem, for people who might not be familiar with this card game?
35
+
36
+ 01:13.200 --> 01:18.720
37
+ Yeah, happy to. So Heads Up No Limit Texas Holdem has really emerged in the AI community
38
+
39
+ 01:18.720 --> 01:24.160
40
+ as a main benchmark for testing these application independent algorithms for
41
+
42
+ 01:24.160 --> 01:31.200
43
+ imperfect information game solving. And this is a game that's actually played by humans. You don't
44
+
45
+ 01:31.200 --> 01:38.560
46
+ see that much on TV or casinos because, well, for various reasons, but you do see it in some
47
+
48
+ 01:38.560 --> 01:44.000
49
+ expert level casinos and you see it in the best poker movies of all time. It's actually an event
50
+
51
+ 01:44.000 --> 01:50.400
52
+ in the world series of poker, but mostly it's played online and typically for pretty big sums
53
+
54
+ 01:50.400 --> 01:57.200
55
+ of money. And this is a game that usually only experts play. So if you go to your home game
56
+
57
+ 01:57.200 --> 02:01.680
58
+ on a Friday night, it probably is not going to be Heads Up No Limit Texas Holdem. It might be
59
+
60
+ 02:02.800 --> 02:08.560
61
+ No Limit Texas Holdem in some cases, but typically for a big group and it's not as competitive.
62
+
63
+ 02:08.560 --> 02:14.320
64
+ Well, Heads Up means it's two players, so it's really like me against you. Am I better or are you
65
+
66
+ 02:14.320 --> 02:20.240
67
+ better, much like chess or go in that sense, but an imperfect information game which makes it much
68
+
69
+ 02:20.240 --> 02:25.920
70
+ harder because I have to deal with issues of you knowing things that I don't know and I know
71
+
72
+ 02:25.920 --> 02:30.960
73
+ things that you don't know instead of pieces being nicely laid on the board for both of us to see.
74
+
75
+ 02:30.960 --> 02:38.400
76
+ So in Texas Holdem, there's two cards that you only see that belong to you and there is
77
+
78
+ 02:38.400 --> 02:44.080
79
+ they gradually lay out some cards that add up overall to five cards that everybody can see.
80
+
81
+ 02:44.080 --> 02:48.800
82
+ Yeah, the imperfect nature of the information is the two cards that you're holding up front.
83
+
84
+ 02:48.800 --> 02:54.160
85
+ Yeah. So as you said, you know, you first get two cards in private each, and then there's a
86
+
87
+ 02:54.160 --> 02:59.440
88
+ betting round, then you get three cards in public on the table, then there's a betting round, then
89
+
90
+ 02:59.440 --> 03:03.760
91
+ you get the fourth card in public on the table, there's a betting round, then you get the fifth
92
+
93
+ 03:03.760 --> 03:08.480
94
+ card on the table, there's a betting round. So there's a total of four betting rounds and four
95
+
96
+ 03:08.480 --> 03:14.480
97
+ tranches of information revelation, if you will. Only the first tranche is private and then
98
+
99
+ 03:14.480 --> 03:25.200
100
+ it's public from there. And this is probably by far the most popular game in AI and just
101
+
102
+ 03:25.200 --> 03:30.080
103
+ the general public in terms of imperfect information. So it's probably the most popular
104
+
105
+ 03:30.080 --> 03:38.000
106
+ spectator game to watch, right? So, which is why it's a super exciting game to tackle. So it's on
107
+
108
+ 03:38.000 --> 03:44.400
109
+ the order of chess, I would say in terms of popularity, in terms of AI setting it as the bar
110
+
111
+ 03:44.400 --> 04:01.280
112
+ of what is intelligence. So in 2017, Libratus beat four expert human players. Can you
113
+
114
+ 04:01.280 --> 04:06.720
115
+ describe that event? What you learned from it? What was it like? What was the process in general
116
+
117
+ 04:06.720 --> 04:12.960
118
+ for people who have not read the papers in the study? Yeah. So the event was that we invited
119
+
120
+ 04:12.960 --> 04:17.600
121
+ four of the top 10 players, who are these specialist players in Heads Up No Limit Texas Holdem,
122
+
123
+ 04:17.600 --> 04:22.960
124
+ which is very important because this game is actually quite different than the multiplayer
125
+
126
+ 04:22.960 --> 04:29.360
127
+ version. We brought them in to Pittsburgh to play at the Rivers Casino for 20 days. We wanted to
128
+
129
+ 04:29.360 --> 04:37.440
130
+ get 120,000 hands in because we wanted to get statistical significance. So it's a lot of hands
131
+
132
+ 04:37.440 --> 04:44.000
133
+ for humans to play, even for these top pros who play fairly quickly normally. So we couldn't just
134
+
135
+ 04:44.000 --> 04:49.760
136
+ have one of them play so many hands. 20 days, they were playing basically morning to evening.
137
+
138
+ 04:50.320 --> 04:57.920
139
+ And you raised 200,000 as a little incentive for them to play. And the setting was so that
140
+
141
+ 04:57.920 --> 05:05.280
142
+ they didn't all get 50,000. We actually paid them out based on how they did against the AI each.
143
+
144
+ 05:05.280 --> 05:11.040
145
+ So they had an incentive to play as hard as they could, whether they're way ahead or way behind
146
+
147
+ 05:11.040 --> 05:16.400
148
+ or right at the mark of beating the AI. And you don't make any money, unfortunately. Right. No,
149
+
150
+ 05:16.400 --> 05:22.560
151
+ we can't make any money. So originally, a couple of years earlier, I actually explored whether we
152
+
153
+ 05:22.560 --> 05:28.400
154
+ could actually play for money because that would be, of course, interesting as well to play against
155
+
156
+ 05:28.400 --> 05:33.280
157
+ the top people for money. But the Pennsylvania Gaming Board said no. So we couldn't. So this
158
+
159
+ 05:33.280 --> 05:40.160
160
+ is much like an exhibit, like for a musician or a boxer or something like that. Nevertheless,
161
+
162
+ 05:40.160 --> 05:49.040
163
+ they were keeping track of the money and Libratus won close to $2 million, I think. So if it was
164
+
165
+ 05:49.040 --> 05:55.200
166
+ for real money, if you were able to earn money, that was a quite impressive and inspiring achievement.
167
+
168
+ 05:55.200 --> 06:00.720
169
+ Just a few details. What were the players looking at? Were they behind a computer? What
170
+
171
+ 06:00.720 --> 06:06.080
172
+ was the interface like? Yes, they were playing much like they normally do. These top players,
173
+
174
+ 06:06.080 --> 06:11.600
175
+ when they play this game, they play mostly online. So they're used to playing through a UI.
176
+
177
+ 06:11.600 --> 06:16.080
178
+ And they did the same thing here. So there was this layout, you could imagine. There's a table
179
+
180
+ 06:16.880 --> 06:22.880
181
+ on a screen. There's the human sitting there. And then there's the AI sitting there. And the
182
+
183
+ 06:22.880 --> 06:27.440
184
+ screen shows everything that's happening. The cards coming out and shows the bets being made.
185
+
186
+ 06:27.440 --> 06:32.320
187
+ And we also had the betting history for the human. So if the human forgot what had happened in the
188
+
189
+ 06:32.320 --> 06:39.040
190
+ hand so far, they could actually reference back and so forth. Is there a reason they were given
191
+
192
+ 06:39.040 --> 06:46.880
193
+ access to the betting history? Well, it didn't really matter. They wouldn't have forgotten
194
+
195
+ 06:46.880 --> 06:52.160
196
+ anyway. These are top quality people. But we just wanted to put it out there so it's not the question
197
+
198
+ 06:52.160 --> 06:56.560
199
+ of the human forgetting and the AI somehow trying to get that advantage of better memory.
200
+
201
+ 06:56.560 --> 07:01.040
202
+ So what was that like? I mean, that was an incredible accomplishment. So what did it feel like
203
+
204
+ 07:01.040 --> 07:07.440
205
+ before the event? Did you have doubt, hope? Where was your confidence at?
206
+
207
+ 07:08.080 --> 07:13.760
208
+ Yeah, that's great. Great question. So 18 months earlier, I had organized a similar
209
+
210
+ 07:13.760 --> 07:19.200
211
+ brains versus AI competition with a previous AI called Claudico. And we couldn't beat the humans.
212
+
213
+ 07:20.480 --> 07:26.400
214
+ So this time around, it was only 18 months later. And I knew that this new AI, Libratus,
215
+
216
+ 07:26.400 --> 07:32.320
217
+ was way stronger. But it's hard to say how you'll do against the top humans before you try.
218
+
219
+ 07:32.320 --> 07:39.040
220
+ So I thought we had about a 50 50 shot. And the international betting sites put us as a
221
+
222
+ 07:39.040 --> 07:45.440
223
+ four to one or five to one underdog. So it's kind of interesting that people really believe in people
224
+
225
+ 07:45.440 --> 07:50.560
226
+ over AI. Not just that people over believe in themselves,
227
+
228
+ 07:50.560 --> 07:54.800
229
+ but they have overconfidence in other people as well, compared to the performance of AI.
230
+
231
+ 07:54.800 --> 08:00.800
232
+ And yeah, so we were a four to one or five to one underdog. And even after three days of beating
233
+
234
+ 08:00.800 --> 08:05.680
235
+ the humans in a row, we were still 50 50 on the international betting sites.
236
+
237
+ 08:05.680 --> 08:11.120
238
+ Do you think there's something special and magical about poker in the way people think about it?
239
+
240
+ 08:11.120 --> 08:19.120
241
+ In the sense you have, I mean, even in chess, there's no Hollywood movies. Poker is the star
242
+
243
+ 08:19.120 --> 08:29.760
244
+ of many movies. And there's this feeling that certain human facial expressions and body language,
245
+
246
+ 08:29.760 --> 08:34.880
247
+ eye movement, all these tells are critical to poker. Like you can look into somebody's soul
248
+
249
+ 08:34.880 --> 08:41.760
250
+ and understand their betting strategy and so on. So that's probably why the possibly do you think
251
+
252
+ 08:41.760 --> 08:48.000
253
+ that is why people have a confidence that humans will outperform because AI systems cannot in this
254
+
255
+ 08:48.000 --> 08:53.360
256
+ construct perceive these kinds of tells, they're only looking at betting patterns and
257
+
258
+ 08:55.600 --> 09:03.920
259
+ nothing else, the betting patterns and statistics. So what's more important to you if you step back
260
+
261
+ 09:03.920 --> 09:12.960
262
+ on human players, human versus human? What's the role of these tells of these ideas that we romanticize?
263
+
264
+ 09:12.960 --> 09:21.760
265
+ Yeah, so I'll split it into two parts. So one is why do humans trust humans more than AI and
266
+
267
+ 09:21.760 --> 09:27.040
268
+ have overconfidence in humans? I think that's not really related to the tell question. It's
269
+
270
+ 09:27.040 --> 09:32.480
271
+ just that they've seen these top players how good they are and they're really fantastic. So it's just
272
+
273
+ 09:33.120 --> 09:39.200
274
+ hard to believe that the AI could beat them. So I think that's where that comes from. And that's
275
+
276
+ 09:39.200 --> 09:44.080
277
+ actually maybe a more general lesson about AI that until you've seen it overperform a human,
278
+
279
+ 09:44.080 --> 09:52.080
280
+ it's hard to believe that it could. But then the tells, a lot of these top players, they're so good
281
+
282
+ 09:52.080 --> 10:00.320
283
+ at hiding tells that among the top players, it's actually not really worth it for them to invest a
284
+
285
+ 10:00.320 --> 10:06.560
286
+ lot of effort trying to find tells in each other because they're so good at hiding them. So yes,
287
+
288
+ 10:06.560 --> 10:12.960
289
+ at the kind of Friday evening game, tells are going to be a huge thing. You can read other people
290
+
291
+ 10:12.960 --> 10:17.760
292
+ and if you're a good reader, you'll read them like an open book. But at the top levels of poker,
293
+
294
+ 10:17.760 --> 10:23.120
295
+ no, the tells become a much smaller and smaller aspect of the game as you go to the top
296
+
297
+ 10:23.120 --> 10:32.960
298
+ levels. The amount of strategies, the amounts of possible actions is very large, 10 to the power
299
+
300
+ 10:32.960 --> 10:39.280
301
+ of 100 plus. So there has to be some, I've read a few of the papers related that has,
302
+
303
+ 10:40.400 --> 10:46.880
304
+ it has to form some abstractions of various hands and actions. So what kind of abstractions are
305
+
306
+ 10:46.880 --> 10:52.800
307
+ effective for the game of poker? Yeah, so you're exactly right. So when you go from a game tree
308
+
309
+ 10:52.800 --> 10:59.280
310
+ that's 10 to the 161, especially in an imperfect information game, it's way too large to solve
311
+
312
+ 10:59.280 --> 11:06.160
313
+ directly. Even with our fastest equilibrium finding algorithms. So you want to abstract it
314
+
315
+ 11:06.160 --> 11:13.840
316
+ first. And abstraction in games is much trickier than abstraction in MDPs or other single agent
317
+
318
+ 11:13.840 --> 11:19.440
319
+ settings, because you have these abstraction pathologies, where if I have a finer grained abstraction,
320
+
321
+ 11:20.560 --> 11:25.120
322
+ the strategy that I can get from that for the real game might actually be worse than the strategy
323
+
324
+ 11:25.120 --> 11:29.440
325
+ I can get from the coarse grained abstraction. So you have to be very careful. Now, the kinds
326
+
327
+ 11:29.440 --> 11:34.880
328
+ of abstractions just to zoom out, we're talking about there's the hands abstractions and then there
329
+
330
+ 11:34.880 --> 11:42.480
331
+ is betting strategies. Yeah, betting actions. So there's information abstraction to talk about
332
+
333
+ 11:42.480 --> 11:47.760
334
+ general games, information abstraction, which is the abstraction of what chance does. And this
335
+
336
+ 11:47.760 --> 11:54.400
337
+ would be the cards in the case of poker. And then there's action abstraction, which is abstracting
338
+
339
+ 11:54.400 --> 12:00.640
340
+ the actions of the actual players, which would be bets in the case of poker, yourself, any other
341
+
342
+ 12:00.640 --> 12:10.240
343
+ players, yes, yourself and other players. And for information abstraction, we were completely
344
+
345
+ 12:10.240 --> 12:16.640
346
+ automated. So these are algorithms, but they do what we call potential aware abstraction,
347
+
348
+ 12:16.640 --> 12:20.880
349
+ where we don't just look at the value of the hand, but also how it might materialize into
350
+
351
+ 12:20.880 --> 12:26.320
352
+ good or bad hands over time. And it's a certain kind of bottom up process with integer programming
353
+
354
+ 12:26.320 --> 12:32.640
355
+ there and clustering and various aspects, how do you build this abstraction. And then in the
356
+
357
+ 12:32.640 --> 12:40.720
358
+ action abstraction, there, it's largely based on how humans and other AIs have played
359
+
360
+ 12:40.720 --> 12:46.560
361
+ this game in the past. But in the beginning, we actually used an automated action abstraction
362
+
363
+ 12:46.560 --> 12:53.840
364
+ technology, which is provably convergent, that it finds the optimal combination of bet sizes,
365
+
366
+ 12:53.840 --> 12:58.320
367
+ but it's not very scalable. So we couldn't use it for the whole game, but we use it for the first
368
+
369
+ 12:58.320 --> 13:03.760
370
+ couple of betting actions. So what's more important, the strength of the hand, so the
371
+
372
+ 13:03.760 --> 13:12.320
373
+ information abstraction or the how you play them, the actions, does it, you know, the romanticized
374
+
375
+ 13:12.320 --> 13:17.120
376
+ notion again, is that it doesn't matter what hands you have, that the actions, the betting,
377
+
378
+ 13:18.000 --> 13:22.240
379
+ maybe the way you win, no matter what hands you have. Yeah, so that's why you have to play a lot
380
+
381
+ 13:22.240 --> 13:29.440
382
+ of hands, so that the role of luck gets smaller. So you could otherwise get lucky and get some good
383
+
384
+ 13:29.440 --> 13:34.240
385
+ hands, and then you're going to win the match. Even with thousands of hands, you can get lucky.
386
+
387
+ 13:35.120 --> 13:40.720
388
+ Because there's so much variance in no limit Texas Holdem, because if we both go all in,
389
+
390
+ 13:40.720 --> 13:47.920
391
+ it's a huge stack of variance. So there are these massive swings in no limit Texas Holdem. So
392
+
393
+ 13:47.920 --> 13:53.600
394
+ that's why you have to play not just thousands, but over a hundred thousand hands to get statistical
395
+
396
+ 13:53.600 --> 14:00.400
397
+ significance. So let me ask this question another way. If you didn't even look at your hands,
398
+
399
+ 14:01.920 --> 14:05.840
400
+ but the opponents didn't know that, how well would you be able to do?
401
+
402
+ 14:05.840 --> 14:11.440
403
+ That's a good question. There's actually, I heard the story that there's this Norwegian female poker
404
+
405
+ 14:11.440 --> 14:17.040
406
+ player called Annette Obrestad, who's actually won a tournament by doing exactly that. But that
407
+
408
+ 14:17.040 --> 14:26.160
409
+ would be extremely rare. So you cannot really play well that way. Okay, so the hands do have
410
+
411
+ 14:26.160 --> 14:34.320
412
+ some role to play. So Libratus does not, as far as I understand, use learning methods,
413
+
414
+ 14:34.320 --> 14:42.880
415
+ deep learning. Is there room for learning in, you know, there's no reason why Libratus doesn't,
416
+
417
+ 14:42.880 --> 14:48.640
418
+ you know, combine with an AlphaGo type approach for estimating the quality, for a function estimator.
419
+
420
+ 14:49.520 --> 14:55.040
421
+ What are your thoughts on this? Maybe as compared to another algorithm, which I'm not
422
+
423
+ 14:55.040 --> 15:01.120
424
+ that familiar with, DeepStack, the engine that does use deep learning, that is unclear how well
425
+
426
+ 15:01.120 --> 15:05.680
427
+ it does, but nevertheless uses deep learning. So what are your thoughts about learning methods to
428
+
429
+ 15:05.680 --> 15:11.520
430
+ aid in the way that Libratus plays the game of poker? Yeah, so as you said, Libratus did not
431
+
432
+ 15:11.520 --> 15:17.760
433
+ use learning methods and played very well without them. Since then, we have actually, here,
434
+
435
+ 15:17.760 --> 15:25.200
436
+ we have a couple of papers on things that do use learning techniques. Excellent. So and deep learning
437
+
438
+ 15:25.200 --> 15:32.560
439
+ in particular, and sort of the way you're talking about where it's learning an evaluation function.
440
+
441
+ 15:33.360 --> 15:42.400
442
+ But in imperfect information games, unlike, let's say, in Go, or now also in chess and shogi,
443
+
444
+ 15:42.400 --> 15:52.160
445
+ it's not sufficient to learn an evaluation for a state because the value of an information set
446
+
447
+ 15:52.160 --> 16:00.080
448
+ depends not only on the exact state, but it also depends on both players' beliefs. Like if I have
449
+
450
+ 16:00.080 --> 16:05.360
451
+ a bad hand, I'm much better off if the opponent thinks I have a good hand. And vice versa,
452
+
453
+ 16:05.360 --> 16:12.640
454
+ if I have a good hand, I'm much better off if the opponent believes I have a bad hand. So the value
455
+
456
+ 16:12.640 --> 16:18.720
457
+ of a state is not just a function of the cards. It depends on, if you will, the path of play,
458
+
459
+ 16:18.720 --> 16:25.440
460
+ but only to the extent that it's captured in the belief distributions. So that's why it's not as
461
+
462
+ 16:25.440 --> 16:31.040
463
+ simple as it is in perfect information games. And I don't want to say it's simple there either.
464
+
465
+ 16:31.040 --> 16:35.680
466
+ It's of course, very complicated computationally there too. But at least conceptually, it's very
467
+
468
+ 16:35.680 --> 16:39.520
469
+ straightforward. There's a state, there's an evaluation function, you can try to learn it.
470
+
471
+ 16:40.080 --> 16:48.640
472
+ Here, you have to do something more. And what we do is in one of these papers, we're looking at
473
+
474
+ 16:48.640 --> 16:54.000
475
+ allowing where we allow the opponent to actually take different strategies at the leaf of the
476
+
477
+ 16:54.000 --> 17:01.600
478
+ search tree, if you will. And that is a different way of doing it. And it doesn't assume therefore
479
+
480
+ 17:01.600 --> 17:07.200
481
+ a particular way that the opponent plays. But it allows the opponent to choose from a set of
482
+
483
+ 17:07.200 --> 17:14.560
484
+ different continuation strategies. And that forces us to not be too optimistic in a lookahead
485
+
486
+ 17:14.560 --> 17:20.720
487
+ search. And that's one way you can do sound lookahead search in imperfect information
488
+
489
+ 17:20.720 --> 17:26.400
490
+ games, which is very difficult. And you were asking about DeepStack, what they
491
+
492
+ 17:26.400 --> 17:30.960
493
+ did, it was very different than what we do, either in Libratus or in this new work.
494
+
495
+ 17:31.920 --> 17:37.120
496
+ They were generally randomly generating various situations in the game. Then they were doing
497
+
498
+ 17:37.120 --> 17:42.960
499
+ the lookahead from there to the end of the game, as if that was the start of a different game. And
500
+
501
+ 17:42.960 --> 17:48.800
502
+ then they were using deep learning to learn those values of those states. But the states were not
503
+
504
+ 17:48.800 --> 17:54.160
505
+ just the physical states, they include the belief distributions. When you talk about lookahead,
506
+
507
+ 17:55.680 --> 18:01.600
508
+ or DeepStack, or with Libratus, does it mean considering every possibility that the game can
509
+
510
+ 18:01.600 --> 18:08.160
511
+ evolve? Are we talking about extremely sort of this exponential growth of a tree? Yes. So we're
512
+
513
+ 18:08.160 --> 18:15.760
514
+ talking about exactly that. Much like you do in alpha beta search or Monte Carlo tree search,
515
+
516
+ 18:15.760 --> 18:19.920
517
+ but with different techniques. So there's a different search algorithm. And then we have to
518
+
519
+ 18:19.920 --> 18:24.400
520
+ deal with the leaves differently. So if you think about what Libratus did, we didn't have to
521
+
522
+ 18:24.400 --> 18:30.880
523
+ worry about this, because we only did it at the end of the game. So we would always terminate into a
524
+
525
+ 18:30.880 --> 18:36.720
526
+ real situation. And we would know what the payout is. It didn't do these depth limited lookaheads.
527
+
528
+ 18:36.720 --> 18:42.000
529
+ But now in this new paper, which is called depth limited, I think it's called depth limited search
530
+
531
+ 18:42.000 --> 18:47.360
532
+ for imperfect information games, we can actually do sound depth limited lookaheads. So we can
533
+
534
+ 18:47.360 --> 18:52.080
535
+ actually start to do the lookahead from the beginning of the game on, because that's too
536
+
537
+ 18:52.080 --> 18:57.680
538
+ complicated to do for this whole long game. So in Libratus, we were just doing it for the end.
539
+
540
+ 18:57.680 --> 19:03.040
541
+ So and then the other side, this belief distribution. So is it explicitly modeled
542
+
543
+ 19:03.040 --> 19:10.800
544
+ what kind of beliefs that the opponent might have? Yeah, it is explicitly modeled, but it's not
545
+
546
+ 19:10.800 --> 19:18.720
547
+ assumed. The beliefs are actually output, not input. Of course, the starting beliefs are input,
548
+
549
+ 19:18.720 --> 19:23.440
550
+ but they just follow from the rules of the game, because we know that the dealer deals uniformly
551
+
552
+ 19:23.440 --> 19:29.600
553
+ from the deck. So I know that every pair of cards that you might have is equally likely.
554
+
555
+ 19:29.600 --> 19:33.600
556
+ I know that for a fact, that just follows from the rules of the game. Of course,
557
+
558
+ 19:33.600 --> 19:38.320
559
+ except the two cards that I have, I know you don't have those. You have to take that into
560
+
561
+ 19:38.320 --> 19:42.480
562
+ account. That's called card removal, and that's very important. Is the dealing always coming
563
+
564
+ 19:42.480 --> 19:50.000
565
+ from a single deck in heads up? Yes. So you can assume single deck. So you know that if I have
566
+
567
+ 19:50.000 --> 19:55.680
568
+ the ace of spades, I know you don't have an ace of spades. So in the beginning, your belief is
569
+
570
+ 19:55.680 --> 20:02.640
571
+ basically the fact that it's a fair dealing of hands. But how do you start to adjust that belief?
572
+
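A small concrete version of the uniform prior with card removal (standard combinatorics, not specific to the bot): before any betting, every opponent hole-card pair consistent with my own two cards is equally likely.

```python
# Uniform prior over opponent hole cards, with card removal.
from math import comb

all_pairs = comb(52, 2)        # 1326 two-card combinations from a full deck
after_removal = comb(50, 2)    # 1225 once my own two cards are excluded
print(all_pairs, after_removal)
print(1 / after_removal)       # prior probability of any specific opponent hand
```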
573
+ 20:02.640 --> 20:08.960
574
+ Well, that's where this beauty of game theory comes. So Nash equilibrium, which John Nash
575
+
576
+ 20:08.960 --> 20:14.960
577
+ introduced in 1950, introduces what rational play is when you have more than one player.
578
+
579
+ 20:15.920 --> 20:21.200
580
+ And these are pairs of strategies where strategies are contingency plans, one for each player.
581
+
582
+ 20:21.200 --> 20:27.920
583
+ So that neither player wants to deviate to a different strategy, given that the other
584
+
585
+ 20:27.920 --> 20:34.800
586
+ doesn't deviate. But as a side effect, you get the beliefs from Bayes' rule. So Nash equilibrium
587
+
588
+ 20:34.800 --> 20:39.760
589
+ really isn't just deriving in these imperfect information games. Nash equilibrium doesn't
590
+
591
+ 20:39.760 --> 20:47.920
592
+ just define strategies. It also defines beliefs for both of us, and it defines beliefs for each state.
593
+
594
+ 20:47.920 --> 20:54.720
595
+ So at each state, or as they're called, information sets, at each information set in the game,
596
+
597
+ 20:54.720 --> 20:59.840
598
+ there's a set of different states that we might be in, but I don't know which one we're in.
599
+
600
+ 21:00.960 --> 21:05.440
601
+ Nash equilibrium tells me exactly what is the probability distribution over those real
602
+
603
+ 21:05.440 --> 21:11.360
604
+ world states in my mind. How does Nash equilibrium give you that distribution? So why?
605
+
606
+ 21:11.360 --> 21:17.280
607
+ I'll do a simple example. So you know the game Rock Paper Scissors? So we can draw it
608
+
609
+ 21:17.280 --> 21:23.600
610
+ as player one moves first, and then player two moves. But of course, it's important that player
611
+
612
+ 21:23.600 --> 21:28.560
613
+ two doesn't know what player one moved. Otherwise player two would win every time. So we can draw
614
+
615
+ 21:28.560 --> 21:33.440
616
+ that as an information set where player one makes one of three moves first. And then there's an
617
+
618
+ 21:33.440 --> 21:40.480
619
+ information set for player two. So player two doesn't know which of those nodes the world is in.
620
+
621
+ 21:40.480 --> 21:46.400
622
+ But once we know the strategy for player one, Nash equilibrium will say that you play one third
623
+
624
+ 21:46.400 --> 21:52.160
625
+ rock, one third paper, one third scissors. From that I can derive my beliefs on the information
626
+
627
+ 21:52.160 --> 21:58.480
628
+ set that they're one third, one third, one third. So Bayes gives you that. But is that specific
629
+
630
+ 21:58.480 --> 22:06.800
631
+ to a particular player? Or is it something you quickly update with those? No, the game theory
632
+
633
+ 22:06.800 --> 22:12.560
634
+ isn't really player specific. So that's also why we don't need any data. We don't need any history
635
+
636
+ 22:12.560 --> 22:17.840
637
+ how these particular humans played in the past or how any AI or human had played before. It's all
638
+
639
+ 22:17.840 --> 22:24.160
640
+ about rationality. So we just think the AI just thinks about what would a rational opponent do?
641
+
642
+ 22:24.720 --> 22:30.960
643
+ And what would I do if I am rational? That's the idea of game theory.
644
+
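To make the Rock Paper Scissors illustration above concrete, here is a minimal sketch of deriving player two's beliefs from player one's strategy via Bayes' rule (the skewed strategy at the end is only there to show the general computation; it is not an equilibrium).

```python
# Deriving beliefs at player two's information set from player one's strategy.
# With a single information set, Bayes' rule reduces to normalizing the
# probabilities with which player one reaches each node.

def beliefs(p1_strategy):
    total = sum(p1_strategy.values())           # reach probability of the info set
    return {move: p / total for move, p in p1_strategy.items()}

equilibrium = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
print(beliefs(equilibrium))                     # one third each

skewed = {"rock": 0.5, "paper": 0.3, "scissors": 0.2}   # not an equilibrium
print(beliefs(skewed))                                  # beliefs follow the strategy
```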
645
+ 22:30.960 --> 22:38.080
646
+ So it's really a data free opponent free approach. So it comes from the design of the game as opposed
647
+
648
+ 22:38.080 --> 22:43.600
649
+ to the design of the player. Exactly. There's no opponent modeling per se. I mean, we've done
650
+
651
+ 22:43.600 --> 22:47.760
652
+ some work on combining opponent modeling with game theory. So you can exploit weak players
653
+
654
+ 22:47.760 --> 22:53.440
655
+ even more. But that's another strand, and in Libratus, we didn't turn that on. So I decided that
656
+
657
+ 22:53.440 --> 22:59.520
658
+ these players are too good. And when you start to exploit an opponent, you typically open yourself
659
+
660
+ 22:59.520 --> 23:04.720
661
+ up to exploitation. And these guys have so few holes to exploit, and they're the world's leading
662
+
663
+ 23:04.720 --> 23:09.120
664
+ experts in counter exploitation. So I decided that we're not going to turn that stuff on.
665
+
666
+ 23:09.120 --> 23:14.400
667
+ Actually, I saw a few of your papers on exploiting opponents; it sounds very interesting to explore.
668
+
669
+ 23:15.600 --> 23:19.120
670
+ Do you think there's room for exploitation generally, outside of Libratus?
671
+
672
+ 23:19.840 --> 23:27.840
673
+ Is there a subject or people differences that could be exploited? Maybe not just in poker,
674
+
675
+ 23:27.840 --> 23:33.360
676
+ but in general interactions and negotiations all these other domains that you're considering?
677
+
678
+ 23:33.360 --> 23:39.760
679
+ Yeah, definitely. We've done some work on that. And I really like the work that hybridizes the two.
680
+
681
+ 23:39.760 --> 23:45.200
682
+ So you figure out what would a rational opponent do. And by the way, that's safe in these zero
683
+
684
+ 23:45.200 --> 23:49.440
685
+ sum games to players, zero sum games, because if the opponent does something irrational,
686
+
687
+ 23:49.440 --> 23:56.320
688
+ yes, it might throw off my beliefs. But the amount that the player can gain by throwing
689
+
690
+ 23:56.320 --> 24:04.240
691
+ off my beliefs is always less than they lose by playing poorly. So it's safe. But still,
692
+
693
+ 24:04.240 --> 24:09.280
694
+ if somebody's weak as a player, you might want to play differently to exploit them more.
695
+
696
+ 24:10.160 --> 24:14.560
697
+ So you can think about it this way, a game theoretic strategy is unbeatable,
698
+
699
+ 24:15.600 --> 24:22.720
700
+ but it doesn't maximally beat the other opponent. So the winnings per hand might be better
701
+
702
+ 24:22.720 --> 24:27.120
703
+ with a different strategy. And the hybrid is that you start from a game theoretic approach,
704
+
705
+ 24:27.120 --> 24:33.040
706
+ and then as you gain data about the opponent in certain parts of the game tree, then in those
707
+
708
+ 24:33.040 --> 24:39.280
709
+ parts of the game tree, you start to tweak your strategy more and more towards exploitation,
710
+
711
+ 24:39.280 --> 24:44.160
712
+ while still staying fairly close to the game theoretic strategy so as to not open yourself
713
+
714
+ 24:44.160 --> 24:53.600
715
+ up to exploitation too much. How do you do that? Do you try to vary up strategies, make it unpredictable?
716
+
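A minimal sketch of the hybrid approach described above (my own illustration with made-up numbers, not the actual method): start from the equilibrium strategy and shift probability toward a best response to the opponent model only as data accumulates, capping the shift so play stays close to equilibrium.

```python
# Blending an equilibrium strategy with an exploitative best response to an
# opponent model; the blend weight grows with observed data but is capped.

def blended_strategy(equilibrium, best_response, n_observations, cap=0.3, scale=100):
    alpha = min(cap, n_observations / scale)    # trust the opponent model slowly
    return {a: (1 - alpha) * equilibrium[a] + alpha * best_response[a]
            for a in equilibrium}

equilibrium   = {"bet": 0.55, "check": 0.45}    # hypothetical equilibrium frequencies
best_response = {"bet": 1.0,  "check": 0.0}     # exploit an opponent who folds too much
print(blended_strategy(equilibrium, best_response, n_observations=20))
# -> {'bet': 0.64, 'check': 0.36}
```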
717
+ 24:53.600 --> 24:59.520
718
+ It's like, what is it, tit for tat strategies in Prisoner's Dilemma or?
719
+
720
+ 25:00.560 --> 25:07.440
721
+ Well, that's a repeated game, simple Prisoner's Dilemma, repeated games. But even there,
722
+
723
+ 25:07.440 --> 25:13.120
724
+ there's no proof that says that that's the best thing. But experimentally, it actually does well.
725
+
726
+ 25:13.120 --> 25:17.360
727
+ So what kind of games are there, first of all? I don't know if this is something that you could
728
+
729
+ 25:17.360 --> 25:21.760
730
+ just summarize. There's perfect information games with all the information on the table.
731
+
732
+ 25:22.320 --> 25:27.680
733
+ There is imperfect information games. There's repeated games that you play over and over.
734
+
735
+ 25:28.480 --> 25:36.960
736
+ There's zero sum games. There's non zero sum games. And then there's a really important
737
+
738
+ 25:36.960 --> 25:44.640
739
+ distinction you're making, two player versus more players. So what other games are there?
740
+
741
+ 25:44.640 --> 25:49.920
742
+ And what's the difference, for example, with this two player game versus more players? Yeah,
743
+
744
+ 25:49.920 --> 25:54.720
745
+ what are the key differences? Right. So let me start from the basics. So
746
+
747
+ 25:56.320 --> 26:02.160
748
+ a repeated game is a game where the same exact game is played over and over.
749
+
750
+ 26:02.160 --> 26:08.160
751
+ In these extensive form games, where you think about tree form, maybe with these
752
+
753
+ 26:08.160 --> 26:14.000
754
+ information sets to represent incomplete information, you can have kind of repetitive
755
+
756
+ 26:14.000 --> 26:18.480
757
+ interactions and even repeated games are a special case of that, by the way. But
758
+
759
+ 26:19.680 --> 26:24.160
760
+ the game doesn't have to be exactly the same. It's like in sourcing auctions. Yes, we're going to
761
+
762
+ 26:24.160 --> 26:29.040
763
+ see the same supply base year to year. But what I'm buying is a little different every time.
764
+
765
+ 26:29.040 --> 26:34.080
766
+ And the supply base is a little different every time and so on. So it's not really repeated.
767
+
768
+ 26:34.080 --> 26:39.680
769
+ So to find a purely repeated game is actually very rare in the world. So they're really a very
770
+
771
+ 26:40.960 --> 26:47.360
772
+ coarse model of what's going on. Then if you move up from just repeated, simple,
773
+
774
+ 26:47.360 --> 26:52.560
775
+ repeated matrix games, not all the way to extensive form games, but in between,
776
+
777
+ 26:52.560 --> 26:59.360
778
+ there's stochastic games, where you know, there's these, you think about it like these little
779
+
780
+ 26:59.360 --> 27:06.080
781
+ matrix games. And when you take an action and your opponent takes an action, they determine not which
782
+
783
+ 27:06.080 --> 27:11.280
784
+ next game I'm going to, but the distribution over next games,
785
+
786
+ 27:11.280 --> 27:15.760
787
+ where I might be going to. So that's the stochastic game. But it's like
788
+
789
+ 27:15.760 --> 27:22.160
790
+ matrix games, repeated games, stochastic games, extensive form games, that is from less to more
791
+
792
+ 27:22.160 --> 27:28.080
793
+ general. And poker is an example of the last one. So it's really in the most general setting,
794
+
795
+ 27:29.440 --> 27:34.720
796
+ extensive form games. And that's kind of what the AI community has been working on and being
797
+
798
+ 27:34.720 --> 27:39.680
799
+ benchmarked on with this heads up no limit Texas hold'em. Can you describe extensive form games?
800
+
801
+ 27:39.680 --> 27:45.600
802
+ What's the model here? So if you're familiar with the tree form, so it's really the tree form,
803
+
804
+ 27:45.600 --> 27:51.280
805
+ like in chess, there's a search tree versus a matrix versus a matrix. Yeah. And that's the
806
+
807
+ 27:51.280 --> 27:56.640
808
+ matrix is called the matrix form or bimatrix form or normal form game. And here you have the tree
809
+
810
+ 27:56.640 --> 28:01.680
811
+ form. So you can actually do certain types of reasoning there, that you lose the information
812
+
813
+ 28:02.320 --> 28:07.840
814
+ when you go to normal form. There's a certain form of equivalence, like if you go from tree
815
+
816
+ 28:07.840 --> 28:13.840
817
+ form and you say that every possible contingency plan is a strategy, then I can actually go back
818
+
819
+ 28:13.840 --> 28:19.600
820
+ to the normal form, but I lose some information from the lack of sequentiality. Then the multiplayer
821
+
822
+ 28:19.600 --> 28:29.520
823
+ versus two player distinction is an important one. So two player games in zero sum are conceptually
824
+
825
+ 28:29.520 --> 28:37.040
826
+ easier and computationally easier. They're still huge like this one, this one. But they're conceptually
827
+
828
+ 28:37.040 --> 28:42.160
829
+ easier and computationally easier. In that conceptually, you don't have to worry about
830
+
831
+ 28:42.160 --> 28:47.440
832
+ which equilibrium is the other guy going to play when there are multiple, because any equilibrium
833
+
834
+ 28:47.440 --> 28:52.400
835
+ strategy is the best response to any other equilibrium strategy. So I can play a different
836
+
837
+ 28:52.400 --> 28:57.600
838
+ equilibrium from you and we'll still get the right values of the game. That falls apart even
839
+
840
+ 28:57.600 --> 29:02.960
841
+ with two players when you have general sum games. Even without cooperation, just even without
842
+
843
+ 29:02.960 --> 29:09.040
844
+ cooperation. So there's a big gap from two player zero sum to two player general sum, or even to
845
+
846
+ 29:09.040 --> 29:17.600
847
+ three players zero sum. That's a big gap, at least in theory. Can you maybe non mathematically
848
+
849
+ 29:17.600 --> 29:23.040
850
+ provide the intuition why it all falls apart with three or more players? It seems like you should
851
+
852
+ 29:23.040 --> 29:32.640
853
+ still be able to have a Nash equilibrium that's instructive, that holds. Okay, so it is true
854
+
855
+ 29:32.640 --> 29:39.840
856
+ that all finite games have a Nash equilibrium. So this is what John Nash actually proved.
857
+
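Coming back to the earlier point that two-player zero-sum games are conceptually and computationally easier: an equilibrium of a zero-sum matrix game can be found with a single linear program. A standard sketch (assumes numpy and scipy are available; Rock Paper Scissors as the payoff matrix):

```python
# Equilibrium of a two-player zero-sum matrix game via linear programming.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 0, -1,  1],    # row player's payoffs for rock-paper-scissors
              [ 1,  0, -1],
              [-1,  1,  0]])
m, n = A.shape

c = np.zeros(m + 1); c[-1] = -1                    # variables: row mix x, value v; maximize v
A_ub = np.hstack([-A.T, np.ones((n, 1))])          # for each column j: v <= sum_i x_i * A[i, j]
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])                             # probabilities sum to one
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print(np.round(x, 3), round(v, 3))                 # ~[0.333 0.333 0.333], value ~0.0
```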
858
+ 29:40.880 --> 29:45.600
859
+ So they do have a Nash equilibrium. That's not the problem. The problem is that there can be many.
860
+
861
+ 29:46.480 --> 29:52.000
862
+ And then there's a question of which equilibrium to select. So and if you select your strategy
863
+
864
+ 29:52.000 --> 30:00.960
865
+ from a different equilibrium and I select mine, then what does that mean? And in these non zero
866
+
867
+ 30:00.960 --> 30:07.760
868
+ sum games, we may lose some joint benefit by being just simply stupid, we could actually both be
869
+
870
+ 30:07.760 --> 30:12.960
871
+ better off if we did something else. And in three player, you get other problems also like collusion.
872
+
873
+ 30:12.960 --> 30:19.600
874
+ Like maybe you and I can gang up on a third player, and we can do radically better by colluding.
875
+
876
+ 30:19.600 --> 30:25.440
877
+ So there are lots of issues that come up there. So Noam Brown, the student you work with on this,
878
+
879
+ 30:25.440 --> 30:31.120
880
+ has mentioned, I looked through the AMA on Reddit, he mentioned that the ability of poker players
881
+
882
+ 30:31.120 --> 30:36.320
883
+ to collaborate would make the game. He was asked the question of, how would you make the game of
884
+
885
+ 30:36.320 --> 30:42.320
886
+ poker? Or both of you were asked the question, how would you make the game of poker beyond
887
+
888
+ 30:43.440 --> 30:51.440
889
+ being solvable by current AI methods? And he said that there's not many ways of making poker more
890
+
891
+ 30:51.440 --> 30:59.600
892
+ difficult, but a collaboration or cooperation between players would make it extremely difficult.
893
+
894
+ 30:59.600 --> 31:05.200
895
+ So can you provide the intuition behind why that is, if you agree with that idea?
896
+
897
+ 31:05.200 --> 31:11.840
898
+ Yeah, so we've done a lot of work on coalitional games. And we actually have a paper here with
899
+
900
+ 31:11.840 --> 31:17.040
901
+ my other student Gabriele Farina and some other collaborators, at NeurIPS, on that. I actually just
902
+
903
+ 31:17.040 --> 31:22.080
904
+ came back from the poster session where we presented this. So when you have collusion,
905
+
906
+ 31:22.080 --> 31:29.440
907
+ it's a different problem. And it typically gets even harder then. Even the game representations,
908
+
909
+ 31:29.440 --> 31:35.840
910
+ some of the game representations don't really allow good computation. So we actually introduced a new
911
+
912
+ 31:35.840 --> 31:43.840
913
+ game representation for that. Is that kind of cooperation part of the model? Do you have
914
+
915
+ 31:43.840 --> 31:48.720
916
+ information about the fact that other players are cooperating? Or is it just this chaos that
917
+
918
+ 31:48.720 --> 31:53.760
919
+ where nothing is known? So there's some some things unknown. Can you give an example of a
920
+
921
+ 31:53.760 --> 31:59.920
922
+ collusion type game? Or is it usually? So like bridge. Yeah, so think about bridge. It's like
923
+
924
+ 31:59.920 --> 32:06.960
925
+ when you and I are on a team, our payoffs are the same. The problem is that we can't talk. So when
926
+
927
+ 32:06.960 --> 32:13.360
928
+ I get my cards, I can't whisper to you what my cards are. That would not be allowed. So we have
929
+
930
+ 32:13.360 --> 32:20.480
931
+ to somehow coordinate our strategies ahead of time. And only ahead of time. And then there's
932
+
933
+ 32:20.480 --> 32:25.920
934
+ certain signals we can talk about. But they have to be such that the other team also understands
935
+
936
+ 32:25.920 --> 32:31.920
937
+ that. So that's an example where the coordination is already built into the rules
938
+
939
+ 32:31.920 --> 32:38.400
940
+ of the game. But in many other situations, like auctions or negotiations or diplomatic
941
+
942
+ 32:38.400 --> 32:44.320
943
+ relationships, poker, it's not really built in. But it still can be very helpful for the
944
+
945
+ 32:44.320 --> 32:51.440
946
+ colluders. I've read you write somewhere, in negotiations, you come to the table with a prior,
947
+
948
+ 32:52.640 --> 32:58.160
949
+ like a strategy that, like that you're willing to do and not willing to do those kinds of things.
950
+
951
+ 32:58.160 --> 33:04.320
952
+ So how do you start to now moving away from poker, moving beyond poker into other applications
953
+
954
+ 33:04.320 --> 33:10.960
955
+ like negotiations? How do you start applying this to other to other domains, even real world
956
+
957
+ 33:10.960 --> 33:15.360
958
+ domains that you've worked on? Yeah, I actually have two startup companies doing exactly that.
959
+
960
+ 33:15.360 --> 33:21.520
961
+ One is called Strategic Machine. And that's for kind of business applications, gaming, sports,
962
+
963
+ 33:21.520 --> 33:29.040
964
+ all sorts of things like that. Any applications of this to business and to sports and to gaming,
965
+
966
+ 33:29.040 --> 33:34.800
967
+ to various types of things for in finance, electricity markets and so on. And the other
968
+
969
+ 33:34.800 --> 33:41.360
970
+ is called Strategy Robot, where we are taking these to military security, cybersecurity,
971
+
972
+ 33:41.360 --> 33:45.760
973
+ and intelligence applications. I think you worked a little bit in
974
+
975
+ 33:47.920 --> 33:54.560
976
+ how do you put it, advertisement, sort of suggesting ads kind of thing.
977
+
978
+ 33:54.560 --> 33:59.040
979
+ Yeah, that's another company, Optimized Markets. But that's much more about a
980
+
981
+ 33:59.040 --> 34:03.840
982
+ combinatorial market and optimization based technology. That's not using these
983
+
984
+ 34:04.720 --> 34:12.400
985
+ game theoretic reasoning technologies. I see. Okay, so sort of at a high level, how do you think about
986
+
987
+ 34:13.040 --> 34:20.320
988
+ our ability to use game theoretic concepts to model human behavior? Do you think human behavior is
989
+
990
+ 34:20.320 --> 34:25.680
991
+ amenable to this kind of modeling? So outside of the poker games and where have you seen it
992
+
993
+ 34:25.680 --> 34:32.000
994
+ done successfully in your work? I'm not sure the goal really is modeling humans.
995
+
996
+ 34:33.520 --> 34:40.240
997
+ Like for example, if I'm playing a zero sum game, I don't really care that the opponent is actually
998
+
999
+ 34:40.240 --> 34:45.680
1000
+ following my model of rational behavior. Because if they're not, that's even better for me.
1001
+
1002
+ 34:45.680 --> 34:55.440
1003
+ All right, so with the opponents in games, the prerequisite is that you've formalized
1004
+
1005
+ 34:56.240 --> 35:02.720
1006
+ the interaction in some way that can be amenable to analysis. I mean, you've done this amazing work
1007
+
1008
+ 35:02.720 --> 35:12.160
1009
+ with mechanism design, designing games that have certain outcomes. But so I'll tell you an example
1010
+
1011
+ 35:12.160 --> 35:19.360
1012
+ for my world of autonomous vehicles. We're studying pedestrians and pedestrians and cars
1013
+
1014
+ 35:19.360 --> 35:25.040
1015
+ negotiate in this nonverbal communication. There's this weird game dance of tension where
1016
+
1017
+ 35:25.760 --> 35:30.400
1018
+ pedestrians are basically saying, I trust that you won't kill me. And so as a jaywalker,
1019
+
1020
+ 35:30.400 --> 35:34.560
1021
+ I will step onto the road even though I'm breaking the law and there's this tension.
1022
+
1023
+ 35:34.560 --> 35:40.480
1024
+ And the question is, we really don't know how to model that well in trying to model intent.
1025
+
1026
+ 35:40.480 --> 35:46.240
1027
+ And so people sometimes bring up ideas of game theory and so on. Do you think that aspect
1028
+
1029
+ 35:47.120 --> 35:53.520
1030
+ of human behavior can use these kinds of imperfect information approaches, modeling?
1031
+
1032
+ 35:54.800 --> 36:00.800
1033
+ How do you start to attack a problem like that when you don't even know how to design the game
1034
+
1035
+ 36:00.800 --> 36:06.640
1036
+ to describe the situation in order to solve it? Okay, so I haven't really thought about jaywalking.
1037
+
1038
+ 36:06.640 --> 36:12.080
1039
+ But one thing that I think could be a good application in autonomous vehicles is the
1040
+
1041
+ 36:12.080 --> 36:18.240
1042
+ following. So let's say that you have fleets of autonomous cars operating by different companies.
1043
+
1044
+ 36:18.240 --> 36:23.520
1045
+ So maybe here's the Waymo fleet and here's the Uber fleet. If you think about the rules of the road,
1046
+
1047
+ 36:24.160 --> 36:29.920
1048
+ they define certain legal rules, but that still leaves a huge strategy space open.
1049
+
1050
+ 36:29.920 --> 36:33.760
1051
+ Like as a simple example, when cars merge, you know, how humans merge, you know,
1052
+
1053
+ 36:33.760 --> 36:40.800
1054
+ they slow down and look at each other and try to merge. Wouldn't it be better if these situations
1055
+
1056
+ 36:40.800 --> 36:46.240
1057
+ would already be prenegotiated so we can actually merge at full speed and we know that this is the
1058
+
1059
+ 36:46.240 --> 36:51.680
1060
+ situation, this is how we do it and it's all going to be faster. But there are way too many
1061
+
1062
+ 36:51.680 --> 36:57.600
1063
+ situations to negotiate manually. So you could use automated negotiation. This is the idea at least.
1064
+
1065
+ 36:57.600 --> 37:04.160
1066
+ You could use automated negotiation to negotiate all of these situations or many of them in advance.
1067
+
1068
+ 37:04.160 --> 37:09.040
1069
+ And of course, it might be that, hey, maybe you're not going to always let me go first.
1070
+
1071
+ 37:09.040 --> 37:13.520
1072
+ Maybe you said, okay, well, in these situations, I'll let you go first. But in exchange, you're
1073
+
1074
+ 37:13.520 --> 37:18.240
1075
+ going to give me two hours, you're going to let me go first in these situations. So it's this huge
1076
+
1077
+ 37:18.240 --> 37:24.240
1078
+ combinatorial negotiation. And do you think there's room in that example of merging to
1079
+
1080
+ 37:24.240 --> 37:28.080
1081
+ model this whole situation as an imperfect information game? Or do you really want to
1082
+
1083
+ 37:28.080 --> 37:33.520
1084
+ consider it to be a perfect? No, that's a good question. Yeah. That's a good question. Do you
1085
+
1086
+ 37:33.520 --> 37:41.120
1087
+ pay the price of assuming that you don't know everything? Yeah, I don't know. It's certainly
1088
+
1089
+ 37:41.120 --> 37:47.040
1090
+ much easier. Games with perfect information are much easier. So if you can get away with it,
1091
+
1092
+ 37:48.240 --> 37:52.960
1093
+ you should. But if the real situation is of imperfect information, then you're going to
1094
+
1095
+ 37:52.960 --> 37:58.480
1096
+ have to deal with imperfect information. Great. So what lessons have you learned from the annual
1097
+
1098
+ 37:58.480 --> 38:04.560
1099
+ computer poker competition? An incredible accomplishment of AI. You look at the history
1100
+
1101
+ 38:04.560 --> 38:12.240
1102
+ of Deep Blue, AlphaGo, these kinds of moments when AI stepped up in an engineering effort and a
1103
+
1104
+ 38:12.240 --> 38:18.240
1105
+ scientific effort combined to beat the best human player. So what do you take away from
1106
+
1107
+ 38:18.240 --> 38:23.200
1108
+ this whole experience? What have you learned about designing AI systems that play these kinds
1109
+
1110
+ 38:23.200 --> 38:30.000
1111
+ of games? And what does that mean for AI in general, for the future of AI development?
1112
+
1113
+ 38:30.720 --> 38:34.160
1114
+ Yeah, so that's a good question. So there's so much to say about it.
1115
+
1116
+ 38:35.280 --> 38:40.320
1117
+ I do like this type of performance oriented research, although in my group, we go all the
1118
+
1119
+ 38:40.320 --> 38:46.160
1120
+ way from like idea to theory to experiments to big system building to commercialization. So we
1121
+
1122
+ 38:46.160 --> 38:52.560
1123
+ span that spectrum. But I think that in a lot of situations in AI, you really have to build the
1124
+
1125
+ 38:52.560 --> 38:58.400
1126
+ big systems and evaluate them at scale before you know what works and doesn't. And we've seen
1127
+
1128
+ 38:58.400 --> 39:03.280
1129
+ that in the computational game theory community, that there are a lot of techniques that look good
1130
+
1131
+ 39:03.280 --> 39:08.640
1132
+ in the small, but then they cease to look good in the large. And we've also seen that there are a
1133
+
1134
+ 39:08.640 --> 39:15.600
1135
+ lot of techniques that look superior in theory. And I really mean in terms of convergence rates,
1136
+
1137
+ 39:15.600 --> 39:20.720
1138
+ better, like first order methods with better convergence rates than the CFR based algorithms,
1139
+
1140
+ 39:20.720 --> 39:26.080
1141
+ yet the CFR based algorithms are the fastest in practice. So it really tells me that you have
1142
+
1143
+ 39:26.080 --> 39:32.400
1144
+ to test these in reality, the theory isn't tight enough, if you will, to tell you which
1145
+
1146
+ 39:32.400 --> 39:38.480
1147
+ algorithms are better than the others. And you have to look at these things that in the large,
1148
+
1149
+ 39:38.480 --> 39:43.600
1150
+ because any sort of projections you do from the small can at least in this domain be very misleading.
1151
+
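Since CFR-based algorithms come up here: their core building block is regret matching. A minimal self-play sketch on Rock Paper Scissors (illustrative only, not the Libratus implementation); the average strategy drifts toward the uniform equilibrium.

```python
# Regret matching (the building block of CFR) in self-play on rock-paper-scissors.
import random

PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]      # row player's payoff
ACTIONS = [0, 1, 2]                                # rock, paper, scissors

def strategy_from_regrets(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1/3] * 3

def train(iterations=100_000):
    regrets = [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in ACTIONS:
            strategy_sum[a] += strat[a]
        me  = random.choices(ACTIONS, weights=strat)[0]
        opp = random.choices(ACTIONS, weights=strat)[0]        # self-play
        for a in ACTIONS:                                      # regret of not playing a
            regrets[a] += PAYOFF[a][opp] - PAYOFF[me][opp]
    total = sum(strategy_sum)
    return [round(s / total, 3) for s in strategy_sum]

print(train())   # approximately [0.333, 0.333, 0.333]
```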
1152
+ 39:43.600 --> 39:49.040
1153
+ So that's kind of from a kind of science and engineering perspective, from a personal perspective,
1154
+
1155
+ 39:49.040 --> 39:55.200
1156
+ it's been just a wild experience in that with the first poker competition, the first
1157
+
1158
+ 39:56.160 --> 40:00.640
1159
+ brains versus AI man machine poker competition that we organized. There had been, by the way,
1160
+
1161
+ 40:00.640 --> 40:04.880
1162
+ for other poker games, there had been previous competitions, but this was for heads up no limit,
1163
+
1164
+ 40:04.880 --> 40:11.200
1165
+ this was the first. And I probably became the most hated person in the world of poker. And I
1166
+
1167
+ 40:11.200 --> 40:18.400
1168
+ didn't mean to be. Why is that? For cracking the game? Yeah, a lot of people
1169
+
1170
+ 40:18.400 --> 40:24.160
1171
+ felt that it was a real threat to the whole game, the whole existence of the game. If AI becomes
1172
+
1173
+ 40:24.160 --> 40:29.680
1174
+ better than humans, people would be scared to play poker, because there are these superhuman
1175
+
1176
+ 40:29.680 --> 40:35.120
1177
+ AIs running around taking their money and you know, all of that. So I just it was really aggressive.
1178
+
1179
+ 40:35.120 --> 40:40.640
1180
+ Interesting. The comments were super aggressive. I got everything just short of death threats.
1181
+
1182
+ 40:42.000 --> 40:45.760
1183
+ Do you think the same was true for chess? Because right now, they just completed the
1184
+
1185
+ 40:45.760 --> 40:50.720
1186
+ world championships in chess, and humans just started ignoring the fact that there's AI systems
1187
+
1188
+ 40:50.720 --> 40:55.360
1189
+ now that outperform humans, and they still enjoy the game; it's still a beautiful game.
1190
+
1191
+ 40:55.360 --> 41:00.160
1192
+ That's what I think. And I think the same thing happened in poker. And so I didn't
1193
+
1194
+ 41:00.160 --> 41:03.760
1195
+ think of myself as somebody was going to kill the game. And I don't think I did.
1196
+
1197
+ 41:03.760 --> 41:07.360
1198
+ I've really learned to love this game. I wasn't a poker player before, but
1199
+
1200
+ 41:07.360 --> 41:12.400
1201
+ learn so many nuances about it from these AIs. And they've really changed how the game is played,
1202
+
1203
+ 41:12.400 --> 41:17.600
1204
+ by the way. So they have these very Martian ways of playing poker. And the top humans are now
1205
+
1206
+ 41:17.600 --> 41:24.880
1207
+ incorporating those types of strategies into their own play. So if anything, to me, our work has made
1208
+
1209
+ 41:25.600 --> 41:31.760
1210
+ poker a richer, more interesting game for humans to play, not something that is going to steer
1211
+
1212
+ 41:31.760 --> 41:36.240
1213
+ humans away from it entirely. Just a quick comment on something you said, which is,
1214
+
1215
+ 41:37.440 --> 41:44.800
1216
+ if I may say so, in academia it's a little bit rare sometimes. It's pretty brave to put your ideas
1217
+
1218
+ 41:44.800 --> 41:50.000
1219
+ to the test in the way you described, saying that sometimes good ideas don't work when you actually
1220
+
1221
+ 41:50.560 --> 41:57.600
1222
+ try to apply them at scale. So where does that come from? I mean, if you could do advice for
1223
+
1224
+ 41:57.600 --> 42:04.000
1225
+ people, what drives you in that sense? Were you always this way? I mean, it takes a brave person,
1226
+
1227
+ 42:04.000 --> 42:09.280
1228
+ I guess is what I'm saying, to test their ideas and to see if this thing actually works against human
1229
+
1230
+ 42:09.840 --> 42:14.000
1231
+ top human players and so on. I don't know about brave, but it takes a lot of work.
1232
+
1233
+ 42:14.800 --> 42:20.960
1234
+ It takes a lot of work and a lot of time to organize, to make something big and to organize
1235
+
1236
+ 42:20.960 --> 42:26.080
1237
+ an event and stuff like that. And what drives you in that effort? Because you could still,
1238
+
1239
+ 42:26.080 --> 42:31.280
1240
+ I would argue, get a Best Paper Award at NIPS as you did in 17 without doing this.
1241
+
1242
+ 42:31.280 --> 42:32.160
1243
+ That's right, yes.
1244
+
1245
+ 42:34.400 --> 42:40.960
1246
+ So in general, I believe it's very important to do things in the real world and at scale.
1247
+
1248
+ 42:41.600 --> 42:48.320
1249
+ And that's really where the proof, if you will, is in the pudding. That's where it is.
1250
+
1251
+ 42:48.320 --> 42:54.720
1252
+ In this particular case, it was kind of a competition between different groups.
1253
+
1254
+ 42:54.720 --> 43:00.400
1255
+ And for many years, as to who can be the first one to beat the top humans at heads up,
1256
+
1257
+ 43:00.400 --> 43:09.440
1258
+ no limit Texas hold'em. So it became kind of like a competition of who can get there.
1259
+
1260
+ 43:09.440 --> 43:13.840
1261
+ Yeah, so a little friendly competition could do wonders for progress.
1262
+
1263
+ 43:13.840 --> 43:14.960
1264
+ Yes, absolutely.
1265
+
1266
+ 43:16.240 --> 43:21.440
1267
+ So the topic of mechanism design, which is really interesting, also kind of new to me,
1268
+
1269
+ 43:21.440 --> 43:27.440
1270
+ except as an observer of, I don't know, politics and any, I'm an observer of mechanisms,
1271
+
1272
+ 43:27.440 --> 43:32.960
1273
+ but you write in your paper on automated mechanism design, which I quickly read.
1274
+
1275
+ 43:33.840 --> 43:36.960
1276
+ So mechanism design is designing the rules of the game,
1277
+
1278
+ 43:37.760 --> 43:44.400
1279
+ so you get a certain desirable outcome. And you have this work on doing so in an automatic
1280
+
1281
+ 43:44.400 --> 43:49.440
1282
+ fashion as opposed to fine tuning it. So what have you learned from those efforts?
1283
+
1284
+ 43:49.440 --> 43:57.040
1285
+ If you look, say, I don't know, at complex systems, like our political system. Can we design our
1286
+
1287
+ 43:57.040 --> 44:04.480
1288
+ political system to have in an automated fashion, to have outcomes that we want? Can we design
1289
+
1290
+ 44:04.480 --> 44:11.680
1291
+ something like traffic lights to be smart, where it gets outcomes that we want?
1292
+
1293
+ 44:11.680 --> 44:14.880
1294
+ So what are the lessons that you draw from that work?
1295
+
1296
+ 44:14.880 --> 44:19.280
1297
+ Yeah, so I still very much believe in the automated mechanism design direction.
1298
+
1299
+ 44:19.280 --> 44:19.520
1300
+ Yes.
1301
+
1302
+ 44:20.640 --> 44:26.400
1303
+ But it's not a panacea. There are impossibility results in mechanism design,
1304
+
1305
+ 44:26.400 --> 44:32.800
1306
+ saying that there is no mechanism that accomplishes objective X in class C.
1307
+
1308
+ 44:33.840 --> 44:40.000
1309
+ So it's not going to happen; there's no way, using any mechanism design tools, manual or automated,
1310
+
1311
+ 44:40.800 --> 44:42.720
1312
+ to do certain things in mechanism design.
1313
+
1314
+ 44:42.720 --> 44:46.960
1315
+ Can you describe that again? So meaning there, it's impossible to achieve that?
1316
+
1317
+ 44:46.960 --> 44:55.120
1318
+ Yeah, there's an impossibility result. So these are not statements about human ingenuity,
1319
+
1320
+ 44:55.120 --> 45:00.320
1321
+ who might come up with something smart. These are proofs that if you want to accomplish properties
1322
+
1323
+ 45:00.320 --> 45:06.080
1324
+ X in class C, that is not doable with any mechanism. The good thing about automated
1325
+
1326
+ 45:06.080 --> 45:12.800
1327
+ mechanism design is that we're not really designing for a class, we're designing for specific settings
1328
+
1329
+ 45:12.800 --> 45:18.800
1330
+ at a time. So even if there's an impossibility result for the whole class, it just doesn't
1331
+
1332
+ 45:18.800 --> 45:23.520
1333
+ mean that all of the cases in the class are impossible, it just means that some of the
1334
+
1335
+ 45:23.520 --> 45:28.800
1336
+ cases are impossible. So we can actually carve these islands of possibility within these
1337
+
1338
+ 45:28.800 --> 45:34.640
1339
+ non impossible classes. And we've actually done that. So one of the famous results in mechanism
1340
+
1341
+ 45:34.640 --> 45:40.640
1342
+ design is the Myerson-Satterthwaite theorem, by Roger Myerson and Mark Satterthwaite, from 1983.
1343
+
1344
+ 45:40.640 --> 45:45.760
1345
+ So it's an impossibility of efficient trade under imperfect information. We show that
1346
+
1347
+ 45:46.880 --> 45:50.480
1348
+ you can in many settings avoid that and get efficient trade anyway.
1349
+
1350
+ 45:51.360 --> 45:56.640
1351
+ Depending on how you design the game. Depending how you design the game. And of course,
1352
+
1353
+ 45:56.640 --> 46:03.760
1354
+ it doesn't in any way contradict the impossibility result. The impossibility result is still there,
1355
+
1356
+ 46:03.760 --> 46:11.200
1357
+ but it just finds spots within this impossible class where in those spots you don't have the
1358
+
1359
+ 46:11.200 --> 46:17.600
1360
+ impossibility. Sorry if I'm going a bit philosophical, but what lessons do you draw towards like I
1361
+
1362
+ 46:17.600 --> 46:23.920
1363
+ mentioned politics or the human interaction and designing mechanisms for outside of just
1364
+
1365
+ 46:24.800 --> 46:27.120
1366
+ these kinds of trading or auctioning or
1367
+
1368
+ 46:27.120 --> 46:37.200
1369
+ purely formal games or human interaction like a political system. Do you think it's applicable
1370
+
1371
+ 46:37.200 --> 46:47.920
1372
+ to politics or to business, to negotiations, these kinds of things, designing rules that have
1373
+
1374
+ 46:47.920 --> 46:53.360
1375
+ certain outcomes? Yeah, yeah, I do think so. Have you seen that successfully done?
1376
+
1377
+ 46:53.360 --> 46:58.880
1378
+ There hasn't really... Oh, you mean mechanism design or automated mechanism design? Automated mechanism design.
1379
+
1380
+ 46:58.880 --> 47:07.440
1381
+ So mechanism design itself has had fairly limited success so far. There are certain cases,
1382
+
1383
+ 47:07.440 --> 47:13.600
1384
+ but most of the real world situations are actually not sound from a mechanism design
1385
+
1386
+ 47:13.600 --> 47:18.640
1387
+ perspective. Even in those cases where they've been designed by very knowledgeable mechanism
1388
+
1389
+ 47:18.640 --> 47:24.080
1390
+ design people, the people are typically just taking some insights from the theory and applying
1391
+
1392
+ 47:24.080 --> 47:29.920
1393
+ those insights into the real world rather than applying the mechanisms directly. So one famous
1394
+
1395
+ 47:29.920 --> 47:36.880
1396
+ example is the FCC spectrum auctions. So I've also had a small role in that, and
1397
+
1398
+ 47:38.560 --> 47:45.040
1399
+ very good economists have been working on that, who know game theory. Yet the rules that are
1400
+
1401
+ 47:45.040 --> 47:50.960
1402
+ designed in practice there, they're such that bidding truthfully is not the best strategy.
1403
+
1404
+ 47:51.680 --> 47:57.040
1405
+ Usually mechanism design, we try to make things easy for the participants. So telling the truth
1406
+
1407
+ 47:57.040 --> 48:02.480
1408
+ is the best strategy. But even in those very high stakes auctions where you have tens of billions
1409
+
1410
+ 48:02.480 --> 48:07.840
1411
+ of dollars worth of spectrum being auctioned, truth telling is not the best strategy.
1412
+
1413
+ 48:09.200 --> 48:14.000
1414
+ And by the way, nobody knows even a single optimal bidding strategy for those auctions.
1415
+
1416
+ 48:14.000 --> 48:17.600
1417
+ What's the challenge of coming up with an optimal bid? Because there's a lot of players and there's
1418
+
1419
+ 48:17.600 --> 48:23.440
1420
+ imperfections. It's not so much there, a lot of players, but many items for sale. And these
1421
+
1422
+ 48:23.440 --> 48:29.200
1423
+ mechanisms are such that even with just two items or one item, bidding truthfully wouldn't be
1424
+
1425
+ 48:29.200 --> 48:37.040
1426
+ the best strategy. If you look at the history of AI, it's marked by seminal events and an
1427
+
1428
+ 48:37.040 --> 48:42.160
1429
+ AlphaGo beating a world champion, human Go player, I would put Libratus winning the heads
1430
+
1431
+ 48:42.160 --> 48:51.280
1432
+ up no limit hold'em as one such event. Thank you. And what do you think is the next such event?
1433
+
1434
+ 48:52.400 --> 48:58.880
1435
+ Whether it's in your life or in the broadly AI community that you think might be out there
1436
+
1437
+ 48:58.880 --> 49:04.800
1438
+ that would surprise the world. So that's a great question and I don't really know the answer. In terms
1439
+
1440
+ 49:04.800 --> 49:12.880
1441
+ of game solving, heads up no limit Texas hold'em really was the one remaining widely agreed
1442
+
1443
+ 49:12.880 --> 49:18.400
1444
+ upon benchmark. So that was the big milestone. Now, are there other things? Yeah, certainly
1445
+
1446
+ 49:18.400 --> 49:23.440
1447
+ there are, but there is not one that the community has kind of focused on. So what could
1448
+
1449
+ 49:23.440 --> 49:29.680
1450
+ be other things? There are groups working on Starcraft. There are groups working on Dota 2.
1451
+
1452
+ 49:29.680 --> 49:36.560
1453
+ These are video games. Yes, or you could have like diplomacy or Hanabi, you know, things like
1454
+
1455
+ 49:36.560 --> 49:42.640
1456
+ that. These are like recreational games, but none of them are really acknowledged as kind of the
1457
+
1458
+ 49:42.640 --> 49:49.920
1459
+ main next challenge problem. Like chess or Go or heads up no limit Texas hold'em was.
1460
+
1461
+ 49:49.920 --> 49:55.600
1462
+ So I don't really know in the game solving space what is or what will be the next benchmark. I
1463
+
1464
+ 49:55.600 --> 49:59.920
1465
+ kind of hope that there will be a next benchmark because really the different groups working on
1466
+
1467
+ 49:59.920 --> 50:06.160
1468
+ the same problem really drove these application independent techniques forward very quickly
1469
+
1470
+ 50:06.160 --> 50:11.120
1471
+ over 10 years. Do you think there's an open problem that excites you that you start moving
1472
+
1473
+ 50:11.120 --> 50:18.240
1474
+ away from games into real world games like say the stock market trading? Yeah, so that's kind
1475
+
1476
+ 50:18.240 --> 50:27.120
1477
+ of how I am. So I am probably not going to work as hard on these recreational benchmarks.
1478
+
1479
+ 50:27.760 --> 50:32.960
1480
+ I'm doing two startups on game solving technology strategic machine and strategy robot and we're
1481
+
1482
+ 50:32.960 --> 50:39.680
1483
+ really interested in pushing this stuff into practice. What do you think would be really,
1484
+
1485
+ 50:39.680 --> 50:50.400
1486
+ you know, a powerful result that would be surprising that would be if you can say, I mean,
1487
+
1488
+ 50:50.400 --> 50:56.880
1489
+ it's, you know, five years, 10 years from now, something that statistically would say is not
1490
+
1491
+ 50:56.880 --> 51:03.200
1492
+ very likely, but if there's a breakthrough would achieve. Yeah, so I think that overall,
1493
+
1494
+ 51:03.200 --> 51:11.600
1495
+ we're in a very different situation in game theory than we are in, let's say, machine learning. Yes.
1496
+
1497
+ 51:11.600 --> 51:16.720
1498
+ So in machine learning, it's a fairly mature technology and it's very broadly applied and
1499
+
1500
+ 51:17.280 --> 51:22.480
1501
+ proven success in the real world. In game solving, there are almost no applications yet.
1502
+
1503
+ 51:24.320 --> 51:29.440
1504
+ We have just become superhuman, which machine learning you could argue happened in the 90s,
1505
+
1506
+ 51:29.440 --> 51:35.120
1507
+ if not earlier, and at least on supervised learning, certain complex supervised learning
1508
+
1509
+ 51:35.120 --> 51:40.960
1510
+ applications. Now, I think the next challenge problem, I know you're not asking about it this
1511
+
1512
+ 51:40.960 --> 51:45.360
1513
+ way, you're asking about technology breakthrough. But I think that big breakthrough is to be able
1514
+
1515
+ 51:45.360 --> 51:50.720
1516
+ to show that, hey, maybe most of, let's say, military planning or most of business strategy
1517
+
1518
+ 51:50.720 --> 51:55.600
1519
+ will actually be done strategically using computational game theory. That's what I would
1520
+
1521
+ 51:55.600 --> 52:01.120
1522
+ like to see as a next five or 10 year goal. Maybe you can explain to me again, forgive me if this
1523
+
1524
+ 52:01.120 --> 52:07.360
1525
+ is an obvious question, but machine learning methods and neural networks suffer from not
1526
+
1527
+ 52:07.360 --> 52:12.800
1528
+ being transparent, not being explainable. Game theoretic methods, Nash Equilibria,
1529
+
1530
+ 52:12.800 --> 52:18.960
1531
+ do they generally, when you see the different solutions, are they, when you talk about military
1532
+
1533
+ 52:18.960 --> 52:24.640
1534
+ operations, are they, once you see the strategies, do they make sense? Are they explainable or do
1535
+
1536
+ 52:24.640 --> 52:29.680
1537
+ they suffer from the same problems as neural networks do? So that's a good question. I would say
1538
+
1539
+ 52:30.400 --> 52:36.560
1540
+ a little bit yes and no. And what I mean by that is that these game theoretic strategies,
1541
+
1542
+ 52:36.560 --> 52:42.320
1543
+ let's say, Nash Equilibrium, it has provable properties. So it's unlike, let's say, deep
1544
+
1545
+ 52:42.320 --> 52:47.040
1546
+ learning where you kind of cross your fingers, hopefully it'll work. And then after the fact,
1547
+
1548
+ 52:47.040 --> 52:52.560
1549
+ when you have the weights, you're still crossing your fingers, and I hope it will work. Here,
1550
+
1551
+ 52:52.560 --> 52:57.840
1552
+ you know that the solution quality is there. There's provable solution quality guarantees.
1553
+
1554
+ 52:58.480 --> 53:03.360
1555
+ Now, that doesn't necessarily mean that the strategies are human understandable.
1556
+
1557
+ 53:03.360 --> 53:08.560
1558
+ That's a whole other problem. So I think that deep learning and computational game theory
1559
+
1560
+ 53:08.560 --> 53:12.320
1561
+ are in the same boat in that sense, that both are difficult to understand.
1562
+
1563
+ 53:13.680 --> 53:16.160
1564
+ But at least the game theoretic techniques, they have this
1565
+
1566
+ 53:16.160 --> 53:22.800
1567
+ guarantees of solution quality. So do you see business operations, strategic operations,
1568
+
1569
+ 53:22.800 --> 53:31.440
1570
+ even military in the future being at least the strong candidates being proposed by automated
1571
+
1572
+ 53:31.440 --> 53:39.520
1573
+ systems? Do you see that? Yeah, I do. I do. But that's more of a belief than a substantiated fact.
1574
+
1575
+ 53:39.520 --> 53:43.840
1576
+ Depending on where you land, an optimism or pessimism, that's a really, to me, that's an
1577
+
1578
+ 53:43.840 --> 53:52.560
1579
+ exciting future, especially if there's provable things in terms of optimality. So looking into
1580
+
1581
+ 53:52.560 --> 54:01.120
1582
+ the future, there's a few folks worried about the, especially you look at the game of poker,
1583
+
1584
+ 54:01.120 --> 54:06.800
1585
+ which is probably one of the last benchmarks in terms of games being solved. They worry about
1586
+
1587
+ 54:06.800 --> 54:11.760
1588
+ the future and the existential threats of artificial intelligence. So the negative impact
1589
+
1590
+ 54:11.760 --> 54:18.800
1591
+ in whatever form on society, is that something that concerns you as much? Or are you more optimistic
1592
+
1593
+ 54:18.800 --> 54:24.640
1594
+ about the positive impacts of AI? I am much more optimistic about the positive impacts.
1595
+
1596
+ 54:24.640 --> 54:29.920
1597
+ So just in my own work, what we've done so far, we run the nationwide kidney exchange.
1598
+
1599
+ 54:29.920 --> 54:36.000
1600
+ Hundreds of people are walking around alive today who otherwise wouldn't be. And it's increased employment.
1601
+
1602
+ 54:36.000 --> 54:42.720
1603
+ You have a lot of people now running kidney exchanges and at transplant centers, interacting
1604
+
1605
+ 54:42.720 --> 54:50.240
1606
+ with the kidney exchange. You have some extra surgeons, nurses, anesthesiologists, hospitals,
1607
+
1608
+ 54:50.240 --> 54:55.200
1609
+ all of that. So employment is increasing from that and the world is becoming a better place.
1610
+
1611
+ 54:55.200 --> 55:03.120
1612
+ Another example is combinatorial sourcing auctions. We did 800 large scale combinatorial
1613
+
1614
+ 55:03.120 --> 55:09.280
1615
+ sourcing auctions from 2001 to 2010 in a previous startup of mine called CombineNet.
1616
+
1617
+ 55:09.280 --> 55:18.480
1618
+ And we increased the supply chain efficiency on that $60 billion of spend by 12.6%. So that's
1619
+
1620
+ 55:18.480 --> 55:24.000
1621
+ over $6 billion of efficiency improvement in the world. And this is not like shifting value from
1622
+
1623
+ 55:24.000 --> 55:28.960
1624
+ somebody to somebody else, just efficiency improvement, like in trucking, less empty
1625
+
1626
+ 55:28.960 --> 55:35.040
1627
+ driving. So there's less waste, less carbon footprint and so on. This is a huge positive
1628
+
1629
+ 55:35.040 --> 55:42.080
1630
+ impact in the near term. But sort of to stay in it for a little longer, because I think game theory
1631
+
1632
+ 55:42.080 --> 55:46.720
1633
+ has a role to play here. Let me actually come back on that. That's one thing. I think AI is also going
1634
+
1635
+ 55:46.720 --> 55:52.960
1636
+ to make the world much safer. So that's another aspect that often gets overlooked.
1637
+
1638
+ 55:53.920 --> 55:58.560
1639
+ Well, let me ask this question. Maybe you can speak to the safety aspect. So I talked to Max Tegmark
1640
+
1641
+ 55:58.560 --> 56:03.200
1642
+ and Stuart Russell, who are very concerned about existential threats of AI. And often,
1643
+
1644
+ 56:03.200 --> 56:12.960
1645
+ the concern is about value misalignment. So AI systems basically working, operating towards
1646
+
1647
+ 56:12.960 --> 56:19.680
1648
+ goals that are not the same as human civilization, human beings. So it seems like game theory has
1649
+
1650
+ 56:19.680 --> 56:28.800
1651
+ a role to play there to make sure the values are aligned with human beings. I don't know if that's
1652
+
1653
+ 56:28.800 --> 56:36.160
1654
+ how you think about it. If not, how do you think AI might help with this problem? How do you think
1655
+
1656
+ 56:36.160 --> 56:44.800
1657
+ AI might make the world safer? Yeah, I think this value misalignment is a fairly theoretical
1658
+
1659
+ 56:44.800 --> 56:52.880
1660
+ worry. And I haven't really seen it in because I do a lot of real applications. I don't see it
1661
+
1662
+ 56:52.880 --> 56:58.080
1663
+ anywhere. The closest I've seen it was the following type of mental exercise, really,
1664
+
1665
+ 56:58.080 --> 57:03.200
1666
+ where I had this argument in the late 80s when we were building these transportation optimization
1667
+
1668
+ 57:03.200 --> 57:08.160
1669
+ systems. And somebody had heard that it's a good idea to have high utilization of assets.
1670
+
1671
+ 57:08.160 --> 57:13.840
1672
+ So they told me that, hey, why don't you put that as objective? And we didn't even put it as an
1673
+
1674
+ 57:13.840 --> 57:19.360
1675
+ objective, because I just showed him that if you had that as your objective, the solution would be
1676
+
1677
+ 57:19.360 --> 57:23.360
1678
+ to load your trucks full and drive in circles. Nothing would ever get delivered. You'd have
1679
+
1680
+ 57:23.360 --> 57:30.480
1681
+ 100% utilization. So yeah, I know this phenomenon. I've known this for over 30 years. But I've never
1682
+
1683
+ 57:30.480 --> 57:35.920
1684
+ seen it actually be a problem in reality. And yes, if you have the wrong objective, the AI will
1685
+
1686
+ 57:35.920 --> 57:40.720
1687
+ optimize that to the hilt. And it's going to hurt more than some human who's kind of trying to
1688
+
1689
+ 57:40.720 --> 57:47.200
1690
+ solve it in a half baked way with some human insight, too. But I just haven't seen that
1691
+
1692
+ 57:47.200 --> 57:51.600
1693
+ materialize in practice. There's this gap that you've actually put your finger on
1694
+
1695
+ 57:52.880 --> 57:59.760
1696
+ very clearly just now between theory and reality that's very difficult to put into words, I think.
1697
+
1698
+ 57:59.760 --> 58:06.720
1699
+ It's what you can theoretically imagine, the worst possible case or even, yeah, I mean,
1700
+
1701
+ 58:06.720 --> 58:12.720
1702
+ bad cases. And what usually happens in reality. So for example, to me, maybe it's something you
1703
+
1704
+ 58:12.720 --> 58:19.680
1705
+ can comment on, having grown up and I grew up in the Soviet Union. You know, there's currently
1706
+
1707
+ 58:19.680 --> 58:28.320
1708
+ 10,000 nuclear weapons in the world. And for many decades, it's theoretically surprising to me
1709
+
1710
+ 58:28.320 --> 58:34.240
1711
+ that nuclear war has not broken out. Do you think about this aspect from a game
1712
+
1713
+ 58:34.240 --> 58:41.520
1714
+ theoretic perspective in general? Why is that true? Why? In theory, you could see how things
1715
+
1716
+ 58:41.520 --> 58:46.480
1717
+ go terribly wrong. And somehow yet they have not. Yeah, how do you think about that? So I do think
1718
+
1719
+ 58:46.480 --> 58:51.120
1720
+ about that a lot. I think the biggest two threats that we're facing as mankind, one is climate
1721
+
1722
+ 58:51.120 --> 58:57.200
1723
+ change, and the other is nuclear war. So those are my main two worries that I worry about.
1724
+
1725
+ 58:57.200 --> 59:01.840
1726
+ And I've thought about trying to do something about climate
1727
+
1728
+ 59:01.840 --> 59:07.920
1729
+ change twice. Actually, for two of my startups, I've actually commissioned studies of what we
1730
+
1731
+ 59:07.920 --> 59:12.320
1732
+ could do on those things. And we didn't really find a sweet spot, but I'm still keeping an eye out
1733
+
1734
+ 59:12.320 --> 59:17.280
1735
+ on that if there's something where we could actually provide a market solution or optimization
1736
+
1737
+ 59:17.280 --> 59:24.080
1738
+ solution or some other technology solution to problems. Right now, like for example, pollution
1739
+
1740
+ 59:24.080 --> 59:30.000
1741
+ credit markets was what we were looking at then. And it was much more the lack of political will
1742
+
1743
+ 59:30.000 --> 59:35.520
1744
+ why those markets were not so successful, rather than bad market design. So I could go in and
1745
+
1746
+ 59:35.520 --> 59:39.920
1747
+ make a better market design. But that wouldn't really move the needle on the world very much
1748
+
1749
+ 59:39.920 --> 59:44.800
1750
+ if there's no political will and in the US, you know, the market, at least the Chicago market
1751
+
1752
+ 59:44.800 --> 59:50.560
1753
+ was just shut down, and so on. So then it doesn't really help how great your market design was.
1754
+
1755
+ 59:50.560 --> 59:59.920
1756
+ And on the nuclear side, it's more so global warming is a more encroaching problem.
1757
+
1758
+ 1:00:00.560 --> 1:00:05.680
1759
+ You know, nuclear weapons have been here. It's an obvious problem has just been sitting there.
1760
+
1761
+ 1:00:05.680 --> 1:00:12.240
1762
+ So how do you think about what is the mechanism design there that just made everything seem stable?
1763
+
1764
+ 1:00:12.240 --> 1:00:18.560
1765
+ And are you still extremely worried? I am still extremely worried. So you probably know the simple
1766
+
1767
+ 1:00:18.560 --> 1:00:25.280
1768
+ game theory of MAD. So this is mutually assured destruction. And it doesn't require any
1769
+
1770
+ 1:00:25.280 --> 1:00:29.600
1771
+ computation; with small matrices, you can actually convince yourself that the game is such that
1772
+
1773
+ 1:00:29.600 --> 1:00:36.000
1774
+ nobody wants to initiate. Yeah, that's a very coarse grained analysis. And it really works in
1775
+
1776
+ 1:00:36.000 --> 1:00:40.960
1777
+ a situation where you have two superpowers or small numbers of superpowers. Now things are
1778
+
1779
+ 1:00:40.960 --> 1:00:47.920
1780
+ very different. You have a smaller nuke. So the threshold of initiating is smaller. And you have
1781
+
1782
+ 1:00:47.920 --> 1:00:54.800
1783
+ smaller countries and non nation actors who may get nukes and so on. So I think it's
1784
+
1785
+ 1:00:54.800 --> 1:01:04.240
1786
+ riskier now than it was maybe ever before. And what idea or application of AI, you've talked about
1787
+
1788
+ 1:01:04.240 --> 1:01:09.440
1789
+ a little bit, but what is the most exciting to you right now? I mean, you're here at NIPS,
1790
+
1791
+ 1:01:09.440 --> 1:01:16.640
1792
+ NeurIPS. Now, you have a few excellent pieces of work. But what are you thinking into the future
1793
+
1794
+ 1:01:16.640 --> 1:01:20.640
1795
+ with several companies you're doing? What's the most exciting thing or one of the exciting things?
1796
+
1797
+ 1:01:21.200 --> 1:01:26.960
1798
+ The number one thing for me right now is coming up with these scalable techniques for
1799
+
1800
+ 1:01:26.960 --> 1:01:32.800
1801
+ game solving and applying them into the real world. I'm still very interested in market design
1802
+
1803
+ 1:01:32.800 --> 1:01:37.200
1804
+ as well. And we're doing that in Optimized Markets. But what I'm most interested in, number one
1805
+
1806
+ 1:01:37.200 --> 1:01:42.320
1807
+ right now, is Strategic Machine and Strategy Robot, getting that technology out there and seeing
1808
+
1809
+ 1:01:42.320 --> 1:01:47.920
1810
+ as you're in the trenches doing applications, what needs to be actually filled, what technology
1811
+
1812
+ 1:01:47.920 --> 1:01:52.720
1813
+ gaps still need to be filled. So it's so hard to just put your feet on the table and imagine what
1814
+
1815
+ 1:01:52.720 --> 1:01:57.440
1816
+ needs to be done. But when you're actually doing real applications, the applications tell you
1817
+
1818
+ 1:01:58.000 --> 1:02:03.040
1819
+ what needs to be done. And I really enjoy that interaction. Is it a challenging process to
1820
+
1821
+ 1:02:03.040 --> 1:02:13.520
1822
+ apply some of the state of the art techniques you're working on, and having the various players in
1823
+
1824
+ 1:02:13.520 --> 1:02:19.280
1825
+ industry or the military, or people who could really benefit from it actually use it? What's
1826
+
1827
+ 1:02:19.280 --> 1:02:23.920
1828
+ that process like of, you know, in autonomous vehicles, we work with automotive companies and
1829
+
1830
+ 1:02:23.920 --> 1:02:31.120
1831
+ they in many ways are a little bit old-fashioned, it's difficult. They really want to use this
1832
+
1833
+ 1:02:31.120 --> 1:02:37.520
1834
+ technology, it clearly will have a significant benefit. But the systems aren't quite in place
1835
+
1836
+ 1:02:37.520 --> 1:02:43.280
1837
+ to easily have them integrated, in terms of data, in terms of compute, in terms of all these kinds
1838
+
1839
+ 1:02:43.280 --> 1:02:49.200
1840
+ of things. So do you, is that one of the bigger challenges that you're facing? And how do you
1841
+
1842
+ 1:02:49.200 --> 1:02:54.000
1843
+ tackle that challenge? Yeah, I think that's always a challenge, that kind of slowness and inertia
1844
+
1845
+ 1:02:54.000 --> 1:02:59.360
1846
+ really of let's do things the way we've always done it. You just have to find the internal
1847
+
1848
+ 1:02:59.360 --> 1:03:04.480
1849
+ champions at the customer who understand that, hey, things can't be the same way in the future,
1850
+
1851
+ 1:03:04.480 --> 1:03:09.120
1852
+ otherwise bad things are going to happen. And it's in autonomous vehicles, it's actually very
1853
+
1854
+ 1:03:09.120 --> 1:03:13.200
1855
+ interesting that the car makers are doing that, and they're very traditional. But at the same time,
1856
+
1857
+ 1:03:13.200 --> 1:03:18.320
1858
+ you have tech companies who have nothing to do with cars or transportation, like Google and Baidu,
1859
+
1860
+ 1:03:19.120 --> 1:03:25.360
1861
+ really pushing on autonomous cars. I find that fascinating. Clearly, you're super excited about
1862
+
1863
+ 1:03:25.360 --> 1:03:31.120
1864
+ how actually these ideas have an impact in the world. In terms of the technology, in terms of
1865
+
1866
+ 1:03:31.120 --> 1:03:38.320
1867
+ ideas and research, are there directions that you're also excited about, whether that's on the
1868
+
1869
+ 1:03:39.440 --> 1:03:43.360
1870
+ some of the approaches you talked about for imperfect information games, whether it's applying
1871
+
1872
+ 1:03:43.360 --> 1:03:46.640
1873
+ deep learning to some of these problems? Is there something that you're excited in
1874
+
1875
+ 1:03:47.360 --> 1:03:52.240
1876
+ in the research side of things? Yeah, yeah, lots of different things in the game solving.
1877
+
1878
+ 1:03:52.240 --> 1:04:00.240
1879
+ So solving even bigger games, games where you have more hidden actions, the player
1880
+
1881
+ 1:04:00.240 --> 1:04:06.560
1882
+ actions as well. Poker is a game where really chance actions are hidden, or some of them are
1883
+
1884
+ 1:04:06.560 --> 1:04:14.480
1885
+ hidden, but the player actions are public. Multiplayer games of various sorts, collusion,
1886
+
1887
+ 1:04:14.480 --> 1:04:22.960
1888
+ opponent exploitation, all and even longer games. So games that basically go forever,
1889
+
1890
+ 1:04:22.960 --> 1:04:29.520
1891
+ but they're not repeated. So extensive-form games that go forever. What would that even look
1892
+
1893
+ 1:04:29.520 --> 1:04:33.200
1894
+ like? How do you represent that? How do you solve that? What's an example of a game like that?
1895
+
1896
+ 1:04:33.920 --> 1:04:37.680
1897
+ This is some of the stochastic games that you mentioned. Let's say business strategy. So it's
1898
+
1899
+ 1:04:37.680 --> 1:04:42.880
1900
+ and not just modeling like a particular interaction, but thinking about the business from here to
1901
+
1902
+ 1:04:42.880 --> 1:04:50.960
1903
+ eternity. I see. Or let's say military strategy. So it's not like war is going to go away.
1904
+
1905
+ 1:04:50.960 --> 1:04:58.000
1906
+ How do you think about military strategy that's going to go forever? How do you even model that?
1907
+
1908
+ 1:04:58.000 --> 1:05:05.760
1909
+ How do you know whether a move that you or somebody made was good? And so on. So that's
1910
+
1911
+ 1:05:05.760 --> 1:05:11.920
1912
+ kind of one direction. I'm also very interested in learning much more scalable techniques for
1913
+
1914
+ 1:05:11.920 --> 1:05:18.560
1915
+ integer programming. So we had an ICML paper this summer on that, the first automated algorithm
1916
+
1917
+ 1:05:18.560 --> 1:05:24.400
1918
+ configuration paper that has theoretical generalization guarantees. So if I see these many
1919
+
1920
+ 1:05:24.400 --> 1:05:30.480
1921
+ training examples, and I tune my algorithm in this way, it's going to have good performance
1922
+
1923
+ 1:05:30.480 --> 1:05:35.280
1924
+ on the real distribution, which I've not seen. So which is kind of interesting that, you know,
1925
+
1926
+ 1:05:35.280 --> 1:05:42.560
1927
+ algorithm configuration has been going on now for at least 17 years seriously. And there has not
1928
+
1929
+ 1:05:42.560 --> 1:05:48.800
1930
+ been any generalization theory before. Well, this is really exciting. And it's been, it's a huge
1931
+
1932
+ 1:05:48.800 --> 1:05:52.720
1933
+ honor to talk to you. Thank you so much, Tuomas. Thank you for bringing Libratus to the world
1934
+
1935
+ 1:05:52.720 --> 1:06:07.440
1936
+ and all the great work you're doing. Well, thank you very much. It's been fun. Good questions.
1937
+
vtt/episode_013_small.vtt ADDED
@@ -0,0 +1,2570 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.960
4
+ The following is a conversation with Tomaso Poggio.
5
+
6
+ 00:02.960 --> 00:06.200
7
+ He's a professor at MIT and is the director of the Center
8
+
9
+ 00:06.200 --> 00:08.360
10
+ for Brains, Minds, and Machines.
11
+
12
+ 00:08.360 --> 00:11.640
13
+ Cited over 100,000 times, his work
14
+
15
+ 00:11.640 --> 00:14.560
16
+ has had a profound impact on our understanding
17
+
18
+ 00:14.560 --> 00:17.680
19
+ of the nature of intelligence in both biological
20
+
21
+ 00:17.680 --> 00:19.880
22
+ and artificial neural networks.
23
+
24
+ 00:19.880 --> 00:23.840
25
+ He has been an advisor to many highly impactful researchers
26
+
27
+ 00:23.840 --> 00:26.120
28
+ and entrepreneurs in AI, including
29
+
30
+ 00:26.120 --> 00:28.000
31
+ Demis Hassabis of DeepMind,
32
+
33
+ 00:28.000 --> 00:31.200
34
+ Amnon Shashua of Mobileye, and Christof Koch
35
+
36
+ 00:31.200 --> 00:34.120
37
+ of the Allen Institute for Brain Science.
38
+
39
+ 00:34.120 --> 00:36.400
40
+ This conversation is part of the MIT course
41
+
42
+ 00:36.400 --> 00:38.120
43
+ on artificial general intelligence
44
+
45
+ 00:38.120 --> 00:40.240
46
+ and the artificial intelligence podcast.
47
+
48
+ 00:40.240 --> 00:42.760
49
+ If you enjoy it, subscribe on YouTube, iTunes,
50
+
51
+ 00:42.760 --> 00:44.600
52
+ or simply connect with me on Twitter
53
+
54
+ 00:44.600 --> 00:47.960
55
+ at Lex Fridman, spelled F R I D.
56
+
57
+ 00:47.960 --> 00:52.480
58
+ And now, here's my conversation with Tomaso Poggio.
59
+
60
+ 00:52.480 --> 00:54.520
61
+ You've mentioned that in your childhood,
62
+
63
+ 00:54.520 --> 00:56.960
64
+ you've developed a fascination with physics,
65
+
66
+ 00:56.960 --> 00:59.720
67
+ especially the theory of relativity,
68
+
69
+ 00:59.720 --> 01:03.600
70
+ and that Einstein was also a childhood hero to you.
71
+
72
+ 01:04.520 --> 01:09.040
73
+ What aspect of Einstein's genius, the nature of his genius,
74
+
75
+ 01:09.040 --> 01:10.200
76
+ do you think was essential
77
+
78
+ 01:10.200 --> 01:12.960
79
+ for discovering the theory of relativity?
80
+
81
+ 01:12.960 --> 01:15.960
82
+ You know, Einstein was a hero to me,
83
+
84
+ 01:15.960 --> 01:17.200
85
+ and I'm sure to many people,
86
+
87
+ 01:17.200 --> 01:21.680
88
+ because he was able to make, of course,
89
+
90
+ 01:21.680 --> 01:25.200
91
+ a major, major contribution to physics
92
+
93
+ 01:25.200 --> 01:28.520
94
+ with simplifying a bit,
95
+
96
+ 01:28.520 --> 01:33.520
97
+ just a gedanken experiment, a thought experiment.
98
+
99
+ 01:35.200 --> 01:38.880
100
+ You know, imagining communication with lights
101
+
102
+ 01:38.880 --> 01:43.240
103
+ between a stationary observer and somebody on a train.
104
+
105
+ 01:43.240 --> 01:48.240
106
+ And I thought, you know, the fact that just
107
+
108
+ 01:48.560 --> 01:52.720
109
+ with the force of his thought, of his thinking, of his mind,
110
+
111
+ 01:52.720 --> 01:55.640
112
+ it could get to something so deep
113
+
114
+ 01:55.640 --> 01:57.520
115
+ in terms of physical reality,
116
+
117
+ 01:57.520 --> 02:01.320
118
+ how time depends on space and speed.
119
+
120
+ 02:01.320 --> 02:04.120
121
+ It was something absolutely fascinating.
122
+
123
+ 02:04.120 --> 02:06.720
124
+ It was the power of intelligence,
125
+
126
+ 02:06.720 --> 02:08.440
127
+ the power of the mind.
128
+
129
+ 02:08.440 --> 02:11.120
130
+ Do you think the ability to imagine,
131
+
132
+ 02:11.120 --> 02:15.200
133
+ to visualize as he did, as a lot of great physicists do,
134
+
135
+ 02:15.200 --> 02:18.640
136
+ do you think that's in all of us human beings,
137
+
138
+ 02:18.640 --> 02:20.600
139
+ or is there something special
140
+
141
+ 02:20.600 --> 02:22.880
142
+ to that one particular human being?
143
+
144
+ 02:22.880 --> 02:27.160
145
+ I think, you know, all of us can learn
146
+
147
+ 02:27.160 --> 02:32.160
148
+ and have, in principle, similar breakthroughs.
149
+
150
+ 02:33.240 --> 02:37.200
151
+ There is lesson to be learned from Einstein.
152
+
153
+ 02:37.200 --> 02:42.200
154
+ He was one of five PhD students at ETH,
155
+
156
+ 02:42.600 --> 02:47.600
157
+ the Eidgenossische Technische Hochschule in Zurich, in physics.
158
+
159
+ 02:47.600 --> 02:49.840
160
+ And he was the worst of the five.
161
+
162
+ 02:49.840 --> 02:53.600
163
+ The only one who did not get an academic position
164
+
165
+ 02:53.600 --> 02:57.040
166
+ when he graduated, when he finished his PhD,
167
+
168
+ 02:57.040 --> 03:00.000
169
+ and he went to work, as everybody knows,
170
+
171
+ 03:00.000 --> 03:01.720
172
+ for the patent office.
173
+
174
+ 03:01.720 --> 03:05.000
175
+ So it's not so much that he worked for the patent office,
176
+
177
+ 03:05.000 --> 03:07.880
178
+ but the fact that obviously he was smart,
179
+
180
+ 03:07.880 --> 03:10.240
181
+ but he was not the top student,
182
+
183
+ 03:10.240 --> 03:12.640
184
+ obviously he was an anticonformist.
185
+
186
+ 03:12.640 --> 03:15.720
187
+ He was not thinking in the traditional way
188
+
189
+ 03:15.720 --> 03:18.760
190
+ that probably teachers and the other students were doing.
191
+
192
+ 03:18.760 --> 03:23.760
193
+ So there is a lot to be said about trying to do the opposite
194
+
195
+ 03:25.960 --> 03:29.800
196
+ or something quite different from what other people are doing.
197
+
198
+ 03:29.800 --> 03:31.840
199
+ That's certainly true for the stock market.
200
+
201
+ 03:31.840 --> 03:34.800
202
+ Never buy if everybody's buying it.
203
+
204
+ 03:35.800 --> 03:37.440
205
+ And also true for science.
206
+
207
+ 03:37.440 --> 03:38.440
208
+ Yes.
209
+
210
+ 03:38.440 --> 03:42.440
211
+ So you've also mentioned staying on the theme of physics
212
+
213
+ 03:42.440 --> 03:46.440
214
+ that you were excited at a young age
215
+
216
+ 03:46.440 --> 03:50.440
217
+ by the mysteries of the universe that physics could uncover.
218
+
219
+ 03:50.440 --> 03:54.440
220
+ Such as, I saw mentioned, the possibility of time travel.
221
+
222
+ 03:56.440 --> 03:59.440
223
+ So out of the box question I think I'll get to ask today,
224
+
225
+ 03:59.440 --> 04:01.440
226
+ do you think time travel is possible?
227
+
228
+ 04:02.440 --> 04:05.440
229
+ Well, it would be nice if it were possible right now.
230
+
231
+ 04:05.440 --> 04:11.440
232
+ In science you never say no.
233
+
234
+ 04:11.440 --> 04:14.440
235
+ But your understanding of the nature of time.
236
+
237
+ 04:14.440 --> 04:15.440
238
+ Yeah.
239
+
240
+ 04:15.440 --> 04:20.440
241
+ It's very likely that it's not possible to travel in time.
242
+
243
+ 04:20.440 --> 04:24.440
244
+ We may be able to travel forward in time.
245
+
246
+ 04:24.440 --> 04:28.440
247
+ If we can, for instance, freeze ourselves
248
+
249
+ 04:28.440 --> 04:34.440
250
+ or go on some spacecraft traveling close to the speed of light,
251
+
252
+ 04:34.440 --> 04:39.440
253
+ but in terms of actively traveling, for instance, back in time,
254
+
255
+ 04:39.440 --> 04:43.440
256
+ I find probably very unlikely.
257
+
258
+ 04:43.440 --> 04:49.440
259
+ So do you still hold the underlying dream of the engineering intelligence
260
+
261
+ 04:49.440 --> 04:54.440
262
+ that will build systems that are able to do such huge leaps
263
+
264
+ 04:54.440 --> 04:58.440
265
+ like discovering the kind of mechanism
266
+
267
+ 04:58.440 --> 05:00.440
268
+ that would be required to travel through time?
269
+
270
+ 05:00.440 --> 05:02.440
271
+ Do you still hold that dream?
272
+
273
+ 05:02.440 --> 05:05.440
274
+ Or echoes of it from your childhood?
275
+
276
+ 05:05.440 --> 05:06.440
277
+ Yeah.
278
+
279
+ 05:06.440 --> 05:10.440
280
+ I don't think there are certain problems
281
+
282
+ 05:10.440 --> 05:13.440
283
+ that probably cannot be solved,
284
+
285
+ 05:13.440 --> 05:17.440
286
+ depending on what you believe about the physical reality.
287
+
288
+ 05:17.440 --> 05:23.440
289
+ Maybe it's totally impossible to create energy from nothing
290
+
291
+ 05:23.440 --> 05:26.440
292
+ or to travel back in time.
293
+
294
+ 05:26.440 --> 05:35.440
295
+ But about making machines that can think as well as we do or better,
296
+
297
+ 05:35.440 --> 05:39.440
298
+ or more likely, especially in the short and mid term,
299
+
300
+ 05:39.440 --> 05:41.440
301
+ help us think better,
302
+
303
+ 05:41.440 --> 05:45.440
304
+ which in a sense is happening already with the computers we have,
305
+
306
+ 05:45.440 --> 05:47.440
307
+ and it will happen more and more.
308
+
309
+ 05:47.440 --> 05:49.440
310
+ But that I certainly believe,
311
+
312
+ 05:49.440 --> 05:53.440
313
+ and I don't see in principle why computers at some point
314
+
315
+ 05:53.440 --> 05:59.440
316
+ could not become more intelligent than we are,
317
+
318
+ 05:59.440 --> 06:03.440
319
+ although the word intelligence is a tricky one,
320
+
321
+ 06:03.440 --> 06:07.440
322
+ and one who should discuss what I mean with that.
323
+
324
+ 06:07.440 --> 06:12.440
325
+ Intelligence, consciousness, words like love,
326
+
327
+ 06:12.440 --> 06:16.440
328
+ all these need to be disentangled.
329
+
330
+ 06:16.440 --> 06:20.440
331
+ So you've mentioned also that you believe the problem of intelligence
332
+
333
+ 06:20.440 --> 06:23.440
334
+ is the greatest problem in science,
335
+
336
+ 06:23.440 --> 06:26.440
337
+ greater than the origin of life and the origin of the universe.
338
+
339
+ 06:26.440 --> 06:29.440
340
+ You've also, in the talk,
341
+
342
+ 06:29.440 --> 06:34.440
343
+ said that you're open to arguments against you.
344
+
345
+ 06:34.440 --> 06:40.440
346
+ So what do you think is the most captivating aspect
347
+
348
+ 06:40.440 --> 06:43.440
349
+ of this problem of understanding the nature of intelligence?
350
+
351
+ 06:43.440 --> 06:46.440
352
+ Why does it captivate you as it does?
353
+
354
+ 06:46.440 --> 06:54.440
355
+ Well, originally, I think one of the motivations that I had as a teenager,
356
+
357
+ 06:54.440 --> 06:58.440
358
+ when I was infatuated with the theory of relativity,
359
+
360
+ 06:58.440 --> 07:05.440
361
+ was really that I found that there was the problem of time and space
362
+
363
+ 07:05.440 --> 07:07.440
364
+ and general relativity,
365
+
366
+ 07:07.440 --> 07:12.440
367
+ but there were so many other problems of the same level of difficulty
368
+
369
+ 07:12.440 --> 07:16.440
370
+ and importance that I could, even if I were Einstein,
371
+
372
+ 07:16.440 --> 07:19.440
373
+ it was difficult to hope to solve all of them.
374
+
375
+ 07:19.440 --> 07:26.440
376
+ So what about solving a problem whose solution allowed me to solve all the problems?
377
+
378
+ 07:26.440 --> 07:32.440
379
+ And this was what if we could find the key to an intelligence
380
+
381
+ 07:32.440 --> 07:36.440
382
+ ten times better or faster than Einstein?
383
+
384
+ 07:36.440 --> 07:39.440
385
+ So that's sort of seeing artificial intelligence
386
+
387
+ 07:39.440 --> 07:42.440
388
+ as a tool to expand our capabilities.
389
+
390
+ 07:42.440 --> 07:47.440
391
+ But is there just an inherent curiosity in you
392
+
393
+ 07:47.440 --> 07:53.440
394
+ and just understanding what it is in here that makes it all work?
395
+
396
+ 07:53.440 --> 07:55.440
397
+ Yes, absolutely. You're right.
398
+
399
+ 07:55.440 --> 08:00.440
400
+ So I started saying this was the motivation when I was a teenager,
401
+
402
+ 08:00.440 --> 08:06.440
403
+ but soon after, I think the problem of human intelligence
404
+
405
+ 08:06.440 --> 08:14.440
406
+ became a real focus of my science and my research,
407
+
408
+ 08:14.440 --> 08:27.440
409
+ because I think for me the most interesting problem is really asking who we are.
410
+
411
+ 08:27.440 --> 08:31.440
412
+ It is asking not only a question about science,
413
+
414
+ 08:31.440 --> 08:37.440
415
+ but even about the very tool we are using to do science, which is our brain.
416
+
417
+ 08:37.440 --> 08:39.440
418
+ How does our brain work?
419
+
420
+ 08:39.440 --> 08:41.440
421
+ From where does it come from?
422
+
423
+ 08:41.440 --> 08:43.440
424
+ What are its limitations?
425
+
426
+ 08:43.440 --> 08:45.440
427
+ Can we make it better?
428
+
429
+ 08:45.440 --> 08:49.440
430
+ And that in many ways is the ultimate question
431
+
432
+ 08:49.440 --> 08:53.440
433
+ that underlies this whole effort of science.
434
+
435
+ 08:53.440 --> 08:58.440
436
+ So you've made significant contributions in both the science of intelligence
437
+
438
+ 08:58.440 --> 09:01.440
439
+ and the engineering of intelligence.
440
+
441
+ 09:01.440 --> 09:04.440
442
+ In a hypothetical way, let me ask,
443
+
444
+ 09:04.440 --> 09:08.440
445
+ how far do you think we can get in creating intelligence systems
446
+
447
+ 09:08.440 --> 09:11.440
448
+ without understanding the biological,
449
+
450
+ 09:11.440 --> 09:15.440
451
+ the understanding how the human brain creates intelligence?
452
+
453
+ 09:15.440 --> 09:18.440
454
+ Put another way, do you think we can build a strong AI system
455
+
456
+ 09:18.440 --> 09:24.440
457
+ without really getting at the core, understanding the functional nature of the brain?
458
+
459
+ 09:24.440 --> 09:28.440
460
+ Well, this is a real difficult question.
461
+
462
+ 09:28.440 --> 09:34.440
463
+ We did solve problems like flying
464
+
465
+ 09:34.440 --> 09:43.440
466
+ without really using too much our knowledge about how birds fly.
467
+
468
+ 09:43.440 --> 09:51.440
469
+ It was important, I guess, to know that you could have things heavier than air
470
+
471
+ 09:51.440 --> 09:55.440
472
+ being able to fly like birds.
473
+
474
+ 09:55.440 --> 10:00.440
475
+ But beyond that, probably we did not learn very much.
476
+
477
+ 10:00.440 --> 10:08.440
478
+ The Wright brothers did learn a lot from observing birds
479
+
480
+ 10:08.440 --> 10:12.440
481
+ in designing their aircraft,
482
+
483
+ 10:12.440 --> 10:17.440
484
+ but you can argue we did not use much of biology in that particular case.
485
+
486
+ 10:17.440 --> 10:28.440
487
+ Now, in the case of intelligence, I think that it's a bit of a bet right now.
488
+
489
+ 10:28.440 --> 10:36.440
490
+ If you ask, okay, we all agree we'll get at some point, maybe soon,
491
+
492
+ 10:36.440 --> 10:42.440
493
+ maybe later, to a machine that is indistinguishable from my secretary
494
+
495
+ 10:42.440 --> 10:47.440
496
+ in terms of what I can ask the machine to do.
497
+
498
+ 10:47.440 --> 10:50.440
499
+ I think we'll get there and now the question is,
500
+
501
+ 10:50.440 --> 10:56.440
502
+ you can ask people, do you think we'll get there without any knowledge about the human brain
503
+
504
+ 10:56.440 --> 11:02.440
505
+ or the best way to get there is to understand better the human brain?
506
+
507
+ 11:02.440 --> 11:08.440
508
+ This is, I think, an educated bet that different people with different backgrounds
509
+
510
+ 11:08.440 --> 11:11.440
511
+ will decide in different ways.
512
+
513
+ 11:11.440 --> 11:17.440
514
+ The recent history of the progress in AI in the last, I would say, five years
515
+
516
+ 11:17.440 --> 11:26.440
517
+ or ten years has been that the main breakthroughs, the main recent breakthroughs,
518
+
519
+ 11:26.440 --> 11:31.440
520
+ really start from neuroscience.
521
+
522
+ 11:31.440 --> 11:35.440
523
+ I can mention reinforcement learning as one,
524
+
525
+ 11:35.440 --> 11:41.440
526
+ is one of the algorithms at the core of AlphaGo,
527
+
528
+ 11:41.440 --> 11:46.440
529
+ which is the system that beat the kind of an official world champion of Go,
530
+
531
+ 11:46.440 --> 11:52.440
532
+ Lee Sedol, two, three years ago in Seoul.
533
+
534
+ 11:52.440 --> 12:00.440
535
+ That's one, and that started really with the work of Pavlov in 1900,
536
+
537
+ 12:00.440 --> 12:07.440
538
+ Marvin Minsky in the 60s and many other neuroscientists later on.
539
+
540
+ 12:07.440 --> 12:13.440
541
+ And deep learning started, which is the core again of AlphaGo
542
+
543
+ 12:13.440 --> 12:19.440
544
+ and systems like autonomous driving systems for cars,
545
+
546
+ 12:19.440 --> 12:25.440
547
+ like the systems that Mobileye, which is a company started by one of my ex,
548
+
549
+ 12:25.440 --> 12:30.440
550
+ Amnon Shashua, so that is the core of those things.
551
+
552
+ 12:30.440 --> 12:35.440
553
+ And deep learning, really the initial ideas in terms of the architecture
554
+
555
+ 12:35.440 --> 12:42.440
556
+ of these layered hierarchical networks started with work of Torsten Wiesel
557
+
558
+ 12:42.440 --> 12:47.440
559
+ and David Hubel at Harvard up the river in the 60s.
560
+
561
+ 12:47.440 --> 12:54.440
562
+ So recent history suggests that neuroscience played a big role in these breakthroughs.
563
+
564
+ 12:54.440 --> 12:59.440
565
+ My personal bet is that there is a good chance they continue to play a big role,
566
+
567
+ 12:59.440 --> 13:03.440
568
+ maybe not in all the future breakthroughs, but in some of them.
569
+
570
+ 13:03.440 --> 13:05.440
571
+ At least in inspiration.
572
+
573
+ 13:05.440 --> 13:07.440
574
+ At least in inspiration, absolutely, yes.
575
+
576
+ 13:07.440 --> 13:12.440
577
+ So you studied both artificial and biological neural networks,
578
+
579
+ 13:12.440 --> 13:19.440
580
+ you said these mechanisms that underlie deep learning and reinforcement learning,
581
+
582
+ 13:19.440 --> 13:25.440
583
+ but there is nevertheless significant differences between biological and artificial neural networks
584
+
585
+ 13:25.440 --> 13:27.440
586
+ as they stand now.
587
+
588
+ 13:27.440 --> 13:32.440
589
+ So between the two, what do you find is the most interesting, mysterious,
590
+
591
+ 13:32.440 --> 13:37.440
592
+ maybe even beautiful difference as it currently stands in our understanding?
593
+
594
+ 13:37.440 --> 13:44.440
595
+ I must confess that until recently I found that the artificial networks
596
+
597
+ 13:44.440 --> 13:49.440
598
+ were too simplistic relative to real neural networks.
599
+
600
+ 13:49.440 --> 13:54.440
601
+ But, you know, recently I've been started to think that, yes,
602
+
603
+ 13:54.440 --> 13:59.440
604
+ there are very big simplification of what you find in the brain.
605
+
606
+ 13:59.440 --> 14:07.440
607
+ But on the other hand, they are much closer in terms of the architecture to the brain
608
+
609
+ 14:07.440 --> 14:13.440
610
+ than other models that we had, that computer science used as model of thinking,
611
+
612
+ 14:13.440 --> 14:19.440
613
+ or mathematical logics, you know, LISP, Prologue, and those kind of things.
614
+
615
+ 14:19.440 --> 14:23.440
616
+ So in comparison to those, they're much closer to the brain.
617
+
618
+ 14:23.440 --> 14:28.440
619
+ You have networks of neurons, which is what the brain is about.
620
+
621
+ 14:28.440 --> 14:35.440
622
+ The artificial neurons in the models are, as I said, caricature of the biological neurons,
623
+
624
+ 14:35.440 --> 14:39.440
625
+ but they're still neurons, single units communicating with other units,
626
+
627
+ 14:39.440 --> 14:50.440
628
+ something that is absent in the traditional computer type models of mathematics, reasoning, and so on.
629
+
630
+ 14:50.440 --> 14:56.440
631
+ So what aspect would you like to see in artificial neural networks added over time
632
+
633
+ 14:56.440 --> 14:59.440
634
+ as we try to figure out ways to improve them?
635
+
636
+ 14:59.440 --> 15:10.440
637
+ So one of the main differences and, you know, problems in terms of deep learning today,
638
+
639
+ 15:10.440 --> 15:17.440
640
+ and it's not only deep learning, and the brain is the need for deep learning techniques
641
+
642
+ 15:17.440 --> 15:22.440
643
+ to have a lot of labeled examples.
644
+
645
+ 15:22.440 --> 15:31.440
646
+ For instance, for ImageNet, you have a training set which is one million images, each one labeled by some human
647
+
648
+ 15:31.440 --> 15:34.440
649
+ in terms of which object is there.
650
+
651
+ 15:34.440 --> 15:46.440
652
+ And it's clear that in biology, a baby may be able to see a million images in the first years of life,
653
+
654
+ 15:46.440 --> 15:56.440
655
+ but will not have a million of labels given to him or her by parents or caretakers.
656
+
657
+ 15:56.440 --> 15:59.440
658
+ So how do you solve that?
659
+
660
+ 15:59.440 --> 16:07.440
661
+ You know, I think there is this interesting challenge that today, deep learning and related techniques
662
+
663
+ 16:07.440 --> 16:18.440
664
+ are all about big data, big data meaning a lot of examples labeled by humans,
665
+
666
+ 16:18.440 --> 16:22.440
667
+ whereas in nature you have...
668
+
669
+ 16:22.440 --> 16:29.440
670
+ So this big data is n going to infinity, that's the best, you know, n meaning labeled data.
671
+
672
+ 16:29.440 --> 16:34.440
673
+ But I think the biological world is more n going to 1.
674
+
675
+ 16:34.440 --> 16:42.440
676
+ A child can learn from a very small number of labeled examples.
677
+
678
+ 16:42.440 --> 16:49.440
679
+ Like you tell a child, this is a car, you don't need to say like in ImageNet, you know, this is a car, this is a car,
680
+
681
+ 16:49.440 --> 16:53.440
682
+ this is not a car, this is not a car, one million times.
683
+
684
+ 16:53.440 --> 17:05.440
685
+ And of course with AlphaGo and AlphaZero variants, because the world of Go is so simplistic that you can actually learn by yourself
686
+
687
+ 17:05.440 --> 17:08.440
688
+ through self play, you can play against each other.
689
+
690
+ 17:08.440 --> 17:15.440
691
+ And the real world, the visual system that you've studied extensively is a lot more complicated than the game of Go.
692
+
693
+ 17:15.440 --> 17:22.440
694
+ On the comment about children, which are fascinatingly good at learning new stuff,
695
+
696
+ 17:22.440 --> 17:26.440
697
+ how much of it do you think is hardware and how much of it is software?
698
+
699
+ 17:26.440 --> 17:32.440
700
+ Yeah, that's a good and deep question, in a sense is the old question of nurture and nature,
701
+
702
+ 17:32.440 --> 17:40.440
703
+ how much is in the gene and how much is in the experience of an individual.
704
+
705
+ 17:40.440 --> 17:55.440
706
+ Obviously, it's both that play a role, and I believe that the way evolution puts in prior information is, so to speak, hardwired,
707
+
708
+ 17:55.440 --> 18:02.440
709
+ it's not really hardwired, but that's essentially an hypothesis.
710
+
711
+ 18:02.440 --> 18:14.440
712
+ I think what's going on is that evolution is almost necessarily, if you believe in Darwin, it's very opportunistic.
713
+
714
+ 18:14.440 --> 18:23.440
715
+ And think about our DNA and the DNA of Drosophila.
716
+
717
+ 18:23.440 --> 18:28.440
718
+ Our DNA does not have many more genes than Drosophila.
719
+
720
+ 18:28.440 --> 18:32.440
721
+ The fly, the fruit fly.
722
+
723
+ 18:32.440 --> 18:39.440
724
+ Now, we know that the fruit fly does not learn very much during its individual existence.
725
+
726
+ 18:39.440 --> 18:51.440
727
+ It looks like one of these machines that is really mostly, not 100%, but 95% hardcoded by the genes.
728
+
729
+ 18:51.440 --> 19:02.440
730
+ But since we don't have many more genes than Drosophila, evolution could encode in us a kind of general learning machinery
731
+
732
+ 19:02.440 --> 19:09.440
733
+ and then had to give very weak priors.
734
+
735
+ 19:09.440 --> 19:20.440
736
+ Like, for instance, let me give a specific example, which is recent to work by a member of our Center for Brains, Mines and Machines.
737
+
738
+ 19:20.440 --> 19:30.440
739
+ We know because of work of other people in our group and other groups that there are cells in a part of our brain, neurons, that are tuned to faces.
740
+
741
+ 19:30.440 --> 19:33.440
742
+ They seem to be involved in face recognition.
743
+
744
+ 19:33.440 --> 19:43.440
745
+ Now, this face area seems to be present in young children and adults.
746
+
747
+ 19:43.440 --> 19:54.440
748
+ And one question is: is it there from the beginning, is it hardwired by evolution, or somehow is it learned very quickly?
749
+
750
+ 19:54.440 --> 20:00.440
751
+ So what's your, by the way, a lot of the questions I'm asking, the answer is we don't really know,
752
+
753
+ 20:00.440 --> 20:08.440
754
+ but as a person who has contributed some profound ideas in these fields, you're a good person to guess at some of these.
755
+
756
+ 20:08.440 --> 20:14.440
757
+ So, of course, there's a caveat before a lot of the stuff we talk about, but what is your hunch?
758
+
759
+ 20:14.440 --> 20:21.440
760
+ Is the face, the part of the brain that seems to be concentrated on face recognition, are you born with that?
761
+
762
+ 20:21.440 --> 20:26.440
763
+ Or are you just designed to learn that quickly, like the face of the mother and so on?
764
+
765
+ 20:26.440 --> 20:42.440
766
+ My hunch, my bias was the second one, learned very quickly and turns out that Marge Livingstone at Harvard has done some amazing experiments in which she raised baby monkeys,
767
+
768
+ 20:42.440 --> 20:47.440
769
+ depriving them of faces during the first weeks of life.
770
+
771
+ 20:47.440 --> 20:52.440
772
+ So they see technicians, but the technicians have a mask.
773
+
774
+ 20:52.440 --> 20:54.440
775
+ Yes.
776
+
777
+ 20:54.440 --> 21:10.440
778
+ And so when they looked at the area in the brain of these monkeys where usually you find faces, they found no face preference.
779
+
780
+ 21:10.440 --> 21:26.440
781
+ So my guess is that what evolution does in this case is there is a plastic area, which is plastic, which is kind of predetermined to be imprinted very easily.
782
+
783
+ 21:26.440 --> 21:31.440
784
+ But the command from the gene is not a detailed circuitry for a face template.
785
+
786
+ 21:31.440 --> 21:33.440
787
+ Could be.
788
+
789
+ 21:33.440 --> 21:35.440
790
+ But this will require probably a lot of bits.
791
+
792
+ 21:35.440 --> 21:39.440
793
+ You had to specify a lot of connection of a lot of neurons.
794
+
795
+ 21:39.440 --> 21:53.440
796
+ Instead, the command from the gene is something like imprint, memorize what you see most often in the first two weeks of life, especially in connection with food and maybe nipples.
797
+
798
+ 21:53.440 --> 21:54.440
799
+ I don't know.
800
+
801
+ 21:54.440 --> 21:55.440
802
+ Right.
803
+
804
+ 21:55.440 --> 21:56.440
805
+ Well, source of food.
806
+
807
+ 21:56.440 --> 22:00.440
808
+ And so that area is very plastic at first and then it solidifies.
809
+
810
+ 22:00.440 --> 22:10.440
811
+ It'd be interesting if a variant of that experiment would show a different kind of pattern associated with food than a face pattern, whether that could stick.
812
+
813
+ 22:10.440 --> 22:25.440
814
+ There are indications that during that experiment, what the monkeys saw quite often were the blue gloves of the technicians that were giving to the baby monkeys the milk.
815
+
816
+ 22:25.440 --> 22:33.440
817
+ And some of the cells instead of being face sensitive in that area are hand sensitive.
818
+
819
+ 22:33.440 --> 22:35.440
820
+ That's fascinating.
821
+
822
+ 22:35.440 --> 22:45.440
823
+ Can you talk about what are the different parts of the brain and in your view sort of loosely and how do they contribute to intelligence?
824
+
825
+ 22:45.440 --> 23:04.440
826
+ Do you see the brain as a bunch of different modules and they together come in the human brain to create intelligence or is it all one mush of the same kind of fundamental architecture?
827
+
828
+ 23:04.440 --> 23:21.440
829
+ Yeah, that's an important question and there was a phase in neuroscience back in the 1950s or so in which it was believed for a while that the brain was equipotential.
830
+
831
+ 23:21.440 --> 23:22.440
832
+ This was the term.
833
+
834
+ 23:22.440 --> 23:31.440
835
+ You could cut out a piece and nothing special happened apart, a little bit less performance.
836
+
837
+ 23:31.440 --> 23:50.440
838
+ There was a surgeon, Lashley, who did a lot of experiments of this type with mice and rats and concluded that every part of the brain was essentially equivalent to any other one.
839
+
840
+ 23:50.440 --> 24:12.440
841
+ It turns out that that's really not true. There are very specific modules in the brain, as you said, and people may lose the ability to speak if you have a stroke in a certain region or may lose control of their legs in another region.
842
+
843
+ 24:12.440 --> 24:33.440
844
+ So they're very specific. The brain is also quite flexible and redundant so often it can correct things and take over functions from one part of the brain to the other, but really there are specific modules.
845
+
846
+ 24:33.440 --> 25:02.440
847
+ So the answer that we know from this old work, which was basically based on lesions, either on animals or very often there was a mine of very interesting data coming from the war, from different types of injuries that soldiers had in the brain.
848
+
849
+ 25:02.440 --> 25:23.440
850
+ And more recently, functional MRI, which allow you to check which part of the brain are active when you're doing different tasks, as you can replace some of this.
851
+
852
+ 25:23.440 --> 25:32.440
853
+ You can see that certain parts of the brain are involved, are active in certain tasks.
854
+
855
+ 25:32.440 --> 26:01.440
856
+ But sort of taking a step back to that part of the brain that discovers that specializes in the face and how that might be learned, what's your intuition behind, you know, is it possible that the sort of from a physicist's perspective when you get lower and lower, that it's all the same stuff and it just, when you're born, it's plastic and it quickly figures out this part is going to be about vision, this is going to be about language, this is about common sense reasoning.
857
+
858
+ 26:01.440 --> 26:09.440
859
+ Do you have an intuition that that kind of learning is going on really quickly or is it really kind of solidified in hardware?
860
+
861
+ 26:09.440 --> 26:10.440
862
+ That's a great question.
863
+
864
+ 26:10.440 --> 26:21.440
865
+ So there are parts of the brain like the cerebellum or the hippocampus that are quite different from each other.
866
+
867
+ 26:21.440 --> 26:25.440
868
+ They clearly have different anatomy, different connectivity.
869
+
870
+ 26:25.440 --> 26:35.440
871
+ Then there is the cortex, which is the most developed part of the brain in humans.
872
+
873
+ 26:35.440 --> 26:47.440
874
+ And in the cortex, you have different regions of the cortex that are responsible for vision, for audition, for motor control, for language.
875
+
876
+ 26:47.440 --> 27:07.440
877
+ Now, one of the big puzzles of this is that in the cortex, it looks like it is the same in terms of hardware, in terms of type of neurons and connectivity across these different modalities.
878
+
879
+ 27:07.440 --> 27:17.440
880
+ So for the cortex, I think aside these other parts of the brain like spinal cord, hippocampus, cerebellum and so on.
881
+
882
+ 27:17.440 --> 27:28.440
883
+ For the cortex, I think your question about hardware and software and learning and so on, I think is rather open.
884
+
885
+ 27:28.440 --> 27:40.440
886
+ And I find it very interesting for us to think about an architecture, computer architecture that is good for vision and at the same time is good for language.
887
+
888
+ 27:40.440 --> 27:48.440
889
+ It seems to be so different problem areas that you have to solve.
890
+
891
+ 27:48.440 --> 27:54.440
892
+ But the underlying mechanism might be the same and that's really instructive for artificial neural networks.
893
+
894
+ 27:54.440 --> 28:00.440
895
+ So we've done a lot of great work in vision and human vision, computer vision.
896
+
897
+ 28:00.440 --> 28:07.440
898
+ And you mentioned the problem of human vision is really as difficult as the problem of general intelligence.
899
+
900
+ 28:07.440 --> 28:10.440
901
+ And maybe that connects to the cortex discussion.
902
+
903
+ 28:10.440 --> 28:21.440
904
+ Can you describe the human visual cortex and how the humans begin to understand the world through the raw sensory information?
905
+
906
+ 28:21.440 --> 28:36.440
907
+ What's for folks who are not familiar, especially on the computer vision side, we don't often actually take a step back except saying with a sentence or two that one is inspired by the other.
908
+
909
+ 28:36.440 --> 28:39.440
910
+ What is it that we know about the human visual cortex?
911
+
912
+ 28:39.440 --> 28:40.440
913
+ That's interesting.
914
+
915
+ 28:40.440 --> 28:53.440
916
+ So we know quite a bit at the same time, we don't know a lot, but the bit we know, in a sense, we know a lot of the details and many we don't know.
917
+
918
+ 28:53.440 --> 29:05.440
919
+ And we know a lot of the top level, the answer to the top level question, but we don't know some basic ones, even in terms of general neuroscience forgetting vision.
920
+
921
+ 29:05.440 --> 29:11.440
922
+ You know, why do we sleep? It's such a basic question.
923
+
924
+ 29:11.440 --> 29:14.440
925
+ And we really don't have an answer to that.
926
+
927
+ 29:14.440 --> 29:18.440
928
+ So taking a step back on that. So sleep, for example, is fascinating.
929
+
930
+ 29:18.440 --> 29:21.440
931
+ Do you think that's a neuroscience question?
932
+
933
+ 29:21.440 --> 29:30.440
934
+ Or if we talk about abstractions, what do you think is an interesting way to study intelligence or most effective on the levels of abstraction?
935
+
936
+ 29:30.440 --> 29:37.440
937
+ Is it chemical, is it biological, is it electrophysical, mathematical as you've done a lot of excellent work on that side?
938
+
939
+ 29:37.440 --> 29:42.440
940
+ Which psychology, sort of like at which level of abstraction do you think?
941
+
942
+ 29:42.440 --> 29:48.440
943
+ Well, in terms of levels of abstraction, I think we need all of them.
944
+
945
+ 29:48.440 --> 29:56.440
946
+ It's one, you know, it's like if you ask me, what does it mean to understand a computer?
947
+
948
+ 29:56.440 --> 30:04.440
949
+ That's much simpler. But in a computer, I could say, well, understand how to use PowerPoint.
950
+
951
+ 30:04.440 --> 30:13.440
952
+ That's my level of understanding a computer. It's, it has reasonable, you know, it gives me some power to produce slides and beautiful slides.
953
+
954
+ 30:13.440 --> 30:28.440
955
+ And now somebody else says, well, I know how the transistor work that are inside the computer can write the equation for, you know, transistor and diodes and circuits, logical circuits.
956
+
957
+ 30:28.440 --> 30:33.440
958
+ And I can ask this guy, do you know how to operate PowerPoint? No idea.
959
+
960
+ 30:33.440 --> 30:49.440
961
+ So do you think if we discovered computers walking amongst us full of these transistors that are also operating under windows and have PowerPoint, do you think it's digging in a little bit more?
962
+
963
+ 30:49.440 --> 31:00.440
964
+ How useful is it to understand the transistor in order to be able to understand PowerPoint in these higher level intelligence processes?
965
+
966
+ 31:00.440 --> 31:12.440
967
+ So I think in the case of computers, because they were made by engineers by us, these different level of understanding are rather separate on purpose.
968
+
969
+ 31:12.440 --> 31:23.440
970
+ You know, they are separate modules so that the engineer that designed the circuit for the chips does not need to know what is inside PowerPoint.
971
+
972
+ 31:23.440 --> 31:30.440
973
+ And somebody can write the software translating from one to the other.
974
+
975
+ 31:30.440 --> 31:40.440
976
+ So in that case, I don't think understanding the transistor helps you understand PowerPoint, or only very little.
977
+
978
+ 31:40.440 --> 31:51.440
979
+ If you want to understand the computer, this question, you know, I would say you have to understand it at different levels if you really want to build one.
980
+
981
+ 31:51.440 --> 32:09.440
982
+ But for the brain, I think these levels of understanding, so the algorithms, which kind of computation, you know, the equivalent of PowerPoint, and the circuits, you know, the transistors, I think they are much more intertwined with each other.
983
+
984
+ 32:09.440 --> 32:15.440
985
+ There is not, you know, a neat level of the software separate from the hardware.
986
+
987
+ 32:15.440 --> 32:29.440
988
+ And so that's why I think in the case of the brain, the problem is more difficult and more than for computers requires the interaction, the collaboration between different types of expertise.
989
+
990
+ 32:29.440 --> 32:35.440
991
+ So the brain is a big hierarchical mess that you can't just disentangle levels.
992
+
993
+ 32:35.440 --> 32:41.440
994
+ I think you can, but it's much more difficult and it's not completely obvious.
995
+
996
+ 32:41.440 --> 32:47.440
997
+ And as I said, it's the one that I think is the greatest problem in science.
998
+
999
+ 32:47.440 --> 32:52.440
1000
+ So, you know, I think it's fair that it's difficult.
1001
+
1002
+ 32:52.440 --> 32:53.440
1003
+ That's a difficult one.
1004
+
1005
+ 32:53.440 --> 32:58.440
1006
+ That said, you do talk about compositionality and why it might be useful.
1007
+
1008
+ 32:58.440 --> 33:07.440
1009
+ And when you discuss why these neural networks in artificial or biological sense learn anything, you talk about compositionality.
1010
+
1011
+ 33:07.440 --> 33:22.440
1012
+ See, there's a sense that nature can be disentangled or well, all aspects of our cognition could be disentangled a little to some degree.
1013
+
1014
+ 33:22.440 --> 33:31.440
1015
+ So why do you think what, first of all, how do you see compositionality and why do you think it exists at all in nature?
1016
+
1017
+ 33:31.440 --> 33:39.440
1018
+ I spoke about, I use the term compositionality.
1019
+
1020
+ 33:39.440 --> 33:54.440
1021
+ When we looked at deep neural networks, multiple layers, and tried to understand when and why they are more powerful than the more classical one-layer networks,
1022
+
1023
+ 33:54.440 --> 34:01.440
1024
+ like linear classifiers, so-called kernel machines.
1025
+
1026
+ 34:01.440 --> 34:12.440
1027
+ And what we found is that in terms of approximating or learning or representing a function, a mapping from an input to an output,
1028
+
1029
+ 34:12.440 --> 34:20.440
1030
+ like from an image to the label in the image, if this function has a particular structure,
1031
+
1032
+ 34:20.440 --> 34:28.440
1033
+ then deep networks are much more powerful than shallow networks to approximate the underlying function.
1034
+
1035
+ 34:28.440 --> 34:33.440
1036
+ And the particular structure is a structure of compositionality.
1037
+
1038
+ 34:33.440 --> 34:45.440
1039
+ If the function is made up of functions of functions, so that, when you are interpreting an image,
1040
+
1041
+ 34:45.440 --> 34:56.440
1042
+ classifying an image, you don't need to look at all pixels at once, but you can compute something from small groups of pixels,
1043
+
1044
+ 34:56.440 --> 35:04.440
1045
+ and then you can compute something on the output of this local computation and so on.
1046
+
1047
+ 35:04.440 --> 35:10.440
1048
+ It is similar to what you do when you read a sentence, you don't need to read the first and the last letter,
1049
+
1050
+ 35:10.440 --> 35:17.440
1051
+ but you can read syllables, combine them in words, combine the words in sentences.
1052
+
1053
+ 35:17.440 --> 35:20.440
1054
+ So this is this kind of structure.
1055
+
1056
+ 35:20.440 --> 35:27.440
1057
+ So that's as part of a discussion of why deep neural networks may be more effective than the shallow methods.
1058
+
1059
+ 35:27.440 --> 35:35.440
1060
+ And is your sense for most things we can use neural networks for,
1061
+
1062
+ 35:35.440 --> 35:43.440
1063
+ those problems are going to be compositional in nature, like language, like vision.
1064
+
1065
+ 35:43.440 --> 35:47.440
1066
+ How far can we get in this kind of way?
1067
+
1068
+ 35:47.440 --> 35:51.440
1069
+ So here is almost philosophy.
1070
+
1071
+ 35:51.440 --> 35:53.440
1072
+ Well, let's go there.
1073
+
1074
+ 35:53.440 --> 35:55.440
1075
+ Yeah, let's go there.
1076
+
1077
+ 35:55.440 --> 36:00.440
1078
+ So a friend of mine, Max Tegmark, who is a physicist at MIT.
1079
+
1080
+ 36:00.440 --> 36:02.440
1081
+ I've talked to him on this thing.
1082
+
1083
+ 36:02.440 --> 36:04.440
1084
+ Yeah, and he disagrees with you, right?
1085
+
1086
+ 36:04.440 --> 36:09.440
1087
+ We agree on most, but the conclusion is a bit different.
1088
+
1089
+ 36:09.440 --> 36:14.440
1090
+ His conclusion is that for images, for instance,
1091
+
1092
+ 36:14.440 --> 36:23.440
1093
+ the compositional structure of this function that we have to learn or to solve these problems
1094
+
1095
+ 36:23.440 --> 36:35.440
1096
+ comes from physics, comes from the fact that you have local interactions in physics between atoms and other atoms,
1097
+
1098
+ 36:35.440 --> 36:42.440
1099
+ between particles of matter and other particles, between planets and other planets,
1100
+
1101
+ 36:42.440 --> 36:44.440
1102
+ between stars and others.
1103
+
1104
+ 36:44.440 --> 36:48.440
1105
+ It's all local.
1106
+
1107
+ 36:48.440 --> 36:55.440
1108
+ And that's true, but you could push this argument a bit further.
1109
+
1110
+ 36:55.440 --> 36:57.440
1111
+ Not this argument, actually.
1112
+
1113
+ 36:57.440 --> 37:02.440
1114
+ You could argue that, you know, maybe that's part of the truth,
1115
+
1116
+ 37:02.440 --> 37:06.440
1117
+ but maybe what happens is kind of the opposite,
1118
+
1119
+ 37:06.440 --> 37:11.440
1120
+ is that our brain is wired up as a deep network.
1121
+
1122
+ 37:11.440 --> 37:22.440
1123
+ So it can learn, understand, solve problems that have this compositional structure.
1124
+
1125
+ 37:22.440 --> 37:29.440
1126
+ And it cannot solve problems that don't have this compositional structure.
1127
+
1128
+ 37:29.440 --> 37:37.440
1129
+ So the problems we are accustomed to, we think about, we test our algorithms on,
1130
+
1131
+ 37:37.440 --> 37:42.440
1132
+ have this compositional structure because of how our brain is made up.
1133
+
1134
+ 37:42.440 --> 37:46.440
1135
+ And that's, in a sense, an evolutionary perspective that we've...
1136
+
1137
+ 37:46.440 --> 37:54.440
1138
+ So the ones that weren't dealing with the compositional nature of reality died off?
1139
+
1140
+ 37:54.440 --> 38:05.440
1141
+ Yes, but also could be, maybe the reason why we have this local connectivity in the brain,
1142
+
1143
+ 38:05.440 --> 38:10.440
1144
+ like simple cells in cortex looking only at the small part of the image,
1145
+
1146
+ 38:10.440 --> 38:16.440
1147
+ each one of them, and then other cells looking at the small number of the simple cells and so on.
1148
+
1149
+ 38:16.440 --> 38:24.440
1150
+ The reason for this may be purely that it was difficult to grow long range connectivity.
1151
+
1152
+ 38:24.440 --> 38:33.440
1153
+ So suppose it's, you know, for biology, it's possible to grow short range connectivity,
1154
+
1155
+ 38:33.440 --> 38:39.440
1156
+ but not long range also because there is a limited number of long range.
1157
+
1158
+ 38:39.440 --> 38:44.440
1159
+ And so you have this limitation from the biology.
1160
+
1161
+ 38:44.440 --> 38:49.440
1162
+ And this means you build a deep convolutional network.
1163
+
1164
+ 38:49.440 --> 38:53.440
1165
+ This would be something like a deep convolutional network.
1166
+
1167
+ 38:53.440 --> 38:57.440
1168
+ And this is great for solving certain class of problems.
1169
+
1170
+ 38:57.440 --> 39:02.440
1171
+ These are the ones we find easy and important for our life.
1172
+
1173
+ 39:02.440 --> 39:06.440
1174
+ And yes, they were enough for us to survive.
1175
+
1176
+ 39:06.440 --> 39:13.440
1177
+ And you can start a successful business on solving those problems, with Mobileye.
1178
+
1179
+ 39:13.440 --> 39:16.440
1180
+ Driving is a compositional problem.
1181
+
1182
+ 39:16.440 --> 39:25.440
1183
+ So on the learning task, we don't know much about how the brain learns in terms of optimization.
1184
+
1185
+ 39:25.440 --> 39:31.440
1186
+ So the thing is, stochastic gradient descent is what artificial neural networks
1187
+
1188
+ 39:31.440 --> 39:38.440
1189
+ use for the most part to adjust the parameters in such a way that it's able to deal
1190
+
1191
+ 39:38.440 --> 39:42.440
1192
+ based on the labeled data, it's able to solve the problem.
1193
+
1194
+ 39:42.440 --> 39:49.440
1195
+ So what's your intuition about why it works at all?
1196
+
1197
+ 39:49.440 --> 39:55.440
1198
+ How hard of a problem it is to optimize a neural network, artificial neural network?
1199
+
1200
+ 39:55.440 --> 39:57.440
1201
+ Is there other alternatives?
1202
+
1203
+ 39:57.440 --> 40:03.440
1204
+ Just in general, what's your intuition behind this very simplistic algorithm
1205
+
1206
+ 40:03.440 --> 40:05.440
1207
+ that seems to do pretty well, surprisingly?
1208
+
1209
+ 40:05.440 --> 40:07.440
1210
+ Yes, yes.
1211
+
1212
+ 40:07.440 --> 40:16.440
1213
+ So I find neuroscience, the architecture of cortex is really similar to the architecture of deep networks.
1214
+
1215
+ 40:16.440 --> 40:26.440
1216
+ So there is a nice correspondence there between the biology and this kind of local connectivity hierarchical
1217
+
1218
+ 40:26.440 --> 40:28.440
1219
+ architecture.
1220
+
1221
+ 40:28.440 --> 40:35.440
1222
+ The stochastic gradient descent, as you said, is a very simple technique.
1223
+
1224
+ 40:35.440 --> 40:49.440
1225
+ It seems pretty unlikely that biology could do that from what we know right now about cortex and neurons and synapses.
1226
+
1227
+ 40:49.440 --> 40:58.440
1228
+ So it's a big question open whether there are other optimization learning algorithms
1229
+
1230
+ 40:58.440 --> 41:02.440
1231
+ that can replace stochastic gradient descent.
1232
+
1233
+ 41:02.440 --> 41:11.440
1234
+ And my guess is yes, but nobody has found yet a real answer.
1235
+
1236
+ 41:11.440 --> 41:17.440
1237
+ I mean, people are trying, still trying, and there are some interesting ideas.
1238
+
1239
+ 41:17.440 --> 41:27.440
1240
+ The fact that stochastic gradient descent is so successful, this has become clearly not so mysterious.
1241
+
1242
+ 41:27.440 --> 41:39.440
1243
+ And the reason is that it's an interesting fact, you know, is a change in a sense in how people think about statistics.
1244
+
1245
+ 41:39.440 --> 41:51.440
1246
+ And this is the following is that typically when you had data and you had, say, a model with parameters,
1247
+
1248
+ 41:51.440 --> 41:55.440
1249
+ you are trying to fit the model to the data, you know, to fit the parameter.
1250
+
1251
+ 41:55.440 --> 42:12.440
1252
+ And typically the kind of crowd wisdom type idea was that you should have at least, you know, twice as much data as the number of parameters.
1253
+
1254
+ 42:12.440 --> 42:15.440
1255
+ Maybe 10 times is better.
1256
+
1257
+ 42:15.440 --> 42:24.440
1258
+ Now, the way you train neural network these days is that they have 10 or 100 times more parameters than data.
1259
+
1260
+ 42:24.440 --> 42:26.440
1261
+ Exactly the opposite.
1262
+
1263
+ 42:26.440 --> 42:34.440
1264
+ And which, you know, it has been one of the puzzles about neural networks.
1265
+
1266
+ 42:34.440 --> 42:40.440
1267
+ How can you get something that really works when you have so much freedom?
1268
+
1269
+ 42:40.440 --> 42:43.440
1270
+ From that little data you can generalize somehow.
1271
+
1272
+ 42:43.440 --> 42:44.440
1273
+ Right, exactly.
1274
+
1275
+ 42:44.440 --> 42:48.440
1276
+ Do you think the stochastic nature of it is essential, the randomness?
1277
+
1278
+ 42:48.440 --> 43:00.440
1279
+ I think we have some initial understanding why this happens, but one nice side effect of having this over parameterization, more parameters than data,
1280
+
1281
+ 43:00.440 --> 43:07.440
1282
+ is that when you look for the minima of a loss function, like stochastic gradient descent is doing,
1283
+
1284
+ 43:07.440 --> 43:19.440
1285
+ you find, I made some calculations based on some old basic theorem of algebra called Bezout's theorem.
1286
+
1287
+ 43:19.440 --> 43:25.440
1288
+ And that gives you an estimate of the number of solutions of a system of polynomial equation.
1289
+
1290
+ 43:25.440 --> 43:38.440
1291
+ Anyway, the bottom line is that there are probably more minima for a typical deep networks than atoms in the universe.
1292
+
1293
+ 43:38.440 --> 43:43.440
1294
+ Just to say there are a lot because of the over parameterization.
1295
+
1296
+ 43:43.440 --> 43:44.440
1297
+ Yes.
1298
+
1299
+ 43:44.440 --> 43:48.440
1300
+ More global minima, zero minima, good minima.
1301
+
1302
+ 43:48.440 --> 43:51.440
1303
+ More global minima.
1304
+
1305
+ 43:51.440 --> 44:00.440
1306
+ Yes, a lot of them, so you have a lot of solutions, so it's not so surprising that you can find them relatively easily.
1307
+
1308
+ 44:00.440 --> 44:04.440
1309
+ This is because of the over parameterization.
1310
+
1311
+ 44:04.440 --> 44:09.440
1312
+ The over parameterization sprinkles that entire space with solutions that are pretty good.
1313
+
1314
+ 44:09.440 --> 44:11.440
1315
+ It's not so surprising, right?
1316
+
1317
+ 44:11.440 --> 44:17.440
1318
+ It's like if you have a system of linear equation and you have more unknowns than equations,
1319
+
1320
+ 44:17.440 --> 44:24.440
1321
+ then we know you have an infinite number of solutions and the question is to pick one.
1322
+
1323
+ 44:24.440 --> 44:27.440
1324
+ That's another story, but you have an infinite number of solutions,
1325
+
1326
+ 44:27.440 --> 44:32.440
1327
+ so there are a lot of value of your unknowns that satisfy the equations.
1328
+
1329
+ 44:32.440 --> 44:37.440
1330
+ But it's possible that there's a lot of those solutions that aren't very good.
1331
+
1332
+ 44:37.440 --> 44:38.440
1333
+ What's surprising is that they're pretty good.
1334
+
1335
+ 44:38.440 --> 44:39.440
1336
+ So that's a separate question.
1337
+
1338
+ 44:39.440 --> 44:43.440
1339
+ Why can you pick one that generalizes one?
1340
+
1341
+ 44:43.440 --> 44:46.440
1342
+ That's a separate question with separate answers.
1343
+
1344
+ 44:46.440 --> 44:53.440
1345
+ One theorem that people like to talk about that inspires imagination of the power of neural networks
1346
+
1347
+ 44:53.440 --> 45:00.440
1348
+ is the universal approximation theorem that you can approximate any computable function
1349
+
1350
+ 45:00.440 --> 45:04.440
1351
+ with just a finite number of neurons and a single hidden layer.
1352
+
1353
+ 45:04.440 --> 45:07.440
1354
+ Do you find this theorem one surprising?
1355
+
1356
+ 45:07.440 --> 45:12.440
1357
+ Do you find it useful, interesting, inspiring?
1358
+
1359
+ 45:12.440 --> 45:16.440
1360
+ No, this one, I never found it very surprising.
1361
+
1362
+ 45:16.440 --> 45:22.440
1363
+ It was known since the 80s, since I entered the field,
1364
+
1365
+ 45:22.440 --> 45:27.440
1366
+ because it's basically the same as Weierstrass's theorem,
1367
+
1368
+ 45:27.440 --> 45:34.440
1369
+ which says that I can approximate any continuous function with a polynomial of sufficiently,
1370
+
1371
+ 45:34.440 --> 45:37.440
1372
+ with a sufficient number of terms, monomials.
1373
+
1374
+ 45:37.440 --> 45:41.440
1375
+ It's basically the same, and the proofs are very similar.
1376
+
1377
+ 45:41.440 --> 45:48.440
1378
+ So your intuition was there was never any doubt that neural networks in theory could be very strong approximations.
1379
+
1380
+ 45:48.440 --> 45:58.440
1381
+ The interesting question is that if this theorem says you can approximate fine,
1382
+
1383
+ 45:58.440 --> 46:06.440
1384
+ but when you ask how many neurons, for instance, or in the case of how many monomials,
1385
+
1386
+ 46:06.440 --> 46:11.440
1387
+ I need to get a good approximation.
1388
+
1389
+ 46:11.440 --> 46:20.440
1390
+ Then it turns out that that depends on the dimensionality of your function, how many variables you have.
1391
+
1392
+ 46:20.440 --> 46:25.440
1393
+ But it depends on the dimensionality of your function in a bad way.
1394
+
1395
+ 46:25.440 --> 46:35.440
1396
+ For instance, suppose you want an error which is no worse than 10% in your approximation.
1397
+
1398
+ 46:35.440 --> 46:40.440
1399
+ If you want to approximate your function within 10%,
1400
+
1401
+ 46:40.440 --> 46:48.440
1402
+ then it turns out that the number of units you need are in the order of 10 to the dimensionality, d.
1403
+
1404
+ 46:48.440 --> 46:50.440
1405
+ How many variables?
1406
+
1407
+ 46:50.440 --> 46:57.440
1408
+ So if you have two variables, d is 2 and you have 100 units and OK.
1409
+
1410
+ 46:57.440 --> 47:02.440
1411
+ But if you have, say, 200 by 200 pixel images,
1412
+
1413
+ 47:02.440 --> 47:06.440
1414
+ now this is 40,000, whatever.
1415
+
1416
+ 47:06.440 --> 47:09.440
1417
+ We again go to the size of the universe pretty quickly.
1418
+
1419
+ 47:09.440 --> 47:13.440
1420
+ Exactly, 10 to the 40,000 or something.
1421
+
1422
+ 47:13.440 --> 47:21.440
1423
+ And so this is called the curse of dimensionality, not quite appropriately.
1424
+
1425
+ 47:21.440 --> 47:27.440
1426
+ And the hope is with the extra layers you can remove the curse.
1427
+
1428
+ 47:27.440 --> 47:34.440
1429
+ What we proved is that if you have deep layers or hierarchical architecture
1430
+
1431
+ 47:34.440 --> 47:39.440
1432
+ with the local connectivity of the type of convolutional deep learning,
1433
+
1434
+ 47:39.440 --> 47:46.440
1435
+ and if you're dealing with a function that has this kind of hierarchical architecture,
1436
+
1437
+ 47:46.440 --> 47:50.440
1438
+ then you avoid completely the curse.
1439
+
1440
+ 47:50.440 --> 47:53.440
1441
+ You've spoken a lot about supervised deep learning.
1442
+
1443
+ 47:53.440 --> 47:58.440
1444
+ What are your thoughts, hopes, views on the challenges of unsupervised learning
1445
+
1446
+ 47:58.440 --> 48:04.440
1447
+ with GANs, with generative adversarial networks?
1448
+
1449
+ 48:04.440 --> 48:08.440
1450
+ Do you see those as distinct, the power of GANs,
1451
+
1452
+ 48:08.440 --> 48:12.440
1453
+ do you see those as distinct from supervised methods in neural networks,
1454
+
1455
+ 48:12.440 --> 48:16.440
1456
+ or are they really all in the same representation ballpark?
1457
+
1458
+ 48:16.440 --> 48:24.440
1459
+ GANs is one way to get estimation of probability densities,
1460
+
1461
+ 48:24.440 --> 48:29.440
1462
+ which is a somewhat new way that people have not done before.
1463
+
1464
+ 48:29.440 --> 48:38.440
1465
+ I don't know whether this will really play an important role in intelligence,
1466
+
1467
+ 48:38.440 --> 48:47.440
1468
+ or it's interesting, I'm less enthusiastic about it than many people in the field.
1469
+
1470
+ 48:47.440 --> 48:53.440
1471
+ I have the feeling that many people in the field are really impressed by the ability
1472
+
1473
+ 48:53.440 --> 49:00.440
1474
+ of producing realistic looking images in this generative way.
1475
+
1476
+ 49:00.440 --> 49:02.440
1477
+ Which describes the popularity of the methods,
1478
+
1479
+ 49:02.440 --> 49:10.440
1480
+ but you're saying that while that's exciting and cool to look at, it may not be the tool that's useful for it.
1481
+
1482
+ 49:10.440 --> 49:12.440
1483
+ So you describe it kind of beautifully.
1484
+
1485
+ 49:12.440 --> 49:17.440
1486
+ Current supervised methods go N to infinity in terms of the number of labeled points,
1487
+
1488
+ 49:17.440 --> 49:20.440
1489
+ and we really have to figure out how to go to N to 1.
1490
+
1491
+ 49:20.440 --> 49:24.440
1492
+ And you're thinking GANs might help, but they might not be the right...
1493
+
1494
+ 49:24.440 --> 49:28.440
1495
+ I don't think for that problem, which I really think is important.
1496
+
1497
+ 49:28.440 --> 49:35.440
1498
+ I think they certainly have applications, for instance, in computer graphics.
1499
+
1500
+ 49:35.440 --> 49:43.440
1501
+ I did work long ago, which was a little bit similar in terms of,
1502
+
1503
+ 49:43.440 --> 49:49.440
1504
+ saying I have a network and I present images,
1505
+
1506
+ 49:49.440 --> 49:59.440
1507
+ so the input is images and output is, for instance, the pose of the image, a face, how much is smiling,
1508
+
1509
+ 49:59.440 --> 50:02.440
1510
+ is rotated 45 degrees or not.
1511
+
1512
+ 50:02.440 --> 50:08.440
1513
+ What about having a network that I train with the same data set,
1514
+
1515
+ 50:08.440 --> 50:10.440
1516
+ but now I invert input and output.
1517
+
1518
+ 50:10.440 --> 50:16.440
1519
+ Now the input is the pose or the expression, a number, certain numbers,
1520
+
1521
+ 50:16.440 --> 50:19.440
1522
+ and the output is the image and I train it.
1523
+
1524
+ 50:19.440 --> 50:27.440
1525
+ And we did pretty good interesting results in terms of producing very realistic looking images.
1526
+
1527
+ 50:27.440 --> 50:35.440
1528
+ It was a less sophisticated mechanism than GANs,
1529
+
1530
+ 50:35.440 --> 50:38.440
1531
+ but the output was pretty much of the same quality.
1532
+
1533
+ 50:38.440 --> 50:43.440
1534
+ So I think for computer graphics type application,
1535
+
1536
+ 50:43.440 --> 50:48.440
1537
+ definitely GANs can be quite useful and not only for that,
1538
+
1539
+ 50:48.440 --> 51:01.440
1540
+ but for helping, for instance, on this unsupervised learning problem of reducing the number of labelled examples,
1541
+
1542
+ 51:01.440 --> 51:10.440
1543
+ I think people, it's like they think they can get out more than they put in.
1544
+
1545
+ 51:10.440 --> 51:13.440
1546
+ There's no free lunch, as you said.
1547
+
1548
+ 51:13.440 --> 51:16.440
1549
+ What's your intuition?
1550
+
1551
+ 51:16.440 --> 51:24.440
1552
+ How can we slow the growth of N to infinity in supervised learning?
1553
+
1554
+ 51:24.440 --> 51:29.440
1555
+ So, for example, Mobileye has very successfully,
1556
+
1557
+ 51:29.440 --> 51:34.440
1558
+ I mean essentially annotated large amounts of data to be able to drive a car.
1559
+
1560
+ 51:34.440 --> 51:40.440
1561
+ Now, one thought is, so we're trying to teach machines, the school of AI,
1562
+
1563
+ 51:40.440 --> 51:45.440
1564
+ and we're trying to, so how can we become better teachers, maybe?
1565
+
1566
+ 51:45.440 --> 51:47.440
1567
+ That's one way.
1568
+
1569
+ 51:47.440 --> 51:58.440
1570
+ I like that because, again, one caricature of the history of computer science,
1571
+
1572
+ 51:58.440 --> 52:09.440
1573
+ it begins with programmers, expensive, continues with labellers, cheap,
1574
+
1575
+ 52:09.440 --> 52:16.440
1576
+ and the future would be schools, like we have for kids.
1577
+
1578
+ 52:16.440 --> 52:26.440
1579
+ Currently, the labelling methods, we're not selective about which examples we teach networks with.
1580
+
1581
+ 52:26.440 --> 52:33.440
1582
+ I think the focus of making networks that learn much faster is often on the architecture side,
1583
+
1584
+ 52:33.440 --> 52:37.440
1585
+ but how can we pick better examples with which to learn?
1586
+
1587
+ 52:37.440 --> 52:39.440
1588
+ Do you have intuitions about that?
1589
+
1590
+ 52:39.440 --> 52:50.440
1591
+ Well, that's part of the problem, but the other one is, if we look at biology,
1592
+
1593
+ 52:50.440 --> 52:58.440
1594
+ the reasonable assumption, I think, is in the same spirit as I said,
1595
+
1596
+ 52:58.440 --> 53:03.440
1597
+ evolution is opportunistic and has weak priors.
1598
+
1599
+ 53:03.440 --> 53:10.440
1600
+ The way I think the intelligence of a child, a baby may develop,
1601
+
1602
+ 53:10.440 --> 53:17.440
1603
+ is by bootstrapping weak priors from evolution.
1604
+
1605
+ 53:17.440 --> 53:26.440
1606
+ For instance, you can assume that you have most organisms,
1607
+
1608
+ 53:26.440 --> 53:37.440
1609
+ including human babies, built in some basic machinery to detect motion and relative motion.
1610
+
1611
+ 53:37.440 --> 53:46.440
1612
+ In fact, we know all insects, from fruit flies to other animals, they have this.
1613
+
1614
+ 53:46.440 --> 53:55.440
1615
+ Even in the retinas, in the very peripheral part, it's very conserved across species,
1616
+
1617
+ 53:55.440 --> 53:58.440
1618
+ something that evolution discovered early.
1619
+
1620
+ 53:58.440 --> 54:05.440
1621
+ It may be the reason why babies tend to look in the first few days to moving objects,
1622
+
1623
+ 54:05.440 --> 54:07.440
1624
+ and not to not moving objects.
1625
+
1626
+ 54:07.440 --> 54:11.440
1627
+ Now, moving objects means, okay, they're attracted by motion,
1628
+
1629
+ 54:11.440 --> 54:19.440
1630
+ but motion also means that motion gives automatic segmentation from the background.
1631
+
1632
+ 54:19.440 --> 54:26.440
1633
+ So because of motion boundaries, either the object is moving,
1634
+
1635
+ 54:26.440 --> 54:32.440
1636
+ or the eye of the baby is tracking the moving object, and the background is moving.
1637
+
1638
+ 54:32.440 --> 54:37.440
1639
+ Yeah, so just purely on the visual characteristics of the scene, that seems to be the most useful.
1640
+
1641
+ 54:37.440 --> 54:43.440
1642
+ Right, so it's like looking at an object without background.
1643
+
1644
+ 54:43.440 --> 54:49.440
1645
+ It's ideal for learning the object, otherwise it's really difficult, because you have so much stuff.
1646
+
1647
+ 54:49.440 --> 54:54.440
1648
+ So suppose you do this at the beginning, first weeks,
1649
+
1650
+ 54:54.440 --> 55:01.440
1651
+ then after that you can recognize the object, now they are imprinted, the number one,
1652
+
1653
+ 55:01.440 --> 55:05.440
1654
+ even in the background, even without motion.
1655
+
1656
+ 55:05.440 --> 55:10.440
1657
+ So that's the, by the way, I just want to ask on the object recognition problem,
1658
+
1659
+ 55:10.440 --> 55:16.440
1660
+ so there is this being responsive to movement and doing edge detection, essentially.
1661
+
1662
+ 55:16.440 --> 55:20.440
1663
+ What's the gap between being effectively,
1664
+
1665
+ 55:20.440 --> 55:27.440
1666
+ effectively visually recognizing stuff, detecting where it is, and understanding the scene?
1667
+
1668
+ 55:27.440 --> 55:32.440
1669
+ Is this a huge gap in many layers, or is it close?
1670
+
1671
+ 55:32.440 --> 55:35.440
1672
+ No, I think that's a huge gap.
1673
+
1674
+ 55:35.440 --> 55:44.440
1675
+ I think present algorithms, with all the success that we have, and the fact that a lot of them are very useful,
1676
+
1677
+ 55:44.440 --> 55:51.440
1678
+ I think we are in a golden age for applications of low level vision,
1679
+
1680
+ 55:51.440 --> 55:56.440
1681
+ and low level speech recognition, and so on, you know, Alexa, and so on.
1682
+
1683
+ 55:56.440 --> 56:01.440
1684
+ There are many more things of similar level to be done, including medical diagnosis and so on,
1685
+
1686
+ 56:01.440 --> 56:11.440
1687
+ but we are far from what we call understanding of a scene, of language, of actions, of people.
1688
+
1689
+ 56:11.440 --> 56:17.440
1690
+ That is, despite the claims, that's, I think, very far.
1691
+
1692
+ 56:17.440 --> 56:19.440
1693
+ We're a little bit off.
1694
+
1695
+ 56:19.440 --> 56:24.440
1696
+ So in popular culture, and among many researchers, some of which I've spoken with,
1697
+
1698
+ 56:24.440 --> 56:34.440
1699
+ the Stuart Russells and Elon Musks, in and out of the AI field, there's a concern about the existential threat of AI.
1700
+
1701
+ 56:34.440 --> 56:44.440
1702
+ And how do you think about this concern, and is it valuable to think about large scale,
1703
+
1704
+ 56:44.440 --> 56:51.440
1705
+ long term, unintended consequences of intelligent systems we try to build?
1706
+
1707
+ 56:51.440 --> 56:58.440
1708
+ I always think it's better to worry first, you know, early rather than late.
1709
+
1710
+ 56:58.440 --> 56:59.440
1711
+ So worry is good.
1712
+
1713
+ 56:59.440 --> 57:02.440
1714
+ Yeah, I'm not against worrying at all.
1715
+
1716
+ 57:02.440 --> 57:15.440
1717
+ Personally, I think that, you know, it will take a long time before there is real reason to be worried.
1718
+
1719
+ 57:15.440 --> 57:23.440
1720
+ But as I said, I think it's good to put in place and think about possible safety against,
1721
+
1722
+ 57:23.440 --> 57:35.440
1723
+ what I find a bit misleading are things that have been said by people I know, like Elon Musk and, what is, Bostrom in particular,
1724
+
1725
+ 57:35.440 --> 57:39.440
1726
+ and what is his first name, Nick Bostrom, right?
1727
+
1728
+ 57:39.440 --> 57:46.440
1729
+ And, you know, and a couple of other people that, for instance, AI is more dangerous than nuclear weapons.
1730
+
1731
+ 57:46.440 --> 57:50.440
1732
+ I think that's really wrong.
1733
+
1734
+ 57:50.440 --> 57:59.440
1735
+ That can be misleading, because in terms of priority, we should still be more worried about nuclear weapons
1736
+
1737
+ 57:59.440 --> 58:05.440
1738
+ and what people are doing about it and so on than AI.
1739
+
1740
+ 58:05.440 --> 58:15.440
1741
+ And you've spoken about Demis Hassabis and yourself saying that you think we'll be about 100 years out
1742
+
1743
+ 58:15.440 --> 58:20.440
1744
+ before we have a general intelligence system that's on par with the human being.
1745
+
1746
+ 58:20.440 --> 58:22.440
1747
+ Do you have any updates for those predictions?
1748
+
1749
+ 58:22.440 --> 58:23.440
1750
+ Well, I think he said...
1751
+
1752
+ 58:23.440 --> 58:25.440
1753
+ He said 20, I think.
1754
+
1755
+ 58:25.440 --> 58:26.440
1756
+ He said 20, right.
1757
+
1758
+ 58:26.440 --> 58:27.440
1759
+ This was a couple of years ago.
1760
+
1761
+ 58:27.440 --> 58:31.440
1762
+ I have not asked him again, so I should have.
1763
+
1764
+ 58:31.440 --> 58:38.440
1765
+ Your own prediction, what's your prediction about when you'll be truly surprised
1766
+
1767
+ 58:38.440 --> 58:42.440
1768
+ and what's the confidence interval on that?
1769
+
1770
+ 58:42.440 --> 58:46.440
1771
+ You know, it's so difficult to predict the future and even the present.
1772
+
1773
+ 58:46.440 --> 58:48.440
1774
+ It's pretty hard to predict.
1775
+
1776
+ 58:48.440 --> 58:50.440
1777
+ Right, but I would be...
1778
+
1779
+ 58:50.440 --> 58:52.440
1780
+ As I said, this is completely...
1781
+
1782
+ 58:52.440 --> 58:56.440
1783
+ I would be more like Rod Brooks.
1784
+
1785
+ 58:56.440 --> 58:59.440
1786
+ I think he's about 200 years old.
1787
+
1788
+ 58:59.440 --> 59:01.440
1789
+ 200 years.
1790
+
1791
+ 59:01.440 --> 59:06.440
1792
+ When we have this kind of AGI system, artificial intelligence system,
1793
+
1794
+ 59:06.440 --> 59:12.440
1795
+ you're sitting in a room with her, him, it,
1796
+
1797
+ 59:12.440 --> 59:17.440
1798
+ do you think the underlying design of such a system
1799
+
1800
+ 59:17.440 --> 59:19.440
1801
+ will be something we'll be able to understand?
1802
+
1803
+ 59:19.440 --> 59:20.440
1804
+ It will be simple?
1805
+
1806
+ 59:20.440 --> 59:25.440
1807
+ Do you think it will be explainable?
1808
+
1809
+ 59:25.440 --> 59:27.440
1810
+ Understandable by us?
1811
+
1812
+ 59:27.440 --> 59:31.440
1813
+ Your intuition, again, we're in the realm of philosophy a little bit.
1814
+
1815
+ 59:31.440 --> 59:35.440
1816
+ Well, probably no.
1817
+
1818
+ 59:35.440 --> 59:42.440
1819
+ But again, it depends what you really mean for understanding.
1820
+
1821
+ 59:42.440 --> 59:53.440
1822
+ I think we don't understand how deep networks work.
1823
+
1824
+ 59:53.440 --> 59:56.440
1825
+ I think we're beginning to have a theory now.
1826
+
1827
+ 59:56.440 --> 59:59.440
1828
+ But in the case of deep networks,
1829
+
1830
+ 59:59.440 --> 1:00:06.440
1831
+ or even in the case of the simpler kernel machines or linear classifier,
1832
+
1833
+ 1:00:06.440 --> 1:00:12.440
1834
+ we really don't understand the individual units or so.
1835
+
1836
+ 1:00:12.440 --> 1:00:20.440
1837
+ But we understand what the computation and the limitations and the properties of it are.
1838
+
1839
+ 1:00:20.440 --> 1:00:24.440
1840
+ It's similar to many things.
1841
+
1842
+ 1:00:24.440 --> 1:00:29.440
1843
+ What does it mean to understand how a fusion bomb works?
1844
+
1845
+ 1:00:29.440 --> 1:00:35.440
1846
+ How many of us, you know, many of us understand the basic principle
1847
+
1848
+ 1:00:35.440 --> 1:00:40.440
1849
+ and some of us may understand deeper details?
1850
+
1851
+ 1:00:40.440 --> 1:00:44.440
1852
+ In that sense, understanding is, as a community, as a civilization,
1853
+
1854
+ 1:00:44.440 --> 1:00:46.440
1855
+ can we build another copy of it?
1856
+
1857
+ 1:00:46.440 --> 1:00:47.440
1858
+ Okay.
1859
+
1860
+ 1:00:47.440 --> 1:00:50.440
1861
+ And in that sense, do you think there'll be,
1862
+
1863
+ 1:00:50.440 --> 1:00:56.440
1864
+ there'll need to be some evolutionary component where it runs away from our understanding?
1865
+
1866
+ 1:00:56.440 --> 1:00:59.440
1867
+ Or do you think it could be engineered from the ground up?
1868
+
1869
+ 1:00:59.440 --> 1:01:02.440
1870
+ The same way you go from the transistor to PowerPoint?
1871
+
1872
+ 1:01:02.440 --> 1:01:03.440
1873
+ Right.
1874
+
1875
+ 1:01:03.440 --> 1:01:09.440
1876
+ So many years ago, this was actually 40, 41 years ago,
1877
+
1878
+ 1:01:09.440 --> 1:01:13.440
1879
+ I wrote a paper with David Marr,
1880
+
1881
+ 1:01:13.440 --> 1:01:19.440
1882
+ who was one of the founding fathers of computer vision, computational vision.
1883
+
1884
+ 1:01:19.440 --> 1:01:23.440
1885
+ I wrote a paper about levels of understanding,
1886
+
1887
+ 1:01:23.440 --> 1:01:28.440
1888
+ which is related to the question we discussed earlier about understanding PowerPoint,
1889
+
1890
+ 1:01:28.440 --> 1:01:31.440
1891
+ understanding transistors and so on.
1892
+
1893
+ 1:01:31.440 --> 1:01:38.440
1894
+ And, you know, in that kind of framework, we had a level of the hardware
1895
+
1896
+ 1:01:38.440 --> 1:01:41.440
1897
+ and the top level of the algorithms.
1898
+
1899
+ 1:01:41.440 --> 1:01:44.440
1900
+ We did not have learning.
1901
+
1902
+ 1:01:44.440 --> 1:01:54.440
1903
+ Recently, I updated adding levels and one level I added to those three was learning.
1904
+
1905
+ 1:01:54.440 --> 1:01:59.440
1906
+ So, and you can imagine, you could have a good understanding
1907
+
1908
+ 1:01:59.440 --> 1:02:04.440
1909
+ of how you construct learning machine, like we do.
1910
+
1911
+ 1:02:04.440 --> 1:02:13.440
1912
+ But being unable to describe in detail what the learning machines will discover, right?
1913
+
1914
+ 1:02:13.440 --> 1:02:19.440
1915
+ Now, that would be still a powerful understanding if I can build a learning machine,
1916
+
1917
+ 1:02:19.440 --> 1:02:25.440
1918
+ even if I don't understand in detail every time it learns something.
1919
+
1920
+ 1:02:25.440 --> 1:02:31.440
1921
+ Just like our children, if they start listening to a certain type of music,
1922
+
1923
+ 1:02:31.440 --> 1:02:33.440
1924
+ I don't know, Miley Cyrus or something,
1925
+
1926
+ 1:02:33.440 --> 1:02:37.440
1927
+ you don't understand why they came to that particular preference,
1928
+
1929
+ 1:02:37.440 --> 1:02:39.440
1930
+ but you understand the learning process.
1931
+
1932
+ 1:02:39.440 --> 1:02:41.440
1933
+ That's very interesting.
1934
+
1935
+ 1:02:41.440 --> 1:02:50.440
1936
+ So, on learning for systems to be part of our world,
1937
+
1938
+ 1:02:50.440 --> 1:02:56.440
1939
+ it has a certain, one of the challenging things that you've spoken about is learning ethics,
1940
+
1941
+ 1:02:56.440 --> 1:02:59.440
1942
+ learning morals.
1943
+
1944
+ 1:02:59.440 --> 1:03:06.440
1945
+ And how hard do you think is the problem of, first of all, humans understanding our ethics?
1946
+
1947
+ 1:03:06.440 --> 1:03:10.440
1948
+ What is the origin on the neural and low level of ethics?
1949
+
1950
+ 1:03:10.440 --> 1:03:12.440
1951
+ What is it at the higher level?
1952
+
1953
+ 1:03:12.440 --> 1:03:17.440
1954
+ Is it something that's learnable from machines in your intuition?
1955
+
1956
+ 1:03:17.440 --> 1:03:23.440
1957
+ I think, yeah, ethics is learnable, very likely.
1958
+
1959
+ 1:03:23.440 --> 1:03:36.440
1960
+ I think it's one of these problems where I think understanding the neuroscience of ethics,
1961
+
1962
+ 1:03:36.440 --> 1:03:42.440
1963
+ people discuss there is an ethics of neuroscience.
1964
+
1965
+ 1:03:42.440 --> 1:03:46.440
1966
+ How a neuroscientist should or should not behave,
1967
+
1968
+ 1:03:46.440 --> 1:03:53.440
1969
+ you can think of a neurosurgeon and the ethics that he or she has to follow.
1970
+
1971
+ 1:03:53.440 --> 1:03:57.440
1972
+ But I'm more interested in the neuroscience of ethics.
1973
+
1974
+ 1:03:57.440 --> 1:04:01.440
1975
+ You're blowing my mind right now, the neuroscience of ethics, it's very meta.
1976
+
1977
+ 1:04:01.440 --> 1:04:09.440
1978
+ And I think that would be important to understand also for being able to design machines
1979
+
1980
+ 1:04:09.440 --> 1:04:14.440
1981
+ that are ethical machines in our sense of ethics.
1982
+
1983
+ 1:04:14.440 --> 1:04:20.440
1984
+ And you think there is something in neuroscience, there's patterns,
1985
+
1986
+ 1:04:20.440 --> 1:04:25.440
1987
+ tools in neuroscience that could help us shed some light on ethics
1988
+
1989
+ 1:04:25.440 --> 1:04:29.440
1990
+ or is it more on the psychology, sociology side, at a much higher level?
1991
+
1992
+ 1:04:29.440 --> 1:04:33.440
1993
+ No, there is psychology, but there is also, in the meantime,
1994
+
1995
+ 1:04:33.440 --> 1:04:41.440
1996
+ there is evidence, fMRI, of specific areas of the brain
1997
+
1998
+ 1:04:41.440 --> 1:04:44.440
1999
+ that are involved in certain ethical judgment.
2000
+
2001
+ 1:04:44.440 --> 1:04:49.440
2002
+ And not only this, you can stimulate those areas with magnetic fields
2003
+
2004
+ 1:04:49.440 --> 1:04:54.440
2005
+ and change the ethical decisions.
2006
+
2007
+ 1:04:54.440 --> 1:05:00.440
2008
+ So that's work by a colleague of mine, Rebecca Saxe,
2009
+
2010
+ 1:05:00.440 --> 1:05:04.440
2011
+ and there are other researchers doing similar work.
2012
+
2013
+ 1:05:04.440 --> 1:05:11.440
2014
+ And I think this is the beginning, but ideally at some point
2015
+
2016
+ 1:05:11.440 --> 1:05:17.440
2017
+ we'll have an understanding of how this works and why it evolved, right?
2018
+
2019
+ 1:05:17.440 --> 1:05:21.440
2020
+ The big why question, yeah, it must have some purpose.
2021
+
2022
+ 1:05:21.440 --> 1:05:29.440
2023
+ Yeah, obviously it has some social purposes, probably.
2024
+
2025
+ 1:05:29.440 --> 1:05:34.440
2026
+ If neuroscience holds the key to at least eliminate some aspect of ethics,
2027
+
2028
+ 1:05:34.440 --> 1:05:36.440
2029
+ that means it could be a learnable problem.
2030
+
2031
+ 1:05:36.440 --> 1:05:38.440
2032
+ Yeah, exactly.
2033
+
2034
+ 1:05:38.440 --> 1:05:41.440
2035
+ And as we're getting into harder and harder questions,
2036
+
2037
+ 1:05:41.440 --> 1:05:44.440
2038
+ let's go to the hard problem of consciousness.
2039
+
2040
+ 1:05:44.440 --> 1:05:51.440
2041
+ Is this an important problem for us to think about and solve on the engineering
2042
+
2043
+ 1:05:51.440 --> 1:05:55.440
2044
+ of intelligence side of your work, of our dream?
2045
+
2046
+ 1:05:55.440 --> 1:05:57.440
2047
+ You know, it's unclear.
2048
+
2049
+ 1:05:57.440 --> 1:06:04.440
2050
+ So, again, this is a deep problem, partly because it's very difficult
2051
+
2052
+ 1:06:04.440 --> 1:06:16.440
2053
+ to define consciousness and there is a debate among neuroscientists
2054
+
2055
+ 1:06:16.440 --> 1:06:22.440
2056
+ about whether consciousness and philosophers, of course,
2057
+
2058
+ 1:06:22.440 --> 1:06:30.440
2059
+ whether consciousness is something that requires flesh and blood, so to speak,
2060
+
2061
+ 1:06:30.440 --> 1:06:40.440
2062
+ or could be, you know, that we could have silicon devices that are conscious,
2063
+
2064
+ 1:06:40.440 --> 1:06:45.440
2065
+ or up to a statement like everything has some degree of consciousness
2066
+
2067
+ 1:06:45.440 --> 1:06:48.440
2068
+ and some more than others.
2069
+
2070
+ 1:06:48.440 --> 1:06:53.440
2071
+ This is like Giulio Tononi and Phi.
2072
+
2073
+ 1:06:53.440 --> 1:06:56.440
2074
+ We just recently talked to Christof Koch.
2075
+
2076
+ 1:06:56.440 --> 1:07:00.440
2077
+ Christof was my first graduate student.
2078
+
2079
+ 1:07:00.440 --> 1:07:06.440
2080
+ Do you think it's important to illuminate aspects of consciousness
2081
+
2082
+ 1:07:06.440 --> 1:07:10.440
2083
+ in order to engineer intelligence systems?
2084
+
2085
+ 1:07:10.440 --> 1:07:14.440
2086
+ Do you think an intelligence system would ultimately have consciousness?
2087
+
2088
+ 1:07:14.440 --> 1:07:18.440
2089
+ Are they interlinked?
2090
+
2091
+ 1:07:18.440 --> 1:07:23.440
2092
+ You know, most of the people working in artificial intelligence, I think,
2093
+
2094
+ 1:07:23.440 --> 1:07:29.440
2095
+ they answer, we don't strictly need consciousness to have an intelligence system.
2096
+
2097
+ 1:07:29.440 --> 1:07:35.440
2098
+ That's sort of the easier question, because it's a very engineering answer to the question.
2099
+
2100
+ 1:07:35.440 --> 1:07:38.440
2101
+ If it passes a Turing test, we don't need consciousness.
2102
+
2103
+ 1:07:38.440 --> 1:07:47.440
2104
+ But if you were to go, do you think it's possible that we need to have that kind of self awareness?
2105
+
2106
+ 1:07:47.440 --> 1:07:49.440
2107
+ We may, yes.
2108
+
2109
+ 1:07:49.440 --> 1:08:00.440
2110
+ So, for instance, I personally think that when we test a machine or a person in a Turing test,
2111
+
2112
+ 1:08:00.440 --> 1:08:10.440
2113
+ in an extended Turing test, I think consciousness is part of what we require in that test,
2114
+
2115
+ 1:08:10.440 --> 1:08:14.440
2116
+ you know, implicitly to say that this is intelligent.
2117
+
2118
+ 1:08:14.440 --> 1:08:17.440
2119
+ Christof disagrees.
2120
+
2121
+ 1:08:17.440 --> 1:08:19.440
2122
+ Yes, he does.
2123
+
2124
+ 1:08:19.440 --> 1:08:24.440
2125
+ Despite many other romantic notions he holds, he disagrees with that one.
2126
+
2127
+ 1:08:24.440 --> 1:08:26.440
2128
+ Yes, that's right.
2129
+
2130
+ 1:08:26.440 --> 1:08:29.440
2131
+ So, you know, we'll see.
2132
+
2133
+ 1:08:29.440 --> 1:08:37.440
2134
+ Do you think, as a quick question, Ernest Becker's fear of death,
2135
+
2136
+ 1:08:37.440 --> 1:08:48.440
2137
+ do you think mortality and those kinds of things are important for consciousness and for intelligence,
2138
+
2139
+ 1:08:48.440 --> 1:08:53.440
2140
+ the finiteness of life, finiteness of existence,
2141
+
2142
+ 1:08:53.440 --> 1:09:00.440
2143
+ or is that just a side effect of evolutionary side effect that's useful for natural selection?
2144
+
2145
+ 1:09:00.440 --> 1:09:05.440
2146
+ Do you think this kind of thing that this interview is going to run out of time soon,
2147
+
2148
+ 1:09:05.440 --> 1:09:08.440
2149
+ our life will run out of time soon?
2150
+
2151
+ 1:09:08.440 --> 1:09:12.440
2152
+ Do you think that's needed to make this conversation good and life good?
2153
+
2154
+ 1:09:12.440 --> 1:09:14.440
2155
+ You know, I never thought about it.
2156
+
2157
+ 1:09:14.440 --> 1:09:16.440
2158
+ It's a very interesting question.
2159
+
2160
+ 1:09:16.440 --> 1:09:25.440
2161
+ I think Steve Jobs in his commencement speech at Stanford argued that, you know,
2162
+
2163
+ 1:09:25.440 --> 1:09:30.440
2164
+ having a finite life was important for stimulating achievements.
2165
+
2166
+ 1:09:30.440 --> 1:09:32.440
2167
+ It was a different.
2168
+
2169
+ 1:09:32.440 --> 1:09:34.440
2170
+ You live every day like it's your last, right?
2171
+
2172
+ 1:09:34.440 --> 1:09:35.440
2173
+ Yeah.
2174
+
2175
+ 1:09:35.440 --> 1:09:45.440
2176
+ So, rationally, I don't think strictly you need mortality for consciousness, but...
2177
+
2178
+ 1:09:45.440 --> 1:09:46.440
2179
+ Who knows?
2180
+
2181
+ 1:09:46.440 --> 1:09:49.440
2182
+ They seem to go together in our biological system, right?
2183
+
2184
+ 1:09:49.440 --> 1:09:51.440
2185
+ Yeah.
2186
+
2187
+ 1:09:51.440 --> 1:09:57.440
2188
+ You've mentioned before that the students you've mentored are associated with...
2189
+
2190
+ 1:09:57.440 --> 1:10:01.440
2191
+ AlphaGo and Mobileye, the big recent success stories in AI.
2192
+
2193
+ 1:10:01.440 --> 1:10:05.440
2194
+ I think it's captivated the entire world of what AI can do.
2195
+
2196
+ 1:10:05.440 --> 1:10:10.440
2197
+ So, what do you think will be the next breakthrough?
2198
+
2199
+ 1:10:10.440 --> 1:10:13.440
2200
+ What's your intuition about the next breakthrough?
2201
+
2202
+ 1:10:13.440 --> 1:10:16.440
2203
+ Of course, I don't know where the next breakthrough is.
2204
+
2205
+ 1:10:16.440 --> 1:10:22.440
2206
+ I think that there is a good chance, as I said before, that the next breakthrough
2207
+
2208
+ 1:10:22.440 --> 1:10:27.440
2209
+ would also be inspired by, you know, neuroscience.
2210
+
2211
+ 1:10:27.440 --> 1:10:31.440
2212
+ But which one?
2213
+
2214
+ 1:10:31.440 --> 1:10:32.440
2215
+ I don't know.
2216
+
2217
+ 1:10:32.440 --> 1:10:33.440
2218
+ And there's...
2219
+
2220
+ 1:10:33.440 --> 1:10:35.440
2221
+ So, MIT has this quest for intelligence.
2222
+
2223
+ 1:10:35.440 --> 1:10:36.440
2224
+ Yeah.
2225
+
2226
+ 1:10:36.440 --> 1:10:41.440
2227
+ And there's a few moonshots which, in that spirit, which ones are you excited about?
2228
+
2229
+ 1:10:41.440 --> 1:10:42.440
2230
+ What...
2231
+
2232
+ 1:10:42.440 --> 1:10:44.440
2233
+ Which projects kind of...
2234
+
2235
+ 1:10:44.440 --> 1:10:48.440
2236
+ Well, of course, I'm excited about one of the moonshots with...
2237
+
2238
+ 1:10:48.440 --> 1:10:52.440
2239
+ Which is our center for brains, minds, and machines.
2240
+
2241
+ 1:10:52.440 --> 1:10:57.440
2242
+ The one which is fully funded by NSF.
2243
+
2244
+ 1:10:57.440 --> 1:10:59.440
2245
+ And it's a...
2246
+
2247
+ 1:10:59.440 --> 1:11:02.440
2248
+ It is about visual intelligence.
2249
+
2250
+ 1:11:02.440 --> 1:11:05.440
2251
+ And that one is particularly about understanding.
2252
+
2253
+ 1:11:05.440 --> 1:11:07.440
2254
+ Visual intelligence.
2255
+
2256
+ 1:11:07.440 --> 1:11:16.440
2257
+ Visual cortex and visual intelligence in the sense of how we look around ourselves
2258
+
2259
+ 1:11:16.440 --> 1:11:25.440
2260
+ and understand the world around ourselves, you know, meaning what is going on,
2261
+
2262
+ 1:11:25.440 --> 1:11:31.440
2263
+ how we could go from here to there without hitting obstacles.
2264
+
2265
+ 1:11:31.440 --> 1:11:36.440
2266
+ You know, whether there are other agents, people in the environment.
2267
+
2268
+ 1:11:36.440 --> 1:11:41.440
2269
+ These are all things that we perceive very quickly.
2270
+
2271
+ 1:11:41.440 --> 1:11:47.440
2272
+ And it's something actually quite close to being conscious, not quite.
2273
+
2274
+ 1:11:47.440 --> 1:11:53.440
2275
+ But there is this interesting experiment that was run at Google X,
2276
+
2277
+ 1:11:53.440 --> 1:11:58.440
2278
+ which is, in a sense, is just a virtual reality experiment,
2279
+
2280
+ 1:11:58.440 --> 1:12:09.440
2281
+ but in which they had subjects sitting, say, in a chair with goggles, like Oculus and so on.
2282
+
2283
+ 1:12:09.440 --> 1:12:11.440
2284
+ Earphones.
2285
+
2286
+ 1:12:11.440 --> 1:12:20.440
2287
+ And they were seeing through the eyes of a robot nearby, two cameras, and microphones for receiving.
2288
+
2289
+ 1:12:20.440 --> 1:12:23.440
2290
+ So their sensory system was there.
2291
+
2292
+ 1:12:23.440 --> 1:12:30.440
2293
+ And the impression of all the subjects, very strong, they could not shake it off,
2294
+
2295
+ 1:12:30.440 --> 1:12:35.440
2296
+ was that they were where the robot was.
2297
+
2298
+ 1:12:35.440 --> 1:12:42.440
2299
+ They could look at themselves from the robot and still feel they were where the robot is.
2300
+
2301
+ 1:12:42.440 --> 1:12:45.440
2302
+ They were looking at their body.
2303
+
2304
+ 1:12:45.440 --> 1:12:48.440
2305
+ Their self had moved.
2306
+
2307
+ 1:12:48.440 --> 1:12:54.440
2308
+ So some aspect of scene understanding has to have the ability to place yourself,
2309
+
2310
+ 1:12:54.440 --> 1:12:59.440
2311
+ have a self awareness about your position in the world and what the world is.
2312
+
2313
+ 1:12:59.440 --> 1:13:04.440
2314
+ So we may have to solve the hard problem of consciousness to solve it.
2315
+
2316
+ 1:13:04.440 --> 1:13:05.440
2317
+ On their way, yes.
2318
+
2319
+ 1:13:05.440 --> 1:13:07.440
2320
+ It's quite a moonshot.
2321
+
2322
+ 1:13:07.440 --> 1:13:14.440
2323
+ So you've been an advisor to some incredible minds, including Demis Hassabis, Christof Koch,
2324
+
2325
+ 1:13:14.440 --> 1:13:21.440
2326
+ Amnon Shashua, like you said, all went on to become seminal figures in their respective fields.
2327
+
2328
+ 1:13:21.440 --> 1:13:28.440
2329
+ From your own success as a researcher and from perspective as a mentor of these researchers,
2330
+
2331
+ 1:13:28.440 --> 1:13:33.440
2332
+ having guided them in the way of advice,
2333
+
2334
+ 1:13:33.440 --> 1:13:39.440
2335
+ what does it take to be successful in science and engineering careers?
2336
+
2337
+ 1:13:39.440 --> 1:13:47.440
2338
+ Whether you're talking to somebody in their teens, 20s and 30s, what does that path look like?
2339
+
2340
+ 1:13:47.440 --> 1:13:52.440
2341
+ It's curiosity and having fun.
2342
+
2343
+ 1:13:52.440 --> 1:14:01.440
2344
+ And I think it's important also having fun with other curious minds.
2345
+
2346
+ 1:14:01.440 --> 1:14:06.440
2347
+ It's the people you surround yourself with, to have fun and curiosity.
2348
+
2349
+ 1:14:06.440 --> 1:14:09.440
2350
+ You mentioned Steve Jobs.
2351
+
2352
+ 1:14:09.440 --> 1:14:14.440
2353
+ Is there also an underlying ambition that's unique that you saw,
2354
+
2355
+ 1:14:14.440 --> 1:14:18.440
2356
+ or is it really does boil down to insatiable curiosity and fun?
2357
+
2358
+ 1:14:18.440 --> 1:14:20.440
2359
+ Well, of course.
2360
+
2361
+ 1:14:20.440 --> 1:14:29.440
2362
+ It's being curious in an active and ambitious way, yes, definitely.
2363
+
2364
+ 1:14:29.440 --> 1:14:38.440
2365
+ But I think sometimes in science, there are friends of mine who are like this.
2366
+
2367
+ 1:14:38.440 --> 1:14:44.440
2368
+ You know, there are some of the scientists who like to work by themselves
2369
+
2370
+ 1:14:44.440 --> 1:14:54.440
2371
+ and kind of communicate only when they complete their work or discover something.
2372
+
2373
+ 1:14:54.440 --> 1:15:02.440
2374
+ I think I always found the actual process of discovering something
2375
+
2376
+ 1:15:02.440 --> 1:15:09.440
2377
+ is more fun if it's together with other intelligent and curious and fun people.
2378
+
2379
+ 1:15:09.440 --> 1:15:13.440
2380
+ So if you see the fun in that process, the side effect of that process
2381
+
2382
+ 1:15:13.440 --> 1:15:16.440
2383
+ would be that you'll actually end up discovering something.
2384
+
2385
+ 1:15:16.440 --> 1:15:25.440
2386
+ So as you've led many incredible efforts here, what's the secret to being a good advisor,
2387
+
2388
+ 1:15:25.440 --> 1:15:28.440
2389
+ mentor, leader in a research setting?
2390
+
2391
+ 1:15:28.440 --> 1:15:35.440
2392
+ Is it a similar spirit or what advice could you give to people, young faculty and so on?
2393
+
2394
+ 1:15:35.440 --> 1:15:42.440
2395
+ It's partly repeating what I said about an environment that should be friendly and fun
2396
+
2397
+ 1:15:42.440 --> 1:15:52.440
2398
+ and ambitious and, you know, I think I learned a lot from some of my advisors and friends
2399
+
2400
+ 1:15:52.440 --> 1:16:02.440
2401
+ and some were physicists and there was, for instance, this behavior that was encouraged
2402
+
2403
+ 1:16:02.440 --> 1:16:08.440
2404
+ of when somebody comes with a new idea in the group, unless it's really stupid
2405
+
2406
+ 1:16:08.440 --> 1:16:11.440
2407
+ but you are always enthusiastic.
2408
+
2409
+ 1:16:11.440 --> 1:16:14.440
2410
+ And then you're enthusiastic for a few minutes, for a few hours.
2411
+
2412
+ 1:16:14.440 --> 1:16:22.440
2413
+ Then you start, you know, asking critically a few questions, testing this.
2414
+
2415
+ 1:16:22.440 --> 1:16:28.440
2416
+ But, you know, this is a process that is, I think it's very good.
2417
+
2418
+ 1:16:28.440 --> 1:16:30.440
2419
+ You have to be enthusiastic.
2420
+
2421
+ 1:16:30.440 --> 1:16:33.440
2422
+ Sometimes people are very critical from the beginning.
2423
+
2424
+ 1:16:33.440 --> 1:16:35.440
2425
+ That's not...
2426
+
2427
+ 1:16:35.440 --> 1:16:37.440
2428
+ Yes, you have to give it a chance.
2429
+
2430
+ 1:16:37.440 --> 1:16:38.440
2431
+ Yes.
2432
+
2433
+ 1:16:38.440 --> 1:16:39.440
2434
+ That's seed to grow.
2435
+
2436
+ 1:16:39.440 --> 1:16:44.440
2437
+ That said, with some of your ideas, which are quite revolutionary, so there's a witness,
2438
+
2439
+ 1:16:44.440 --> 1:16:49.440
2440
+ especially in the human vision side and neuroscience side, there could be some pretty heated arguments.
2441
+
2442
+ 1:16:49.440 --> 1:16:51.440
2443
+ Do you enjoy these?
2444
+
2445
+ 1:16:51.440 --> 1:16:55.440
2446
+ Is that a part of science and academic pursuits that you enjoy?
2447
+
2448
+ 1:16:55.440 --> 1:16:56.440
2449
+ Yeah.
2450
+
2451
+ 1:16:56.440 --> 1:17:00.440
2452
+ Is that something that happens in your group as well?
2453
+
2454
+ 1:17:00.440 --> 1:17:02.440
2455
+ Yeah, absolutely.
2456
+
2457
+ 1:17:02.440 --> 1:17:14.440
2458
+ I also spent some time in Germany again, there is this tradition in which people are more forthright, less kind than here.
2459
+
2460
+ 1:17:14.440 --> 1:17:23.440
2461
+ So, you know, in the US, when you write a bad letter, you still say, this guy is nice, you know.
2462
+
2463
+ 1:17:23.440 --> 1:17:25.440
2464
+ Yes, yes.
2465
+
2466
+ 1:17:25.440 --> 1:17:26.440
2467
+ So...
2468
+
2469
+ 1:17:26.440 --> 1:17:28.440
2470
+ Yeah, here in America it's degrees of nice.
2471
+
2472
+ 1:17:28.440 --> 1:17:29.440
2473
+ Yes.
2474
+
2475
+ 1:17:29.440 --> 1:17:31.440
2476
+ It's all just degrees of nice, yeah.
2477
+
2478
+ 1:17:31.440 --> 1:17:44.440
2479
+ Right, so as long as this does not become personal and it's really like, you know, a football game with its rules, that's great.
2480
+
2481
+ 1:17:44.440 --> 1:17:46.440
2482
+ It's fun.
2483
+
2484
+ 1:17:46.440 --> 1:17:58.440
2485
+ So, if you somehow find yourself in a position to ask one question of an oracle, like a genie, maybe a god, and you're guaranteed to get a clear answer,
2486
+
2487
+ 1:17:58.440 --> 1:18:00.440
2488
+ what kind of question would you ask?
2489
+
2490
+ 1:18:00.440 --> 1:18:03.440
2491
+ What would be the question you would ask?
2492
+
2493
+ 1:18:03.440 --> 1:18:09.440
2494
+ In the spirit of our discussion, it could be, how could I become ten times more intelligent?
2495
+
2496
+ 1:18:09.440 --> 1:18:15.440
2497
+ And so, but see, you only get a clear short answer.
2498
+
2499
+ 1:18:15.440 --> 1:18:18.440
2500
+ So, do you think there's a clear short answer to that?
2501
+
2502
+ 1:18:18.440 --> 1:18:19.440
2503
+ No.
2504
+
2505
+ 1:18:19.440 --> 1:18:22.440
2506
+ And that's the answer you'll get.
2507
+
2508
+ 1:18:22.440 --> 1:18:23.440
2509
+ Okay.
2510
+
2511
+ 1:18:23.440 --> 1:18:26.440
2512
+ So, you've mentioned Flowers for Algernon.
2513
+
2514
+ 1:18:26.440 --> 1:18:27.440
2515
+ Oh, yeah.
2516
+
2517
+ 1:18:27.440 --> 1:18:32.440
2518
+ There's a story that inspired you in your childhood.
2519
+
2520
+ 1:18:32.440 --> 1:18:48.440
2521
+ It's the story of a mouse, and a human, achieving genius level intelligence, and then understanding what was happening while slowly becoming not intelligent again, in this tragedy of gaining intelligence and losing intelligence.
2522
+
2523
+ 1:18:48.440 --> 1:18:59.440
2524
+ Do you think in that spirit, in that story, do you think intelligence is a gift or a curse from the perspective of happiness and meaning of life?
2525
+
2526
+ 1:18:59.440 --> 1:19:10.440
2527
+ You try to create an intelligent system that understands the universe, but on an individual level, the meaning of life, do you think intelligence is a gift?
2528
+
2529
+ 1:19:10.440 --> 1:19:16.440
2530
+ It's a good question.
2531
+
2532
+ 1:19:16.440 --> 1:19:22.440
2533
+ I don't know.
2534
+
2535
+ 1:19:22.440 --> 1:19:34.440
2536
+ As one of the people considered among the smartest people in the world, in some dimension at the very least, what do you think?
2537
+
2538
+ 1:19:34.440 --> 1:19:35.440
2539
+ I don't know.
2540
+
2541
+ 1:19:35.440 --> 1:19:39.440
2542
+ It may be invariant to intelligence, the degree of happiness.
2543
+
2544
+ 1:19:39.440 --> 1:19:43.440
2545
+ It would be nice if it were.
2546
+
2547
+ 1:19:43.440 --> 1:19:44.440
2548
+ That's the hope.
2549
+
2550
+ 1:19:44.440 --> 1:19:45.440
2551
+ Yeah.
2552
+
2553
+ 1:19:45.440 --> 1:19:49.440
2554
+ You could be smart and happy and clueless and happy.
2555
+
2556
+ 1:19:49.440 --> 1:19:51.440
2557
+ Yeah.
2558
+
2559
+ 1:19:51.440 --> 1:19:56.440
2560
+ As always, the discussion of the meaning of life is probably a good place to end.
2561
+
2562
+ 1:19:56.440 --> 1:19:58.440
2563
+ Tomasso, thank you so much for talking today.
2564
+
2565
+ 1:19:58.440 --> 1:19:59.440
2566
+ Thank you.
2567
+
2568
+ 1:19:59.440 --> 1:20:19.440
2569
+ This was great.
2570
+
vtt/episode_014_small.vtt ADDED
@@ -0,0 +1,4133 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.240
4
+ The following is a conversation with Kyle Vogt.
5
+
6
+ 00:02.240 --> 00:05.120
7
+ He's the president and the CTO of Cruise Automation,
8
+
9
+ 00:05.120 --> 00:08.000
10
+ leading an effort to solve one of the biggest
11
+
12
+ 00:08.000 --> 00:10.880
13
+ robotics challenges of our time, vehicle automation.
14
+
15
+ 00:10.880 --> 00:13.120
16
+ He's a cofounder of two successful companies,
17
+
18
+ 00:13.120 --> 00:17.040
19
+ Twitch and Cruise, that have each sold for a billion dollars.
20
+
21
+ 00:17.040 --> 00:19.880
22
+ And he's a great example of the innovative spirit
23
+
24
+ 00:19.880 --> 00:22.160
25
+ that flourishes in Silicon Valley.
26
+
27
+ 00:22.160 --> 00:25.760
28
+ And now is facing an interesting and exciting challenge
29
+
30
+ 00:25.760 --> 00:30.040
31
+ of matching that spirit with the mass production
32
+
33
+ 00:30.040 --> 00:32.800
34
+ and the safety centered culture of a major automaker,
35
+
36
+ 00:32.800 --> 00:34.440
37
+ like General Motors.
38
+
39
+ 00:34.440 --> 00:36.520
40
+ This conversation is part of the MIT
41
+
42
+ 00:36.520 --> 00:38.560
43
+ Artificial General Intelligence series
44
+
45
+ 00:38.560 --> 00:41.040
46
+ and the Artificial Intelligence podcast.
47
+
48
+ 00:41.040 --> 00:44.840
49
+ If you enjoy it, please subscribe on YouTube, iTunes,
50
+
51
+ 00:44.840 --> 00:47.640
52
+ or simply connect with me on Twitter at Lex Friedman,
53
+
54
+ 00:47.640 --> 00:49.800
55
+ spelled F R I D.
56
+
57
+ 00:49.800 --> 00:53.480
58
+ And now here's my conversation with Kyle Vogt.
59
+
60
+ 00:53.480 --> 00:55.600
61
+ You grew up in Kansas, right?
62
+
63
+ 00:55.600 --> 00:58.080
64
+ Yeah, and I just saw that picture you had up
65
+
66
+ 00:58.080 --> 01:00.400
67
+ there, so I'm a little bit worried about that now.
68
+
69
+ 01:00.400 --> 01:02.480
70
+ So in high school in Kansas City,
71
+
72
+ 01:02.480 --> 01:07.200
73
+ you joined Shawnee Mission North High School Robotics Team.
74
+
75
+ 01:07.200 --> 01:09.120
76
+ Now that wasn't your high school.
77
+
78
+ 01:09.120 --> 01:09.960
79
+ That's right.
80
+
81
+ 01:09.960 --> 01:13.920
82
+ That was the only high school in the area that had a teacher
83
+
84
+ 01:13.920 --> 01:16.080
85
+ who was willing to sponsor our first robotics team.
86
+
87
+ 01:16.080 --> 01:18.360
88
+ I was gonna troll you a little bit.
89
+
90
+ 01:18.360 --> 01:20.320
91
+ Jog your memory a little bit with that kid.
92
+
93
+ 01:20.320 --> 01:22.880
94
+ I was trying to look super cool and intense.
95
+
96
+ 01:22.880 --> 01:23.720
97
+ You did?
98
+
99
+ 01:23.720 --> 01:25.680
100
+ Because this was BattleBots, this is serious business.
101
+
102
+ 01:25.680 --> 01:28.840
103
+ So we're standing there with a welded steel frame
104
+
105
+ 01:28.840 --> 01:30.240
106
+ and looking tough.
107
+
108
+ 01:30.240 --> 01:31.800
109
+ So go back there.
110
+
111
+ 01:31.800 --> 01:33.840
112
+ What was it that drew you to robotics?
113
+
114
+ 01:33.840 --> 01:36.480
115
+ Well, I think, I've been trying to figure this out
116
+
117
+ 01:36.480 --> 01:37.920
118
+ for a while, but I've always liked building things
119
+
120
+ 01:37.920 --> 01:38.760
121
+ with Legos.
122
+
123
+ 01:38.760 --> 01:39.920
124
+ And when I was really, really young,
125
+
126
+ 01:39.920 --> 01:42.360
127
+ I wanted the Legos that had motors and other things.
128
+
129
+ 01:42.360 --> 01:44.840
130
+ And then, you know, Lego Mindstorms came out
131
+
132
+ 01:44.840 --> 01:48.280
133
+ and for the first time you could program Lego contraptions.
134
+
135
+ 01:48.280 --> 01:52.560
136
+ And I think things just sort of snowballed from that.
137
+
138
+ 01:52.560 --> 01:56.800
139
+ But I remember seeing, you know, the BattleBots TV show
140
+
141
+ 01:56.800 --> 01:59.320
142
+ on Comedy Central and thinking that is the coolest thing
143
+
144
+ 01:59.320 --> 02:01.200
145
+ in the world, I wanna be a part of that.
146
+
147
+ 02:01.200 --> 02:03.680
148
+ And not knowing a whole lot about how to build
149
+
150
+ 02:03.680 --> 02:06.880
151
+ these 200 pound fighting robots.
152
+
153
+ 02:06.880 --> 02:11.000
154
+ So I sort of obsessively poured over the internet forums
155
+
156
+ 02:11.000 --> 02:13.440
157
+ where all the creators for BattleBots would sort of hang out
158
+
159
+ 02:13.440 --> 02:16.120
160
+ and talk about, you know, document their build progress
161
+
162
+ 02:16.120 --> 02:17.120
163
+ and everything.
164
+
165
+ 02:17.120 --> 02:20.520
166
+ And I think I read, I must have read like, you know,
167
+
168
+ 02:20.520 --> 02:24.280
169
+ tens of thousands of forum posts from basically everything
170
+
171
+ 02:24.280 --> 02:26.400
172
+ that was out there on what these people were doing.
173
+
174
+ 02:26.400 --> 02:28.920
175
+ And eventually, like sort of triangulated how to put
176
+
177
+ 02:28.920 --> 02:33.040
178
+ some of these things together and ended up doing BattleBots,
179
+
180
+ 02:33.040 --> 02:34.800
181
+ which was, you know, it was like 13 or 14,
182
+
183
+ 02:34.800 --> 02:35.960
184
+ which was pretty awesome.
185
+
186
+ 02:35.960 --> 02:37.680
187
+ I'm not sure if the show's still running,
188
+
189
+ 02:37.680 --> 02:42.000
190
+ but so BattleBots is, there's not an artificial intelligence
191
+
192
+ 02:42.000 --> 02:44.200
193
+ component, it's remotely controlled.
194
+
195
+ 02:44.200 --> 02:46.720
196
+ And it's almost like a mechanical engineering challenge
197
+
198
+ 02:46.720 --> 02:49.560
199
+ of building things that can be broken.
200
+
201
+ 02:49.560 --> 02:50.680
202
+ They're radio controlled.
203
+
204
+ 02:50.680 --> 02:53.880
205
+ So, and I think that they allowed some limited form
206
+
207
+ 02:53.880 --> 02:56.600
208
+ of autonomy, but, you know, in a two minute match,
209
+
210
+ 02:56.600 --> 02:58.800
211
+ you're, in the way these things ran,
212
+
213
+ 02:58.800 --> 03:00.720
214
+ you're really doing yourself a disservice by trying
215
+
216
+ 03:00.720 --> 03:02.360
217
+ to automate it versus just, you know,
218
+
219
+ 03:02.360 --> 03:04.760
220
+ do the practical thing, which is drive it yourself.
221
+
222
+ 03:04.760 --> 03:06.960
223
+ And there's an entertainment aspect,
224
+
225
+ 03:06.960 --> 03:08.240
226
+ just going on YouTube.
227
+
228
+ 03:08.240 --> 03:11.200
229
+ There's like some of them wield an axe, some of them,
230
+
231
+ 03:11.200 --> 03:12.200
232
+ I mean, there's that fun.
233
+
234
+ 03:12.200 --> 03:13.760
235
+ So what drew you to that aspect?
236
+
237
+ 03:13.760 --> 03:15.400
238
+ Was it the mechanical engineering?
239
+
240
+ 03:15.400 --> 03:19.400
241
+ Was it the dream to create like Frankenstein
242
+
243
+ 03:19.400 --> 03:21.080
244
+ and sentient being?
245
+
246
+ 03:21.080 --> 03:23.960
247
+ Or was it just like the Lego, you like tinkering stuff?
248
+
249
+ 03:23.960 --> 03:26.000
250
+ I mean, that was just building something.
251
+
252
+ 03:26.000 --> 03:27.960
253
+ I think the idea of, you know,
254
+
255
+ 03:27.960 --> 03:30.920
256
+ this radio controlled machine that can do various things.
257
+
258
+ 03:30.920 --> 03:33.800
259
+ If it has like a weapon or something was pretty interesting.
260
+
261
+ 03:33.800 --> 03:36.440
262
+ I agree, it doesn't have the same appeal as, you know,
263
+
264
+ 03:36.440 --> 03:38.520
265
+ autonomous robots, which I, which I, you know,
266
+
267
+ 03:38.520 --> 03:40.320
268
+ sort of gravitated towards later on,
269
+
270
+ 03:40.320 --> 03:42.720
271
+ but it was definitely an engineering challenge
272
+
273
+ 03:42.720 --> 03:45.600
274
+ because everything you did in that competition
275
+
276
+ 03:45.600 --> 03:48.480
277
+ was pushing components to their limits.
278
+
279
+ 03:48.480 --> 03:52.960
280
+ So we would buy like these $40 DC motors
281
+
282
+ 03:52.960 --> 03:54.840
283
+ that came out of a winch,
284
+
285
+ 03:54.840 --> 03:57.280
286
+ like on the front of a pickup truck or something.
287
+
288
+ 03:57.280 --> 03:59.240
289
+ And we'd power the car with those
290
+
291
+ 03:59.240 --> 04:01.120
292
+ and we'd run them at like double or triple
293
+
294
+ 04:01.120 --> 04:02.440
295
+ their rated voltage.
296
+
297
+ 04:02.440 --> 04:04.160
298
+ So they immediately start overheating,
299
+
300
+ 04:04.160 --> 04:06.920
301
+ but for that two minute match, you can get, you know,
302
+
303
+ 04:06.920 --> 04:08.680
304
+ a significant increase in the power output
305
+
306
+ 04:08.680 --> 04:10.560
307
+ of those motors before they burn out.
308
+
309
+ 04:10.560 --> 04:12.760
310
+ And so you're doing the same thing for your battery packs,
311
+
312
+ 04:12.760 --> 04:14.360
313
+ all the materials in the system.
314
+
315
+ 04:14.360 --> 04:15.560
316
+ And I think there was something,
317
+
318
+ 04:15.560 --> 04:17.800
319
+ something intrinsically interesting
320
+
321
+ 04:17.800 --> 04:20.360
322
+ about just seeing like where things break.
323
+
324
+ 04:20.360 --> 04:23.360
325
+ And did you offline see where they break?
326
+
327
+ 04:23.360 --> 04:25.040
328
+ Did you take it to the testing point?
329
+
330
+ 04:25.040 --> 04:26.120
331
+ Like, how did you know two minutes?
332
+
333
+ 04:26.120 --> 04:29.680
334
+ Or was there a reckless, let's just go with it and see.
335
+
336
+ 04:29.680 --> 04:31.320
337
+ We weren't very good at battle bots.
338
+
339
+ 04:31.320 --> 04:34.200
340
+ We lost all of our matches the first round.
341
+
342
+ 04:34.200 --> 04:36.240
343
+ The one I built first,
344
+
345
+ 04:36.240 --> 04:38.120
346
+ both of them were these wedge shaped robots
347
+
348
+ 04:38.120 --> 04:39.800
349
+ because the wedge, even though it's sort of boring
350
+
351
+ 04:39.800 --> 04:41.240
352
+ to look at is extremely effective.
353
+
354
+ 04:41.240 --> 04:42.600
355
+ You drive towards another robot
356
+
357
+ 04:42.600 --> 04:44.720
358
+ and the front edge of it gets under them
359
+
360
+ 04:44.720 --> 04:46.760
361
+ and then they sort of flip over,
362
+
363
+ 04:46.760 --> 04:48.280
364
+ it's kind of like a door stopper.
365
+
366
+ 04:48.280 --> 04:51.920
367
+ And the first one had a pneumatic polished stainless steel
368
+
369
+ 04:51.920 --> 04:54.880
370
+ spike on the front that would shoot out about eight inches.
371
+
372
+ 04:54.880 --> 04:56.240
373
+ The purpose of which is what?
374
+
375
+ 04:56.240 --> 04:58.800
376
+ Pretty ineffective actually, but it looked cool.
377
+
378
+ 04:58.800 --> 05:00.880
379
+ And was it to help with the lift?
380
+
381
+ 05:00.880 --> 05:04.080
382
+ No, it was just to try to poke holes in the other robot.
383
+
384
+ 05:04.080 --> 05:05.960
385
+ And then the second time I did it,
386
+
387
+ 05:05.960 --> 05:09.560
388
+ which is the following, I think maybe 18 months later,
389
+
390
+ 05:09.560 --> 05:14.400
391
+ we had a titanium axe with a hardened steel tip on it
392
+
393
+ 05:14.400 --> 05:17.200
394
+ that was powered by a hydraulic cylinder,
395
+
396
+ 05:17.200 --> 05:20.400
397
+ which we were activating with liquid CO2,
398
+
399
+ 05:20.400 --> 05:23.880
400
+ which had its own set of problems.
401
+
402
+ 05:23.880 --> 05:26.320
403
+ So great, so that's kind of on the hardware side.
404
+
405
+ 05:26.320 --> 05:28.360
406
+ I mean, at a certain point,
407
+
408
+ 05:28.360 --> 05:31.240
409
+ there must have been born a fascination
410
+
411
+ 05:31.240 --> 05:32.440
412
+ on the software side.
413
+
414
+ 05:32.440 --> 05:35.520
415
+ So what was the first piece of code you've written?
416
+
417
+ 05:35.520 --> 05:38.600
418
+ If you didn't go back there, see what language was it?
419
+
420
+ 05:38.600 --> 05:40.600
421
+ What was it, was it Emacs, Vim?
422
+
423
+ 05:40.600 --> 05:44.640
424
+ Was it a more respectable, modern IDE?
425
+
426
+ 05:44.640 --> 05:45.800
427
+ Do you remember any of this?
428
+
429
+ 05:45.800 --> 05:49.840
430
+ Yeah, well, I remember, I think maybe when I was in
431
+
432
+ 05:49.840 --> 05:52.440
433
+ third or fourth grade, I was at elementary school,
434
+
435
+ 05:52.440 --> 05:55.040
436
+ had a bunch of Apple II computers,
437
+
438
+ 05:55.040 --> 05:56.680
439
+ and we'd play games on those.
440
+
441
+ 05:56.680 --> 05:57.760
442
+ And I remember every once in a while,
443
+
444
+ 05:57.760 --> 06:01.320
445
+ something would crash or wouldn't start up correctly,
446
+
447
+ 06:01.320 --> 06:03.960
448
+ and it would dump you out to what I later learned
449
+
450
+ 06:03.960 --> 06:05.800
451
+ was like sort of a command prompt.
452
+
453
+ 06:05.800 --> 06:07.600
454
+ And my teacher would come over and type,
455
+
456
+ 06:07.600 --> 06:09.440
457
+ I actually remember this to this day for some reason,
458
+
459
+ 06:09.440 --> 06:12.160
460
+ like PR number six, or PR pound six,
461
+
462
+ 06:12.160 --> 06:13.840
463
+ which is peripheral six, which is the disk drive,
464
+
465
+ 06:13.840 --> 06:15.920
466
+ which would fire up the disk and load the program.
467
+
468
+ 06:15.920 --> 06:17.880
469
+ And I just remember thinking, wow, she's like a hacker,
470
+
471
+ 06:17.880 --> 06:20.760
472
+ like teach me these codes, these error codes,
473
+
474
+ 06:20.760 --> 06:22.720
475
+ that is what I called them at the time.
476
+
477
+ 06:22.720 --> 06:23.760
478
+ But she had no interest in that.
479
+
480
+ 06:23.760 --> 06:26.480
481
+ So it wasn't until I think about fifth grade
482
+
483
+ 06:26.480 --> 06:29.120
484
+ that I had a school where you could actually
485
+
486
+ 06:29.120 --> 06:30.600
487
+ go on these Apple II's and learn to program.
488
+
489
+ 06:30.600 --> 06:31.920
490
+ And so it was all in basic, you know,
491
+
492
+ 06:31.920 --> 06:34.240
493
+ where every line, you know, the line numbers are all,
494
+
495
+ 06:34.240 --> 06:35.640
496
+ or that every line is numbered,
497
+
498
+ 06:35.640 --> 06:38.000
499
+ and you have to like leave enough space
500
+
501
+ 06:38.000 --> 06:40.760
502
+ between the numbers so that if you want to tweak your code,
503
+
504
+ 06:40.760 --> 06:42.600
505
+ you go back and if the first line was 10
506
+
507
+ 06:42.600 --> 06:44.680
508
+ and the second line is 20, now you have to go back
509
+
510
+ 06:44.680 --> 06:45.640
511
+ and insert 15.
512
+
513
+ 06:45.640 --> 06:47.960
514
+ And if you need to add code in front of that,
515
+
516
+ 06:47.960 --> 06:49.720
517
+ you know, 11 or 12, and you hope you don't run out
518
+
519
+ 06:49.720 --> 06:51.880
520
+ of line numbers and have to redo the whole thing.
521
+
522
+ 06:51.880 --> 06:53.240
523
+ And there's go to statements?
524
+
525
+ 06:53.240 --> 06:56.920
526
+ Yeah, go-tos, and it's very basic, maybe hence the name,
527
+
528
+ 06:56.920 --> 06:58.200
529
+ but a lot of fun.
530
+
531
+ 06:58.200 --> 07:00.800
532
+ And that was like, that was, you know,
533
+
534
+ 07:00.800 --> 07:02.600
535
+ that's when, you know, when you first program,
536
+
537
+ 07:02.600 --> 07:03.560
538
+ you see the magic of it.
539
+
540
+ 07:03.560 --> 07:06.640
541
+ It's like, just like this world opens up with,
542
+
543
+ 07:06.640 --> 07:08.200
544
+ you know, endless possibilities for the things
545
+
546
+ 07:08.200 --> 07:10.600
547
+ you could build or accomplish with that computer.
548
+
549
+ 07:10.600 --> 07:13.400
550
+ So you got the bug then, so even starting with basic
551
+
552
+ 07:13.400 --> 07:16.720
553
+ and then what, C++ throughout, what did you,
554
+
555
+ 07:16.720 --> 07:18.200
556
+ was there a computer programming,
557
+
558
+ 07:18.200 --> 07:19.880
559
+ computer science classes in high school?
560
+
561
+ 07:19.880 --> 07:22.680
562
+ Not, not where I went, so it was self taught,
563
+
564
+ 07:22.680 --> 07:24.640
565
+ but I did a lot of programming.
566
+
567
+ 07:24.640 --> 07:28.560
568
+ The thing that, you know, sort of pushed me in the path
569
+
570
+ 07:28.560 --> 07:30.600
571
+ of eventually working on self driving cars
572
+
573
+ 07:30.600 --> 07:33.280
574
+ is actually one of these really long trips
575
+
576
+ 07:33.280 --> 07:38.000
577
+ driving from my house in Kansas to, I think, Las Vegas,
578
+
579
+ 07:38.000 --> 07:39.480
580
+ where we did the BattleBots competition.
581
+
582
+ 07:39.480 --> 07:42.760
583
+ And I had just gotten my, I think my learners permit
584
+
585
+ 07:42.760 --> 07:45.080
586
+ or early drivers permit.
587
+
588
+ 07:45.080 --> 07:48.280
589
+ And so I was driving this, you know, 10 hour stretch
590
+
591
+ 07:48.280 --> 07:50.600
592
+ across Western Kansas where it's just,
593
+
594
+ 07:50.600 --> 07:51.800
595
+ you're going straight on a highway
596
+
597
+ 07:51.800 --> 07:53.640
598
+ and it is mind numbingly boring.
599
+
600
+ 07:53.640 --> 07:54.960
601
+ And I remember thinking even then
602
+
603
+ 07:54.960 --> 07:58.080
604
+ with my sort of mediocre programming background
605
+
606
+ 07:58.080 --> 08:00.040
607
+ that this is something that a computer can do, right?
608
+
609
+ 08:00.040 --> 08:01.440
610
+ Let's take a picture of the road,
611
+
612
+ 08:01.440 --> 08:02.880
613
+ let's find the yellow lane markers
614
+
615
+ 08:02.880 --> 08:04.880
616
+ and, you know, steer the wheel.
617
+
618
+ 08:04.880 --> 08:06.600
619
+ And, you know, later I'd come to realize
620
+
621
+ 08:06.600 --> 08:09.800
622
+ this had been done, you know, since the 80s
623
+
624
+ 08:09.800 --> 08:12.760
625
+ or the 70s or even earlier, but I still wanted to do it.
626
+
627
+ 08:12.760 --> 08:14.840
628
+ And sort of immediately after that trip,
629
+
630
+ 08:14.840 --> 08:16.280
631
+ switched from sort of BattleBots,
632
+
633
+ 08:16.280 --> 08:18.640
634
+ which is more radio controlled machines
635
+
636
+ 08:18.640 --> 08:21.800
637
+ to thinking about building, you know,
638
+
639
+ 08:21.800 --> 08:23.600
640
+ autonomous vehicles of some scale,
641
+
642
+ 08:23.600 --> 08:25.080
643
+ start off with really small electric ones
644
+
645
+ 08:25.080 --> 08:28.280
646
+ and then, you know, progress to what we're doing now.
647
+
648
+ 08:28.280 --> 08:30.040
649
+ So what was your view of artificial intelligence
650
+
651
+ 08:30.040 --> 08:30.880
652
+ at that point?
653
+
654
+ 08:30.880 --> 08:31.880
655
+ What did you think?
656
+
657
+ 08:31.880 --> 08:35.040
658
+ So this is before there's been waves
659
+
660
+ 08:35.040 --> 08:36.680
661
+ in artificial intelligence, right?
662
+
663
+ 08:36.680 --> 08:39.480
664
+ The current wave with deep learning
665
+
666
+ 08:39.480 --> 08:41.760
667
+ makes people believe that you can solve
668
+
669
+ 08:41.760 --> 08:43.520
670
+ in a really rich, deep way,
671
+
672
+ 08:43.520 --> 08:46.200
673
+ the computer vision perception problem.
674
+
675
+ 08:46.200 --> 08:51.200
676
+ But like before the deep learning craze,
677
+
678
+ 08:51.320 --> 08:52.800
679
+ you know, how do you think about
680
+
681
+ 08:52.800 --> 08:55.320
682
+ how would you even go about building a thing
683
+
684
+ 08:55.320 --> 08:56.920
685
+ that perceives itself in the world,
686
+
687
+ 08:56.920 --> 08:59.160
688
+ localize itself in the world, moves around the world?
689
+
690
+ 08:59.160 --> 09:00.360
691
+ Like when you were younger, I mean,
692
+
693
+ 09:00.360 --> 09:02.120
694
+ as what was your thinking about it?
695
+
696
+ 09:02.120 --> 09:03.960
697
+ Well, prior to deep neural networks
698
+
699
+ 09:03.960 --> 09:05.360
700
+ or convolutional neural nets,
701
+
702
+ 09:05.360 --> 09:06.520
703
+ these modern techniques we have,
704
+
705
+ 09:06.520 --> 09:09.040
706
+ or at least ones that are in use today,
707
+
708
+ 09:09.040 --> 09:10.280
709
+ it was all heuristics based.
710
+
711
+ 09:10.280 --> 09:12.920
712
+ And so like old school image processing,
713
+
714
+ 09:12.920 --> 09:15.040
715
+ and I think extracting, you know,
716
+
717
+ 09:15.040 --> 09:18.000
718
+ yellow lane markers out of an image of a road
719
+
720
+ 09:18.000 --> 09:21.160
721
+ is one of the problems that lends itself
722
+
723
+ 09:21.160 --> 09:23.760
724
+ reasonably well to those heuristic based methods, you know,
725
+
726
+ 09:23.760 --> 09:26.760
727
+ like just do a threshold on the color yellow
728
+
729
+ 09:26.760 --> 09:28.520
730
+ and then try to fit some lines to that
731
+
732
+ 09:28.520 --> 09:30.320
733
+ using a Hough transform or something
734
+
735
+ 09:30.320 --> 09:32.280
736
+ and then go from there.
737
+
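The heuristic described in the cues above (threshold the yellow pixels, then fit lines with a Hough transform) can be sketched in a few lines of Python with OpenCV. This is a minimal illustrative sketch, not code from Cruise or from the episode; the function name, the HSV range, and the Hough parameters are all assumptions that would need tuning per camera and lighting.

```python
import cv2
import numpy as np

def detect_yellow_lane_segments(bgr_frame):
    """Classical heuristic: threshold yellow paint, then fit line segments
    with a probabilistic Hough transform. All thresholds are illustrative."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Rough HSV range for yellow lane markings (assumed, not from the episode).
    yellow_mask = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
    # Edges of the thresholded mask feed the Hough transform.
    edges = cv2.Canny(yellow_mask, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                               minLineLength=40, maxLineGap=20)
    # Each segment is (x1, y1, x2, y2); steering logic would start from these.
    return [] if segments is None else [tuple(s[0]) for s in segments]
```

A learned lane detector would replace everything before the return value, which is the migration path from simple heuristics to deep learning that the conversation goes on to describe.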
738
+ 09:32.280 --> 09:34.800
739
+ Traffic light detection and stop sign detection,
740
+
741
+ 09:34.800 --> 09:35.920
742
+ red, yellow, green.
743
+
744
+ 09:35.920 --> 09:38.160
745
+ And I think you can, you could,
746
+
747
+ 09:38.160 --> 09:39.840
748
+ I mean, if you wanted to do a full,
749
+
750
+ 09:39.840 --> 09:41.960
751
+ I was just trying to make something that would stay
752
+
753
+ 09:41.960 --> 09:43.520
754
+ in between the lanes on a highway,
755
+
756
+ 09:43.520 --> 09:44.960
757
+ but if you wanted to do the full,
758
+
759
+ 09:46.920 --> 09:48.960
760
+ the full, you know, set of capabilities
761
+
762
+ 09:48.960 --> 09:50.520
763
+ needed for a driverless car,
764
+
765
+ 09:50.520 --> 09:53.360
766
+ I think you could, and we've done this at cruise,
767
+
768
+ 09:53.360 --> 09:54.440
769
+ you know, in the very first days,
770
+
771
+ 09:54.440 --> 09:56.320
772
+ you can start off with a really simple,
773
+
774
+ 09:56.320 --> 09:58.000
775
+ you know, human written heuristic
776
+
777
+ 09:58.000 --> 09:59.800
778
+ just to get the scaffolding in place
779
+
780
+ 09:59.800 --> 10:01.720
781
+ for your system, traffic light detection,
782
+
783
+ 10:01.720 --> 10:02.960
784
+ probably a really simple, you know,
785
+
786
+ 10:02.960 --> 10:04.760
787
+ color thresholding on day one
788
+
789
+ 10:04.760 --> 10:06.520
790
+ just to get the system up and running
791
+
792
+ 10:06.520 --> 10:08.640
793
+ before you migrate to, you know,
794
+
795
+ 10:08.640 --> 10:11.080
796
+ a deep learning based technique or something else.
797
+
798
+ 10:11.080 --> 10:12.800
799
+ And, you know, back in, when I was doing this,
800
+
801
+ 10:12.800 --> 10:15.120
802
+ my first one, it was on a Pentium 203,
803
+
804
+ 10:15.120 --> 10:17.840
805
+ 233 megahertz computer in it.
806
+
807
+ 10:17.840 --> 10:19.920
808
+ And I think I wrote the first version in basic,
809
+
810
+ 10:19.920 --> 10:21.600
811
+ which is like an interpreted language.
812
+
813
+ 10:21.600 --> 10:23.760
814
+ It's extremely slow because that's the thing
815
+
816
+ 10:23.760 --> 10:24.800
817
+ I knew at the time.
818
+
819
+ 10:24.800 --> 10:27.840
820
+ And so there was no, no chance at all of using,
821
+
822
+ 10:27.840 --> 10:30.440
823
+ there's no computational power to do
824
+
825
+ 10:30.440 --> 10:33.480
826
+ any sort of reasonable deep nets like you have today.
827
+
828
+ 10:33.480 --> 10:35.360
829
+ So I don't know what kids these days are doing.
830
+
831
+ 10:35.360 --> 10:37.920
832
+ Are kids these days, you know, at age 13
833
+
834
+ 10:37.920 --> 10:39.360
835
+ using neural networks in their garage?
836
+
837
+ 10:39.360 --> 10:40.200
838
+ I mean, that would be awesome.
839
+
840
+ 10:40.200 --> 10:43.040
841
+ I get emails all the time from, you know,
842
+
843
+ 10:43.040 --> 10:46.160
844
+ like 11, 12 year olds saying, I'm having, you know,
845
+
846
+ 10:46.160 --> 10:48.760
847
+ I'm trying to follow this TensorFlow tutorial
848
+
849
+ 10:48.760 --> 10:50.800
850
+ and I'm having this problem.
851
+
852
+ 10:50.800 --> 10:55.800
853
+ And the general approach in the deep learning community
854
+
855
+ 10:55.800 --> 11:00.200
856
+ is of extreme optimism of, as opposed to,
857
+
858
+ 11:00.200 --> 11:02.000
859
+ you mentioned like heuristics, you can,
860
+
861
+ 11:02.000 --> 11:04.800
862
+ you can, you can separate the autonomous driving problem
863
+
864
+ 11:04.800 --> 11:07.520
865
+ into modules and try to solve it sort of rigorously,
866
+
867
+ 11:07.520 --> 11:09.040
868
+ where you can just do it end to end.
869
+
870
+ 11:09.040 --> 11:11.840
871
+ And most people just kind of love the idea that,
872
+
873
+ 11:11.840 --> 11:13.360
874
+ you know, us humans do it end to end,
875
+
876
+ 11:13.360 --> 11:15.360
877
+ we just perceive and act.
878
+
879
+ 11:15.360 --> 11:17.040
880
+ We should be able to use that,
881
+
882
+ 11:17.040 --> 11:18.720
883
+ do the same kind of thing with neural nets.
884
+
885
+ 11:18.720 --> 11:20.920
886
+ And that, that kind of thinking,
887
+
888
+ 11:20.920 --> 11:22.840
889
+ you don't want to criticize that kind of thinking
890
+
891
+ 11:22.840 --> 11:24.640
892
+ because eventually they will be right.
893
+
894
+ 11:24.640 --> 11:26.360
895
+ Yeah. And so it's exciting.
896
+
897
+ 11:26.360 --> 11:28.720
898
+ And especially when they're younger to explore that
899
+
900
+ 11:28.720 --> 11:30.640
901
+ is a really exciting approach.
902
+
903
+ 11:30.640 --> 11:35.480
904
+ But yeah, it's, it's changed the, the language,
905
+
906
+ 11:35.480 --> 11:37.240
907
+ the kind of stuff you're tinkering with.
908
+
909
+ 11:37.240 --> 11:40.920
910
+ It's kind of exciting to see when these teenagers grow up.
911
+
912
+ 11:40.920 --> 11:43.760
913
+ Yeah, I can only imagine if you, if your starting point
914
+
915
+ 11:43.760 --> 11:46.720
916
+ is, you know, Python and TensorFlow at age 13,
917
+
918
+ 11:46.720 --> 11:47.800
919
+ where you end up, you know,
920
+
921
+ 11:47.800 --> 11:51.040
922
+ after 10 or 15 years of that, that's, that's pretty cool.
923
+
924
+ 11:51.040 --> 11:53.760
925
+ Because of GitHub, because the state tools
926
+
927
+ 11:53.760 --> 11:55.440
928
+ for solving most of the major problems
929
+
930
+ 11:55.440 --> 11:56.920
931
+ that are artificial intelligence
932
+
933
+ 11:56.920 --> 12:00.240
934
+ are within a few lines of code for most kids.
935
+
936
+ 12:00.240 --> 12:02.280
937
+ And that's incredible to think about,
938
+
939
+ 12:02.280 --> 12:04.280
940
+ also on the entrepreneurial side.
941
+
942
+ 12:04.280 --> 12:08.520
943
+ And, and, and at that point, was there any thought
944
+
945
+ 12:08.520 --> 12:11.960
946
+ about entrepreneurship before you came to college
947
+
948
+ 12:11.960 --> 12:15.160
949
+ of sort of building this into a thing
950
+
951
+ 12:15.160 --> 12:17.800
952
+ that impacts the world on a large scale?
953
+
954
+ 12:17.800 --> 12:19.840
955
+ Yeah, I've always wanted to start a company.
956
+
957
+ 12:19.840 --> 12:22.600
958
+ I think that's, you know, just a cool concept
959
+
960
+ 12:22.600 --> 12:25.240
961
+ of creating something and exchanging it
962
+
963
+ 12:25.240 --> 12:28.360
964
+ for value or creating value, I guess.
965
+
966
+ 12:28.360 --> 12:31.120
967
+ So in high school, I was, I was trying to build like,
968
+
969
+ 12:31.120 --> 12:33.600
970
+ you know, servo motor drivers, little circuit boards
971
+
972
+ 12:33.600 --> 12:36.920
973
+ and sell them online or other, other things like that.
974
+
975
+ 12:36.920 --> 12:40.320
976
+ And certainly knew at some point I wanted to do a startup,
977
+
978
+ 12:40.320 --> 12:42.840
979
+ but it wasn't really, I'd say until college until I felt
980
+
981
+ 12:42.840 --> 12:46.720
982
+ like I had the, I guess the right combination
983
+
984
+ 12:46.720 --> 12:48.960
985
+ of the environment, the smart people around you
986
+
987
+ 12:48.960 --> 12:52.360
988
+ and some free time and a lot of free time at MIT.
989
+
990
+ 12:52.360 --> 12:55.800
991
+ So you came to MIT as an undergrad 2004.
992
+
993
+ 12:55.800 --> 12:57.080
994
+ That's right.
995
+
996
+ 12:57.080 --> 12:59.040
997
+ And that's when the first DARPA Grand Challenge
998
+
999
+ 12:59.040 --> 12:59.880
1000
+ was happening.
1001
+
1002
+ 12:59.880 --> 13:00.720
1003
+ Yeah.
1004
+
1005
+ 13:00.720 --> 13:03.360
1006
+ The timing of that is beautifully poetic.
1007
+
1008
+ 13:03.360 --> 13:05.680
1009
+ So how'd you get yourself involved in that one?
1010
+
1011
+ 13:05.680 --> 13:07.080
1012
+ Originally there wasn't a
1013
+
1014
+ 13:07.080 --> 13:07.920
1015
+ Official entry?
1016
+
1017
+ 13:07.920 --> 13:09.520
1018
+ Yeah, faculty sponsored thing.
1019
+
1020
+ 13:09.520 --> 13:12.760
1021
+ And so a bunch of undergrads, myself included,
1022
+
1023
+ 13:12.760 --> 13:14.160
1024
+ started meeting and got together
1025
+
1026
+ 13:14.160 --> 13:17.800
1027
+ and tried to, to cobble together some sponsorships.
1028
+
1029
+ 13:17.800 --> 13:20.120
1030
+ We got a vehicle donated, a bunch of sensors
1031
+
1032
+ 13:20.120 --> 13:21.600
1033
+ and tried to put something together.
1034
+
1035
+ 13:21.600 --> 13:24.640
1036
+ And so we had, our team was probably mostly freshmen
1037
+
1038
+ 13:24.640 --> 13:26.800
1039
+ and sophomores, you know, which, which was not really
1040
+
1041
+ 13:26.800 --> 13:30.960
1042
+ a fair, fair fight against maybe the, you know, postdoc
1043
+
1044
+ 13:30.960 --> 13:32.840
1045
+ and faculty led teams from other schools.
1046
+
1047
+ 13:32.840 --> 13:35.000
1048
+ But we, we got something up and running.
1049
+
1050
+ 13:35.000 --> 13:37.400
1051
+ We had our vehicle drive by wire and, you know,
1052
+
1053
+ 13:37.400 --> 13:42.440
1054
+ very, very basic control and things, but on the day
1055
+
1056
+ 13:42.440 --> 13:46.800
1057
+ of the qualifying, sort of pre qualifying round,
1058
+
1059
+ 13:46.800 --> 13:50.840
1060
+ the one and only steering motor that we had purchased,
1061
+
1062
+ 13:50.840 --> 13:52.600
1063
+ the thing that we had, you know, retrofitted to turn
1064
+
1065
+ 13:52.600 --> 13:55.760
1066
+ the steering wheel on the truck died.
1067
+
1068
+ 13:55.760 --> 13:58.440
1069
+ And so our vehicle was just dead in the water, couldn't steer.
1070
+
1071
+ 13:58.440 --> 13:59.880
1072
+ So we didn't make it very far.
1073
+
1074
+ 13:59.880 --> 14:00.920
1075
+ On the hardware side.
1076
+
1077
+ 14:00.920 --> 14:03.000
1078
+ So was there a software component?
1079
+
1080
+ 14:03.000 --> 14:06.200
1081
+ Was there, like, how did your view of autonomous vehicles
1082
+
1083
+ 14:06.200 --> 14:08.040
1084
+ in terms of artificial intelligence
1085
+
1086
+ 14:09.520 --> 14:10.720
1087
+ evolve in this moment?
1088
+
1089
+ 14:10.720 --> 14:12.400
1090
+ I mean, you know, like you said,
1091
+
1092
+ 14:12.400 --> 14:14.080
1093
+ from the 80s has been autonomous vehicles,
1094
+
1095
+ 14:14.080 --> 14:16.720
1096
+ but really that was the birth of the modern wave.
1097
+
1098
+ 14:16.720 --> 14:20.080
1099
+ The, the thing that captivated everyone's imagination
1100
+
1101
+ 14:20.080 --> 14:21.520
1102
+ that we can actually do this.
1103
+
1104
+ 14:21.520 --> 14:26.000
1105
+ So how, were you captivated in that way?
1106
+
1107
+ 14:26.000 --> 14:27.600
1108
+ So how did your view of autonomous vehicles
1109
+
1110
+ 14:27.600 --> 14:29.000
1111
+ change at that point?
1112
+
1113
+ 14:29.000 --> 14:33.760
1114
+ I'd say at that point in time, it was, it was a curiosity
1115
+
1116
+ 14:33.760 --> 14:35.840
1117
+ as in like, is this really possible?
1118
+
1119
+ 14:35.840 --> 14:38.440
1120
+ And I think that was generally the spirit
1121
+
1122
+ 14:38.440 --> 14:43.280
1123
+ and the purpose of that original DARPA Grand Challenge,
1124
+
1125
+ 14:43.280 --> 14:45.520
1126
+ which was to just get a whole bunch
1127
+
1128
+ 14:45.520 --> 14:48.680
1129
+ of really brilliant people exploring the space
1130
+
1131
+ 14:48.680 --> 14:49.880
1132
+ and pushing the limits.
1133
+
1134
+ 14:49.880 --> 14:51.960
1135
+ And, and I think like to this day,
1136
+
1137
+ 14:51.960 --> 14:54.160
1138
+ that DARPA challenge with its, you know,
1139
+
1140
+ 14:54.160 --> 14:57.120
1141
+ million dollar prize pool was probably one
1142
+
1143
+ 14:57.120 --> 15:00.840
1144
+ of the most effective, you know, uses of taxpayer money,
1145
+
1146
+ 15:00.840 --> 15:03.320
1147
+ dollar for dollar that I've seen, you know,
1148
+
1149
+ 15:03.320 --> 15:06.720
1150
+ because that, that small sort of initiative
1151
+
1152
+ 15:06.720 --> 15:10.440
1153
+ that DARPA put put out sort of, in my view,
1154
+
1155
+ 15:10.440 --> 15:12.560
1156
+ was the catalyst or the tipping point
1157
+
1158
+ 15:12.560 --> 15:14.800
1159
+ for this, this whole next wave
1160
+
1161
+ 15:14.800 --> 15:16.120
1162
+ of autonomous vehicle development.
1163
+
1164
+ 15:16.120 --> 15:17.160
1165
+ So that was pretty cool.
1166
+
1167
+ 15:17.160 --> 15:20.240
1168
+ So let me jump around a little bit on that point.
1169
+
1170
+ 15:20.240 --> 15:23.240
1171
+ They also did the urban challenge where it was in the city,
1172
+
1173
+ 15:23.240 --> 15:25.920
1174
+ but it was very artificial and there's no pedestrians
1175
+
1176
+ 15:25.920 --> 15:27.640
1177
+ and there's very little human involvement
1178
+
1179
+ 15:27.640 --> 15:30.480
1180
+ except a few professional drivers.
1181
+
1182
+ 15:30.480 --> 15:31.640
1183
+ Yeah.
1184
+
1185
+ 15:31.640 --> 15:33.560
1186
+ Do you think there's room, and then there was
1187
+
1188
+ 15:33.560 --> 15:35.360
1189
+ the robotics challenge with humanoid robots?
1190
+
1191
+ 15:35.360 --> 15:36.200
1192
+ Right.
1193
+
1194
+ 15:36.200 --> 15:38.720
1195
+ So in your now role as looking at this,
1196
+
1197
+ 15:38.720 --> 15:41.640
1198
+ you're trying to solve one of the, you know,
1199
+
1200
+ 15:41.640 --> 15:43.120
1201
+ autonomous driving, one of the harder,
1202
+
1203
+ 15:43.120 --> 15:45.480
1204
+ more difficult places in San Francisco.
1205
+
1206
+ 15:45.480 --> 15:47.320
1207
+ Is there a role for DARPA to step in
1208
+
1209
+ 15:47.320 --> 15:49.680
1210
+ to also kind of help out, like,
1211
+
1212
+ 15:49.680 --> 15:54.000
1213
+ challenge with new ideas, specifically pedestrians
1214
+
1215
+ 15:54.000 --> 15:55.880
1216
+ and so on, all these kinds of interesting things?
1217
+
1218
+ 15:55.880 --> 15:57.680
1219
+ Well, I haven't thought about it from that perspective.
1220
+
1221
+ 15:57.680 --> 15:59.280
1222
+ Is there anything DARPA could do today
1223
+
1224
+ 15:59.280 --> 16:00.680
1225
+ to further accelerate things?
1226
+
1227
+ 16:00.680 --> 16:04.880
1228
+ And I would say my instinct is that that's maybe not
1229
+
1230
+ 16:04.880 --> 16:07.040
1231
+ the highest and best use of their resources and time
1232
+
1233
+ 16:07.040 --> 16:10.640
1234
+ because, like, kick starting and spinning up the flywheel
1235
+
1236
+ 16:10.640 --> 16:12.720
1237
+ is I think what they did in this case
1238
+
1239
+ 16:12.720 --> 16:14.200
1240
+ for very, very little money.
1241
+
1242
+ 16:14.200 --> 16:15.800
1243
+ But today this has become,
1244
+
1245
+ 16:16.880 --> 16:19.000
1246
+ this has become, like, commercially interesting
1247
+
1248
+ 16:19.000 --> 16:20.680
1249
+ to very large companies and the amount of money
1250
+
1251
+ 16:20.680 --> 16:23.040
1252
+ going into it and the amount of people,
1253
+
1254
+ 16:23.040 --> 16:24.840
1255
+ like, going through your class and learning
1256
+
1257
+ 16:24.840 --> 16:27.200
1258
+ about these things and developing these skills
1259
+
1260
+ 16:27.200 --> 16:29.120
1261
+ is just, you know, orders of magnitude
1262
+
1263
+ 16:29.120 --> 16:30.840
1264
+ more than it was back then.
1265
+
1266
+ 16:30.840 --> 16:33.080
1267
+ And so there's enough momentum and inertia
1268
+
1269
+ 16:33.080 --> 16:36.520
1270
+ and energy and investment dollars into this space right now
1271
+
1272
+ 16:36.520 --> 16:39.960
1273
+ that I don't, I don't, I think they're,
1274
+
1275
+ 16:39.960 --> 16:42.200
1276
+ I think they're, they can just say mission accomplished
1277
+
1278
+ 16:42.200 --> 16:44.320
1279
+ and move on to the next area of technology
1280
+
1281
+ 16:44.320 --> 16:45.360
1282
+ that needs help.
1283
+
1284
+ 16:46.280 --> 16:49.120
1285
+ So then stepping back to MIT,
1286
+
1287
+ 16:49.120 --> 16:50.880
1288
+ you left MIT Junior Junior year,
1289
+
1290
+ 16:50.880 --> 16:53.080
1291
+ what was that decision like?
1292
+
1293
+ 16:53.080 --> 16:55.680
1294
+ As I said, I always wanted to do a company
1295
+
1296
+ 16:55.680 --> 16:59.080
1297
+ or start a company and this opportunity landed in my lap
1298
+
1299
+ 16:59.080 --> 17:01.960
1300
+ which was a couple of guys from Yale
1301
+
1302
+ 17:01.960 --> 17:04.240
1303
+ were starting a new company and I Googled them
1304
+
1305
+ 17:04.240 --> 17:06.720
1306
+ and found that they had started a company previously
1307
+
1308
+ 17:06.720 --> 17:10.640
1309
+ and sold it actually on eBay for about a quarter million bucks
1310
+
1311
+ 17:10.640 --> 17:12.880
1312
+ which was a pretty interesting story.
1313
+
1314
+ 17:12.880 --> 17:15.760
1315
+ But so I thought to myself, these guys are, you know,
1316
+
1317
+ 17:15.760 --> 17:19.080
1318
+ rock star entrepreneurs, they've done this before,
1319
+
1320
+ 17:19.080 --> 17:20.720
1321
+ they must be driving around in Ferraris
1322
+
1323
+ 17:20.720 --> 17:22.320
1324
+ because they sold their company.
1325
+
1326
+ 17:23.320 --> 17:26.000
1327
+ And, you know, I thought I could learn a lot from them.
1328
+
1329
+ 17:26.000 --> 17:28.320
1330
+ So I teamed up with those guys and, you know,
1331
+
1332
+ 17:28.320 --> 17:32.000
1333
+ went out during, went out to California during IAP
1334
+
1335
+ 17:32.000 --> 17:36.440
1336
+ which is MIT's month off on one way ticket
1337
+
1338
+ 17:36.440 --> 17:38.040
1339
+ and basically never went back.
1340
+
1341
+ 17:38.040 --> 17:39.280
1342
+ We were having so much fun,
1343
+
1344
+ 17:39.280 --> 17:42.040
1345
+ we felt like we were building something and creating something
1346
+
1347
+ 17:42.040 --> 17:44.440
1348
+ and it was gonna be interesting that, you know,
1349
+
1350
+ 17:44.440 --> 17:46.800
1351
+ I was just all in and got completely hooked
1352
+
1353
+ 17:46.800 --> 17:49.640
1354
+ and that business was Justin TV
1355
+
1356
+ 17:49.640 --> 17:52.400
1357
+ which is originally a reality show about a guy named Justin
1358
+
1359
+ 17:53.720 --> 17:57.120
1360
+ which morphed into a live video streaming platform
1361
+
1362
+ 17:57.120 --> 18:00.320
1363
+ which then morphed into what is Twitch today.
1364
+
1365
+ 18:00.320 --> 18:03.720
1366
+ So that was quite an unexpected journey.
1367
+
1368
+ 18:04.720 --> 18:07.000
1369
+ So no regrets?
1370
+
1371
+ 18:07.000 --> 18:07.840
1372
+ No.
1373
+
1374
+ 18:07.840 --> 18:09.120
1375
+ Looking back, it was just an obvious,
1376
+
1377
+ 18:09.120 --> 18:10.720
1378
+ I mean, one way ticket.
1379
+
1380
+ 18:10.720 --> 18:12.760
1381
+ I mean, if we just pause on that for a second,
1382
+
1383
+ 18:12.760 --> 18:17.680
1384
+ there was no, how did you know these were the right guys?
1385
+
1386
+ 18:17.680 --> 18:19.520
1387
+ This is the right decision.
1388
+
1389
+ 18:19.520 --> 18:22.640
1390
+ You didn't think it was just follow the heart kind of thing?
1391
+
1392
+ 18:22.640 --> 18:24.520
1393
+ Well, I didn't know, but, you know,
1394
+
1395
+ 18:24.520 --> 18:26.520
1396
+ just trying something for a month during IAP
1397
+
1398
+ 18:26.520 --> 18:28.240
1399
+ seems pretty low risk, right?
1400
+
1401
+ 18:28.240 --> 18:30.760
1402
+ And then, you know, well, maybe I'll take a semester off.
1403
+
1404
+ 18:30.760 --> 18:32.280
1405
+ MIT's pretty flexible about that.
1406
+
1407
+ 18:32.280 --> 18:33.840
1408
+ You can always go back, right?
1409
+
1410
+ 18:33.840 --> 18:35.680
1411
+ And then after two or three cycles of that,
1412
+
1413
+ 18:35.680 --> 18:36.960
1414
+ I eventually threw in the towel.
1415
+
1416
+ 18:36.960 --> 18:41.920
1417
+ But, you know, I think it's, I guess in that case,
1418
+
1419
+ 18:41.920 --> 18:44.880
1420
+ I felt like I could always hit the undo button if I had to.
1421
+
1422
+ 18:44.880 --> 18:45.720
1423
+ Right.
1424
+
1425
+ 18:45.720 --> 18:49.600
1426
+ But nevertheless, from when you look in retrospect,
1427
+
1428
+ 18:49.600 --> 18:51.680
1429
+ I mean, it seems like a brave decision.
1430
+
1431
+ 18:51.680 --> 18:53.200
1432
+ You know, it would be difficult
1433
+
1434
+ 18:53.200 --> 18:54.320
1435
+ for a lot of people to make.
1436
+
1437
+ 18:54.320 --> 18:55.440
1438
+ It wasn't as popular.
1439
+
1440
+ 18:55.440 --> 18:58.120
1441
+ I'd say that the general, you know,
1442
+
1443
+ 18:58.120 --> 19:01.480
1444
+ flux of people out of MIT at the time was mostly
1445
+
1446
+ 19:01.480 --> 19:04.120
1447
+ into, you know, finance or consulting jobs
1448
+
1449
+ 19:04.120 --> 19:05.720
1450
+ in Boston or New York.
1451
+
1452
+ 19:05.720 --> 19:07.840
1453
+ And very few people were going to California
1454
+
1455
+ 19:07.840 --> 19:09.080
1456
+ to start companies.
1457
+
1458
+ 19:09.080 --> 19:12.240
1459
+ But today, I'd say that's probably inverted,
1460
+
1461
+ 19:12.240 --> 19:15.400
1462
+ which is just a sign of a sign of the times, I guess.
1463
+
1464
+ 19:15.400 --> 19:16.080
1465
+ Yeah.
1466
+
1467
+ 19:16.080 --> 19:21.560
1468
+ So there's a story about midnight of March 18, 2007,
1469
+
1470
+ 19:21.560 --> 19:25.720
1471
+ where TechCrunch, I guess, announced Justin TV earlier
1472
+
1473
+ 19:25.720 --> 19:29.080
1474
+ than it was supposed to by a few hours.
1475
+
1476
+ 19:29.080 --> 19:30.360
1477
+ The site didn't work.
1478
+
1479
+ 19:30.360 --> 19:32.520
1480
+ I don't know if any of this is true, you can tell me.
1481
+
1482
+ 19:32.520 --> 19:36.200
1483
+ And you and one of the folks at Justin TV,
1484
+
1485
+ 19:36.200 --> 19:39.240
1486
+ Emmett Shear, coded through the night.
1487
+
1488
+ 19:39.240 --> 19:41.440
1489
+ Can you take me through that experience?
1490
+
1491
+ 19:41.440 --> 19:47.160
1492
+ So let me say a few nice things that the article I read quoted
1493
+
1494
+ 19:47.160 --> 19:49.600
1495
+ Justin Khan said that you were known for bureau coding
1496
+
1497
+ 19:49.600 --> 19:53.520
1498
+ through problems and being a creative genius.
1499
+
1500
+ 19:53.520 --> 19:59.440
1501
+ So on that night, what was going through your head?
1502
+
1503
+ 19:59.440 --> 20:01.400
1504
+ Or maybe I put another way, how do you
1505
+
1506
+ 20:01.400 --> 20:02.520
1507
+ solve these problems?
1508
+
1509
+ 20:02.520 --> 20:05.480
1510
+ What's your approach to solving these kind of problems
1511
+
1512
+ 20:05.480 --> 20:07.080
1513
+ where the line between success and failure
1514
+
1515
+ 20:07.080 --> 20:09.680
1516
+ seems to be pretty thin?
1517
+
1518
+ 20:09.680 --> 20:10.680
1519
+ That's a good question.
1520
+
1521
+ 20:10.680 --> 20:13.400
1522
+ Well, first of all, that's nice of Justin to say that.
1523
+
1524
+ 20:13.400 --> 20:16.880
1525
+ I think I would have been maybe 21 years old then
1526
+
1527
+ 20:16.880 --> 20:18.800
1528
+ and not very experienced at programming.
1529
+
1530
+ 20:18.800 --> 20:22.680
1531
+ But as with everything in a startup,
1532
+
1533
+ 20:22.680 --> 20:24.720
1534
+ you're sort of racing against the clock.
1535
+
1536
+ 20:24.720 --> 20:27.320
1537
+ And so our plan was the second we
1538
+
1539
+ 20:27.320 --> 20:32.600
1540
+ had this live streaming camera backpack up and running
1541
+
1542
+ 20:32.600 --> 20:33.600
1543
+ where Justin could wear it.
1544
+
1545
+ 20:33.600 --> 20:35.320
1546
+ And no matter where he went in the city,
1547
+
1548
+ 20:35.320 --> 20:36.400
1549
+ it would be streaming live video.
1550
+
1551
+ 20:36.400 --> 20:37.960
1552
+ And this is even before the iPhones,
1553
+
1554
+ 20:37.960 --> 20:40.880
1555
+ this is like hard to do back then.
1556
+
1557
+ 20:40.880 --> 20:41.800
1558
+ We would launch.
1559
+
1560
+ 20:41.800 --> 20:45.160
1561
+ And so we thought we were there and the backpack was working.
1562
+
1563
+ 20:45.160 --> 20:47.080
1564
+ And then we sent out all the emails
1565
+
1566
+ 20:47.080 --> 20:49.920
1567
+ to launch the company and do the press thing.
1568
+
1569
+ 20:49.920 --> 20:53.000
1570
+ And then we weren't quite actually there.
1571
+
1572
+ 20:53.000 --> 20:55.880
1573
+ And then we thought, oh, well, they're
1574
+
1575
+ 20:55.880 --> 21:00.160
1576
+ not going to announce it until maybe 10 AM the next morning.
1577
+
1578
+ 21:00.160 --> 21:01.880
1579
+ And it's, I don't know, it's 5 PM now.
1580
+
1581
+ 21:01.880 --> 21:03.640
1582
+ So how many hours do we have left?
1583
+
1584
+ 21:03.640 --> 21:08.000
1585
+ What is that, like 17 hours to go?
1586
+
1587
+ 21:08.000 --> 21:10.440
1588
+ And that was going to be fine.
1589
+
1590
+ 21:10.440 --> 21:11.440
1591
+ Was the problem obvious?
1592
+
1593
+ 21:11.440 --> 21:13.280
1594
+ Did you understand what could possibly be?
1595
+
1596
+ 21:13.280 --> 21:16.520
1597
+ Like how complicated was the system at that point?
1598
+
1599
+ 21:16.520 --> 21:18.840
1600
+ It was pretty messy.
1601
+
1602
+ 21:18.840 --> 21:22.760
1603
+ So to get a live video feed that looked decent working
1604
+
1605
+ 21:22.760 --> 21:25.680
1606
+ from anywhere in San Francisco, I
1607
+
1608
+ 21:25.680 --> 21:28.600
1609
+ put together this system where we had like three or four
1610
+
1611
+ 21:28.600 --> 21:29.880
1612
+ cell phone data modems.
1613
+
1614
+ 21:29.880 --> 21:32.200
1615
+ And they were like, we take the video stream
1616
+
1617
+ 21:32.200 --> 21:35.600
1618
+ and sort of spray it across these three or four modems
1619
+
1620
+ 21:35.600 --> 21:38.080
1621
+ and then try to catch all the packets on the other side
1622
+
1623
+ 21:38.080 --> 21:39.480
1624
+ with unreliable cell phone networks.
1625
+
1626
+ 21:39.480 --> 21:41.080
1627
+ Pretty low level networking.
1628
+
1629
+ 21:41.080 --> 21:41.720
1630
+ Yeah.
1631
+
1632
+ 21:41.720 --> 21:44.760
1633
+ And putting these sort of protocols
1634
+
1635
+ 21:44.760 --> 21:47.560
1636
+ on top of all that to reassemble and reorder the packets
1637
+
1638
+ 21:47.560 --> 21:49.720
1639
+ and have time buffers and error correction
1640
+
1641
+ 21:49.720 --> 21:50.960
1642
+ and all that kind of stuff.
1643
+
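The streaming rig described above sprays one video stream across several unreliable cellular modems and reassembles it on the server. A minimal sketch of the reordering idea, assuming packets tagged with sequence numbers, might look like the following; the class name, the max_gap policy, and the in-memory heap are assumptions for illustration, not the actual Justin.tv protocol.

```python
import heapq

class ReorderBuffer:
    """Reassemble packets that arrive out of order from multiple links.

    Packets carry a sequence number; they are released in order, and a
    bounded gap lets playback continue past packets that never arrive.
    """
    def __init__(self, max_gap=32):
        self.heap = []           # min-heap of (seq, payload) tuples
        self.next_seq = 0        # next sequence number expected downstream
        self.max_gap = max_gap   # how far we'll jump over missing packets

    def push(self, seq, payload):
        if seq >= self.next_seq:                  # drop packets that arrive too late
            heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Return payloads that are in order, skipping gaps we give up on."""
        ready = []
        while self.heap:
            seq, payload = self.heap[0]
            if seq == self.next_seq or seq - self.next_seq > self.max_gap:
                heapq.heappop(self.heap)
                self.next_seq = seq + 1           # advance past any lost packets
                ready.append(payload)
            else:
                break                             # keep buffering until the gap fills
        return ready
```

A real implementation would add the time-based buffering and error correction mentioned in the transcript, rather than simply skipping over the gaps as this sketch does.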
1644
+ 21:50.960 --> 21:53.960
1645
+ And the night before, it was just
1646
+
1647
+ 21:53.960 --> 21:56.280
1648
+ staticky. Every once in a while, the image would go
1649
+
1650
+ 21:56.280 --> 21:59.640
1651
+ staticky and there would be this horrible like screeching
1652
+
1653
+ 21:59.640 --> 22:02.080
1654
+ audio noise because the audio was also corrupted.
1655
+
1656
+ 22:02.080 --> 22:04.600
1657
+ And this would happen like every five to 10 minutes or so.
1658
+
1659
+ 22:04.600 --> 22:08.080
1660
+ And it was really, you know, off-putting to the viewers.
1661
+
1662
+ 22:08.080 --> 22:08.880
1663
+ Yeah.
1664
+
1665
+ 22:08.880 --> 22:10.200
1666
+ How do you tackle that problem?
1667
+
1668
+ 22:10.200 --> 22:13.280
1669
+ What was the, you're just freaking out behind a computer.
1670
+
1671
+ 22:13.280 --> 22:16.880
1672
+ There's the word, are there other folks working on this problem?
1673
+
1674
+ 22:16.880 --> 22:18.120
1675
+ Like were you behind a whiteboard?
1676
+
1677
+ 22:18.120 --> 22:22.000
1678
+ Were you doing pair coding?
1679
+
1680
+ 22:22.000 --> 22:23.840
1681
+ Yeah, it's a little lonely because there's four of us
1682
+
1683
+ 22:23.840 --> 22:26.880
1684
+ working on the company and only two people really wrote code.
1685
+
1686
+ 22:26.880 --> 22:29.200
1687
+ And Emmett wrote the website in the chat system
1688
+
1689
+ 22:29.200 --> 22:32.400
1690
+ and I wrote the software for this video streaming device
1691
+
1692
+ 22:32.400 --> 22:34.280
1693
+ and video server.
1694
+
1695
+ 22:34.280 --> 22:36.240
1696
+ And so, you know, it was my sole responsibility
1697
+
1698
+ 22:36.240 --> 22:37.320
1699
+ to figure that out.
1700
+
1701
+ 22:37.320 --> 22:39.440
1702
+ And I think it's those, you know,
1703
+
1704
+ 22:39.440 --> 22:42.200
1705
+ setting deadlines, trying to move quickly and everything
1706
+
1707
+ 22:42.200 --> 22:44.200
1708
+ where you're in that moment of intense pressure
1709
+
1710
+ 22:44.200 --> 22:46.960
1711
+ that sometimes people do their best and most interesting work.
1712
+
1713
+ 22:46.960 --> 22:48.800
1714
+ And so even though that was a terrible moment,
1715
+
1716
+ 22:48.800 --> 22:50.760
1717
+ I look back on it fondly because that's like, you know,
1718
+
1719
+ 22:50.760 --> 22:54.720
1720
+ that's one of those character defining moments, I think.
1721
+
1722
+ 22:54.720 --> 22:59.480
1723
+ So in 2013, October, you founded Cruise Automation.
1724
+
1725
+ 22:59.480 --> 23:00.200
1726
+ Yeah.
1727
+
1728
+ 23:00.200 --> 23:04.200
1729
+ So progressing forward, another exceptionally successful
1730
+
1731
+ 23:04.200 --> 23:09.920
1732
+ company was acquired by GM in 2016 for $1 billion.
1733
+
1734
+ 23:09.920 --> 23:14.120
1735
+ But in October 2013, what was on your mind?
1736
+
1737
+ 23:14.120 --> 23:16.360
1738
+ What was the plan?
1739
+
1740
+ 23:16.360 --> 23:19.840
1741
+ How does one seriously start to tackle
1742
+
1743
+ 23:19.840 --> 23:22.800
1744
+ one of the hardest, most important and impactful
1745
+
1746
+ 23:22.800 --> 23:24.960
1747
+ robotics problems of our age?
1748
+
1749
+ 23:24.960 --> 23:28.760
1750
+ After going through Twitch, Twitch was,
1751
+
1752
+ 23:28.760 --> 23:31.480
1753
+ and is today pretty successful.
1754
+
1755
+ 23:31.480 --> 23:36.880
1756
+ But the work was, the result was entertainment mostly.
1757
+
1758
+ 23:36.880 --> 23:39.840
1759
+ Like the better the product was, the more we would entertain
1760
+
1761
+ 23:39.840 --> 23:42.760
1762
+ people and then, you know, make money on the ad revenues
1763
+
1764
+ 23:42.760 --> 23:43.760
1765
+ and other things.
1766
+
1767
+ 23:43.760 --> 23:45.000
1768
+ And that was a good thing.
1769
+
1770
+ 23:45.000 --> 23:46.320
1771
+ It felt good to entertain people.
1772
+
1773
+ 23:46.320 --> 23:49.120
1774
+ But I figured like, you know, what is really the point
1775
+
1776
+ 23:49.120 --> 23:51.120
1777
+ of becoming a really good engineer
1778
+
1779
+ 23:51.120 --> 23:53.160
1780
+ and developing these skills other than, you know,
1781
+
1782
+ 23:53.160 --> 23:53.960
1783
+ my own enjoyment.
1784
+
1785
+ 23:53.960 --> 23:55.760
1786
+ And I realized I wanted something that scratched
1787
+
1788
+ 23:55.760 --> 23:57.680
1789
+ more of an existential itch, like something
1790
+
1791
+ 23:57.680 --> 23:59.440
1792
+ that truly matters.
1793
+
1794
+ 23:59.440 --> 24:03.680
1795
+ And so I basically made this list of requirements
1796
+
1797
+ 24:03.680 --> 24:06.160
1798
+ for a new, if I was going to do another company.
1799
+
1800
+ 24:06.160 --> 24:08.000
1801
+ And the one thing I knew in the back of my head
1802
+
1803
+ 24:08.000 --> 24:12.320
1804
+ that Twitch took like eight years to become successful.
1805
+
1806
+ 24:12.320 --> 24:14.880
1807
+ And so whatever I do, I better be willing to commit,
1808
+
1809
+ 24:14.880 --> 24:17.000
1810
+ you know, at least 10 years to something.
1811
+
1812
+ 24:17.000 --> 24:20.400
1813
+ And when you think about things from that perspective,
1814
+
1815
+ 24:20.400 --> 24:21.760
1816
+ you certainly, I think, raise the bar
1817
+
1818
+ 24:21.760 --> 24:23.200
1819
+ on what you choose to work on.
1820
+
1821
+ 24:23.200 --> 24:24.320
1822
+ So for me, the three things where
1823
+
1824
+ 24:24.320 --> 24:27.120
1825
+ it had to be something where the technology itself
1826
+
1827
+ 24:27.120 --> 24:28.960
1828
+ determines the success of the product,
1829
+
1830
+ 24:28.960 --> 24:31.840
1831
+ like hard, really juicy technology problems,
1832
+
1833
+ 24:31.840 --> 24:33.600
1834
+ because that's what motivates me.
1835
+
1836
+ 24:33.600 --> 24:36.280
1837
+ And then it had to have a direct and positive impact
1838
+
1839
+ 24:36.280 --> 24:37.640
1840
+ on society in some way.
1841
+
1842
+ 24:37.640 --> 24:39.200
1843
+ So an example would be like, you know,
1844
+
1845
+ 24:39.200 --> 24:41.560
1846
+ health care, self driving cars because they save lives,
1847
+
1848
+ 24:41.560 --> 24:43.600
1849
+ other things where there's a clear connection to somehow
1850
+
1851
+ 24:43.600 --> 24:45.200
1852
+ improving other people's lives.
1853
+
1854
+ 24:45.200 --> 24:47.160
1855
+ And the last one is it had to be a big business
1856
+
1857
+ 24:47.160 --> 24:50.200
1858
+ because for the positive impact to matter,
1859
+
1860
+ 24:50.200 --> 24:51.240
1861
+ it's got to be a large scale.
1862
+
1863
+ 24:51.240 --> 24:52.080
1864
+ Scale, yeah.
1865
+
1866
+ 24:52.080 --> 24:53.840
1867
+ And I was thinking about that for a while
1868
+
1869
+ 24:53.840 --> 24:55.960
1870
+ and I made like a, I tried writing a Gmail clone
1871
+
1872
+ 24:55.960 --> 24:57.640
1873
+ and looked at some other ideas.
1874
+
1875
+ 24:57.640 --> 24:59.480
1876
+ And then it just sort of light bulb went off
1877
+
1878
+ 24:59.480 --> 25:00.440
1879
+ like self driving cars.
1880
+
1881
+ 25:00.440 --> 25:02.360
1882
+ Like that was the most fun I had ever had
1883
+
1884
+ 25:02.360 --> 25:04.040
1885
+ in college working on that.
1886
+
1887
+ 25:04.040 --> 25:05.960
1888
+ And like, well, what's the state of the technology
1889
+
1890
+ 25:05.960 --> 25:08.440
1891
+ has been 10 years, maybe times have changed
1892
+
1893
+ 25:08.440 --> 25:10.800
1894
+ and maybe now is the time to make this work.
1895
+
1896
+ 25:10.800 --> 25:13.320
1897
+ And I poked around and looked at the only other thing
1898
+
1899
+ 25:13.320 --> 25:15.480
1900
+ out there really at the time was the Google self driving
1901
+
1902
+ 25:15.480 --> 25:16.680
1903
+ car project.
1904
+
1905
+ 25:16.680 --> 25:19.600
1906
+ And I thought surely there's a way to, you know,
1907
+
1908
+ 25:19.600 --> 25:21.600
1909
+ have an entrepreneur mindset and sort of solve
1910
+
1911
+ 25:21.600 --> 25:23.520
1912
+ the minimum viable product here.
1913
+
1914
+ 25:23.520 --> 25:25.200
1915
+ And so I just took the plunge right then and there
1916
+
1917
+ 25:25.200 --> 25:26.680
1918
+ and said, this, this is something I know
1919
+
1920
+ 25:26.680 --> 25:27.840
1921
+ I can commit 10 years to.
1922
+
1923
+ 25:27.840 --> 25:30.760
1924
+ It's probably the greatest applied AI problem
1925
+
1926
+ 25:30.760 --> 25:32.000
1927
+ of our generation.
1928
+
1929
+ 25:32.000 --> 25:34.240
1930
+ And if it works, it's going to be both a huge business
1931
+
1932
+ 25:34.240 --> 25:37.040
1933
+ and therefore like probably the most positive impact
1934
+
1935
+ 25:37.040 --> 25:38.280
1936
+ I can possibly have on the world.
1937
+
1938
+ 25:38.280 --> 25:40.920
1939
+ So after that light bulb went off,
1940
+
1941
+ 25:40.920 --> 25:43.000
1942
+ I went all in on cruise immediately
1943
+
1944
+ 25:43.000 --> 25:45.560
1945
+ and got to work.
1946
+
1947
+ 25:45.560 --> 25:47.360
1948
+ Did you have an idea how to solve this problem?
1949
+
1950
+ 25:47.360 --> 25:49.640
1951
+ Which aspect of the problem to solve?
1952
+
1953
+ 25:49.640 --> 25:53.720
1954
+ You know, slow, like we just had Oliver from Voyage here
1955
+
1956
+ 25:53.720 --> 25:56.560
1957
+ slow moving retirement communities,
1958
+
1959
+ 25:56.560 --> 25:58.080
1960
+ urban driving, highway driving.
1961
+
1962
+ 25:58.080 --> 26:00.400
1963
+ Did you have like, did you have a vision
1964
+
1965
+ 26:00.400 --> 26:03.560
1966
+ of the city of the future or, you know,
1967
+
1968
+ 26:03.560 --> 26:06.400
1969
+ the transportation is largely automated,
1970
+
1971
+ 26:06.400 --> 26:07.240
1972
+ that kind of thing.
1973
+
1974
+ 26:07.240 --> 26:12.240
1975
+ Or was it sort of more fuzzy and gray area than that?
1976
+
1977
+ 26:12.240 --> 26:16.640
1978
+ My analysis of the situation is that Google's putting a lot,
1979
+
1980
+ 26:16.640 --> 26:19.200
1981
+ had been putting a lot of money into that project.
1982
+
1983
+ 26:19.200 --> 26:20.760
1984
+ They had a lot more resources.
1985
+
1986
+ 26:20.760 --> 26:23.720
1987
+ And so, and they still hadn't cracked
1988
+
1989
+ 26:23.720 --> 26:26.200
1990
+ the fully driverless car.
1991
+
1992
+ 26:26.200 --> 26:28.520
1993
+ You know, this is 2013, I guess.
1994
+
1995
+ 26:29.480 --> 26:33.360
1996
+ So I thought, what can I do to sort of go from zero
1997
+
1998
+ 26:33.360 --> 26:35.600
1999
+ to, you know, significant scale
2000
+
2001
+ 26:35.600 --> 26:37.280
2002
+ so I can actually solve the real problem,
2003
+
2004
+ 26:37.280 --> 26:38.640
2005
+ which is the driverless cars.
2006
+
2007
+ 26:38.640 --> 26:40.480
2008
+ And I thought, here's the strategy.
2009
+
2010
+ 26:40.480 --> 26:44.080
2011
+ We'll start by doing a really simple problem
2012
+
2013
+ 26:44.080 --> 26:45.560
2014
+ or solving a really simple problem
2015
+
2016
+ 26:45.560 --> 26:48.080
2017
+ that creates value for people.
2018
+
2019
+ 26:48.080 --> 26:50.040
2020
+ So it eventually ended up deciding
2021
+
2022
+ 26:50.040 --> 26:51.800
2023
+ on automating highway driving,
2024
+
2025
+ 26:51.800 --> 26:54.240
2026
+ which is relatively more straightforward
2027
+
2028
+ 26:54.240 --> 26:56.440
2029
+ as long as there's a backup driver there.
2030
+
2031
+ 26:56.440 --> 26:58.480
2032
+ And, you know, the go to market
2033
+
2034
+ 26:58.480 --> 27:00.240
2035
+ will be to retrofit people's cars
2036
+
2037
+ 27:00.240 --> 27:02.240
2038
+ and just sell these products directly.
2039
+
2040
+ 27:02.240 --> 27:04.520
2041
+ And the idea was, we'll take all the revenue
2042
+
2043
+ 27:04.520 --> 27:08.320
2044
+ and profits from that and use it to do the,
2045
+
2046
+ 27:08.320 --> 27:10.920
2047
+ to sort of reinvest that in research for doing
2048
+
2049
+ 27:10.920 --> 27:12.600
2050
+ fully driverless cars.
2051
+
2052
+ 27:12.600 --> 27:13.960
2053
+ And that was the plan.
2054
+
2055
+ 27:13.960 --> 27:15.720
2056
+ The only thing that really changed along the way
2057
+
2058
+ 27:15.720 --> 27:17.360
2059
+ between then and now is,
2060
+
2061
+ 27:17.360 --> 27:19.000
2062
+ we never really launched the first product.
2063
+
2064
+ 27:19.000 --> 27:21.680
2065
+ We had enough interest from investors
2066
+
2067
+ 27:21.680 --> 27:24.120
2068
+ and enough of a signal that this was something
2069
+
2070
+ 27:24.120 --> 27:25.000
2071
+ that we should be working on,
2072
+
2073
+ 27:25.000 --> 27:28.400
2074
+ that after about a year of working on the highway autopilot,
2075
+
2076
+ 27:28.400 --> 27:31.040
2077
+ we had it working, you know, at a prototype stage,
2078
+
2079
+ 27:31.040 --> 27:33.120
2080
+ but we just completely abandoned that
2081
+
2082
+ 27:33.120 --> 27:34.960
2083
+ and said, we're gonna go all in on driverless cars
2084
+
2085
+ 27:34.960 --> 27:36.480
2086
+ now is the time.
2087
+
2088
+ 27:36.480 --> 27:38.120
2089
+ Can't think of anything that's more exciting.
2090
+
2091
+ 27:38.120 --> 27:39.720
2092
+ And if it works more impactful,
2093
+
2094
+ 27:39.720 --> 27:41.360
2095
+ so we're just gonna go for it.
2096
+
2097
+ 27:41.360 --> 27:43.440
2098
+ The idea of retrofit is kind of interesting.
2099
+
2100
+ 27:43.440 --> 27:44.280
2101
+ Yeah.
2102
+
2103
+ 27:44.280 --> 27:46.880
2104
+ Being able to, it's how you achieve scale.
2105
+
2106
+ 27:46.880 --> 27:47.880
2107
+ It's a really interesting idea,
2108
+
2109
+ 27:47.880 --> 27:51.120
2110
+ is it's something that's still in the back of your mind
2111
+
2112
+ 27:51.120 --> 27:52.800
2113
+ as a possibility?
2114
+
2115
+ 27:52.800 --> 27:53.640
2116
+ Not at all.
2117
+
2118
+ 27:53.640 --> 27:57.080
2119
+ I've come full circle on that one after trying
2120
+
2121
+ 27:57.080 --> 27:58.880
2122
+ to build a retrofit product.
2123
+
2124
+ 27:58.880 --> 28:01.240
2125
+ And I'll touch on some of the complexities of that.
2126
+
2127
+ 28:01.240 --> 28:04.240
2128
+ And then also having been inside an OEM
2129
+
2130
+ 28:04.240 --> 28:05.400
2131
+ and seeing how things work
2132
+
2133
+ 28:05.400 --> 28:08.320
2134
+ and how a vehicle is developed and validated.
2135
+
2136
+ 28:08.320 --> 28:09.360
2137
+ When it comes to something
2138
+
2139
+ 28:09.360 --> 28:11.280
2140
+ that has safety critical implications,
2141
+
2142
+ 28:11.280 --> 28:12.520
2143
+ like controlling the steering
2144
+
2145
+ 28:12.520 --> 28:15.280
2146
+ and other control inputs on your car,
2147
+
2148
+ 28:15.280 --> 28:17.720
2149
+ it's pretty hard to get there with a retrofit.
2150
+
2151
+ 28:17.720 --> 28:20.520
2152
+ Or if you did, even if you did,
2153
+
2154
+ 28:20.520 --> 28:23.280
2155
+ it creates a whole bunch of new complications around
2156
+
2157
+ 28:23.280 --> 28:25.400
2158
+ liability or how did you truly validate that?
2159
+
2160
+ 28:25.400 --> 28:27.480
2161
+ Or, you know, something in the base vehicle fails
2162
+
2163
+ 28:27.480 --> 28:29.880
2164
+ and causes your system to fail, whose fault is it?
2165
+
2166
+ 28:31.560 --> 28:34.080
2167
+ Or if the car's anti lock brake systems
2168
+
2169
+ 28:34.080 --> 28:36.680
2170
+ or other things kick in or the software has been,
2171
+
2172
+ 28:36.680 --> 28:38.240
2173
+ it's different in one version of the car
2174
+
2175
+ 28:38.240 --> 28:40.080
2176
+ you retrofit versus another, and you don't know
2177
+
2178
+ 28:40.080 --> 28:43.000
2179
+ because the manufacturer has updated it behind the scenes.
2180
+
2181
+ 28:43.000 --> 28:45.400
2182
+ There's basically an infinite list of long tail issues
2183
+
2184
+ 28:45.400 --> 28:46.240
2185
+ that can get you.
2186
+
2187
+ 28:46.240 --> 28:47.760
2188
+ And if you're dealing with a safety critical product,
2189
+
2190
+ 28:47.760 --> 28:48.960
2191
+ that's not really acceptable.
2192
+
2193
+ 28:48.960 --> 28:52.160
2194
+ That's a really convincing summary of why
2195
+
2196
+ 28:52.160 --> 28:53.160
2197
+ it's really challenging.
2198
+
2199
+ 28:53.160 --> 28:54.360
2200
+ But I didn't know all that at the time.
2201
+
2202
+ 28:54.360 --> 28:55.480
2203
+ So we tried it anyway.
2204
+
2205
+ 28:55.480 --> 28:57.160
2206
+ But as a pitch also at the time,
2207
+
2208
+ 28:57.160 --> 28:58.400
2209
+ it's a really strong one.
2210
+
2211
+ 28:58.400 --> 29:00.720
2212
+ That's how you achieve scale and that's how you beat
2213
+
2214
+ 29:00.720 --> 29:03.360
2215
+ the current, the leader at the time of Google
2216
+
2217
+ 29:03.360 --> 29:04.720
2218
+ or the only one in the market.
2219
+
2220
+ 29:04.720 --> 29:06.840
2221
+ The other big problem we ran into,
2222
+
2223
+ 29:06.840 --> 29:08.240
2224
+ which is perhaps the biggest problem
2225
+
2226
+ 29:08.240 --> 29:10.280
2227
+ from a business model perspective,
2228
+
2229
+ 29:10.280 --> 29:15.280
2230
+ is we had kind of assumed that we started with an Audi S4
2231
+
2232
+ 29:15.440 --> 29:16.880
2233
+ as the vehicle we retrofitted
2234
+
2235
+ 29:16.880 --> 29:18.760
2236
+ with this highway driving capability.
2237
+
2238
+ 29:18.760 --> 29:21.040
2239
+ And we had kind of assumed that if we just knock out
2240
+
2241
+ 29:21.040 --> 29:23.360
2242
+ like three make and models of vehicle,
2243
+
2244
+ 29:23.360 --> 29:25.880
2245
+ that'll cover like 80% of the San Francisco market.
2246
+
2247
+ 29:25.880 --> 29:27.400
2248
+ Doesn't everyone there drive, I don't know,
2249
+
2250
+ 29:27.400 --> 29:30.240
2251
+ a BMW or a Honda Civic or one of these three cars?
2252
+
2253
+ 29:30.240 --> 29:32.040
2254
+ And then we surveyed our users and we found out
2255
+
2256
+ 29:32.040 --> 29:33.480
2257
+ that it's all over the place.
2258
+
2259
+ 29:33.480 --> 29:36.680
2260
+ We would, to get even a decent number of units sold,
2261
+
2262
+ 29:36.680 --> 29:39.880
2263
+ we'd have to support like 20 or 50 different models.
2264
+
2265
+ 29:39.880 --> 29:42.200
2266
+ And each one is a little butterfly that takes time
2267
+
2268
+ 29:42.200 --> 29:44.800
2269
+ and effort to maintain that retrofit integration
2270
+
2271
+ 29:44.800 --> 29:47.120
2272
+ and custom hardware and all this.
2273
+
2274
+ 29:47.120 --> 29:49.240
2275
+ So it was a tough business.
2276
+
2277
+ 29:49.240 --> 29:54.240
2278
+ So GM manufactures and sells over nine million cars a year.
2279
+
2280
+ 29:54.280 --> 29:58.560
2281
+ And what you with Cruise are trying to do is
2282
+
2283
+ 29:58.560 --> 30:01.160
2284
+ some of the most cutting edge innovation
2285
+
2286
+ 30:01.160 --> 30:03.000
2287
+ in terms of applying AI.
2288
+
2289
+ 30:03.000 --> 30:06.040
2290
+ And so how do those, you've talked about it a little bit
2291
+
2292
+ 30:06.040 --> 30:07.760
2293
+ before, but it's also just fascinating to me,
2294
+
2295
+ 30:07.760 --> 30:09.360
2296
+ we work with a lot of automakers.
2297
+
2298
+ 30:10.560 --> 30:12.880
2299
+ The difference between the gap between Detroit
2300
+
2301
+ 30:12.880 --> 30:14.680
2302
+ and Silicon Valley, let's say,
2303
+
2304
+ 30:14.680 --> 30:17.320
2305
+ just to be sort of poetic about it, I guess.
2306
+
2307
+ 30:17.320 --> 30:18.680
2308
+ How do you close that gap?
2309
+
2310
+ 30:18.680 --> 30:21.480
2311
+ How do you take GM into the future
2312
+
2313
+ 30:21.480 --> 30:24.840
2314
+ where a large part of the fleet would be autonomous perhaps?
2315
+
2316
+ 30:24.840 --> 30:28.520
2317
+ I wanna start by acknowledging that GM is made up of
2318
+
2319
+ 30:28.520 --> 30:30.240
2320
+ tens of thousands of really brilliant,
2321
+
2322
+ 30:30.240 --> 30:32.720
2323
+ motivated people who wanna be a part of the future.
2324
+
2325
+ 30:32.720 --> 30:35.240
2326
+ And so it's pretty fun to work with them.
2327
+
2328
+ 30:35.240 --> 30:37.480
2329
+ The attitude inside a car company like that
2330
+
2331
+ 30:37.480 --> 30:41.240
2332
+ is embracing this transformation and change
2333
+
2334
+ 30:41.240 --> 30:42.360
2335
+ rather than fearing it.
2336
+
2337
+ 30:42.360 --> 30:45.440
2338
+ And I think that's a testament to the leadership at GM
2339
+
2340
+ 30:45.440 --> 30:47.680
2341
+ and that's flowed all the way through to everyone
2342
+
2343
+ 30:47.680 --> 30:49.280
2344
+ you talk to, even the people in the assembly plants
2345
+
2346
+ 30:49.280 --> 30:51.200
2347
+ working on these cars.
2348
+
2349
+ 30:51.200 --> 30:52.040
2350
+ So that's really great.
2351
+
2352
+ 30:52.040 --> 30:55.160
2353
+ So starting from that position makes it a lot easier.
2354
+
2355
+ 30:55.160 --> 30:59.160
2356
+ So then when the people in San Francisco
2357
+
2358
+ 30:59.160 --> 31:01.400
2359
+ at Cruise interact with the people at GM,
2360
+
2361
+ 31:01.400 --> 31:02.960
2362
+ at least we have this common set of values,
2363
+
2364
+ 31:02.960 --> 31:05.000
2365
+ which is that we really want this stuff to work
2366
+
2367
+ 31:05.000 --> 31:06.040
2368
+ because we think it's important
2369
+
2370
+ 31:06.040 --> 31:07.440
2371
+ and we think it's the future.
2372
+
2373
+ 31:08.360 --> 31:11.520
2374
+ That's not to say those two cultures don't clash.
2375
+
2376
+ 31:11.520 --> 31:12.440
2377
+ They absolutely do.
2378
+
2379
+ 31:12.440 --> 31:14.760
2380
+ There's different sort of value systems.
2381
+
2382
+ 31:14.760 --> 31:17.960
2383
+ Like in a car company, the thing that gets you promoted
2384
+
2385
+ 31:17.960 --> 31:22.600
2386
+ and sort of the reward system is following the processes,
2387
+
2388
+ 31:22.600 --> 31:26.080
2389
+ delivering the program on time and on budget.
2390
+
2391
+ 31:26.080 --> 31:30.440
2392
+ So any sort of risk taking is discouraged in many ways
2393
+
2394
+ 31:30.440 --> 31:34.000
2395
+ because if a program is late
2396
+
2397
+ 31:34.000 --> 31:36.200
2398
+ or if you shut down the plant for a day,
2399
+
2400
+ 31:36.200 --> 31:37.560
2401
+ you can count the millions of dollars
2402
+
2403
+ 31:37.560 --> 31:39.600
2404
+ that burn by pretty quickly.
2405
+
2406
+ 31:39.600 --> 31:43.800
2407
+ Whereas I think most Silicon Valley companies
2408
+
2409
+ 31:43.800 --> 31:48.280
2410
+ and in Cruise and the methodology we were employing,
2411
+
2412
+ 31:48.280 --> 31:50.080
2413
+ especially around the time of the acquisition,
2414
+
2415
+ 31:50.080 --> 31:53.800
2416
+ the reward structure is about trying to solve
2417
+
2418
+ 31:53.800 --> 31:56.120
2419
+ these complex problems in any way, shape or form
2420
+
2421
+ 31:56.120 --> 31:59.640
2422
+ or coming up with crazy ideas that 90% of them won't work.
2423
+
2424
+ 31:59.640 --> 32:02.920
2425
+ And so meshing that culture
2426
+
2427
+ 32:02.920 --> 32:05.480
2428
+ of sort of continuous improvement and experimentation
2429
+
2430
+ 32:05.480 --> 32:07.400
2431
+ with one where everything needs to be
2432
+
2433
+ 32:07.400 --> 32:08.480
2434
+ rigorously defined up front
2435
+
2436
+ 32:08.480 --> 32:12.760
2437
+ so that you never slip a deadline or miss a budget
2438
+
2439
+ 32:12.760 --> 32:13.600
2440
+ was a pretty big challenge
2441
+
2442
+ 32:13.600 --> 32:16.960
2443
+ and that we're over three years in now
2444
+
2445
+ 32:16.960 --> 32:18.360
2446
+ after the acquisition.
2447
+
2448
+ 32:18.360 --> 32:20.480
2449
+ And I'd say like the investment we made
2450
+
2451
+ 32:20.480 --> 32:23.600
2452
+ in figuring out how to work together successfully
2453
+
2454
+ 32:23.600 --> 32:24.440
2455
+ and who should do what
2456
+
2457
+ 32:24.440 --> 32:26.360
2458
+ and how we bridge the gaps
2459
+
2460
+ 32:26.360 --> 32:27.680
2461
+ between these very different systems
2462
+
2463
+ 32:27.680 --> 32:29.520
2464
+ and way of doing engineering work
2465
+
2466
+ 32:29.520 --> 32:30.920
2467
+ is now one of our greatest assets
2468
+
2469
+ 32:30.920 --> 32:32.320
2470
+ because I think we have this really powerful thing
2471
+
2472
+ 32:32.320 --> 32:35.560
2473
+ but for a while it was both GM and Cruise
2474
+
2475
+ 32:35.560 --> 32:37.440
2476
+ were very steep on the learning curve.
2477
+
2478
+ 32:37.440 --> 32:38.920
2479
+ Yeah, so I'm sure it was very stressful.
2480
+
2481
+ 32:38.920 --> 32:39.960
2482
+ It's really important work
2483
+
2484
+ 32:39.960 --> 32:43.680
2485
+ because that's how to revolutionize the transportation.
2486
+
2487
+ 32:43.680 --> 32:46.640
2488
+ Really to revolutionize any system,
2489
+
2490
+ 32:46.640 --> 32:48.200
2491
+ you look at the healthcare system
2492
+
2493
+ 32:48.200 --> 32:49.680
2494
+ or you look at the legal system.
2495
+
2496
+ 32:49.680 --> 32:52.040
2497
+ I have people like lawyers come up to me all the time
2498
+
2499
+ 32:52.040 --> 32:53.920
2500
+ like everything they're working on
2501
+
2502
+ 32:53.920 --> 32:55.960
2503
+ can easily be automated.
2504
+
2505
+ 32:55.960 --> 32:57.480
2506
+ But then that's not a good feeling.
2507
+
2508
+ 32:57.480 --> 32:58.320
2509
+ Yeah.
2510
+
2511
+ 32:58.320 --> 32:59.160
2512
+ Well, it's not a good feeling,
2513
+
2514
+ 32:59.160 --> 33:01.200
2515
+ but also there's no way to automate
2516
+
2517
+ 33:01.200 --> 33:06.200
2518
+ because the entire infrastructure is really based
2519
+
2520
+ 33:06.360 --> 33:08.360
2521
+ is older and it moves very slowly.
2522
+
2523
+ 33:08.360 --> 33:11.560
2524
+ And so how do you close the gap between?
2525
+
2526
+ 33:11.560 --> 33:13.880
2527
+ I haven't, how can I replace?
2528
+
2529
+ 33:13.880 --> 33:15.720
2530
+ Of course, lawyers won't be replaced with an app
2531
+
2532
+ 33:15.720 --> 33:17.920
2533
+ but you could replace a lot of aspects
2534
+
2535
+ 33:17.920 --> 33:20.160
2536
+ when most of the data is still on paper.
2537
+
2538
+ 33:20.160 --> 33:23.400
2539
+ And so the same thing with automotive.
2540
+
2541
+ 33:23.400 --> 33:26.080
2542
+ I mean, it's fundamentally software.
2543
+
2544
+ 33:26.080 --> 33:28.560
2545
+ So it's basically hiring software engineers.
2546
+
2547
+ 33:28.560 --> 33:30.320
2548
+ It's thinking of software world.
2549
+
2550
+ 33:30.320 --> 33:32.560
2551
+ I mean, I'm pretty sure nobody in Silicon Valley
2552
+
2553
+ 33:32.560 --> 33:34.640
2554
+ has ever hit a deadline.
2555
+
2556
+ 33:34.640 --> 33:36.000
2557
+ So and then on GM.
2558
+
2559
+ 33:36.000 --> 33:37.400
2560
+ That's probably true, yeah.
2561
+
2562
+ 33:37.400 --> 33:39.920
2563
+ And GM side is probably the opposite.
2564
+
2565
+ 33:39.920 --> 33:42.720
2566
+ So that's that culture gap is really fascinating.
2567
+
2568
+ 33:42.720 --> 33:45.160
2569
+ So you're optimistic about the future of that.
2570
+
2571
+ 33:45.160 --> 33:47.440
2572
+ Yeah, I mean, from what I've seen, it's impressive.
2573
+
2574
+ 33:47.440 --> 33:49.400
2575
+ And I think like, especially in Silicon Valley,
2576
+
2577
+ 33:49.400 --> 33:51.440
2578
+ it's easy to write off building cars
2579
+
2580
+ 33:51.440 --> 33:53.120
2581
+ because people have been doing that
2582
+
2583
+ 33:53.120 --> 33:54.960
2584
+ for over a hundred years now in this country.
2585
+
2586
+ 33:54.960 --> 33:57.080
2587
+ And so it seems like that's a solved problem,
2588
+
2589
+ 33:57.080 --> 33:58.840
2590
+ but that doesn't mean it's an easy problem.
2591
+
2592
+ 33:58.840 --> 34:02.280
2593
+ And I think it would be easy to sort of overlook that
2594
+
2595
+ 34:02.280 --> 34:06.080
2596
+ and think that we're Silicon Valley engineers,
2597
+
2598
+ 34:06.080 --> 34:08.960
2599
+ we can solve any problem, building a car,
2600
+
2601
+ 34:08.960 --> 34:13.200
2602
+ it's been done, therefore it's not a real engineering
2603
+
2604
+ 34:13.200 --> 34:14.600
2605
+ challenge.
2606
+
2607
+ 34:14.600 --> 34:17.480
2608
+ But after having seen just the sheer scale
2609
+
2610
+ 34:17.480 --> 34:21.360
2611
+ and magnitude and industrialization that occurs
2612
+
2613
+ 34:21.360 --> 34:23.280
2614
+ inside of an automotive assembly plant,
2615
+
2616
+ 34:23.280 --> 34:25.840
2617
+ that is a lot of work that I am very glad
2618
+
2619
+ 34:25.840 --> 34:28.200
2620
+ that we don't have to reinvent
2621
+
2622
+ 34:28.200 --> 34:29.480
2623
+ to make self driving cars work.
2624
+
2625
+ 34:29.480 --> 34:31.680
2626
+ And so to have partners who have done that for a hundred
2627
+
2628
+ 34:31.680 --> 34:32.960
2629
+ years and have these great processes
2630
+
2631
+ 34:32.960 --> 34:35.720
2632
+ and this huge infrastructure and supply base
2633
+
2634
+ 34:35.720 --> 34:38.760
2635
+ that we can tap into is just remarkable
2636
+
2637
+ 34:38.760 --> 34:43.760
2638
+ because the scope and surface area of the problem
2639
+
2640
+ 34:44.560 --> 34:47.400
2641
+ of deploying fleets of self driving cars is so large
2642
+
2643
+ 34:47.400 --> 34:50.320
2644
+ that we're constantly looking for ways to do less
2645
+
2646
+ 34:50.320 --> 34:52.920
2647
+ so we can focus on the things that really matter more.
2648
+
2649
+ 34:52.920 --> 34:55.360
2650
+ And if we had to figure out how to build and assemble
2651
+
2652
+ 34:55.360 --> 35:00.120
2653
+ and test and build the cars themselves,
2654
+
2655
+ 35:00.120 --> 35:01.640
2656
+ I mean, we work closely with GM on that,
2657
+
2658
+ 35:01.640 --> 35:03.240
2659
+ but if we had to develop all that capability
2660
+
2661
+ 35:03.240 --> 35:08.240
2662
+ in house as well, that would just make the problem
2663
+
2664
+ 35:08.320 --> 35:10.200
2665
+ really intractable, I think.
2666
+
2667
+ 35:10.200 --> 35:14.880
2668
+ So yeah, just like your first entry at the MIT DARPA
2669
+
2670
+ 35:14.880 --> 35:17.680
2671
+ challenge when it was, what, the motor that failed
2672
+
2673
+ 35:17.680 --> 35:19.000
2674
+ and somebody that knows what they're doing
2675
+
2676
+ 35:19.000 --> 35:20.040
2677
+ with the motor did it.
2678
+
2679
+ 35:20.040 --> 35:22.080
2680
+ It would have been nice if we could focus on the software
2681
+
2682
+ 35:22.080 --> 35:23.880
2683
+ and not the hardware platform.
2684
+
2685
+ 35:23.880 --> 35:24.800
2686
+ Yeah, right.
2687
+
2688
+ 35:24.800 --> 35:27.080
2689
+ So from your perspective now,
2690
+
2691
+ 35:28.080 --> 35:29.960
2692
+ there's so many ways that autonomous vehicles
2693
+
2694
+ 35:29.960 --> 35:34.280
2695
+ can impact society in the next year, five years, 10 years.
2696
+
2697
+ 35:34.280 --> 35:37.080
2698
+ What do you think is the biggest opportunity
2699
+
2700
+ 35:37.080 --> 35:39.360
2701
+ to make money in autonomous driving,
2702
+
2703
+ 35:40.560 --> 35:44.720
2704
+ sort of make it a financially viable thing in the near term?
2705
+
2706
+ 35:44.720 --> 35:49.120
2707
+ What do you think would be the biggest impact there?
2708
+
2709
+ 35:49.120 --> 35:52.160
2710
+ Well, the things that drive the economics
2711
+
2712
+ 35:52.160 --> 35:53.600
2713
+ for fleets of self driving cars
2714
+
2715
+ 35:53.600 --> 35:56.440
2716
+ are there's sort of a handful of variables.
2717
+
2718
+ 35:56.440 --> 36:00.400
2719
+ One is the cost to build the vehicle itself.
2720
+
2721
+ 36:00.400 --> 36:03.720
2722
+ So the material cost, what's the cost of all your sensors,
2723
+
2724
+ 36:03.720 --> 36:05.200
2725
+ plus the cost of the vehicle
2726
+
2727
+ 36:05.200 --> 36:07.560
2728
+ and all the other components on it.
2729
+
2730
+ 36:07.560 --> 36:09.520
2731
+ Another one is the lifetime of the vehicle.
2732
+
2733
+ 36:09.520 --> 36:12.480
2734
+ It's very different if your vehicle drives 100,000 miles
2735
+
2736
+ 36:12.480 --> 36:14.800
2737
+ and then it falls apart versus 2 million.
2738
+
2739
+ 36:16.720 --> 36:18.840
2740
+ And then if you have a fleet,
2741
+
2742
+ 36:18.840 --> 36:22.920
2743
+ it's kind of like an airplane or an airline
2744
+
2745
+ 36:22.920 --> 36:26.120
2746
+ where once you produce the vehicle,
2747
+
2748
+ 36:26.120 --> 36:27.880
2749
+ you want it to be in operation
2750
+
2751
+ 36:27.880 --> 36:30.760
2752
+ as many hours a day as possible producing revenue.
2753
+
2754
+ 36:30.760 --> 36:32.480
2755
+ And then the other piece of that
2756
+
2757
+ 36:32.480 --> 36:35.280
2758
+ is how are you generating revenue?
2759
+
2760
+ 36:35.280 --> 36:36.880
2761
+ I think that's kind of what you're asking in.
2762
+
2763
+ 36:36.880 --> 36:38.400
2764
+ I think the obvious things today
2765
+
2766
+ 36:38.400 --> 36:40.080
2767
+ are the ride sharing business
2768
+
2769
+ 36:40.080 --> 36:42.760
2770
+ because that's pretty clear that there's demand for that.
2771
+
2772
+ 36:42.760 --> 36:46.240
2773
+ There's existing markets you can tap into and...
2774
+
2775
+ 36:46.240 --> 36:47.960
2776
+ Large urban areas, that kind of thing.
2777
+
2778
+ 36:47.960 --> 36:48.800
2779
+ Yeah, yeah.
2780
+
2781
+ 36:48.800 --> 36:51.200
2782
+ And I think that there are some real benefits
2783
+
2784
+ 36:51.200 --> 36:54.520
2785
+ to having cars without drivers
2786
+
2787
+ 36:54.520 --> 36:56.040
2788
+ compared to sort of the status quo
2789
+
2790
+ 36:56.040 --> 36:58.520
2791
+ for people who use ride share services today.
2792
+
2793
+ 36:58.520 --> 37:01.040
2794
+ You know, your privacy, consistency,
2795
+
2796
+ 37:01.040 --> 37:02.440
2797
+ hopefully significantly improve safety,
2798
+
2799
+ 37:02.440 --> 37:05.120
2800
+ all these benefits versus the current product.
2801
+
2802
+ 37:05.120 --> 37:06.520
2803
+ But it's a crowded market.
2804
+
2805
+ 37:06.520 --> 37:08.000
2806
+ And then other opportunities
2807
+
2808
+ 37:08.000 --> 37:09.600
2809
+ which you've seen a lot of activity in the last,
2810
+
2811
+ 37:09.600 --> 37:12.560
2812
+ really in the last six or 12 months is delivery,
2813
+
2814
+ 37:12.560 --> 37:17.560
2815
+ whether that's parcels and packages, food or groceries.
2816
+
2817
+ 37:17.800 --> 37:20.320
2818
+ Those are all sort of, I think, opportunities
2819
+
2820
+ 37:20.320 --> 37:23.640
2821
+ that are pretty ripe for these.
2822
+
2823
+ 37:23.640 --> 37:26.000
2824
+ Once you have this core technology,
2825
+
2826
+ 37:26.000 --> 37:28.080
2827
+ which is the fleet of autonomous vehicles,
2828
+
2829
+ 37:28.080 --> 37:30.920
2830
+ there's all sorts of different business opportunities
2831
+
2832
+ 37:30.920 --> 37:32.080
2833
+ you can build on top of that.
2834
+
2835
+ 37:32.080 --> 37:34.520
2836
+ But I think the important thing, of course,
2837
+
2838
+ 37:34.520 --> 37:36.440
2839
+ is that there's zero monetization opportunity
2840
+
2841
+ 37:36.440 --> 37:37.520
2842
+ until you actually have that fleet
2843
+
2844
+ 37:37.520 --> 37:39.160
2845
+ of very capable driverless cars
2846
+
2847
+ 37:39.160 --> 37:41.040
2848
+ that are as good or better than humans.
2849
+
2850
+ 37:41.040 --> 37:44.120
2851
+ And that's sort of where the entire industry
2852
+
2853
+ 37:44.120 --> 37:45.920
2854
+ is sort of in this holding pattern right now.
2855
+
2856
+ 37:45.920 --> 37:47.960
2857
+ Yeah, they're trying to achieve that baseline.
2858
+
2859
+ 37:47.960 --> 37:51.520
2860
+ But you said, sort of, not reliability, consistency.
2861
+
2862
+ 37:51.520 --> 37:52.360
2863
+ It's kind of interesting.
2864
+
2865
+ 37:52.360 --> 37:54.200
2866
+ I think I heard you say somewhere,
2867
+
2868
+ 37:54.200 --> 37:55.440
2869
+ not sure if that's what you meant,
2870
+
2871
+ 37:55.440 --> 37:58.240
2872
+ but I can imagine a situation
2873
+
2874
+ 37:58.240 --> 38:01.200
2875
+ where you would get an autonomous vehicle.
2876
+
2877
+ 38:01.200 --> 38:04.560
2878
+ And when you get into an Uber or Lyft,
2879
+
2880
+ 38:04.560 --> 38:05.960
2881
+ you don't get to choose the driver
2882
+
2883
+ 38:05.960 --> 38:07.320
2884
+ in a sense that you don't get to choose
2885
+
2886
+ 38:07.320 --> 38:09.080
2887
+ the personality of the driving.
2888
+
2889
+ 38:09.080 --> 38:12.040
2890
+ Do you think there's room
2891
+
2892
+ 38:12.040 --> 38:14.120
2893
+ to define the personality of the car
2894
+
2895
+ 38:14.120 --> 38:15.040
2896
+ the way it drives you,
2897
+
2898
+ 38:15.040 --> 38:17.600
2899
+ in terms of aggressiveness, for example,
2900
+
2901
+ 38:17.600 --> 38:21.120
2902
+ in terms of sort of pushing the boundaries.
2903
+
2904
+ 38:21.120 --> 38:22.760
2905
+ One of the biggest challenges in autonomous driving
2906
+
2907
+ 38:22.760 --> 38:27.760
2908
+ is the trade off between sort of safety and assertiveness.
2909
+
2910
+ 38:28.600 --> 38:30.920
2911
+ And do you think there's any room
2912
+
2913
+ 38:30.920 --> 38:35.920
2914
+ for the human to take a role in that decision?
2915
+
2916
+ 38:36.040 --> 38:38.080
2917
+ Sort of accept some of the liability, I guess.
2918
+
2919
+ 38:38.080 --> 38:41.000
2920
+ I wouldn't say, no, I'd say within reasonable bounds,
2921
+
2922
+ 38:41.000 --> 38:42.280
2923
+ as in we're not gonna,
2924
+
2925
+ 38:43.200 --> 38:44.360
2926
+ I think it'd be highly unlikely
2927
+
2928
+ 38:44.360 --> 38:46.600
2929
+ we'd expose any knob that would let you
2930
+
2931
+ 38:46.600 --> 38:50.240
2932
+ significantly increase safety risk.
2933
+
2934
+ 38:50.240 --> 38:53.080
2935
+ I think that's just not something we'd be willing to do.
2936
+
2937
+ 38:53.080 --> 38:56.760
2938
+ But I think driving style or like,
2939
+
2940
+ 38:56.760 --> 38:59.120
2941
+ are you gonna relax the comfort constraints slightly
2942
+
2943
+ 38:59.120 --> 39:00.160
2944
+ or things like that?
2945
+
2946
+ 39:00.160 --> 39:02.400
2947
+ All of those things make sense and are plausible.
2948
+
2949
+ 39:02.400 --> 39:04.480
2950
+ I see all those as nice optimizations
2951
+
2952
+ 39:04.480 --> 39:06.760
2953
+ once, again, we get the core problem solved
2954
+
2955
+ 39:06.760 --> 39:08.120
2956
+ in these fleets out there.
2957
+
2958
+ 39:08.120 --> 39:10.440
2959
+ But the other thing we've sort of observed
2960
+
2961
+ 39:10.440 --> 39:12.560
2962
+ is that you have this intuition
2963
+
2964
+ 39:12.560 --> 39:15.400
2965
+ that if you sort of slam your foot on the gas
2966
+
2967
+ 39:15.400 --> 39:16.680
2968
+ right after the light turns green
2969
+
2970
+ 39:16.680 --> 39:18.160
2971
+ and aggressively accelerate,
2972
+
2973
+ 39:18.160 --> 39:19.720
2974
+ you're gonna get there faster.
2975
+
2976
+ 39:19.720 --> 39:22.080
2977
+ But the actual impact of doing that is pretty small.
2978
+
2979
+ 39:22.080 --> 39:23.680
2980
+ You feel like you're getting there faster,
2981
+
2982
+ 39:23.680 --> 39:26.680
2983
+ but so the same would be true for AVs.
2984
+
2985
+ 39:26.680 --> 39:29.640
2986
+ Even if they don't slam the pedal to the floor
2987
+
2988
+ 39:29.640 --> 39:31.000
2989
+ when the light turns green,
2990
+
2991
+ 39:31.000 --> 39:32.520
2992
+ they're gonna get you there within,
2993
+
2994
+ 39:32.520 --> 39:33.600
2995
+ if it's a 15 minute trip,
2996
+
2997
+ 39:33.600 --> 39:36.400
2998
+ within 30 seconds of what you would have done otherwise
2999
+
3000
+ 39:36.400 --> 39:37.800
3001
+ if you were going really aggressively.
3002
+
3003
+ 39:37.800 --> 39:40.760
3004
+ So I think there's this sort of self deception
3005
+
3006
+ 39:40.760 --> 39:44.440
3007
+ that my aggressive driving style is getting me there faster.
3008
+
3009
+ 39:44.440 --> 39:46.640
3010
+ Well, so that's, you know, some of the things I study,
3011
+
3012
+ 39:46.640 --> 39:48.760
3013
+ some of the things I'm fascinated by, the psychology of that.
3014
+
3015
+ 39:48.760 --> 39:50.640
3016
+ And I don't think it matters
3017
+
3018
+ 39:50.640 --> 39:52.240
3019
+ that it doesn't get you there faster.
3020
+
3021
+ 39:52.240 --> 39:55.520
3022
+ It's the emotional release.
3023
+
3024
+ 39:55.520 --> 39:59.080
3025
+ Driving is a place, being inside our car,
3026
+
3027
+ 39:59.080 --> 40:00.880
3028
+ somebody said it's like the real world version
3029
+
3030
+ 40:00.880 --> 40:02.920
3031
+ of being a troll.
3032
+
3033
+ 40:02.920 --> 40:04.960
3034
+ So you have this protection, this mental protection,
3035
+
3036
+ 40:04.960 --> 40:06.640
3037
+ and you're able to sort of yell at the world,
3038
+
3039
+ 40:06.640 --> 40:08.200
3040
+ like release your anger, whatever it is.
3041
+
3042
+ 40:08.200 --> 40:10.040
3043
+ But so there's an element of that
3044
+
3045
+ 40:10.040 --> 40:12.000
3046
+ that I think autonomous vehicles
3047
+
3048
+ 40:12.000 --> 40:15.400
3049
+ would also have to, you know, give an outlet to people,
3050
+
3051
+ 40:15.400 --> 40:19.120
3052
+ but it doesn't have to be through driving or honking
3053
+
3054
+ 40:19.120 --> 40:21.200
3055
+ or so on, there might be other outlets.
3056
+
3057
+ 40:21.200 --> 40:24.040
3058
+ But I think to just sort of even just put that aside,
3059
+
3060
+ 40:24.040 --> 40:26.880
3061
+ the baseline is really, you know, that's the focus,
3062
+
3063
+ 40:26.880 --> 40:28.200
3064
+ that's the thing you need to solve,
3065
+
3066
+ 40:28.200 --> 40:31.000
3067
+ and then the fun human things can be solved after.
3068
+
3069
+ 40:31.000 --> 40:34.680
3070
+ But so from the baseline of just solving autonomous driving,
3071
+
3072
+ 40:34.680 --> 40:36.000
3073
+ you're working in San Francisco,
3074
+
3075
+ 40:36.000 --> 40:38.960
3076
+ one of the more difficult cities to operate in,
3077
+
3078
+ 40:38.960 --> 40:42.080
3079
+ what is the, in your view currently,
3080
+
3081
+ 40:42.080 --> 40:45.040
3082
+ the hardest aspect of autonomous driving?
3083
+
3084
+ 40:46.880 --> 40:49.200
3085
+ Negotiating with pedestrians,
3086
+
3087
+ 40:49.200 --> 40:51.400
3088
+ is it edge cases of perception?
3089
+
3090
+ 40:51.400 --> 40:52.760
3091
+ Is it planning?
3092
+
3093
+ 40:52.760 --> 40:54.520
3094
+ Is there a mechanical engineering?
3095
+
3096
+ 40:54.520 --> 40:57.040
3097
+ Is it data, fleet stuff?
3098
+
3099
+ 40:57.040 --> 41:01.200
3100
+ What are your thoughts on the more challenging aspects there?
3101
+
3102
+ 41:01.200 --> 41:02.240
3103
+ That's a good question.
3104
+
3105
+ 41:02.240 --> 41:03.520
3106
+ I think before we go to that though,
3107
+
3108
+ 41:03.520 --> 41:05.080
3109
+ I just want to, I like what you said
3110
+
3111
+ 41:05.080 --> 41:07.600
3112
+ about the psychology aspect of this,
3113
+
3114
+ 41:07.600 --> 41:09.680
3115
+ because I think one observation I've made is,
3116
+
3117
+ 41:09.680 --> 41:11.760
3118
+ I think I read somewhere that I think it's,
3119
+
3120
+ 41:11.760 --> 41:13.880
3121
+ maybe Americans on average spend, you know,
3122
+
3123
+ 41:13.880 --> 41:16.520
3124
+ over an hour a day on social media,
3125
+
3126
+ 41:16.520 --> 41:18.280
3127
+ like staring at Facebook.
3128
+
3129
+ 41:18.280 --> 41:20.080
3130
+ And so that's just, you know,
3131
+
3132
+ 41:20.080 --> 41:21.600
3133
+ 60 minutes of your life, you're not getting back.
3134
+
3135
+ 41:21.600 --> 41:23.120
3136
+ It's probably not super productive.
3137
+
3138
+ 41:23.120 --> 41:26.200
3139
+ And so that's 3,600 seconds, right?
3140
+
3141
+ 41:26.200 --> 41:29.160
3142
+ And that's, that's time, you know,
3143
+
3144
+ 41:29.160 --> 41:30.600
3145
+ it's a lot of time you're giving up.
3146
+
3147
+ 41:30.600 --> 41:34.080
3148
+ And if you compare that to people being on the road,
3149
+
3150
+ 41:34.080 --> 41:35.360
3151
+ if another vehicle,
3152
+
3153
+ 41:35.360 --> 41:37.600
3154
+ whether it's a human driver or autonomous vehicle,
3155
+
3156
+ 41:37.600 --> 41:39.840
3157
+ delays them by even three seconds,
3158
+
3159
+ 41:39.840 --> 41:41.920
3160
+ they're laying on the horn, you know,
3161
+
3162
+ 41:41.920 --> 41:43.280
3163
+ even though that's, that's, you know,
3164
+
3165
+ 41:43.280 --> 41:45.280
3166
+ one 1,000th of the time they waste
3167
+
3168
+ 41:45.280 --> 41:46.360
3169
+ looking at Facebook every day.
3170
+
3171
+ 41:46.360 --> 41:48.640
3172
+ So there's, there's definitely some,
3173
+
3174
+ 41:48.640 --> 41:50.040
3175
+ you know, psychology aspects of this,
3176
+
3177
+ 41:50.040 --> 41:50.880
3178
+ I think that are pretty interesting.
3179
+
3180
+ 41:50.880 --> 41:51.720
3181
+ Road rage in general.
3182
+
3183
+ 41:51.720 --> 41:52.960
3184
+ And then the question, of course,
3185
+
3186
+ 41:52.960 --> 41:54.960
3187
+ is if everyone is in self driving cars,
3188
+
3189
+ 41:54.960 --> 41:57.560
3190
+ do they even notice these three second delays anymore?
3191
+
3192
+ 41:57.560 --> 41:58.920
3193
+ Because they're doing other things
3194
+
3195
+ 41:58.920 --> 42:01.720
3196
+ or reading or working or just talking to each other.
3197
+
3198
+ 42:01.720 --> 42:03.200
3199
+ So it'll be interesting to see where that goes.
3200
+
3201
+ 42:03.200 --> 42:05.120
3202
+ In a certain aspect, people,
3203
+
3204
+ 42:05.120 --> 42:06.360
3205
+ people need to be distracted
3206
+
3207
+ 42:06.360 --> 42:07.360
3208
+ by something entertaining,
3209
+
3210
+ 42:07.360 --> 42:09.160
3211
+ something useful inside the car
3212
+
3213
+ 42:09.160 --> 42:10.960
3214
+ so they don't pay attention to the external world.
3215
+
3216
+ 42:10.960 --> 42:14.240
3217
+ And then, and then they can take whatever psychology
3218
+
3219
+ 42:14.240 --> 42:17.400
3220
+ and bring it back to Twitter and then focus on that
3221
+
3222
+ 42:17.400 --> 42:19.640
3223
+ as opposed to sort of interacting,
3224
+
3225
+ 42:20.920 --> 42:23.200
3226
+ sort of putting the emotion out there into the world.
3227
+
3228
+ 42:23.200 --> 42:24.560
3229
+ So it's an interesting problem,
3230
+
3231
+ 42:24.560 --> 42:26.960
3232
+ but baseline autonomy.
3233
+
3234
+ 42:26.960 --> 42:28.760
3235
+ I guess you could say self driving cars,
3236
+
3237
+ 42:28.760 --> 42:31.680
3238
+ you know, at scale will lower the collective blood pressure
3239
+
3240
+ 42:31.680 --> 42:33.920
3241
+ of society probably by a couple of points
3242
+
3243
+ 42:33.920 --> 42:35.760
3244
+ without all that road rage and stress.
3245
+
3246
+ 42:35.760 --> 42:37.480
3247
+ So that's a good, good externality.
3248
+
3249
+ 42:38.560 --> 42:41.760
3250
+ So back to your question about the technology
3251
+
3252
+ 42:41.760 --> 42:43.760
3253
+ and the, I guess the biggest problems.
3254
+
3255
+ 42:43.760 --> 42:45.560
3256
+ And I have a hard time answering that question
3257
+
3258
+ 42:45.560 --> 42:48.680
3259
+ because, you know, we've been at this,
3260
+
3261
+ 42:48.680 --> 42:51.440
3262
+ like specifically focusing on driverless cars
3263
+
3264
+ 42:51.440 --> 42:53.520
3265
+ and all the technology needed to enable that
3266
+
3267
+ 42:53.520 --> 42:55.160
3268
+ for a little over four and a half years now.
3269
+
3270
+ 42:55.160 --> 42:58.080
3271
+ And even a year or two in,
3272
+
3273
+ 42:58.080 --> 43:02.960
3274
+ I felt like we had completed the functionality needed
3275
+
3276
+ 43:02.960 --> 43:04.800
3277
+ to get someone from point A to point B.
3278
+
3279
+ 43:04.800 --> 43:07.280
3280
+ As in, if we need to do a left turn maneuver
3281
+
3282
+ 43:07.280 --> 43:08.960
3283
+ or if we need to drive around a, you know,
3284
+
3285
+ 43:08.960 --> 43:11.800
3286
+ a double parked vehicle into oncoming traffic
3287
+
3288
+ 43:11.800 --> 43:13.840
3289
+ or navigate through construction zones,
3290
+
3291
+ 43:13.840 --> 43:15.960
3292
+ the scaffolding and the building blocks
3293
+
3294
+ 43:15.960 --> 43:17.800
3295
+ was there pretty early on.
3296
+
3297
+ 43:17.800 --> 43:22.360
3298
+ And so the challenge is not any one scenario or situation
3299
+
3300
+ 43:22.360 --> 43:25.520
3301
+ for which, you know, we fail at 100% of those.
3302
+
3303
+ 43:25.520 --> 43:28.960
3304
+ It's more, you know, we're benchmarking against a pretty good
3305
+
3306
+ 43:28.960 --> 43:31.320
3307
+ or pretty high standard, which is human driving.
3308
+
3309
+ 43:31.320 --> 43:33.320
3310
+ All things considered, humans are excellent
3311
+
3312
+ 43:33.320 --> 43:36.240
3313
+ at handling edge cases and unexpected scenarios
3314
+
3315
+ 43:36.240 --> 43:38.400
3316
+ where it's computers are the opposite.
3317
+
3318
+ 43:38.400 --> 43:43.080
3319
+ And so beating that baseline set by humans is the challenge.
3320
+
3321
+ 43:43.080 --> 43:46.520
3322
+ And so what we've been doing for quite some time now
3323
+
3324
+ 43:46.520 --> 43:50.760
3325
+ is basically it's this continuous improvement process
3326
+
3327
+ 43:50.760 --> 43:55.000
3328
+ where we find sort of the most, you know, uncomfortable
3329
+
3330
+ 43:55.000 --> 43:59.840
3331
+ or the things that could lead to a safety issue
3332
+
3333
+ 43:59.840 --> 44:00.960
3334
+ or other things, all these events.
3335
+
3336
+ 44:00.960 --> 44:02.520
3337
+ And then we sort of categorize them
3338
+
3339
+ 44:02.520 --> 44:04.560
3340
+ and rework parts of our system
3341
+
3342
+ 44:04.560 --> 44:06.200
3343
+ to make incremental improvements
3344
+
3345
+ 44:06.200 --> 44:08.040
3346
+ and do that over and over and over again.
3347
+
3348
+ 44:08.040 --> 44:10.160
3349
+ And we just see sort of the overall performance
3350
+
3351
+ 44:10.160 --> 44:12.120
3352
+ of the system, you know,
3353
+
3354
+ 44:12.120 --> 44:13.960
3355
+ actually increasing in a pretty steady clip.
3356
+
3357
+ 44:13.960 --> 44:15.360
3358
+ But there's no one thing.
3359
+
3360
+ 44:15.360 --> 44:17.360
3361
+ There's actually like thousands of little things
3362
+
3363
+ 44:17.360 --> 44:19.880
3364
+ and just like polishing functionality
3365
+
3366
+ 44:19.880 --> 44:21.640
3367
+ and making sure that it handles, you know,
3368
+
3369
+ 44:21.640 --> 44:26.120
3370
+ every version and possible permutation of a situation
3371
+
3372
+ 44:26.120 --> 44:29.200
3373
+ by either applying more deep learning systems
3374
+
3375
+ 44:30.120 --> 44:32.960
3376
+ or just by, you know, adding more test coverage
3377
+
3378
+ 44:32.960 --> 44:35.760
3379
+ or new scenarios that we develop against
3380
+
3381
+ 44:35.760 --> 44:37.160
3382
+ and just grinding on that.
3383
+
3384
+ 44:37.160 --> 44:40.120
3385
+ We're sort of in the unsexy phase of development right now
3386
+
3387
+ 44:40.120 --> 44:41.800
3388
+ which is doing the real engineering work
3389
+
3390
+ 44:41.800 --> 44:44.120
3391
+ that it takes to go from prototype to production.
3392
+
3393
+ 44:44.120 --> 44:46.960
3394
+ You're basically scaling the grinding.
3395
+
3396
+ 44:46.960 --> 44:50.560
3397
+ So sort of taking seriously the process
3398
+
3399
+ 44:50.560 --> 44:54.040
3400
+ of all those edge cases, both with human experts
3401
+
3402
+ 44:54.040 --> 44:57.520
3403
+ and machine learning methods to cover,
3404
+
3405
+ 44:57.520 --> 44:59.320
3406
+ to cover all those situations.
3407
+
3408
+ 44:59.320 --> 45:00.760
3409
+ Yeah, and the exciting thing for me is
3410
+
3411
+ 45:00.760 --> 45:03.000
3412
+ I don't think that grinding ever stops
3413
+
3414
+ 45:03.000 --> 45:04.840
3415
+ because there's a moment in time
3416
+
3417
+ 45:04.840 --> 45:08.760
3418
+ where you've crossed that threshold of human performance
3419
+
3420
+ 45:08.760 --> 45:10.000
3421
+ and become superhuman.
3422
+
3423
+ 45:11.200 --> 45:13.560
3424
+ But there's no reason, there's no first principles reason
3425
+
3426
+ 45:13.560 --> 45:17.560
3427
+ that AV capability will tap out anywhere near humans.
3428
+
3429
+ 45:17.560 --> 45:20.280
3430
+ Like there's no reason it couldn't be 20 times better
3431
+
3432
+ 45:20.280 --> 45:22.120
3433
+ whether that's, you know, just better driving
3434
+
3435
+ 45:22.120 --> 45:24.240
3436
+ or safer driving or more comfortable driving
3437
+
3438
+ 45:24.240 --> 45:26.800
3439
+ or even a thousand times better given enough time.
3440
+
3441
+ 45:26.800 --> 45:31.480
3442
+ And we intend to basically chase that, you know, forever
3443
+
3444
+ 45:31.480 --> 45:32.840
3445
+ to build the best possible product.
3446
+
3447
+ 45:32.840 --> 45:33.960
3448
+ Better and better and better
3449
+
3450
+ 45:33.960 --> 45:36.400
3451
+ and always new edge cases come up and new experiences.
3452
+
3453
+ 45:36.400 --> 45:39.520
3454
+ So, and you want to automate that process
3455
+
3456
+ 45:39.520 --> 45:40.720
3457
+ as much as possible.
3458
+
3459
+ 45:42.680 --> 45:45.160
3460
+ So what do you think in general in society
3461
+
3462
+ 45:45.160 --> 45:48.200
3463
+ when do you think we may have hundreds of thousands
3464
+
3465
+ 45:48.200 --> 45:50.200
3466
+ of fully autonomous vehicles driving around?
3467
+
3468
+ 45:50.200 --> 45:53.560
3469
+ So first of all, predictions, nobody knows the future.
3470
+
3471
+ 45:53.560 --> 45:55.360
3472
+ You're a part of the leading people
3473
+
3474
+ 45:55.360 --> 45:56.560
3475
+ trying to define that future,
3476
+
3477
+ 45:56.560 --> 45:58.560
3478
+ but even then you still don't know.
3479
+
3480
+ 45:58.560 --> 46:02.240
3481
+ But if you think about hundreds of thousands of vehicles,
3482
+
3483
+ 46:02.240 --> 46:05.840
3484
+ so a significant fraction of vehicles
3485
+
3486
+ 46:05.840 --> 46:07.600
3487
+ in major cities are autonomous.
3488
+
3489
+ 46:07.600 --> 46:10.800
3490
+ Do you think, are you with Rodney Brooks
3491
+
3492
+ 46:10.800 --> 46:13.960
3493
+ who is 2050 and beyond?
3494
+
3495
+ 46:13.960 --> 46:17.200
3496
+ Or are you more with Elon Musk
3497
+
3498
+ 46:17.200 --> 46:20.600
3499
+ who is, we should have had that two years ago?
3500
+
3501
+ 46:20.600 --> 46:23.840
3502
+ Well, I mean, I'd love to have it two years ago,
3503
+
3504
+ 46:23.840 --> 46:26.120
3505
+ but we're not there yet.
3506
+
3507
+ 46:26.120 --> 46:28.480
3508
+ So I guess the way I would think about that
3509
+
3510
+ 46:28.480 --> 46:31.240
3511
+ is let's flip that question around.
3512
+
3513
+ 46:31.240 --> 46:34.200
3514
+ So what would prevent you from reaching hundreds
3515
+
3516
+ 46:34.200 --> 46:36.320
3517
+ of thousands of vehicles and...
3518
+
3519
+ 46:36.320 --> 46:38.200
3520
+ That's a good rephrasing.
3521
+
3522
+ 46:38.200 --> 46:43.200
3523
+ Yeah, so the, I'd say that it seems the consensus
3524
+
3525
+ 46:43.200 --> 46:45.200
3526
+ among the people developing self driving cars today
3527
+
3528
+ 46:45.200 --> 46:49.200
3529
+ is to sort of start with some form of an easier environment,
3530
+
3531
+ 46:49.200 --> 46:52.200
3532
+ whether it means lacking, inclement weather,
3533
+
3534
+ 46:52.200 --> 46:55.200
3535
+ or mostly sunny or whatever it is.
3536
+
3537
+ 46:55.200 --> 46:59.200
3538
+ And then add capability for more complex situations
3539
+
3540
+ 46:59.200 --> 47:00.200
3541
+ over time.
3542
+
3543
+ 47:00.200 --> 47:05.200
3544
+ And so if you're only able to deploy in areas
3545
+
3546
+ 47:05.200 --> 47:07.200
3547
+ that meet sort of your criteria
3548
+
3549
+ 47:07.200 --> 47:09.200
3550
+ or that fall within the current
3551
+
3552
+ 47:09.200 --> 47:13.200
3553
+ operating domain of the software you developed,
3554
+
3555
+ 47:13.200 --> 47:16.200
3556
+ that may put a cap on how many cities you could deploy in.
3557
+
3558
+ 47:16.200 --> 47:19.200
3559
+ But then as those restrictions start to fall away,
3560
+
3561
+ 47:19.200 --> 47:22.200
3562
+ like maybe you add capability to drive really well
3563
+
3564
+ 47:22.200 --> 47:25.200
3565
+ and safely in heavy rain or snow,
3566
+
3567
+ 47:25.200 --> 47:28.200
3568
+ that probably opens up the market by two or three fold
3569
+
3570
+ 47:28.200 --> 47:31.200
3571
+ in terms of the cities you can expand into and so on.
3572
+
3573
+ 47:31.200 --> 47:33.200
3574
+ And so the real question is,
3575
+
3576
+ 47:33.200 --> 47:35.200
3577
+ I know today if we wanted to,
3578
+
3579
+ 47:35.200 --> 47:39.200
3580
+ we could produce that many autonomous vehicles,
3581
+
3582
+ 47:39.200 --> 47:41.200
3583
+ but we wouldn't be able to make use of all of them yet
3584
+
3585
+ 47:41.200 --> 47:44.200
3586
+ because we would sort of saturate the demand in the cities
3587
+
3588
+ 47:44.200 --> 47:47.200
3589
+ in which we would want to operate initially.
3590
+
3591
+ 47:47.200 --> 47:49.200
3592
+ So if I were to guess what the timeline is
3593
+
3594
+ 47:49.200 --> 47:51.200
3595
+ for those things falling away
3596
+
3597
+ 47:51.200 --> 47:54.200
3598
+ and reaching hundreds of thousands of vehicles.
3599
+
3600
+ 47:54.200 --> 47:55.200
3601
+ Maybe a range is better.
3602
+
3603
+ 47:55.200 --> 47:57.200
3604
+ I would say less than five years.
3605
+
3606
+ 47:57.200 --> 47:58.200
3607
+ Less than five years.
3608
+
3609
+ 47:58.200 --> 47:59.200
3610
+ Yeah.
3611
+
3612
+ 47:59.200 --> 48:02.200
3613
+ And of course you're working hard to make that happen.
3614
+
3615
+ 48:02.200 --> 48:05.200
3616
+ So you started two companies that were eventually acquired
3617
+
3618
+ 48:05.200 --> 48:08.200
3619
+ for $4 billion each.
3620
+
3621
+ 48:08.200 --> 48:10.200
3622
+ So you're a pretty good person to ask,
3623
+
3624
+ 48:10.200 --> 48:13.200
3625
+ what does it take to build a successful startup?
3626
+
3627
+ 48:13.200 --> 48:18.200
3628
+ I think there's sort of survivor bias here a little bit,
3629
+
3630
+ 48:18.200 --> 48:20.200
3631
+ but I can try to find some common threads
3632
+
3633
+ 48:20.200 --> 48:22.200
3634
+ for the things that worked for me, which is...
3635
+
3636
+ 48:24.200 --> 48:26.200
3637
+ In both of these companies,
3638
+
3639
+ 48:26.200 --> 48:28.200
3640
+ I was really passionate about the core technology.
3641
+
3642
+ 48:28.200 --> 48:31.200
3643
+ I actually lay awake at night thinking about these problems
3644
+
3645
+ 48:31.200 --> 48:33.200
3646
+ and how to solve them.
3647
+
3648
+ 48:33.200 --> 48:35.200
3649
+ And I think that's helpful because when you start a business,
3650
+
3651
+ 48:35.200 --> 48:37.200
3652
+ there are...
3653
+
3654
+ 48:37.200 --> 48:40.200
3655
+ To this day, there are these crazy ups and downs.
3656
+
3657
+ 48:40.200 --> 48:43.200
3658
+ One day you think the business is just on top of the world
3659
+
3660
+ 48:43.200 --> 48:45.200
3661
+ and unstoppable and the next day you think,
3662
+
3663
+ 48:45.200 --> 48:47.200
3664
+ okay, this is all going to end.
3665
+
3666
+ 48:47.200 --> 48:50.200
3667
+ It's just going south and it's going to be over tomorrow.
3668
+
3669
+ 48:52.200 --> 48:55.200
3670
+ And so I think having a true passion that you can fall back on
3671
+
3672
+ 48:55.200 --> 48:57.200
3673
+ and knowing that you would be doing it
3674
+
3675
+ 48:57.200 --> 48:58.200
3676
+ even if you weren't getting paid for it
3677
+
3678
+ 48:58.200 --> 49:00.200
3679
+ helps you weather those tough times.
3680
+
3681
+ 49:00.200 --> 49:02.200
3682
+ So that's one thing.
3683
+
3684
+ 49:02.200 --> 49:05.200
3685
+ I think the other one is really good people.
3686
+
3687
+ 49:05.200 --> 49:07.200
3688
+ So I've always been surrounded by really good cofounders
3689
+
3690
+ 49:07.200 --> 49:09.200
3691
+ that are logical thinkers,
3692
+
3693
+ 49:09.200 --> 49:11.200
3694
+ are always pushing their limits
3695
+
3696
+ 49:11.200 --> 49:13.200
3697
+ and have very high levels of integrity.
3698
+
3699
+ 49:13.200 --> 49:15.200
3700
+ So that's Dan Kan in my current company
3701
+
3702
+ 49:15.200 --> 49:17.200
3703
+ and actually his brother and a couple other guys
3704
+
3705
+ 49:17.200 --> 49:19.200
3706
+ for Justin TV and Twitch.
3707
+
3708
+ 49:19.200 --> 49:23.200
3709
+ And then I think the last thing is just,
3710
+
3711
+ 49:23.200 --> 49:26.200
3712
+ I guess, persistence or perseverance.
3713
+
3714
+ 49:26.200 --> 49:29.200
3715
+ And that can apply to sticking to
3716
+
3717
+ 49:29.200 --> 49:33.200
3718
+ having conviction around the original premise of your idea
3719
+
3720
+ 49:33.200 --> 49:36.200
3721
+ and sticking around to do all the unsexy work
3722
+
3723
+ 49:36.200 --> 49:38.200
3724
+ to actually make it come to fruition,
3725
+
3726
+ 49:38.200 --> 49:41.200
3727
+ including dealing with whatever it is
3728
+
3729
+ 49:41.200 --> 49:43.200
3730
+ that you're not passionate about,
3731
+
3732
+ 49:43.200 --> 49:47.200
3733
+ whether that's finance or HR or operations or those things.
3734
+
3735
+ 49:47.200 --> 49:49.200
3736
+ As long as you are grinding away
3737
+
3738
+ 49:49.200 --> 49:52.200
3739
+ and working towards that North Star for your business,
3740
+
3741
+ 49:52.200 --> 49:54.200
3742
+ whatever it is and you don't give up
3743
+
3744
+ 49:54.200 --> 49:56.200
3745
+ and you're making progress every day,
3746
+
3747
+ 49:56.200 --> 49:58.200
3748
+ it seems like eventually you'll end up in a good place.
3749
+
3750
+ 49:58.200 --> 50:00.200
3751
+ And the only things that can slow you down
3752
+
3753
+ 50:00.200 --> 50:01.200
3754
+ are running out of money
3755
+
3756
+ 50:01.200 --> 50:03.200
3757
+ or I suppose your competitor is destroying you,
3758
+
3759
+ 50:03.200 --> 50:06.200
3760
+ but I think most of the time it's people giving up
3761
+
3762
+ 50:06.200 --> 50:08.200
3763
+ or somehow destroying things themselves
3764
+
3765
+ 50:08.200 --> 50:10.200
3766
+ rather than being beaten by their competition
3767
+
3768
+ 50:10.200 --> 50:11.200
3769
+ or running out of money.
3770
+
3771
+ 50:11.200 --> 50:14.200
3772
+ Yeah, if you never quit, eventually you'll arrive.
3773
+
3774
+ 50:14.200 --> 50:16.200
3775
+ It's a much more concise version
3776
+
3777
+ 50:16.200 --> 50:18.200
3778
+ of what I was trying to say.
3779
+
3780
+ 50:18.200 --> 50:21.200
3781
+ So you went the Y Combinator route twice.
3782
+
3783
+ 50:21.200 --> 50:23.200
3784
+ What do you think, in a quick question,
3785
+
3786
+ 50:23.200 --> 50:25.200
3787
+ do you think is the best way to raise funds
3788
+
3789
+ 50:25.200 --> 50:27.200
3790
+ in the early days?
3791
+
3792
+ 50:27.200 --> 50:30.200
3793
+ Or not just funds, but just community,
3794
+
3795
+ 50:30.200 --> 50:32.200
3796
+ develop your idea and so on.
3797
+
3798
+ 50:32.200 --> 50:37.200
3799
+ Can you do it solo or maybe with a cofounder
3800
+
3801
+ 50:37.200 --> 50:39.200
3802
+ like self funded?
3803
+
3804
+ 50:39.200 --> 50:40.200
3805
+ Do you think Y Combinator is good?
3806
+
3807
+ 50:40.200 --> 50:41.200
3808
+ Is it good to do VC route?
3809
+
3810
+ 50:41.200 --> 50:43.200
3811
+ Is there no right answer or is there,
3812
+
3813
+ 50:43.200 --> 50:45.200
3814
+ from the Y Combinator experience,
3815
+
3816
+ 50:45.200 --> 50:47.200
3817
+ something that you could take away
3818
+
3819
+ 50:47.200 --> 50:49.200
3820
+ that that was the right path to take?
3821
+
3822
+ 50:49.200 --> 50:50.200
3823
+ There's no one size fits all answer,
3824
+
3825
+ 50:50.200 --> 50:54.200
3826
+ but if your ambition I think is to see how big
3827
+
3828
+ 50:54.200 --> 50:57.200
3829
+ you can make something or rapidly expand
3830
+
3831
+ 50:57.200 --> 50:59.200
3832
+ and capture a market or solve a problem
3833
+
3834
+ 50:59.200 --> 51:02.200
3835
+ or whatever it is, then going the venture
3836
+
3837
+ 51:02.200 --> 51:04.200
3838
+ backed route is probably a good approach
3839
+
3840
+ 51:04.200 --> 51:07.200
3841
+ so that capital doesn't become your primary constraint.
3842
+
3843
+ 51:07.200 --> 51:10.200
3844
+ Y Combinator, I love because it puts you
3845
+
3846
+ 51:10.200 --> 51:13.200
3847
+ in this sort of competitive environment
3848
+
3849
+ 51:13.200 --> 51:16.200
3850
+ where you're surrounded by the top,
3851
+
3852
+ 51:16.200 --> 51:19.200
3853
+ maybe 1% of other really highly motivated
3854
+
3855
+ 51:19.200 --> 51:22.200
3856
+ peers who are in the same place.
3857
+
3858
+ 51:22.200 --> 51:26.200
3859
+ That environment, I think, just breeds success.
3860
+
3861
+ 51:26.200 --> 51:28.200
3862
+ If you're surrounded by really brilliant
3863
+
3864
+ 51:28.200 --> 51:30.200
3865
+ hardworking people, you're going to feel
3866
+
3867
+ 51:30.200 --> 51:32.200
3868
+ sort of compelled or inspired to try
3869
+
3870
+ 51:32.200 --> 51:35.200
3871
+ to emulate them or beat them.
3872
+
3873
+ 51:35.200 --> 51:37.200
3874
+ So even though I had done it once before
3875
+
3876
+ 51:37.200 --> 51:41.200
3877
+ and I felt like I'm pretty self motivated,
3878
+
3879
+ 51:41.200 --> 51:43.200
3880
+ I thought this is going to be a hard problem,
3881
+
3882
+ 51:43.200 --> 51:45.200
3883
+ I can use all the help I can get.
3884
+
3885
+ 51:45.200 --> 51:46.200
3886
+ So if surrounding myself with other entrepreneurs
3887
+
3888
+ 51:46.200 --> 51:48.200
3889
+ is going to make me work a little bit harder
3890
+
3891
+ 51:48.200 --> 51:51.200
3892
+ or push a little harder, then it's worth it.
3893
+
3894
+ 51:51.200 --> 51:54.200
3895
+ That's why I did it, for example, the second time.
3896
+
3897
+ 51:54.200 --> 51:57.200
3898
+ Let's go full soft, go existential.
3899
+
3900
+ 51:57.200 --> 52:00.200
3901
+ If you go back and do something differently in your life,
3902
+
3903
+ 52:00.200 --> 52:06.200
3904
+ starting in high school and MIT, leaving MIT,
3905
+
3906
+ 52:06.200 --> 52:08.200
3907
+ you could have gone to the PhD route,
3908
+
3909
+ 52:08.200 --> 52:13.200
3910
+ doing startup, going to see about a startup in California
3911
+
3912
+ 52:13.200 --> 52:15.200
3913
+ or maybe some aspects of fundraising.
3914
+
3915
+ 52:15.200 --> 52:17.200
3916
+ Is there something you regret,
3917
+
3918
+ 52:17.200 --> 52:20.200
3919
+ not necessarily regret, but if you go back,
3920
+
3921
+ 52:20.200 --> 52:22.200
3922
+ you could do differently?
3923
+
3924
+ 52:22.200 --> 52:24.200
3925
+ I think I've made a lot of mistakes,
3926
+
3927
+ 52:24.200 --> 52:26.200
3928
+ pretty much everything you can screw up,
3929
+
3930
+ 52:26.200 --> 52:28.200
3931
+ I think I've screwed up at least once.
3932
+
3933
+ 52:28.200 --> 52:30.200
3934
+ But I don't regret those things.
3935
+
3936
+ 52:30.200 --> 52:32.200
3937
+ I think it's hard to look back on things,
3938
+
3939
+ 52:32.200 --> 52:34.200
3940
+ even if they didn't go well and call it a regret,
3941
+
3942
+ 52:34.200 --> 52:37.200
3943
+ because hopefully it took away some new knowledge
3944
+
3945
+ 52:37.200 --> 52:39.200
3946
+ or learning from that.
3947
+
3948
+ 52:42.200 --> 52:45.200
3949
+ I would say there's a period,
3950
+
3951
+ 52:45.200 --> 52:47.200
3952
+ the closest I can come to this,
3953
+
3954
+ 52:47.200 --> 52:49.200
3955
+ there's a period in Justin TV,
3956
+
3957
+ 52:49.200 --> 52:54.200
3958
+ I think after seven years where the company was going
3959
+
3960
+ 52:54.200 --> 52:57.200
3961
+ one direction, which is towards Twitch and video gaming.
3962
+
3963
+ 52:57.200 --> 52:58.200
3964
+ I'm not a video gamer.
3965
+
3966
+ 52:58.200 --> 53:01.200
3967
+ I don't really even use Twitch at all.
3968
+
3969
+ 53:01.200 --> 53:04.200
3970
+ I was still working on the core technology there,
3971
+
3972
+ 53:04.200 --> 53:06.200
3973
+ but my heart was no longer in it,
3974
+
3975
+ 53:06.200 --> 53:08.200
3976
+ because the business that we were creating
3977
+
3978
+ 53:08.200 --> 53:10.200
3979
+ was not something that I was personally passionate about.
3980
+
3981
+ 53:10.200 --> 53:12.200
3982
+ It didn't meet your bar of existential impact.
3983
+
3984
+ 53:12.200 --> 53:16.200
3985
+ Yeah, and I'd say I probably spent an extra year or two
3986
+
3987
+ 53:16.200 --> 53:20.200
3988
+ working on that, and I'd say I would have just tried
3989
+
3990
+ 53:20.200 --> 53:22.200
3991
+ to do something different sooner.
3992
+
3993
+ 53:22.200 --> 53:26.200
3994
+ Because those were two years where I felt like,
3995
+
3996
+ 53:26.200 --> 53:29.200
3997
+ from this philosophical or existential thing,
3998
+
3999
+ 53:29.200 --> 53:31.200
4000
+ I just felt that something was missing.
4001
+
4002
+ 53:31.200 --> 53:34.200
4003
+ If I could look back now and tell myself,
4004
+
4005
+ 53:34.200 --> 53:35.200
4006
+ I would have said exactly that.
4007
+
4008
+ 53:35.200 --> 53:38.200
4009
+ You're not getting any meaning out of your work personally
4010
+
4011
+ 53:38.200 --> 53:39.200
4012
+ right now.
4013
+
4014
+ 53:39.200 --> 53:41.200
4015
+ You should find a way to change that.
4016
+
4017
+ 53:41.200 --> 53:44.200
4018
+ And that's part of the pitch I used
4019
+
4020
+ 53:44.200 --> 53:46.200
4021
+ to basically everyone who joins Cruise today.
4022
+
4023
+ 53:46.200 --> 53:48.200
4024
+ It's like, hey, you've got that now by coming here.
4025
+
4026
+ 53:48.200 --> 53:51.200
4027
+ Well, maybe you needed the two years of that existential dread
4028
+
4029
+ 53:51.200 --> 53:53.200
4030
+ to develop the feeling that ultimately
4031
+
4032
+ 53:53.200 --> 53:55.200
4033
+ it was the fire that created Cruise.
4034
+
4035
+ 53:55.200 --> 53:56.200
4036
+ So you never know.
4037
+
4038
+ 53:56.200 --> 53:57.200
4039
+ You can't repair.
4040
+
4041
+ 53:57.200 --> 53:58.200
4042
+ Good theory, yeah.
4043
+
4044
+ 53:58.200 --> 53:59.200
4045
+ So last question.
4046
+
4047
+ 53:59.200 --> 54:02.200
4048
+ What does 2019 hold for Cruise?
4049
+
4050
+ 54:02.200 --> 54:05.200
4051
+ After this, I guess we're going to go and talk to your class.
4052
+
4053
+ 54:05.200 --> 54:08.200
4054
+ But one of the big things is going from prototype to production
4055
+
4056
+ 54:08.200 --> 54:09.200
4057
+ for autonomous cars.
4058
+
4059
+ 54:09.200 --> 54:10.200
4060
+ And what does that mean?
4061
+
4062
+ 54:10.200 --> 54:11.200
4063
+ What does that look like?
4064
+
4065
+ 54:11.200 --> 54:14.200
4066
+ 2019 for us is the year that we try to cross over
4067
+
4068
+ 54:14.200 --> 54:17.200
4069
+ that threshold and reach superhuman level of performance
4070
+
4071
+ 54:17.200 --> 54:20.200
4072
+ to some degree with the software and have all the other
4073
+
4074
+ 54:20.200 --> 54:23.200
4075
+ of the thousands of little building blocks in place
4076
+
4077
+ 54:23.200 --> 54:27.200
4078
+ to launch our first commercial product.
4079
+
4080
+ 54:27.200 --> 54:30.200
4081
+ So that's what's in store for us.
4082
+
4083
+ 54:30.200 --> 54:32.200
4084
+ And we've got a lot of work to do.
4085
+
4086
+ 54:32.200 --> 54:35.200
4087
+ We've got a lot of brilliant people working on it.
4088
+
4089
+ 54:35.200 --> 54:37.200
4090
+ So it's all up to us now.
4091
+
4092
+ 54:37.200 --> 54:38.200
4093
+ Yeah.
4094
+
4095
+ 54:38.200 --> 54:41.200
4096
+ So Charlie Miller and Chris Valasek are like the people I've
4097
+
4098
+ 54:41.200 --> 54:42.200
4099
+ crossed paths with.
4100
+
4101
+ 54:42.200 --> 54:43.200
4102
+ Oh, great, yeah.
4103
+
4104
+ 54:43.200 --> 54:46.200
4105
+ It sounds like you have an amazing team.
4106
+
4107
+ 54:46.200 --> 54:49.200
4108
+ So like I said, it's one of the most, I think, one of the most
4109
+
4110
+ 54:49.200 --> 54:52.200
4111
+ important problems in artificial intelligence of this century.
4112
+
4113
+ 54:52.200 --> 54:53.200
4114
+ It'll be one of the most defining.
4115
+
4116
+ 54:53.200 --> 54:55.200
4117
+ It's super exciting that you work on it.
4118
+
4119
+ 54:55.200 --> 54:59.200
4120
+ And the best of luck in 2019.
4121
+
4122
+ 54:59.200 --> 55:01.200
4123
+ I'm really excited to see what Cruise comes up with.
4124
+
4125
+ 55:01.200 --> 55:02.200
4126
+ Thank you.
4127
+
4128
+ 55:02.200 --> 55:03.200
4129
+ Thanks for having me today.
4130
+
4131
+ 55:03.200 --> 55:08.200
4132
+ Thank you.
4133
+
vtt/episode_015_small.vtt ADDED
@@ -0,0 +1,1826 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:05.440
4
+ The following is a conversation with Leslie Kaelbling. She is a roboticist and professor at
5
+
6
+ 00:05.440 --> 00:12.080
7
+ MIT. She is recognized for her work in reinforcement learning, planning, robot navigation, and several
8
+
9
+ 00:12.080 --> 00:18.560
10
+ other topics in AI. She won the IJCAI Computers and Thought Award and was the editor in chief
11
+
12
+ 00:18.560 --> 00:24.320
13
+ of the prestigious Journal of Machine Learning Research. This conversation is part of the
14
+
15
+ 00:24.320 --> 00:30.400
16
+ artificial intelligence podcast at MIT and beyond. If you enjoy it, subscribe on YouTube,
17
+
18
+ 00:30.400 --> 00:37.760
19
+ iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now,
20
+
21
+ 00:37.760 --> 00:45.360
22
+ here's my conversation with Leslie Kaelbling. What made me get excited about AI, I can say
23
+
24
+ 00:45.360 --> 00:49.920
25
+ that, is I read Gödel, Escher, Bach when I was in high school. That was pretty formative for me
26
+
27
+ 00:49.920 --> 00:59.280
28
+ because it exposed the interestingness of primitives and combination and how you can
29
+
30
+ 00:59.280 --> 01:06.000
31
+ make complex things out of simple parts and ideas of AI and what kinds of programs might
32
+
33
+ 01:06.000 --> 01:12.800
34
+ generate intelligent behavior. So you first fell in love with AI reasoning logic versus robots?
35
+
36
+ 01:12.800 --> 01:18.240
37
+ Yeah, the robots came because my first job, so I finished an undergraduate degree in philosophy
38
+
39
+ 01:18.240 --> 01:24.160
40
+ at Stanford and was about to finish masters in computer science and I got hired at SRI
41
+
42
+ 01:25.440 --> 01:30.960
43
+ in their AI lab and they were building a robot. It was a kind of a follow on to Shakey,
44
+
45
+ 01:30.960 --> 01:35.840
46
+ but all the Shakey people were not there anymore. And so my job was to try to get this robot to
47
+
48
+ 01:35.840 --> 01:41.200
49
+ do stuff and that's really kind of what got me interested in robots. So maybe taking a small
50
+
51
+ 01:41.200 --> 01:46.160
52
+ step back to your bachelor's at Stanford in philosophy, then a master's and PhD in computer science,
53
+
54
+ 01:46.160 --> 01:51.440
55
+ but the bachelor's in philosophy. So what was that journey like? What elements of philosophy
56
+
57
+ 01:52.320 --> 01:55.200
58
+ do you think you bring to your work in computer science?
59
+
60
+ 01:55.200 --> 02:00.080
61
+ So it's surprisingly relevant. So part of the reason that I didn't do a computer science
62
+
63
+ 02:00.080 --> 02:04.560
64
+ undergraduate degree was that there wasn't one at Stanford at the time, but that there's part of
65
+
66
+ 02:04.560 --> 02:09.200
67
+ philosophy and in fact Stanford has a special sub major in something called now Symbolic Systems,
68
+
69
+ 02:09.200 --> 02:15.520
70
+ which is logic, model theory, formal semantics of natural language. And so that's actually
71
+
72
+ 02:15.520 --> 02:19.680
73
+ a perfect preparation for work in AI and computer science.
74
+
75
+ 02:19.680 --> 02:23.760
76
+ That's kind of interesting. So if you were interested in artificial intelligence,
77
+
78
+ 02:26.000 --> 02:32.560
79
+ what kind of majors were people even thinking about taking? What, is it neuroscience? So besides
80
+
81
+ 02:32.560 --> 02:37.120
82
+ philosophy, what were you supposed to do if you were fascinated by the idea of creating
83
+
84
+ 02:37.120 --> 02:41.440
85
+ intelligence? There weren't enough people who did that for that even to be a conversation.
86
+
87
+ 02:41.440 --> 02:49.920
88
+ I mean, I think probably philosophy. I mean, it's interesting in my graduating class of
89
+
90
+ 02:49.920 --> 02:57.120
91
+ undergraduate philosophers, probably maybe slightly less than half went on in computer
92
+
93
+ 02:57.120 --> 03:02.240
94
+ science, slightly less than half went on in law, and like one or two went on in philosophy.
95
+
96
+ 03:03.360 --> 03:07.920
97
+ So it was a common kind of connection. Do you think AI researchers have a role,
98
+
99
+ 03:07.920 --> 03:12.480
100
+ be part time philosophers, or should they stick to the solid science and engineering
101
+
102
+ 03:12.480 --> 03:17.200
103
+ without sort of taking the philosophizing tangents? I mean, you work with robots,
104
+
105
+ 03:17.200 --> 03:22.960
106
+ you think about what it takes to create intelligent beings. Aren't you the perfect person to think
107
+
108
+ 03:22.960 --> 03:27.440
109
+ about the big picture philosophy at all? The parts of philosophy that are closest to AI,
110
+
111
+ 03:27.440 --> 03:30.400
112
+ I think, or at least the closest to AI that I think about are stuff like
113
+
114
+ 03:30.400 --> 03:38.400
115
+ belief and knowledge and denotation and that kind of stuff. It's quite formal, and it's
116
+
117
+ 03:38.400 --> 03:44.000
118
+ like just one step away from the kinds of computer science work that we do kind of routinely.
119
+
120
+ 03:45.680 --> 03:53.040
121
+ I think that there are important questions still about what you can do with a machine and what
122
+
123
+ 03:53.040 --> 03:57.680
124
+ you can't and so on. Although at least my personal view is that I'm completely a materialist,
125
+
126
+ 03:57.680 --> 04:01.920
127
+ and I don't think that there's any reason why we can't make a robot be
128
+
129
+ 04:02.800 --> 04:06.800
130
+ behaviorally indistinguishable from a human. And the question of whether it's
131
+
132
+ 04:08.480 --> 04:13.600
133
+ distinguishable internally, whether it's a zombie or not in philosophy terms, I actually don't,
134
+
135
+ 04:14.720 --> 04:16.960
136
+ I don't know, and I don't know if I care too much about that.
137
+
138
+ 04:16.960 --> 04:22.080
139
+ Right, but there are philosophical notions there, mathematical and philosophical,
140
+
141
+ 04:22.080 --> 04:27.520
142
+ because we don't know so much of how difficult that is, how difficult is a perception problem.
143
+
144
+ 04:27.520 --> 04:32.640
145
+ How difficult is the planning problem? How difficult is it to operate in this world successfully?
146
+
147
+ 04:32.640 --> 04:37.920
148
+ Because our robots are not currently as successful as human beings in many tasks.
149
+
150
+ 04:37.920 --> 04:44.320
151
+ The question about the gap between current robots and human beings borders a little bit
152
+
153
+ 04:44.320 --> 04:52.400
154
+ on philosophy. The expanse of knowledge that's required to operate in this world and the ability
155
+
156
+ 04:52.400 --> 04:57.280
157
+ to form common sense knowledge, the ability to reason about uncertainty, much of the work
158
+
159
+ 04:57.280 --> 05:05.040
160
+ you've been doing, there's open questions there that, I don't know, require to activate a certain
161
+
162
+ 05:06.320 --> 05:09.840
163
+ big picture view. To me, that doesn't seem like a philosophical gap at all.
164
+
165
+ 05:10.640 --> 05:14.240
166
+ To me, there is a big technical gap. There's a huge technical gap,
167
+
168
+ 05:15.040 --> 05:19.360
169
+ but I don't see any reason why it's more than a technical gap.
170
+
171
+ 05:19.360 --> 05:28.400
172
+ Perfect. When you mentioned AI, you mentioned SRI, and maybe can you describe to me when you
173
+
174
+ 05:28.400 --> 05:37.680
175
+ first fell in love with robotics, with robots, or inspired, so you mentioned Flakey, or Shakey, Flakey,
176
+
177
+ 05:38.400 --> 05:42.720
178
+ and what was the robot that first captured your imagination of what's possible?
179
+
180
+ 05:42.720 --> 05:47.920
181
+ Right. The first robot I worked with was Flakey. Shakey was a robot that the SRI people had built,
182
+
183
+ 05:47.920 --> 05:53.360
184
+ but by the time, I think when I arrived, it was sitting in a corner of somebody's office
185
+
186
+ 05:53.360 --> 06:00.640
187
+ dripping hydraulic fluid into a pan, but it's iconic. Really, everybody should read the Shakey
188
+
189
+ 06:00.640 --> 06:07.840
190
+ Tech Report because it has so many good ideas in it. They invented A* search and symbolic
191
+
192
+ 06:07.840 --> 06:15.520
193
+ planning and learning macro operators. They had low level kind of configuration space planning for
194
+
195
+ 06:15.520 --> 06:20.160
196
+ the robot. They had vision. That's the basic ideas of a ton of things.
197
+
198
+ 06:20.160 --> 06:27.920
199
+ Can you take a step back? Shakey was a mobile robot, but it could push objects,
200
+
201
+ 06:27.920 --> 06:31.680
202
+ and so it would move things around. With which actuator?
203
+
204
+ 06:31.680 --> 06:40.080
205
+ With its self, with its base. They had painted the baseboards black,
206
+
207
+ 06:40.080 --> 06:48.320
208
+ so it used vision to localize itself in a map. It detected objects. It could detect objects that
209
+
210
+ 06:48.320 --> 06:54.800
211
+ were surprising to it. It would plan and replan based on what it saw. It reasoned about whether
212
+
213
+ 06:54.800 --> 07:02.240
214
+ to look and take pictures. It really had the basics of so many of the things that we think about now.
215
+
216
+ 07:03.280 --> 07:05.360
217
+ How did it represent the space around it?
218
+
219
+ 07:05.360 --> 07:09.680
220
+ It had representations at a bunch of different levels of abstraction,
221
+
222
+ 07:09.680 --> 07:13.920
223
+ so it had, I think, a kind of an occupancy grid of some sort at the lowest level.
224
+
225
+ 07:14.880 --> 07:20.000
226
+ At the high level, it was abstract, symbolic kind of rooms and connectivity.
227
+
228
+ 07:20.000 --> 07:22.160
229
+ So where does Flakey come in?
230
+
231
+ 07:22.160 --> 07:29.600
232
+ Yeah, okay. I showed up at SRI and we were building a brand new robot. As I said, none of the people
233
+
234
+ 07:29.600 --> 07:34.240
235
+ from the previous project were there or involved anymore, so we were starting from scratch.
236
+
237
+ 07:34.240 --> 07:43.920
238
+ My advisor was Stan Rosenstein. He ended up being my thesis advisor. He was motivated by this idea
239
+
240
+ 07:43.920 --> 07:52.400
241
+ of situated computation or situated automata. The idea was that the tools of logical reasoning were
242
+
243
+ 07:52.400 --> 08:01.200
244
+ important, but possibly only for the engineers or designers to use in the analysis of a system,
245
+
246
+ 08:01.200 --> 08:05.600
247
+ but not necessarily to be manipulated in the head of the system itself.
248
+
249
+ 08:06.400 --> 08:09.920
250
+ So I might use logic to prove a theorem about the behavior of my robot,
251
+
252
+ 08:10.480 --> 08:14.400
253
+ even if the robot's not using logic in its head to prove theorems. So that was kind of the
254
+
255
+ 08:14.400 --> 08:22.160
256
+ distinction. And so the idea was to kind of use those principles to make a robot do stuff.
257
+
258
+ 08:22.800 --> 08:28.960
259
+ But a lot of the basic things we had to kind of learn for ourselves, because I had zero
260
+
261
+ 08:28.960 --> 08:32.160
262
+ background in robotics. I didn't know anything about control. I didn't know anything about
263
+
264
+ 08:32.160 --> 08:36.640
265
+ sensors. So we reinvented a lot of wheels on the way to getting that robot to do stuff.
266
+
267
+ 08:36.640 --> 08:39.120
268
+ Do you think that was an advantage or hindrance?
269
+
270
+ 08:39.120 --> 08:45.600
271
+ Oh, no. I'm big in favor of wheel reinvention, actually. I mean, I think you learned a lot
272
+
273
+ 08:45.600 --> 08:51.920
274
+ by doing it. It's important though to eventually have the pointers so that you can see what's
275
+
276
+ 08:51.920 --> 08:58.080
277
+ really going on. But I think you can appreciate much better the good solutions once you've
278
+
279
+ 08:58.080 --> 09:00.400
280
+ messed around a little bit on your own and found a bad one.
281
+
282
+ 09:00.400 --> 09:04.880
283
+ Yeah, I think you mentioned reinventing reinforcement learning and referring to
284
+
285
+ 09:04.880 --> 09:10.960
286
+ rewards as pleasures, a pleasure, I think, which I think is a nice name for it.
287
+
288
+ 09:12.800 --> 09:18.960
289
+ It's more fun, almost. Do you think you could tell the history of AI, machine learning,
290
+
291
+ 09:18.960 --> 09:23.520
292
+ reinforcement learning, how you think about it from the 50s to now?
293
+
294
+ 09:23.520 --> 09:29.440
295
+ One thing is that it oscillates. So things become fashionable and then they go out and
296
+
297
+ 09:29.440 --> 09:34.480
298
+ then something else becomes cool and then it goes out and so on. So there's some interesting
299
+
300
+ 09:34.480 --> 09:41.600
301
+ sociological process that actually drives a lot of what's going on. Early days was cybernetics and
302
+
303
+ 09:41.600 --> 09:48.320
304
+ control and the idea that of homeostasis, people who made these robots that could,
305
+
306
+ 09:48.320 --> 09:54.400
307
+ I don't know, try to plug into the wall when they needed power and then come loose and roll
308
+
309
+ 09:54.400 --> 10:00.960
310
+ around and do stuff. And then I think over time, they thought, well, that was inspiring, but people
311
+
312
+ 10:00.960 --> 10:04.880
313
+ said, no, no, no, we want to get maybe closer to what feels like real intelligence or human
314
+
315
+ 10:04.880 --> 10:15.040
316
+ intelligence. And then maybe the expert systems people tried to do that, but maybe a little
317
+
318
+ 10:15.040 --> 10:21.760
319
+ too superficially. So we get this surface understanding of what intelligence is like,
320
+
321
+ 10:21.760 --> 10:25.840
322
+ because I understand how a steel mill works and I can try to explain it to you and you can write
323
+
324
+ 10:25.840 --> 10:31.520
325
+ it down in logic and then we can make a computer infer that. And then that didn't work out.
326
+
327
+ 10:32.400 --> 10:37.520
328
+ But what's interesting, I think, is when a thing starts to not be working very well,
329
+
330
+ 10:38.720 --> 10:44.480
331
+ it's not only do we change methods, we change problems. So it's not like we have better ways
332
+
333
+ 10:44.480 --> 10:48.160
334
+ of doing the problem of the expert systems people are trying to do. We have no ways of
335
+
336
+ 10:48.160 --> 10:56.800
337
+ trying to do that problem. Oh, yeah, no, I think maybe a few. But we kind of give up on that problem
338
+
339
+ 10:56.800 --> 11:01.520
340
+ and we switch to a different problem. And we work that for a while and we make progress.
341
+
342
+ 11:01.520 --> 11:04.960
343
+ As a broad community. As a community. And there's a lot of people who would argue,
344
+
345
+ 11:04.960 --> 11:09.760
346
+ you don't give up on the problem. It's just the decrease in the number of people working on it.
347
+
348
+ 11:09.760 --> 11:13.920
349
+ You almost kind of like put it on the shelf. So we'll come back to this 20 years later.
350
+
351
+ 11:13.920 --> 11:19.360
352
+ Yeah, I think that's right. Or you might decide that it's malformed. Like you might say,
353
+
354
+ 11:21.600 --> 11:26.800
355
+ it's wrong to just try to make something that does superficial symbolic reasoning behave like a
356
+
357
+ 11:26.800 --> 11:34.000
358
+ doctor. You can't do that until you've had the sensory motor experience of being a doctor or
359
+
360
+ 11:34.000 --> 11:38.560
361
+ something. So there's arguments that say that that problem was not well formed. Or it could be
362
+
363
+ 11:38.560 --> 11:44.160
364
+ that it is well formed, but we just weren't approaching it well. So you mentioned that your
365
+
366
+ 11:44.160 --> 11:49.120
367
+ favorite part of logic and symbolic systems is that they give short names for large sets.
368
+
369
+ 11:49.840 --> 11:56.320
370
+ So there is some use to this. They use symbolic reasoning. So looking at expert systems
371
+
372
+ 11:56.960 --> 12:01.760
373
+ and symbolic computing, what do you think are the roadblocks that were hit in the 80s and 90s?
374
+
375
+ 12:02.640 --> 12:08.320
376
+ Okay, so right. So the fact that I'm not a fan of expert systems doesn't mean that I'm not a fan
377
+
378
+ 12:08.320 --> 12:16.640
379
+ of some kind of symbolic reasoning. So let's see roadblocks. Well, the main roadblock, I think,
380
+
381
+ 12:16.640 --> 12:25.040
382
+ was that the idea that humans could articulate their knowledge effectively into some kind of
383
+
384
+ 12:25.040 --> 12:30.560
385
+ logical statements. So it's not just the cost, the effort, but really just the capability of
386
+
387
+ 12:30.560 --> 12:37.120
388
+ doing it. Right. Because we're all experts in vision, but totally don't have introspective access
389
+
390
+ 12:37.120 --> 12:45.440
391
+ into how we do that. Right. And it's true that, I mean, I think the idea was, well, of course,
392
+
393
+ 12:45.440 --> 12:49.040
394
+ even people then would know, of course, I wouldn't ask you to please write down the rules that you
395
+
396
+ 12:49.040 --> 12:54.000
397
+ use for recognizing a water bottle. That's crazy. And everyone understood that. But we might ask
398
+
399
+ 12:54.000 --> 13:00.800
400
+ you to please write down the rules you use for deciding, I don't know what tie to put on or
401
+
402
+ 13:00.800 --> 13:08.240
403
+ or how to set up a microphone or something like that. But even those things, I think people maybe,
404
+
405
+ 13:08.880 --> 13:12.720
406
+ I think what they found, I'm not sure about this, but I think what they found was that the
407
+
408
+ 13:12.720 --> 13:19.120
409
+ so called experts could give explanations that sort of post hoc explanations for how and why
410
+
411
+ 13:19.120 --> 13:27.680
412
+ they did things, but they weren't necessarily very good. And then they depended on maybe some
413
+
414
+ 13:27.680 --> 13:33.280
415
+ kinds of perceptual things, which again, they couldn't really define very well. So I think,
416
+
417
+ 13:33.280 --> 13:38.800
418
+ I think fundamentally, I think that the underlying problem with that was the assumption that people
419
+
420
+ 13:38.800 --> 13:45.280
421
+ could articulate how and why they make their decisions. Right. So it's almost encoding the
422
+
423
+ 13:45.280 --> 13:51.440
424
+ knowledge from converting from expert to something that a machine can understand and reason with.
425
+
426
+ 13:51.440 --> 13:58.880
427
+ No, no, no, not even just encoding, but getting it out of you. Not not not writing it. I mean,
428
+
429
+ 13:58.880 --> 14:03.680
430
+ yes, hard also to write it down for the computer. But I don't think that people can
431
+
432
+ 14:04.240 --> 14:10.080
433
+ produce it. You can tell me a story about why you do stuff. But I'm not so sure that's the why.
434
+
435
+ 14:11.440 --> 14:16.960
436
+ Great. So there are still on the hierarchical planning side,
437
+
438
+ 14:16.960 --> 14:25.120
439
+ places where symbolic reasoning is very useful. So as you've talked about, so
440
+
441
+ 14:27.840 --> 14:34.400
442
+ where, so where's the gap? Yeah, okay, good. So saying that humans can't provide a
443
+
444
+ 14:34.400 --> 14:40.560
445
+ description of their reasoning processes. That's okay, fine. But that doesn't mean that it's not
446
+
447
+ 14:40.560 --> 14:44.880
448
+ good to do reasoning of various styles inside a computer. Those are just two orthogonal points.
449
+
450
+ 14:44.880 --> 14:50.560
451
+ So then the question is, what kind of reasoning should you do inside a computer?
452
+
453
+ 14:50.560 --> 14:55.680
454
+ Right. And the answer is, I think you need to do all different kinds of reasoning inside
455
+
456
+ 14:55.680 --> 15:01.680
457
+ a computer, depending on what kinds of problems you face. I guess the question is, what kind of
458
+
459
+ 15:01.680 --> 15:12.880
460
+ things can you encode symbolically so you can reason about? I think the idea about and and
461
+
462
+ 15:12.880 --> 15:18.080
463
+ even symbolic, I don't even like that terminology because I don't know what it means technically
464
+
465
+ 15:18.080 --> 15:24.240
466
+ and formally. I do believe in abstractions. So abstractions are critical, right? You cannot
467
+
468
+ 15:24.240 --> 15:30.240
469
+ reason at completely fine grain about everything in your life, right? You can't make a plan at the
470
+
471
+ 15:30.240 --> 15:37.680
472
+ level of images and torques for getting a PhD. So you have to reduce the size of the state space
473
+
474
+ 15:37.680 --> 15:43.040
475
+ and you have to reduce the horizon if you're going to reason about getting a PhD or even buying
476
+
477
+ 15:43.040 --> 15:50.080
478
+ the ingredients to make dinner. And so how can you reduce the spaces and the horizon of the
479
+
480
+ 15:50.080 --> 15:53.200
481
+ reasoning you have to do? And the answer is abstraction, spatial abstraction, temporal
482
+
483
+ 15:53.200 --> 15:58.080
484
+ abstraction. I think abstraction along the lines of goals is also interesting, like you might
485
+
486
+ 15:58.800 --> 16:03.840
487
+ or well, abstraction and decomposition. Goals is maybe more of a decomposition thing.
488
+
489
+ 16:03.840 --> 16:08.880
490
+ So I think that's where these kinds of, if you want to call it symbolic or discrete
491
+
492
+ 16:08.880 --> 16:15.440
493
+ models come in. You talk about a room of your house instead of your pose. You talk about
494
+
495
+ 16:16.800 --> 16:22.560
496
+ doing something during the afternoon instead of at 2.54. And you do that because it makes
497
+
498
+ 16:22.560 --> 16:30.000
499
+ your reasoning problem easier and also because you don't have enough information
500
+
501
+ 16:30.000 --> 16:37.120
502
+ to reason in high fidelity about your pose of your elbow at 2.35 this afternoon anyway.
503
+
504
+ 16:37.120 --> 16:39.440
505
+ Right. When you're trying to get a PhD.
506
+
507
+ 16:39.440 --> 16:41.600
508
+ Right. Or when you're doing anything really.
509
+
510
+ 16:41.600 --> 16:44.400
511
+ Yeah, okay. Except for at that moment. At that moment,
512
+
513
+ 16:44.400 --> 16:48.160
514
+ you do have to reason about the pose of your elbow, maybe. But then maybe you do that in some
515
+
516
+ 16:48.160 --> 16:55.680
517
+ continuous joint space kind of model. And so again, my biggest point about all of this is that
518
+
519
+ 16:55.680 --> 17:01.440
520
+ there should be, the dogma is not the thing, right? It shouldn't be that I am in favor
521
+
522
+ 17:01.440 --> 17:06.320
523
+ against symbolic reasoning and you're in favor against neural networks. It should be that just
524
+
525
+ 17:07.600 --> 17:12.240
526
+ computer science tells us what the right answer to all these questions is if we were smart enough
527
+
528
+ 17:12.240 --> 17:16.960
529
+ to figure it out. Yeah. When you try to actually solve the problem with computers, the right answer
530
+
531
+ 17:16.960 --> 17:22.880
532
+ comes out. You mentioned abstractions. I mean, neural networks form abstractions or rather,
533
+
534
+ 17:22.880 --> 17:30.320
535
+ there's automated ways to form abstractions and there's expert driven ways to form abstractions
536
+
537
+ 17:30.320 --> 17:35.920
538
+ and expert human driven ways. And humans just seems to be way better at forming abstractions
539
+
540
+ 17:35.920 --> 17:44.080
541
+ currently and certain problems. So when you're referring to 2.45 a.m. versus afternoon,
542
+
543
+ 17:44.960 --> 17:49.920
544
+ how do we construct that taxonomy? Is there any room for automated construction of such
545
+
546
+ 17:49.920 --> 17:55.200
547
+ abstractions? Oh, I think eventually, yeah. I mean, I think when we get to be better
548
+
549
+ 17:56.160 --> 18:02.240
550
+ and machine learning engineers, we'll build algorithms that build awesome abstractions.
551
+
552
+ 18:02.240 --> 18:06.720
553
+ That are useful in this kind of way that you're describing. Yeah. So let's then step from
554
+
555
+ 18:07.840 --> 18:14.400
556
+ the abstraction discussion and let's talk about POMDPs,
557
+
558
+ 18:14.400 --> 18:21.440
559
+ Partially Observable Markov Decision Processes. So uncertainty. So first, what are Markov Decision
560
+
561
+ 18:21.440 --> 18:27.520
562
+ Processes? What are Markov Decision Processes? Maybe how much of our world can be modeled as
563
+
564
+ 18:27.520 --> 18:32.080
565
+ MDPs? How much when you wake up in the morning and you're making breakfast, how do you think
566
+
567
+ 18:32.080 --> 18:38.080
568
+ of yourself as an MDP? So how do you think about MDPs and how they relate to our world?
569
+
570
+ 18:38.080 --> 18:43.040
571
+ Well, so there's a stance question, right? So a stance is a position that I take with
572
+
573
+ 18:43.040 --> 18:52.160
574
+ respect to a problem. So I as a researcher or a person who designed systems can decide to make
575
+
576
+ 18:52.160 --> 18:58.960
577
+ a model of the world around me in some terms. So I take this messy world and I say, I'm going to
578
+
579
+ 18:58.960 --> 19:04.640
580
+ treat it as if it were a problem of this formal kind, and then I can apply solution concepts
581
+
582
+ 19:04.640 --> 19:09.120
583
+ or algorithms or whatever to solve that formal thing, right? So of course, the world is not
584
+
585
+ 19:09.120 --> 19:14.080
586
+ anything. It's not an MDP or a POMDP. I don't know what it is, but I can model aspects of it
587
+
588
+ 19:14.080 --> 19:19.280
589
+ in some way or some other way. And when I model some aspect of it in a certain way, that gives me
590
+
591
+ 19:19.280 --> 19:25.600
592
+ some set of algorithms I can use. You can model the world in all kinds of ways. Some have some
593
+
594
+ 19:26.400 --> 19:32.880
595
+ are more accepting of uncertainty, more easily modeling uncertainty of the world. Some really
596
+
597
+ 19:32.880 --> 19:40.720
598
+ force the world to be deterministic. And so certainly MDPs model the uncertainty of the world.
599
+
600
+ 19:40.720 --> 19:47.200
601
+ Yes. Model some uncertainty. They model not present state uncertainty, but they model uncertainty
602
+
603
+ 19:47.200 --> 19:53.840
604
+ in the way the future will unfold. Right. So what are Markov decision processes?
605
+
606
+ 19:53.840 --> 19:57.680
607
+ So Markov decision process is a model. It's a kind of a model that you can make that says,
608
+
609
+ 19:57.680 --> 20:05.600
610
+ I know completely the current state of my system. And what it means to be a state is that I have
611
+
612
+ 20:05.600 --> 20:10.720
613
+ all the information right now that will let me make predictions about the future as well as I
614
+
615
+ 20:10.720 --> 20:14.640
616
+ can. So that remembering anything about my history wouldn't make my predictions any better.
617
+
618
+ 20:18.720 --> 20:23.680
619
+ But then it also says that then I can take some actions that might change the state of the world
620
+
621
+ 20:23.680 --> 20:28.800
622
+ and that I don't have a deterministic model of those changes. I have a probabilistic model
623
+
624
+ 20:28.800 --> 20:35.600
625
+ of how the world might change. It's a useful model for some kinds of systems. I mean, it's
626
+
627
+ 20:35.600 --> 20:43.280
628
+ certainly not a good model for most problems. I think because for most problems, you don't
629
+
630
+ 20:43.280 --> 20:49.680
631
+ actually know the state. For most problems, it's partially observed. So that's now a different
632
+
633
+ 20:49.680 --> 20:56.480
634
+ problem class. So okay, that's where the POMDPs, the partially observable Markov decision
635
+
636
+ 20:56.480 --> 21:03.600
637
+ processes, step in. So how do they address the fact that you can't observe, that you have incomplete
638
+
639
+ 21:03.600 --> 21:09.360
640
+ information about most of the world around you? Right. So now the idea is we still kind of postulate
641
+
642
+ 21:09.360 --> 21:14.080
643
+ that there exists a state. We think that there is some information about the world out there
644
+
645
+ 21:14.640 --> 21:18.800
646
+ such that if we knew that we could make good predictions, but we don't know the state.
647
+
648
+ 21:18.800 --> 21:23.840
649
+ And so then we have to think about how, but we do get observations. Maybe I get images or I hear
650
+
651
+ 21:23.840 --> 21:29.520
652
+ things or I feel things and those might be local or noisy. And so therefore they don't tell me
653
+
654
+ 21:29.520 --> 21:35.440
655
+ everything about what's going on. And then I have to reason about given the history of actions
656
+
657
+ 21:35.440 --> 21:40.000
658
+ I've taken and observations I've gotten, what do I think is going on in the world? And then
659
+
660
+ 21:40.000 --> 21:43.920
661
+ given my own kind of uncertainty about what's going on in the world, I can decide what actions to
662
+
663
+ 21:43.920 --> 21:51.120
664
+ take. And so difficult is this problem of planning under uncertainty in your view and your
665
+
666
+ 21:51.120 --> 21:57.840
667
+ long experience of modeling the world, trying to deal with this uncertainty in
668
+
669
+ 21:57.840 --> 22:04.240
670
+ especially in real world systems. Optimal planning for even discrete POMDPs can be
671
+
672
+ 22:04.240 --> 22:12.000
673
+ undecidable depending on how you set it up. And so lots of people say I don't use POMDPs
674
+
675
+ 22:12.000 --> 22:17.600
676
+ because they are intractable. And I think that that's a kind of a very funny thing to say because
677
+
678
+ 22:18.880 --> 22:23.120
679
+ the problem you have to solve is the problem you have to solve. So if the problem you have to
680
+
681
+ 22:23.120 --> 22:28.160
682
+ solve is intractable, that's what makes us AI people, right? So we solve, we understand that
683
+
684
+ 22:28.160 --> 22:34.320
685
+ the problem we're solving is wildly intractable that we will never be able to solve it optimally,
686
+
687
+ 22:34.320 --> 22:41.360
688
+ at least I don't. Yeah, right. So later we can come back to an idea about bounded optimality
689
+
690
+ 22:41.360 --> 22:44.960
691
+ and something. But anyway, we can't come up with optimal solutions to these problems.
692
+
693
+ 22:45.520 --> 22:51.200
694
+ So we have to make approximations. Approximations in modeling approximations in solution algorithms
695
+
696
+ 22:51.200 --> 22:58.160
697
+ and so on. And so I don't have a problem with saying, yeah, my problem actually it is POMDP in
698
+
699
+ 22:58.160 --> 23:02.880
700
+ continuous space with continuous observations. And it's so computationally complex. I can't
701
+
702
+ 23:02.880 --> 23:10.320
703
+ even think about it's, you know, big O whatever. But that doesn't prevent me from it helps me
704
+
705
+ 23:10.320 --> 23:17.360
706
+ gives me some clarity to think about it that way. And to then take steps to make approximation
707
+
708
+ 23:17.360 --> 23:22.080
709
+ after approximation to get down to something that's like computable in some reasonable time.
710
+
711
+ 23:22.080 --> 23:27.920
712
+ When you think about optimality, you know, the community broadly has shifted on that, I think,
713
+
714
+ 23:27.920 --> 23:35.600
715
+ a little bit in how much they value the idea of optimality of chasing an optimal solution.
716
+
717
+ 23:35.600 --> 23:42.240
718
+ How is your views of chasing an optimal solution changed over the years when you work with robots?
719
+
720
+ 23:42.240 --> 23:49.920
721
+ That's interesting. I think we have a little bit of a methodological crisis, actually,
722
+
723
+ 23:49.920 --> 23:54.000
724
+ from the theoretical side. I mean, I do think that theory is important and that right now we're not
725
+
726
+ 23:54.000 --> 24:00.640
727
+ doing much of it. So there's lots of empirical hacking around and training this and doing that
728
+
729
+ 24:00.640 --> 24:05.440
730
+ and reporting numbers. But is it good? Is it bad? We don't know. It's very hard to say things.
731
+
732
+ 24:08.240 --> 24:15.920
733
+ And if you look at like computer science theory, so people talked for a while,
734
+
735
+ 24:15.920 --> 24:21.280
736
+ everyone was about solving problems optimally or completely. And then there were interesting
737
+
738
+ 24:21.280 --> 24:27.520
739
+ relaxations. So people look at, oh, can I, are there regret bounds? Or can I do some kind of,
740
+
741
+ 24:27.520 --> 24:33.280
742
+ you know, approximation? Can I prove something that I can approximately solve this problem or
743
+
744
+ 24:33.280 --> 24:38.160
745
+ that I get closer to the solution as I spend more time and so on? What's interesting, I think,
746
+
747
+ 24:38.160 --> 24:47.680
748
+ is that we don't have good approximate solution concepts for very difficult problems. Right?
749
+
750
+ 24:47.680 --> 24:52.640
751
+ I like to, you know, I like to say that I'm interested in doing a very bad job of very big
752
+
753
+ 24:52.640 --> 25:02.960
754
+ problems. Right. So very bad job, very big problems. I like to do that. But I wish I could say
755
+
756
+ 25:02.960 --> 25:09.600
757
+ something. I wish I had a, I don't know, some kind of a formal solution concept
758
+
759
+ 25:10.320 --> 25:16.640
760
+ that I could use to say, oh, this algorithm actually, it gives me something. Like, I know
761
+
762
+ 25:16.640 --> 25:21.760
763
+ what I'm going to get. I can do something other than just run it and get out. So that notion
764
+
765
+ 25:21.760 --> 25:28.640
766
+ is still somewhere deeply compelling to you. The notion that you can say, you can drop
767
+
768
+ 25:28.640 --> 25:33.440
769
+ a thing on the table that says, you can expect this: this algorithm will give me some good results.
770
+
771
+ 25:33.440 --> 25:38.960
772
+ I hope there's, I hope science will, I mean, there's engineering and there's science,
773
+
774
+ 25:38.960 --> 25:44.720
775
+ I think that they're not exactly the same. And I think right now we're making huge engineering
776
+
777
+ 25:45.600 --> 25:49.840
778
+ like leaps and bounds. So the engineering is running away ahead of the science, which is cool.
779
+
780
+ 25:49.840 --> 25:54.800
781
+ And often how it goes, right? So we're making things and nobody knows how and why they work,
782
+
783
+ 25:54.800 --> 26:03.200
784
+ roughly. But we need to turn that into science. There's some form. It's, yeah,
785
+
786
+ 26:03.200 --> 26:07.200
787
+ there's some room for formalizing. We need to know what the principles are. Why does this work?
788
+
789
+ 26:07.200 --> 26:12.480
790
+ Why does that not work? I mean, for while people build bridges by trying, but now we can often
791
+
792
+ 26:12.480 --> 26:17.520
793
+ predict whether it's going to work or not without building it. Can we do that for learning systems
794
+
795
+ 26:17.520 --> 26:23.600
796
+ or for robots? See, your hope is from a materialistic perspective that intelligence,
797
+
798
+ 26:23.600 --> 26:28.000
799
+ artificial intelligence systems, robots are kind of just fancier bridges.
800
+
801
+ 26:29.200 --> 26:33.600
802
+ Belief space. What's the difference between belief space and state space? So we mentioned
803
+
804
+ 26:33.600 --> 26:42.000
805
+ MDPs, POMDPs, you're reasoning about, you sense the world, there's a state. What's this belief
806
+
807
+ 26:42.000 --> 26:49.040
808
+ space idea? Yeah. Okay, that sounds good. It sounds good. So belief space, that is, instead of
809
+
810
+ 26:49.040 --> 26:54.880
811
+ thinking about what's the state of the world and trying to control that as a robot, I think about
812
+
813
+ 26:55.760 --> 27:01.120
814
+ what is the space of beliefs that I could have about the world? What's, if I think of a belief
815
+
816
+ 27:01.120 --> 27:06.640
817
+ as a probability distribution of the ways the world could be, a belief state is a distribution,
818
+
819
+ 27:06.640 --> 27:13.040
820
+ and then my control problem, if I'm reasoning about how to move through a world I'm uncertain about,
821
+
822
+ 27:14.160 --> 27:18.880
823
+ my control problem is actually the problem of controlling my beliefs. So I think about taking
824
+
825
+ 27:18.880 --> 27:23.120
826
+ actions, not just what effect they'll have on the world outside, but what effect they'll have on my
827
+
828
+ 27:23.120 --> 27:29.920
829
+ own understanding of the world outside. And so that might compel me to ask a question or look
830
+
831
+ 27:29.920 --> 27:35.280
832
+ somewhere to gather information, which may not really change the world state, but it changes
833
+
834
+ 27:35.280 --> 27:43.440
835
+ my own belief about the world. That's a powerful way to empower the agent to reason about the
836
+
837
+ 27:43.440 --> 27:47.840
838
+ world, to explore the world. What kind of problems does it allow you to solve to
839
+
840
+ 27:49.040 --> 27:54.560
841
+ consider belief space versus just state space? Well, any problem that requires deliberate
842
+
843
+ 27:54.560 --> 28:02.800
844
+ information gathering. So if in some problems, like chess, there's no uncertainty, or maybe
845
+
846
+ 28:02.800 --> 28:06.320
847
+ there's uncertainty about the opponent. There's no uncertainty about the state.
848
+
849
+ 28:08.400 --> 28:14.000
850
+ And some problems, there's uncertainty, but you gather information as you go. You might say,
851
+
852
+ 28:14.000 --> 28:18.240
853
+ oh, I'm driving my autonomous car down the road, and it doesn't know perfectly where it is, but
854
+
855
+ 28:18.240 --> 28:23.280
856
+ the LiDARs are all going all the time. So I don't have to think about whether to gather information.
857
+
858
+ 28:24.160 --> 28:28.800
859
+ But if you're a human driving down the road, you sometimes look over your shoulder to see what's
860
+
861
+ 28:28.800 --> 28:36.320
862
+ going on behind you in the lane. And you have to decide whether you should do that now. And you
863
+
864
+ 28:36.320 --> 28:40.400
865
+ have to trade off the fact that you're not seeing in front of you, and you're looking behind you,
866
+
867
+ 28:40.400 --> 28:45.440
868
+ and how valuable is that information, and so on. And so to make choices about information
869
+
870
+ 28:45.440 --> 28:56.080
871
+ gathering, you have to reason in belief space. Also to just take into account your own uncertainty
872
+
873
+ 28:56.080 --> 29:03.280
874
+ before trying to do things. So you might say, if I understand where I'm standing relative to the
875
+
876
+ 29:03.280 --> 29:08.880
877
+ door jamb, pretty accurately, then it's okay for me to go through the door. But if I'm really not
878
+
879
+ 29:08.880 --> 29:14.240
880
+ sure where the door is, then it might be better to not do that right now. The degree of your
881
+
882
+ 29:14.240 --> 29:18.800
883
+ uncertainty about the world is actually part of the thing you're trying to optimize in forming the
884
+
885
+ 29:18.800 --> 29:26.560
886
+ plan, right? So this idea of a long horizon of planning for a PhD or just even how to get out
887
+
888
+ 29:26.560 --> 29:32.720
889
+ of the house or how to make breakfast, you show this presentation of the WTF, where's the fork
890
+
891
+ 29:33.360 --> 29:42.000
892
+ of a robot looking at a sink. And can you describe how we plan in this world with this idea of hierarchical
893
+
894
+ 29:42.000 --> 29:52.000
895
+ planning we've mentioned? Yeah, how can a robot hope to plan about something with such a long
896
+
897
+ 29:52.000 --> 29:58.400
898
+ horizon where the goal is quite far away? People since probably reasoning began have thought about
899
+
900
+ 29:58.400 --> 30:02.560
901
+ hierarchical reasoning, the temporal hierarchy in particular. Well, there's spatial hierarchy,
902
+
903
+ 30:02.560 --> 30:06.240
904
+ but let's talk about temporal hierarchy. So you might say, oh, I have this long
905
+
906
+ 30:06.240 --> 30:13.680
907
+ execution I have to do, but I can divide it into some segments abstractly, right? So maybe
908
+
909
+ 30:14.400 --> 30:19.360
910
+ have to get out of the house, I have to get in the car, I have to drive, and so on. And so
911
+
912
+ 30:20.800 --> 30:25.920
913
+ you can plan if you can build abstractions. So this we started out by talking about abstractions,
914
+
915
+ 30:25.920 --> 30:30.080
916
+ and we're back to that now. If you can build abstractions in your state space,
917
+
918
+ 30:30.080 --> 30:37.760
919
+ and abstractions, sort of temporal abstractions, then you can make plans at a high level. And you
920
+
921
+ 30:37.760 --> 30:42.320
922
+ can say, I'm going to go to town, and then I'll have to get gas, and I can go here, and I can do
923
+
924
+ 30:42.320 --> 30:47.360
925
+ this other thing. And you can reason about the dependencies and constraints among these actions,
926
+
927
+ 30:47.920 --> 30:55.600
928
+ again, without thinking about the complete details. What we do in our hierarchical planning work is
929
+
930
+ 30:55.600 --> 31:00.960
931
+ then say, all right, I make a plan at a high level of abstraction. I have to have some
932
+
933
+ 31:00.960 --> 31:06.640
934
+ reason to think that it's feasible without working it out in complete detail. And that's
935
+
936
+ 31:06.640 --> 31:10.800
937
+ actually the interesting step. I always like to talk about walking through an airport, like
938
+
939
+ 31:12.160 --> 31:16.720
940
+ you can plan to go to New York and arrive at the airport, and then find yourself in an office
941
+
942
+ 31:16.720 --> 31:21.520
943
+ building later. You can't even tell me in advance what your plan is for walking through the airport,
944
+
945
+ 31:21.520 --> 31:26.320
946
+ partly because you're too lazy to think about it maybe, but partly also because you just don't
947
+
948
+ 31:26.320 --> 31:30.960
949
+ have the information. You don't know what gate you're landing in or what people are going to be
950
+
951
+ 31:30.960 --> 31:37.040
952
+ in front of you or anything. So there's no point in planning in detail. But you have to have,
953
+
954
+ 31:38.000 --> 31:43.760
955
+ you have to make a leap of faith that you can figure it out once you get there. And it's really
956
+
957
+ 31:43.760 --> 31:52.000
958
+ interesting to me how you arrive at that. How do you, so you have learned over your lifetime to be
959
+
960
+ 31:52.000 --> 31:56.800
961
+ able to make some kinds of predictions about how hard it is to achieve some kinds of sub goals.
962
+
963
+ 31:57.440 --> 32:01.440
964
+ And that's critical. Like you would never plan to fly somewhere if you couldn't,
965
+
966
+ 32:02.000 --> 32:05.200
967
+ didn't have a model of how hard it was to do some of the intermediate steps.
968
+
969
+ 32:05.200 --> 32:09.440
970
+ So one of the things we're thinking about now is how do you do this kind of very aggressive
971
+
972
+ 32:09.440 --> 32:16.400
973
+ generalization to situations that you haven't been in and so on to predict how long will it
974
+
975
+ 32:16.400 --> 32:20.400
976
+ take to walk through the Kuala Lumpur airport? Like you could give me an estimate and it wouldn't
977
+
978
+ 32:20.400 --> 32:26.800
979
+ be crazy. And you have to have an estimate of that in order to make plans that involve
980
+
981
+ 32:26.800 --> 32:30.080
982
+ walking through the Kuala Lumpur airport, even if you don't need to know it in detail.
983
+
984
+ 32:31.040 --> 32:35.520
985
+ So I'm really interested in these kinds of abstract models and how do we acquire them.
986
+
987
+ 32:35.520 --> 32:39.760
988
+ But once we have them, we can use them to do hierarchical reasoning, which I think is very
989
+
990
+ 32:39.760 --> 32:46.400
991
+ important. Yeah, there's this notion of goal regression and preimage backchaining.
992
+
993
+ 32:46.400 --> 32:53.760
994
+ This idea of starting at the goal and just forming these big clouds of states. I mean,
995
+
996
+ 32:54.560 --> 33:01.840
997
+ it's almost like saying to the airport, you know, you know, once you show up to the airport,
998
+
999
+ 33:01.840 --> 33:08.560
1000
+ you're like a few steps away from the goal. So thinking of it this way is kind of interesting.
1001
+
1002
+ 33:08.560 --> 33:15.600
1003
+ I don't know if you have further comments on that of starting at the goal. Yeah, I mean,
1004
+
1005
+ 33:15.600 --> 33:22.400
1006
+ it's interesting that Herb Simon back in the early days of AI talked a lot about
1007
+
1008
+ 33:22.400 --> 33:26.960
1009
+ means ends reasoning and reasoning back from the goal. There's a kind of an intuition that people
1010
+
1011
+ 33:26.960 --> 33:34.960
1012
+ have that the number of the state space is big, the number of actions you could take is really big.
1013
+
1014
+ 33:35.760 --> 33:39.440
1015
+ So if you say, here I sit and I want to search forward from where I am, what are all the things
1016
+
1017
+ 33:39.440 --> 33:45.040
1018
+ I could do? That's just overwhelming. If you say, if you can reason at this other level and say,
1019
+
1020
+ 33:45.040 --> 33:49.520
1021
+ here's what I'm hoping to achieve, what can I do to make that true that somehow the
1022
+
1023
+ 33:49.520 --> 33:54.000
1024
+ branching is smaller? Now, what's interesting is that like in the AI planning community,
1025
+
1026
+ 33:54.000 --> 33:59.120
1027
+ that hasn't worked out in the class of problems that they look at and the methods that they tend
1028
+
1029
+ 33:59.120 --> 34:04.400
1030
+ to use, it hasn't turned out that it's better to go backward. It's still kind of my intuition
1031
+
1032
+ 34:04.400 --> 34:10.000
1033
+ that it is, but I can't prove that to you right now. Right. I share your intuition, at least for us
1034
+
1035
+ 34:10.720 --> 34:19.920
1036
+ mere humans. Speaking of which, maybe now we can take a little step into that
1037
+
1038
+ 34:19.920 --> 34:27.280
1039
+ philosophy circle, how hard would it, when you think about human life, you give those examples
1040
+
1041
+ 34:27.280 --> 34:32.400
1042
+ often, how hard do you think it is to formulate human life as a planning problem or aspects of
1043
+
1044
+ 34:32.400 --> 34:37.600
1045
+ human life? So when you look at robots, you're often trying to think about object manipulation,
1046
+
1047
+ 34:38.640 --> 34:46.240
1048
+ tasks about moving a thing. When you take a slight step outside the room, let the robot
1049
+
1050
+ 34:46.240 --> 34:54.480
1051
+ leave and he'll get lunch or maybe try to pursue more fuzzy goals. How hard do you think is that
1052
+
1053
+ 34:54.480 --> 35:00.720
1054
+ problem? If you were to try to maybe put another way, try to formulate human life as a planning
1055
+
1056
+ 35:00.720 --> 35:05.680
1057
+ problem. Well, that would be a mistake. I mean, it's not all a planning problem, right? I think
1058
+
1059
+ 35:05.680 --> 35:11.920
1060
+ it's really, really important that we understand that you have to put together pieces and parts
1061
+
1062
+ 35:11.920 --> 35:18.640
1063
+ that have different styles of reasoning and representation and learning. I think it seems
1064
+
1065
+ 35:18.640 --> 35:25.680
1066
+ probably clear to anybody that it can't all be this or all be that. Brains aren't all like this
1067
+
1068
+ 35:25.680 --> 35:30.160
1069
+ or all like that, right? They have different pieces and parts and substructure and so on.
1070
+
1071
+ 35:30.160 --> 35:34.400
1072
+ So I don't think that there's any good reason to think that there's going to be like one true
1073
+
1074
+ 35:34.400 --> 35:39.600
1075
+ algorithmic thing that's going to do the whole job. Just a bunch of pieces together,
1076
+
1077
+ 35:39.600 --> 35:48.160
1078
+ design to solve a bunch of specific problems. Or maybe styles of problems. I mean,
1079
+
1080
+ 35:48.160 --> 35:52.880
1081
+ there's probably some reasoning that needs to go on in image space. I think, again,
1082
+
1083
+ 35:55.840 --> 35:59.440
1084
+ there's this model base versus model free idea, right? So in reinforcement learning,
1085
+
1086
+ 35:59.440 --> 36:06.000
1087
+ people talk about, oh, should I learn? I could learn a policy just straight up a way of behaving.
1088
+
1089
+ 36:06.000 --> 36:11.360
1090
+ I could learn it's popular in a value function. That's some kind of weird intermediate ground.
1091
+
1092
+ 36:13.360 --> 36:17.440
1093
+ Or I could learn a transition model, which tells me something about the dynamics of the world.
1094
+
1095
+ 36:18.320 --> 36:22.560
1096
+ If I take a, imagine that I learn a transition model and I couple it with a planner and I
1097
+
1098
+ 36:22.560 --> 36:29.520
1099
+ draw a box around that, I have a policy again. It's just stored a different way, right?
1100
+
1101
+ 36:30.800 --> 36:34.560
1102
+ But it's just as much of a policy as the other policy. It's just I've made, I think,
1103
+
1104
+ 36:34.560 --> 36:41.920
1105
+ the way I see it is it's a time space trade off in computation, right? A more overt policy
1106
+
1107
+ 36:41.920 --> 36:47.680
1108
+ representation. Maybe it takes more space, but maybe I can compute quickly what action I should
1109
+
1110
+ 36:47.680 --> 36:52.880
1111
+ take. On the other hand, maybe a very compact model of the world dynamics plus a planner
1112
+
1113
+ 36:53.680 --> 36:58.240
1114
+ lets me compute what action to take, just more slowly. There's no, I mean, I don't think,
1115
+
1116
+ 36:58.240 --> 37:04.240
1117
+ there's no argument to be had. It's just like a question of what form of computation is best
1118
+
1119
+ 37:04.240 --> 37:12.720
1120
+ for us. For the various sub problems. Right. So, and so like learning to do algebra manipulations
1121
+
1122
+ 37:12.720 --> 37:17.280
1123
+ for some reason is, I mean, that's probably going to want naturally a sort of a different
1124
+
1125
+ 37:17.280 --> 37:22.640
1126
+ representation than riding a unicycle. The time constraints on the unicycle are serious.
1127
+
1128
+ 37:22.640 --> 37:28.640
1129
+ The space is maybe smaller. I don't know. But so it could be the more human side of
1130
+
1131
+ 37:28.640 --> 37:36.240
1132
+ falling in love, having a relationship, that might be another style of, I have no idea how to model
1133
+
1134
+ 37:36.240 --> 37:43.280
1135
+ that. Yeah, let's first solve the algebra and the object manipulation. What do you think
1136
+
1137
+ 37:44.160 --> 37:50.480
1138
+ is harder, perception or planning? Perception, that's, well, understanding, that's why.
1139
+
1140
+ 37:51.920 --> 37:55.440
1141
+ So what do you think is so hard about perception, about understanding the world around you?
1142
+
1143
+ 37:55.440 --> 38:03.520
1144
+ Well, I mean, I think the big question is representational. Hugely, the question is
1145
+
1146
+ 38:03.520 --> 38:12.560
1147
+ representation. So perception has made great strides lately, right? And we can classify images and we
1148
+
1149
+ 38:12.560 --> 38:17.760
1150
+ can play certain kinds of games and predict how to steer the car and all this sort of stuff.
1151
+
1152
+ 38:17.760 --> 38:28.160
1153
+ I don't think we have a very good idea of what perception should deliver, right? So if you
1154
+
1155
+ 38:28.160 --> 38:32.000
1156
+ if you believe in modularity, okay, there's a very strong view which says
1157
+
1158
+ 38:34.640 --> 38:40.400
1159
+ we shouldn't build in any modularity, we should make a giant gigantic neural network,
1160
+
1161
+ 38:40.400 --> 38:44.000
1162
+ train it end to end to do the thing. And that's the best way forward.
1163
+
1164
+ 38:44.000 --> 38:51.280
1165
+ And it's hard to argue with that except on a sample complexity basis, right? So you might say,
1166
+
1167
+ 38:51.280 --> 38:55.120
1168
+ oh, well, if I want to do end to end reinforcement learning on this giant giant neural network,
1169
+
1170
+ 38:55.120 --> 39:00.800
1171
+ it's going to take a lot of data and a lot of like broken robots and stuff. So
1172
+
1173
+ 39:02.640 --> 39:10.000
1174
+ then the only answer is to say, okay, we have to build something in, build in some structure
1175
+
1176
+ 39:10.000 --> 39:14.080
1177
+ or some bias, we know from theory of machine learning, the only way to cut down the sample
1178
+
1179
+ 39:14.080 --> 39:20.320
1180
+ complexity is to kind of somehow cut down the hypothesis space, you can do that by
1181
+
1182
+ 39:20.320 --> 39:24.880
1183
+ building in bias. There's all kinds of reasons to think that nature built bias into humans.
1184
+
1185
+ 39:27.520 --> 39:32.800
1186
+ Convolution is a bias, right? It's a very strong bias and it's a very critical bias.
1187
+
1188
+ 39:32.800 --> 39:39.520
1189
+ So my view is that we should look for more things that are like convolution, but that address other
1190
+
1191
+ 39:39.520 --> 39:43.440
1192
+ aspects of reasoning, right? So convolution helps us a lot with a certain kind of spatial
1193
+
1194
+ 39:43.440 --> 39:51.520
1195
+ reasoning that's quite close to the imaging. I think there's other ideas like that,
1196
+
1197
+ 39:52.400 --> 39:58.240
1198
+ maybe some amount of forward search, maybe some notions of abstraction, maybe the notion that
1199
+
1200
+ 39:58.240 --> 40:02.560
1201
+ objects exist, actually, I think that's pretty important. And a lot of people won't give you
1202
+
1203
+ 40:02.560 --> 40:06.720
1204
+ that to start with, right? So almost like a convolution in the
1205
+
1206
+ 40:08.640 --> 40:13.600
1207
+ object, semantic object space, or some kind of ideas in there. That's right.
1208
+
1209
+ 40:13.600 --> 40:17.680
1210
+ And people are like the graph, graph convolutions are an idea that are related to
1211
+
1212
+ 40:17.680 --> 40:26.240
1213
+ relational representations. And so I think there are, so, I've come far afield from perception,
1214
+
1215
+ 40:26.240 --> 40:33.200
1216
+ but I think, I think the thing that's going to take perception to kind of the next step is
1217
+
1218
+ 40:33.200 --> 40:38.000
1219
+ actually understanding better what it should produce, right? So what are we going to do with
1220
+
1221
+ 40:38.000 --> 40:41.920
1222
+ the output of it, right? It's fine when what we're going to do with the output is steer,
1223
+
1224
+ 40:41.920 --> 40:48.880
1225
+ it's less clear when we're just trying to make one integrated intelligent agent,
1226
+
1227
+ 40:48.880 --> 40:53.520
1228
+ what should the output of perception be? We have no idea. And how should that hook up to the other
1229
+
1230
+ 40:53.520 --> 41:00.240
1231
+ stuff? We don't know. So I think the pressing question is, what kinds of structure can we
1232
+
1233
+ 41:00.240 --> 41:05.520
1234
+ build in that are like the moral equivalent of convolution that will make a really awesome
1235
+
1236
+ 41:05.520 --> 41:10.240
1237
+ superstructure that then learning can kind of progress on efficiently?
1238
+
1239
+ 41:10.240 --> 41:14.080
1240
+ I agree. Very compelling description of actually where we stand with the perception problem.
1241
+
1242
+ 41:15.280 --> 41:19.120
1243
+ You're teaching a course on embodied intelligence. What do you think it takes to
1244
+
1245
+ 41:19.120 --> 41:24.800
1246
+ build a robot with human level intelligence? I don't know. If we knew, we would do it.
1247
+
1248
+ 41:27.680 --> 41:34.240
1249
+ If you were to, I mean, okay, so do you think a robot needs to have a self awareness,
1250
+
1251
+ 41:36.000 --> 41:44.160
1252
+ consciousness, fear of mortality? Or is it, is it simpler than that? Or is consciousness a simple
1253
+
1254
+ 41:44.160 --> 41:51.680
1255
+ thing? Do you think about these notions? I don't think much about consciousness. Even most philosophers
1256
+
1257
+ 41:51.680 --> 41:56.560
1258
+ who care about it will give you that you could have robots that are zombies, right, that behave
1259
+
1260
+ 41:56.560 --> 42:01.360
1261
+ like humans but are not conscious. And I, at this moment, would be happy enough for that. So I'm not
1262
+
1263
+ 42:01.360 --> 42:06.240
1264
+ really worried one way or the other. So on the technical side, you're not thinking of the use of self
1265
+
1266
+ 42:06.240 --> 42:13.760
1267
+ awareness? Well, but I, okay, but then what does self awareness mean? I mean, that you need to have
1268
+
1269
+ 42:13.760 --> 42:18.800
1270
+ some part of the system that can observe other parts of the system and tell whether they're
1271
+
1272
+ 42:18.800 --> 42:24.560
1273
+ working well or not. That seems critical. So does that count as, I mean, does that count as
1274
+
1275
+ 42:24.560 --> 42:30.560
1276
+ self awareness or not? Well, it depends on whether you think that there's somebody at home who can
1277
+
1278
+ 42:30.560 --> 42:35.600
1279
+ articulate whether they're self aware. But clearly, if I have like, you know, some piece of code
1280
+
1281
+ 42:35.600 --> 42:41.120
1282
+ that's counting how many times this procedure gets executed, that's a kind of self awareness,
1283
+
1284
+ 42:41.120 --> 42:44.560
1285
+ right? So there's a big spectrum. It's clear you have to have some of it.
1286
+
1287
+ 42:44.560 --> 42:48.800
1288
+ Right. You know, we're quite far away on many dimensions, but what is the direction of research
1289
+
1290
+ 42:49.600 --> 42:54.720
1291
+ that's most compelling to you for, you know, trying to achieve human level intelligence
1292
+
1293
+ 42:54.720 --> 43:00.880
1294
+ in our robots? Well, to me, I guess the thing that seems most compelling to me at the moment is this
1295
+
1296
+ 43:00.880 --> 43:10.320
1297
+ question of what to build in and what to learn. I think we're, we don't, we're missing a bunch of
1298
+
1299
+ 43:10.320 --> 43:17.120
1300
+ ideas. And, and we, you know, people, you know, don't you dare ask me how many years it's going
1301
+
1302
+ 43:17.120 --> 43:22.320
1303
+ to be until that happens, because I won't even participate in the conversation. Because I think
1304
+
1305
+ 43:22.320 --> 43:26.240
1306
+ we're missing ideas and I don't know how long it's going to take to find them. So I won't ask you
1307
+
1308
+ 43:26.240 --> 43:34.160
1309
+ how many years, but maybe I'll ask you what it, when you will be sufficiently impressed that we've
1310
+
1311
+ 43:34.160 --> 43:41.280
1312
+ achieved it. So what's a good test of intelligence? Do you like the Turing test in natural language
1313
+
1314
+ 43:41.280 --> 43:47.520
1315
+ in the robotic space? Is there something where you would sit back and think, oh, that's pretty
1316
+
1317
+ 43:47.520 --> 43:52.800
1318
+ impressive as a test, as a benchmark. Do you think about these kinds of problems?
1319
+
1320
+ 43:52.800 --> 43:58.480
1321
+ No, I resist. I mean, I think all the time that we spend arguing about those kinds of things could
1322
+
1323
+ 43:58.480 --> 44:04.800
1324
+ be better spent just making their robots work better. So you don't value competition. So I mean,
1325
+
1326
+ 44:04.800 --> 44:11.280
1327
+ there's the nature of benchmarks and data sets, or Turing test challenges, where
1328
+
1329
+ 44:11.280 --> 44:15.440
1330
+ everybody kind of gets together and tries to build a better robot because they want to outcompete
1331
+
1332
+ 44:15.440 --> 44:21.360
1333
+ each other, like the DARPA challenge with the autonomous vehicles. Do you see the value of that?
1334
+
1335
+ 44:23.600 --> 44:27.440
1336
+ Or can it get in the way? I think it can get in the way. I mean, some people, many people find it
1337
+
1338
+ 44:27.440 --> 44:34.880
1339
+ motivating. And so that's good. I find it anti motivating personally. But I think you get an
1340
+
1341
+ 44:34.880 --> 44:41.200
1342
+ interesting cycle where for a contest, a bunch of smart people get super motivated and they hack
1343
+
1344
+ 44:41.200 --> 44:47.120
1345
+ their brains out. And much of what gets done is just hacks, but sometimes really cool ideas emerge.
1346
+
1347
+ 44:47.120 --> 44:53.360
1348
+ And then that gives us something to chew on after that. So it's not a thing for me, but I don't
1349
+
1350
+ 44:53.360 --> 44:58.480
1351
+ I don't regret that other people do it. Yeah, it's like you said, with everything else that
1352
+
1353
+ 44:58.480 --> 45:03.840
1354
+ makes us good. So jumping topics a little bit, you started the Journal of Machine Learning Research
1355
+
1356
+ 45:04.560 --> 45:09.920
1357
+ and served as its editor in chief. How did the publication come about?
1358
+
1359
+ 45:11.680 --> 45:17.040
1360
+ And what do you think about the current publishing model space in machine learning
1361
+
1362
+ 45:17.040 --> 45:23.040
1363
+ artificial intelligence? Okay, good. So it came about because there was a journal called Machine
1364
+
1365
+ 45:23.040 --> 45:29.840
1366
+ Learning, which still exists, which was owned by Kluwer. And I was on the editorial
1367
+
1368
+ 45:29.840 --> 45:33.520
1369
+ board and we used to have these meetings annually where we would complain to Kluwer that
1370
+
1371
+ 45:33.520 --> 45:37.280
1372
+ it was too expensive for the libraries and that people couldn't publish. And we would really
1373
+
1374
+ 45:37.280 --> 45:41.920
1375
+ like to have some kind of relief on those fronts. And they would always sympathize,
1376
+
1377
+ 45:41.920 --> 45:49.120
1378
+ but not do anything. So we just decided to make a new journal. And there was the Journal of AI
1379
+
1380
+ 45:49.120 --> 45:54.880
1381
+ Research, which was on the same model, which had been in existence for maybe five years or so,
1382
+
1383
+ 45:54.880 --> 46:01.920
1384
+ and it was going on pretty well. So we just made a new journal. It wasn't I mean,
1385
+
1386
+ 46:03.600 --> 46:07.600
1387
+ I don't know, I guess it was work, but it wasn't that hard. So basically the editorial board,
1388
+
1389
+ 46:07.600 --> 46:17.440
1390
+ probably 75% of the editorial board of Machine Learning resigned. And we founded the new journal.
1391
+
1392
+ 46:17.440 --> 46:25.280
1393
+ But it was sort of it was more open. Yeah, right. So it's completely open. It's open access.
1394
+
1395
+ 46:25.280 --> 46:31.600
1396
+ Actually, I had a postdoc, George Konidaris, who wanted to call these journals free for all.
1397
+
1398
+ 46:33.520 --> 46:37.600
1399
+ Because, I mean, it both has no page charges and has no
1400
+
1401
+ 46:40.080 --> 46:45.520
1402
+ access restrictions. And so lots of people, I mean,
1403
+
1404
+ 46:45.520 --> 46:50.240
1405
+ there were people who are mad about the existence of this journal who thought it was a fraud or
1406
+
1407
+ 46:50.240 --> 46:55.200
1408
+ something, it would be impossible, they said, to run a journal like this with basically,
1409
+
1410
+ 46:55.200 --> 46:59.200
1411
+ I mean, for a long time, I didn't even have a bank account. I paid for the
1412
+
1413
+ 46:59.840 --> 47:06.640
1414
+ lawyer to incorporate and the IP address. And it just cost a couple hundred dollars a year
1415
+
1416
+ 47:06.640 --> 47:12.880
1417
+ to run. It's a little bit more now, but not that much more. But that's because I think computer
1418
+
1419
+ 47:12.880 --> 47:19.920
1420
+ scientists are competent and autonomous in a way that many scientists in other fields aren't.
1421
+
1422
+ 47:19.920 --> 47:23.920
1423
+ I mean, at doing these kinds of things, we already typeset our own papers,
1424
+
1425
+ 47:23.920 --> 47:28.000
1426
+ we all have students and people who can hack a website together in the afternoon.
1427
+
1428
+ 47:28.000 --> 47:32.960
1429
+ So the infrastructure for us was like, not a problem, but for other people in other fields,
1430
+
1431
+ 47:32.960 --> 47:38.960
1432
+ it's a harder thing to do. Yeah. And this kind of open access journal is nevertheless,
1433
+
1434
+ 47:38.960 --> 47:45.840
1435
+ one of the most prestigious journals. So it's, the prestige can be achieved
1436
+
1437
+ 47:45.840 --> 47:49.920
1438
+ without any of the papers. Paper is not required for prestige, turns out. Yeah.
1439
+
1440
+ 47:50.640 --> 47:56.960
1441
+ So on the review process side of actually a long time ago, I don't remember when I reviewed a paper
1442
+
1443
+ 47:56.960 --> 48:01.360
1444
+ where you were also a reviewer and I remember reading your review and being influenced by it.
1445
+
1446
+ 48:01.360 --> 48:06.480
1447
+ It was really well written. It influenced how I write future reviews. You disagreed with me,
1448
+
1449
+ 48:06.480 --> 48:15.280
1450
+ actually. And it made my review much better. But nevertheless, the review process
1451
+
1452
+ 48:16.880 --> 48:23.600
1453
+ has its flaws. And what do you think works well? How can it be improved?
1454
+
1455
+ 48:23.600 --> 48:27.600
1456
+ So actually, when I started JMLR, I wanted to do something completely different.
1457
+
1458
+ 48:28.720 --> 48:34.800
1459
+ And I didn't because it felt like we needed a traditional journal of record and so we just
1460
+
1461
+ 48:34.800 --> 48:40.800
1462
+ made JMLR be almost like a normal journal, except for the open access parts of it, basically.
1463
+
1464
+ 48:43.200 --> 48:47.600
1465
+ Increasingly, of course, publication is not even a sensible word. You can publish something by
1466
+
1467
+ 48:47.600 --> 48:54.400
1468
+ putting it on arXiv, so I can publish everything tomorrow. So making stuff public is
1469
+
1470
+ 48:54.400 --> 49:04.800
1471
+ there's no barrier. We still need curation and evaluation. I don't have time to read all of
1472
+
1473
+ 49:04.800 --> 49:20.480
1474
+ arXiv. And you could argue that kind of social thumbs-upping of articles suffices, right? You
1475
+
1476
+ 49:20.480 --> 49:25.440
1477
+ might say, oh, heck with this, we don't need journals at all. We'll put everything on arXiv
1478
+
1479
+ 49:25.440 --> 49:30.400
1480
+ and people will upvote and downvote the articles and then your CV will say, oh, man, he got a lot
1481
+
1482
+ 49:30.400 --> 49:44.000
1483
+ of upvotes. So that's good. But I think there's still value in careful reading and commentary of
1484
+
1485
+ 49:44.000 --> 49:48.480
1486
+ things. And it's hard to tell when people are upvoting and downvoting or arguing about your
1487
+
1488
+ 49:48.480 --> 49:55.440
1489
+ paper on Twitter and Reddit, whether they know what they're talking about. So then I have the
1490
+
1491
+ 49:55.440 --> 50:01.360
1492
+ second order problem of trying to decide whose opinions I should value and such. So I don't
1493
+
1494
+ 50:01.360 --> 50:06.240
1495
+ know. If I had infinite time, which I don't, and I'm not going to do this because I really want to
1496
+
1497
+ 50:06.240 --> 50:11.920
1498
+ make robots work, but if I felt inclined to do something more in a publication direction,
1499
+
1500
+ 50:12.880 --> 50:16.160
1501
+ I would do this other thing, which I thought about doing the first time, which is to get
1502
+
1503
+ 50:16.160 --> 50:22.480
1504
+ together some set of people whose opinions I value and who are pretty articulate. And I guess we
1505
+
1506
+ 50:22.480 --> 50:27.520
1507
+ would be public, although we could be private, I'm not sure. And we would review papers. We wouldn't
1508
+
1509
+ 50:27.520 --> 50:31.600
1510
+ publish them and you wouldn't submit them. We would just find papers and we would write reviews
1511
+
1512
+ 50:32.720 --> 50:39.120
1513
+ and we would make those reviews public. And maybe if you, you know, so we're Leslie's friends who
1514
+
1515
+ 50:39.120 --> 50:45.200
1516
+ review papers and maybe eventually if we, our opinion was sufficiently valued, like the opinion
1517
+
1518
+ 50:45.200 --> 50:50.800
1519
+ of JMLR is valued, then you'd say on your CV that Leslie's friends gave my paper a five star reading
1520
+
1521
+ 50:50.800 --> 50:58.800
1522
+ and that would be just as good as saying I got it accepted into this journal. So I think we
1523
+
1524
+ 50:58.800 --> 51:04.800
1525
+ should have good public commentary and organize it in some way, but I don't really know how to
1526
+
1527
+ 51:04.800 --> 51:09.120
1528
+ do it. It's interesting times. The way you describe it actually is really interesting. I mean,
1529
+
1530
+ 51:09.120 --> 51:15.040
1531
+ we do it for movies, IMDB.com. There's experts, critics come in, they write reviews, but there's
1532
+
1533
+ 51:15.040 --> 51:20.960
1534
+ also regular non-critic humans write reviews and they're separated. I like OpenReview.
1535
+
1536
+ 51:22.240 --> 51:31.600
1537
+ The ICLR process, I think, is interesting. It's a step in the right direction, but it's still
1538
+
1539
+ 51:31.600 --> 51:39.840
1540
+ not as compelling as reviewing movies or video games. I mean, it sometimes almost, it might be
1541
+
1542
+ 51:39.840 --> 51:44.400
1543
+ silly, at least from my perspective to say, but it boils down to the user interface, how fun and
1544
+
1545
+ 51:44.400 --> 51:50.400
1546
+ easy it is to actually perform the reviews, how efficient, how much you as a reviewer get
1547
+
1548
+ 51:51.200 --> 51:56.560
1549
+ street cred for being a good reviewer. Those human elements come into play.
1550
+
1551
+ 51:57.200 --> 52:03.600
1552
+ No, it's a big investment to do a good review of a paper and the flood of papers is out of control.
1553
+
1554
+ 52:05.280 --> 52:08.960
1555
+ There aren't 3,000 new, I don't know how many new movies are there in a year, I don't know,
1556
+
1557
+ 52:08.960 --> 52:15.440
1558
+ but there's probably going to be less than how many machine learning papers there are in a year now.
1559
+
1560
+ 52:19.840 --> 52:22.320
1561
+ Right, so I'm like an old person, so of course I'm going to say,
1562
+
1563
+ 52:23.520 --> 52:30.240
1564
+ things are moving too fast, I'm a stick in the mud. So I can say that, but my particular flavor
1565
+
1566
+ 52:30.240 --> 52:38.240
1567
+ of that is, I think the horizon for researchers has gotten very short, that students want to
1568
+
1569
+ 52:38.240 --> 52:46.000
1570
+ publish a lot of papers and it's exciting and there's value in that and you get patted on the
1571
+
1572
+ 52:46.000 --> 52:58.320
1573
+ head for it and so on. And some of that is fine, but I'm worried that we're driving out people who
1574
+
1575
+ 52:58.320 --> 53:05.280
1576
+ would spend two years thinking about something. Back in my day, when we worked on our theses,
1577
+
1578
+ 53:05.280 --> 53:10.560
1579
+ we did not publish papers, you did your thesis for years, you picked a hard problem and then you
1580
+
1581
+ 53:10.560 --> 53:16.320
1582
+ worked and chewed on it and did stuff and wasted time and for a long time. And when it was roughly,
1583
+
1584
+ 53:16.320 --> 53:22.800
1585
+ when it was done, you would write papers. And so I don't know how to, and I don't think that
1586
+
1587
+ 53:22.800 --> 53:26.720
1588
+ everybody has to work in that mode, but I think there's some problems that are hard enough
1589
+
1590
+ 53:27.680 --> 53:31.680
1591
+ that it's important to have a longer research horizon and I'm worried that
1592
+
1593
+ 53:31.680 --> 53:39.600
1594
+ we don't incentivize that at all at this point. In this current structure. So what do you see
1595
+
1596
+ 53:41.440 --> 53:47.280
1597
+ what are your hopes and fears about the future of AI and continuing on this theme? So AI has
1598
+
1599
+ 53:47.280 --> 53:53.440
1600
+ gone through a few winters, ups and downs. Do you see another winter of AI coming?
1601
+
1602
+ 53:53.440 --> 54:02.480
1603
+ Or are you more hopeful about making robots work, as you said? I think the cycles are inevitable,
1604
+
1605
+ 54:03.040 --> 54:10.080
1606
+ but I think each time we get higher, right? I mean, it's like climbing some kind of
1607
+
1608
+ 54:10.080 --> 54:19.600
1609
+ landscape with a noisy optimizer. So it's clear that the deep learning stuff has
1610
+
1611
+ 54:19.600 --> 54:25.760
1612
+ made deep and important improvements. And so the high watermark is now higher. There's no question.
1613
+
1614
+ 54:25.760 --> 54:34.400
1615
+ But of course, I think people are overselling and eventually investors, I guess, and other people
1616
+
1617
+ 54:34.400 --> 54:40.640
1618
+ look around and say, well, you're not quite delivering on this grand claim and that wild
1619
+
1620
+ 54:40.640 --> 54:47.680
1621
+ hypothesis. It's like, probably it's going to crash somewhat and then it's okay. I mean,
1622
+
1623
+ 54:47.680 --> 54:54.000
1624
+ it's okay. I mean, but I don't, I can't imagine that there's like some awesome monotonic improvement
1625
+
1626
+ 54:54.000 --> 55:01.760
1627
+ from here to human level AI. So in, you know, I have to ask this question, I probably anticipate
1628
+
1629
+ 55:01.760 --> 55:09.120
1630
+ the answers. But do you have a worry, short term or long term, about the existential
1631
+
1632
+ 55:09.120 --> 55:18.880
1633
+ threats of AI and maybe short term, less existential, but more robots taking away jobs?
1634
+
1635
+ 55:20.480 --> 55:28.000
1636
+ Well, actually, let me talk a little bit about utility. Actually, I had an interesting conversation
1637
+
1638
+ 55:28.000 --> 55:32.480
1639
+ with some military ethicists who wanted to talk to me about autonomous weapons.
1640
+
1641
+ 55:32.480 --> 55:39.360
1642
+ And they're, they were interesting, smart, well educated guys who didn't know too much about AI or
1643
+
1644
+ 55:39.360 --> 55:43.600
1645
+ machine learning. And the first question they asked me was, has your robot ever done something you
1646
+
1647
+ 55:43.600 --> 55:49.120
1648
+ didn't expect? And I like burst out laughing because anybody who's ever done something with a robot,
1649
+
1650
+ 55:49.120 --> 55:54.720
1651
+ right, knows that they don't do much. And what I realized was that their model of how we program
1652
+
1653
+ 55:54.720 --> 55:59.440
1654
+ a robot was completely wrong. Their model of how we would program the robot was like,
1655
+
1656
+ 55:59.440 --> 56:05.600
1657
+ Lego Mindstorms, like, oh, go forward a meter, turn left, take a picture,
1658
+
1659
+ 56:05.600 --> 56:11.120
1660
+ do this, do that. And so if you have that model of programming, then it's true, it's kind of weird
1661
+
1662
+ 56:11.120 --> 56:16.240
1663
+ that your robot would do something that you didn't anticipate. But the fact is, and actually,
1664
+
1665
+ 56:16.240 --> 56:21.840
1666
+ so now this is my new educational mission, if I have to talk to non experts, I try to teach them
1667
+
1668
+ 56:22.720 --> 56:28.080
1669
+ the idea that we don't operate, we operate at least one or maybe many levels of abstraction
1670
+
1671
+ 56:28.080 --> 56:33.280
1672
+ above that. And we say, oh, here's a hypothesis class, maybe it's a space of plans, or maybe it's a
1673
+
1674
+ 56:33.280 --> 56:38.400
1675
+ space of classifiers, or whatever. But there's some set of answers and an objective function. And
1676
+
1677
+ 56:38.400 --> 56:44.800
1678
+ then we work on some optimization method that tries to optimize a solution in that class.
1679
+
1680
+ 56:46.080 --> 56:50.560
1681
+ And we don't know what solution is going to come out. Right. So I think it's important to
1682
+
1683
+ 56:50.560 --> 56:55.520
1684
+ communicate that. So I mean, of course, probably people who listen to this, they know that lesson.
1685
+
1686
+ 56:55.520 --> 56:59.600
1687
+ But I think it's really critical to communicate that lesson. And then lots of people are now
1688
+
1689
+ 56:59.600 --> 57:05.600
1690
+ talking about, you know, the value alignment problem. So you want to be sure, as robots or
1691
+
1692
+ 57:06.480 --> 57:11.280
1693
+ software systems get more competent, that their objectives are aligned with your objectives,
1694
+
1695
+ 57:11.280 --> 57:17.680
1696
+ or that our objectives are compatible in some way, or we have a good way of mediating when they have
1697
+
1698
+ 57:17.680 --> 57:22.240
1699
+ different objectives. And so I think it is important to start thinking in terms, like,
1700
+
1701
+ 57:22.240 --> 57:28.480
1702
+ you don't have to be freaked out by the robot apocalypse to accept that it's important to think
1703
+
1704
+ 57:28.480 --> 57:33.760
1705
+ about objective functions and value alignment. And that you have to really, everyone who's done
1706
+
1707
+ 57:33.760 --> 57:38.160
1708
+ optimization knows that you have to be careful what you wish for that, you know, sometimes you get
1709
+
1710
+ 57:38.160 --> 57:45.280
1711
+ the optimal solution. And you realize, man, that objective was wrong. So pragmatically,
1712
+
1713
+ 57:45.280 --> 57:51.360
1714
+ in the shortest term, it seems to me that those are really interesting and critical questions.
1715
+
1716
+ 57:51.360 --> 57:55.680
1717
+ And the idea that we're going to go from being people who engineer algorithms to being people
1718
+
1719
+ 57:55.680 --> 58:00.800
1720
+ who engineer objective functions, I think that's, that's definitely going to happen. And that's
1721
+
1722
+ 58:00.800 --> 58:03.360
1723
+ going to change our thinking and methodology and stuff.
1724
+
1725
+ 58:03.360 --> 58:07.520
1726
+ We're going to, you started at Stanford philosophy, then switched to computer science,
1727
+
1728
+ 58:07.520 --> 58:13.840
1729
+ and now we'll go back to philosophy maybe. Well, I mean, they're mixed together because, because,
1730
+
1731
+ 58:13.840 --> 58:18.240
1732
+ as we also know, as machine learning people, right? When you design, in fact, this is the
1733
+
1734
+ 58:18.240 --> 58:23.360
1735
+ lecture I gave in class today, when you design an objective function, you have to wear both hats.
1736
+
1737
+ 58:23.360 --> 58:28.320
1738
+ There's the hat that says, what do I want? And there's the hat that says, but I know what my
1739
+
1740
+ 58:28.320 --> 58:34.240
1741
+ optimizer can do to some degree. And I have to take that into account. So it's, it's always a
1742
+
1743
+ 58:34.240 --> 58:40.480
1744
+ trade off. And we have to kind of be mindful of that. The part about taking people's jobs,
1745
+
1746
+ 58:40.480 --> 58:47.360
1747
+ I understand that that's important, but I don't understand sociology or economics or people
1748
+
1749
+ 58:47.360 --> 58:51.840
1750
+ very well. So I don't know how to think about that. So that's, yeah, so there might be a
1751
+
1752
+ 58:51.840 --> 58:56.640
1753
+ sociological aspect there, the economic aspect that's very difficult to think about. Okay.
1754
+
1755
+ 58:56.640 --> 59:00.000
1756
+ I mean, I think other people should be thinking about it, but I'm just, that's not my strength.
1757
+
1758
+ 59:00.000 --> 59:04.320
1759
+ So what do you think is the most exciting area of research in the short term,
1760
+
1761
+ 59:04.320 --> 59:08.560
1762
+ for the community and for your, for yourself? Well, so, I mean, there's this story I've been
1763
+
1764
+ 59:08.560 --> 59:16.480
1765
+ telling about how to engineer intelligent robots. So that's what we want to do. We all kind of want
1766
+
1767
+ 59:16.480 --> 59:20.960
1768
+ to do, well, I mean, some set of us want to do this. And the question is, what's the most effective
1769
+
1770
+ 59:20.960 --> 59:25.840
1771
+ strategy? And we've tried, and there's a bunch of different things you could do at the extremes,
1772
+
1773
+ 59:25.840 --> 59:32.000
1774
+ right? One super extreme is we do introspection and we write a program. Okay, that has not worked
1775
+
1776
+ 59:32.000 --> 59:37.360
1777
+ out very well. Another extreme is we take a giant bunch of neural goo and we try and train it up to
1778
+
1779
+ 59:37.360 --> 59:43.040
1780
+ do something. I don't think that's going to work either. So the question is, what's the middle
1781
+
1782
+ 59:43.040 --> 59:49.840
1783
+ ground? And again, this isn't a theological question or anything like that. It's just,
1784
+
1785
+ 59:49.840 --> 59:57.040
1786
+ like, how do, just how do we, what's the best way to make this work out? And I think it's clear,
1787
+
1788
+ 59:57.040 --> 1:00:01.840
1789
+ it's a combination of learning, to me, it's clear, it's a combination of learning and not learning.
1790
+
1791
+ 1:00:02.400 --> 1:00:05.920
1792
+ And what should that combination be? And what's the stuff we build in? So to me,
1793
+
1794
+ 1:00:05.920 --> 1:00:10.080
1795
+ that's the most compelling question. And when you say engineer robots, you mean
1796
+
1797
+ 1:00:10.080 --> 1:00:15.600
1798
+ engineering systems that work in the real world. That's the emphasis.
1799
+
1800
+ 1:00:17.600 --> 1:00:23.200
1801
+ Last question, which robots or robot is your favorite from science fiction?
1802
+
1803
+ 1:00:24.480 --> 1:00:32.960
1804
+ So you can go with Star Wars, R2D2, or you can go with more modern, maybe HAL.
1805
+
1806
+ 1:00:32.960 --> 1:00:37.040
1807
+ No, sir, I don't think I have a favorite robot from science fiction.
1808
+
1809
+ 1:00:37.040 --> 1:00:45.520
1810
+ This is, this is back to, you like to make robots work in the real world here, not, not in.
1811
+
1812
+ 1:00:45.520 --> 1:00:50.000
1813
+ I mean, I love the process. And I care more about the process.
1814
+
1815
+ 1:00:50.000 --> 1:00:51.040
1816
+ The engineering process.
1817
+
1818
+ 1:00:51.600 --> 1:00:55.760
1819
+ Yeah. I mean, I do research because it's fun, not because I care about what we produce.
1820
+
1821
+ 1:00:57.520 --> 1:01:01.920
1822
+ Well, that's, that's a beautiful note, actually. And Leslie, thank you so much for talking today.
1823
+
1824
+ 1:01:01.920 --> 1:01:07.920
1825
+ Sure, it's been fun.
1826
+
vtt/episode_016_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_017_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_018_small.vtt ADDED
@@ -0,0 +1,1871 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.000
4
+ The following is a conversation with Elon Musk.
5
+
6
+ 00:03.000 --> 00:06.240
7
+ He's the CEO of Tesla, SpaceX, Neuralink,
8
+
9
+ 00:06.240 --> 00:09.200
10
+ and a cofounder of several other companies.
11
+
12
+ 00:09.200 --> 00:10.740
13
+ This conversation is part
14
+
15
+ 00:10.740 --> 00:13.200
16
+ of the artificial intelligence podcast.
17
+
18
+ 00:13.200 --> 00:15.640
19
+ The series includes leading researchers
20
+
21
+ 00:15.640 --> 00:19.320
22
+ in academia and industry, including CEOs and CTOs
23
+
24
+ 00:19.320 --> 00:24.080
25
+ of automotive, robotics, AI, and technology companies.
26
+
27
+ 00:24.080 --> 00:26.880
28
+ This conversation happened after the release of the paper
29
+
30
+ 00:26.880 --> 00:30.520
31
+ from our group at MIT on driver functional vigilance
32
+
33
+ 00:30.520 --> 00:32.880
34
+ during use of Tesla's autopilot.
35
+
36
+ 00:32.880 --> 00:34.560
37
+ The Tesla team reached out to me,
38
+
39
+ 00:34.560 --> 00:37.480
40
+ offering a podcast conversation with Mr. Musk.
41
+
42
+ 00:37.480 --> 00:40.640
43
+ I accepted with full control of questions I could ask
44
+
45
+ 00:40.640 --> 00:43.560
46
+ and the choice of what is released publicly.
47
+
48
+ 00:43.560 --> 00:46.840
49
+ I ended up editing out nothing of substance.
50
+
51
+ 00:46.840 --> 00:49.680
52
+ I've never spoken with Elon before this conversation,
53
+
54
+ 00:49.680 --> 00:51.720
55
+ publicly or privately.
56
+
57
+ 00:51.720 --> 00:54.360
58
+ Neither he nor his companies have any influence
59
+
60
+ 00:54.360 --> 00:57.840
61
+ on my opinion, nor on the rigor and integrity
62
+
63
+ 00:57.840 --> 00:59.760
64
+ of the scientific method that I practice
65
+
66
+ 00:59.760 --> 01:01.840
67
+ in my position at MIT.
68
+
69
+ 01:01.840 --> 01:04.640
70
+ Tesla has never financially supported my research
71
+
72
+ 01:04.640 --> 01:07.320
73
+ and I've never owned a Tesla vehicle.
74
+
75
+ 01:07.320 --> 01:10.160
76
+ I've never owned Tesla stock.
77
+
78
+ 01:10.160 --> 01:12.800
79
+ This podcast is not a scientific paper.
80
+
81
+ 01:12.800 --> 01:14.360
82
+ It is a conversation.
83
+
84
+ 01:14.360 --> 01:16.720
85
+ I respect Elon as I do all other leaders
86
+
87
+ 01:16.720 --> 01:18.680
88
+ and engineers I've spoken with.
89
+
90
+ 01:18.680 --> 01:21.440
91
+ We agree on some things and disagree on others.
92
+
93
+ 01:21.440 --> 01:23.480
94
+ My goal is always with these conversations
95
+
96
+ 01:23.480 --> 01:26.920
97
+ is to understand the way the guest sees the world.
98
+
99
+ 01:26.920 --> 01:28.600
100
+ One particular point of disagreement
101
+
102
+ 01:28.600 --> 01:30.640
103
+ in this conversation was the extent
104
+
105
+ 01:30.640 --> 01:33.240
106
+ to which camera based driver monitoring
107
+
108
+ 01:33.240 --> 01:36.120
109
+ will improve outcomes and for how long
110
+
111
+ 01:36.120 --> 01:39.120
112
+ it will remain relevant for AI assisted driving.
113
+
114
+ 01:39.960 --> 01:42.240
115
+ As someone who works on and is fascinated
116
+
117
+ 01:42.240 --> 01:45.200
118
+ by human centered artificial intelligence,
119
+
120
+ 01:45.200 --> 01:48.720
121
+ I believe that if implemented and integrated effectively,
122
+
123
+ 01:48.720 --> 01:51.840
124
+ camera based driver monitoring is likely to be of benefit
125
+
126
+ 01:51.840 --> 01:55.640
127
+ in both the short term and the long term.
128
+
129
+ 01:55.640 --> 01:59.240
130
+ In contrast, Elon and Tesla's focus
131
+
132
+ 01:59.240 --> 02:01.200
133
+ is on the improvement of autopilot
134
+
135
+ 02:01.200 --> 02:04.480
136
+ such that its statistical safety benefits
137
+
138
+ 02:04.480 --> 02:09.040
139
+ override any concern of human behavior and psychology.
140
+
141
+ 02:09.040 --> 02:12.040
142
+ Elon and I may not agree on everything
143
+
144
+ 02:12.040 --> 02:13.920
145
+ but I deeply respect the engineering
146
+
147
+ 02:13.920 --> 02:16.880
148
+ and innovation behind the efforts that he leads.
149
+
150
+ 02:16.880 --> 02:20.560
151
+ My goal here is to catalyze a rigorous, nuanced
152
+
153
+ 02:20.560 --> 02:23.520
154
+ and objective discussion in industry and academia
155
+
156
+ 02:23.520 --> 02:26.240
157
+ on AI assisted driving,
158
+
159
+ 02:26.240 --> 02:30.840
160
+ one that ultimately makes for a safer and better world.
161
+
162
+ 02:30.840 --> 02:34.600
163
+ And now here's my conversation with Elon Musk.
164
+
165
+ 02:35.600 --> 02:38.640
166
+ What was the vision, the dream of autopilot
167
+
168
+ 02:38.640 --> 02:41.400
169
+ when in the beginning the big picture system level
170
+
171
+ 02:41.400 --> 02:43.680
172
+ when it was first conceived
173
+
174
+ 02:43.680 --> 02:45.960
175
+ and started being installed in 2014
176
+
177
+ 02:45.960 --> 02:47.520
178
+ in the hardware and the cars?
179
+
180
+ 02:47.520 --> 02:49.760
181
+ What was the vision, the dream?
182
+
183
+ 02:49.760 --> 02:51.400
184
+ I would characterize the vision or dream
185
+
186
+ 02:51.400 --> 02:54.400
187
+ simply that there are obviously two
188
+
189
+ 02:54.400 --> 02:59.400
190
+ massive revolutions in the automobile industry.
191
+
192
+ 03:00.120 --> 03:04.440
193
+ One is the transition to electrification
194
+
195
+ 03:04.440 --> 03:06.400
196
+ and then the other is autonomy.
197
+
198
+ 03:07.720 --> 03:12.720
199
+ And it became obvious to me that in the future
200
+
201
+ 03:13.240 --> 03:16.240
202
+ any car that does not have autonomy
203
+
204
+ 03:16.240 --> 03:19.160
205
+ would be about as useful as a horse.
206
+
207
+ 03:19.160 --> 03:22.040
208
+ Which is not to say that there's no use, it's just rare
209
+
210
+ 03:22.040 --> 03:23.640
211
+ and somewhat idiosyncratic
212
+
213
+ 03:23.640 --> 03:25.480
214
+ if somebody has a horse at this point.
215
+
216
+ 03:25.480 --> 03:28.000
217
+ It's just obvious that cars will drive themselves completely.
218
+
219
+ 03:28.000 --> 03:29.600
220
+ It's just a question of time
221
+
222
+ 03:29.600 --> 03:34.600
223
+ and if we did not participate in the autonomy revolution
224
+
225
+ 03:36.920 --> 03:40.840
226
+ then our cars would not be useful to people
227
+
228
+ 03:40.840 --> 03:43.680
229
+ relative to cars that are autonomous.
230
+
231
+ 03:43.680 --> 03:47.160
232
+ I mean an autonomous car is arguably worth
233
+
234
+ 03:47.160 --> 03:52.160
235
+ five to 10 times more than a car which is not autonomous.
236
+
237
+ 03:53.760 --> 03:55.160
238
+ In the long term.
239
+
240
+ 03:55.160 --> 03:56.200
241
+ Turns out what you mean by long term,
242
+
243
+ 03:56.200 --> 03:59.520
244
+ but let's say at least for the next five years
245
+
246
+ 03:59.520 --> 04:00.520
247
+ perhaps 10 years.
248
+
249
+ 04:01.440 --> 04:04.080
250
+ So there are a lot of very interesting design choices
251
+
252
+ 04:04.080 --> 04:05.720
253
+ with autopilot early on.
254
+
255
+ 04:05.720 --> 04:09.960
256
+ First is showing on the instrument cluster
257
+
258
+ 04:09.960 --> 04:12.680
259
+ or in the Model 3 on the center stack display
260
+
261
+ 04:12.680 --> 04:15.720
262
+ what the combined sensor suite sees.
263
+
264
+ 04:15.720 --> 04:17.920
265
+ What was the thinking behind that choice?
266
+
267
+ 04:17.920 --> 04:18.960
268
+ Was there a debate?
269
+
270
+ 04:18.960 --> 04:20.480
271
+ What was the process?
272
+
273
+ 04:20.480 --> 04:24.840
274
+ The whole point of the display is to provide a health check
275
+
276
+ 04:24.840 --> 04:28.080
277
+ on the vehicle's perception of reality.
278
+
279
+ 04:28.080 --> 04:31.320
280
+ So the vehicle's taking in information from a bunch of sensors
281
+
282
+ 04:31.320 --> 04:34.680
283
+ primarily cameras, but also radar and ultrasonics,
284
+
285
+ 04:34.680 --> 04:35.960
286
+ GPS and so forth.
287
+
288
+ 04:37.200 --> 04:42.200
289
+ And then that information is then rendered into vector space
290
+
291
+ 04:42.200 --> 04:46.360
292
+ and that with a bunch of objects with properties
293
+
294
+ 04:46.360 --> 04:49.920
295
+ like lane lines and traffic lights and other cars.
296
+
297
+ 04:49.920 --> 04:54.920
298
+ And then in vector space that is re rendered onto a display
299
+
300
+ 04:54.920 --> 04:57.400
301
+ so you can confirm whether the car knows
302
+
303
+ 04:57.400 --> 05:00.600
304
+ what's going on or not by looking out the window.
305
+
306
+ 05:01.600 --> 05:04.240
307
+ Right, I think that's an extremely powerful thing
308
+
309
+ 05:04.240 --> 05:06.480
310
+ for people to get an understanding
311
+
312
+ 05:06.480 --> 05:07.840
313
+ to become one with the system
314
+
315
+ 05:07.840 --> 05:10.400
316
+ and understanding what the system is capable of.
317
+
318
+ 05:10.400 --> 05:13.600
319
+ Now, have you considered showing more?
320
+
321
+ 05:13.600 --> 05:15.400
322
+ So if we look at the computer vision,
323
+
324
+ 05:16.400 --> 05:18.400
325
+ you know, like road segmentation, lane detection,
326
+
327
+ 05:18.400 --> 05:21.640
328
+ vehicle detection, object detection, underlying the system,
329
+
330
+ 05:21.640 --> 05:24.400
331
+ there is at the edges some uncertainty.
332
+
333
+ 05:24.400 --> 05:28.400
334
+ Have you considered revealing the parts
335
+
336
+ 05:28.400 --> 05:32.400
337
+ that the uncertainty in the system, the sort of problems
338
+
339
+ 05:32.400 --> 05:35.000
340
+ associated with, say, image recognition
341
+
342
+ 05:35.000 --> 05:35.840
343
+ or something like that?
344
+
345
+ 05:35.840 --> 05:37.840
346
+ Yeah, so right now it shows like the vehicles
347
+
348
+ 05:37.840 --> 05:40.840
349
+ in the vicinity with a very clean, crisp image
350
+
351
+ 05:40.840 --> 05:43.840
352
+ and people do confirm that there's a car in front of me
353
+
354
+ 05:43.840 --> 05:45.840
355
+ and the system sees there's a car in front of me
356
+
357
+ 05:45.840 --> 05:47.840
358
+ but to help people build an intuition
359
+
360
+ 05:47.840 --> 05:51.840
361
+ of what computer vision is by showing some of the uncertainty.
362
+
363
+ 05:51.840 --> 05:53.840
364
+ Well, I think it's, in my car,
365
+
366
+ 05:53.840 --> 05:56.840
367
+ I always look at the sort of the debug view
368
+
369
+ 05:56.840 --> 05:58.840
370
+ and there's two debug views.
371
+
372
+ 05:58.840 --> 06:03.840
373
+ One is augmented vision, which I'm sure you've seen
374
+
375
+ 06:03.840 --> 06:07.840
376
+ where it's basically, we draw boxes and labels
377
+
378
+ 06:07.840 --> 06:10.840
379
+ around objects that are recognized.
380
+
381
+ 06:10.840 --> 06:14.840
382
+ And then there's what we call the visualizer,
383
+
384
+ 06:14.840 --> 06:16.840
385
+ which is basically a vector space representation
386
+
387
+ 06:16.840 --> 06:21.840
388
+ summing up the input from all sensors.
389
+
390
+ 06:21.840 --> 06:23.840
391
+ That does not show any pictures,
392
+
393
+ 06:23.840 --> 06:26.840
394
+ but it shows all of the,
395
+
396
+ 06:26.840 --> 06:32.840
397
+ it basically shows the car's view of the world in vector space.
398
+
399
+ 06:32.840 --> 06:36.840
400
+ But I think this is very difficult for normal people to understand.
401
+
402
+ 06:36.840 --> 06:38.840
403
+ They would not know what they're looking at.
404
+
405
+ 06:38.840 --> 06:40.840
406
+ So it's almost an HMI challenge.
407
+
408
+ 06:40.840 --> 06:42.840
409
+ The current things that are being displayed
410
+
411
+ 06:42.840 --> 06:46.840
412
+ is optimized for the general public understanding
413
+
414
+ 06:46.840 --> 06:48.840
415
+ of what the system is capable of.
416
+
417
+ 06:48.840 --> 06:50.840
418
+ It's like if you have no idea how computer vision works
419
+
420
+ 06:50.840 --> 06:52.840
421
+ or anything, you can still look at the screen
422
+
423
+ 06:52.840 --> 06:54.840
424
+ and see if the car knows what's going on.
425
+
426
+ 06:54.840 --> 06:57.840
427
+ And then if you're a development engineer
428
+
429
+ 06:57.840 --> 07:01.840
430
+ or if you have the development build like I do,
431
+
432
+ 07:01.840 --> 07:05.840
433
+ then you can see all the debug information.
434
+
435
+ 07:05.840 --> 07:10.840
436
+ But those would just be total gibberish to most people.
437
+
438
+ 07:10.840 --> 07:13.840
439
+ What's your view on how to best distribute effort?
440
+
441
+ 07:13.840 --> 07:16.840
442
+ So there's three, I would say, technical aspects of autopilot
443
+
444
+ 07:16.840 --> 07:18.840
445
+ that are really important.
446
+
447
+ 07:18.840 --> 07:19.840
448
+ So it's the underlying algorithms,
449
+
450
+ 07:19.840 --> 07:21.840
451
+ like the neural network architecture.
452
+
453
+ 07:21.840 --> 07:23.840
454
+ There's the data that's trained on
455
+
456
+ 07:23.840 --> 07:25.840
457
+ and then there's the hardware development.
458
+
459
+ 07:25.840 --> 07:26.840
460
+ There may be others.
461
+
462
+ 07:26.840 --> 07:31.840
463
+ But so look, algorithm, data, hardware.
464
+
465
+ 07:31.840 --> 07:34.840
466
+ You only have so much money, only have so much time.
467
+
468
+ 07:34.840 --> 07:36.840
469
+ What do you think is the most important thing
470
+
471
+ 07:36.840 --> 07:39.840
472
+ to allocate resources to?
473
+
474
+ 07:39.840 --> 07:41.840
475
+ Do you see it as pretty evenly distributed
476
+
477
+ 07:41.840 --> 07:43.840
478
+ between those three?
479
+
480
+ 07:43.840 --> 07:46.840
481
+ We automatically get vast amounts of data
482
+
483
+ 07:46.840 --> 07:48.840
484
+ because all of our cars have
485
+
486
+ 07:50.840 --> 07:54.840
487
+ eight external facing cameras and radar
488
+
489
+ 07:54.840 --> 07:59.840
490
+ and usually 12 ultrasonic sensors, GPS, obviously,
491
+
492
+ 07:59.840 --> 08:03.840
493
+ and IMU.
494
+
495
+ 08:03.840 --> 08:08.840
496
+ And so we basically have a fleet that has,
497
+
498
+ 08:08.840 --> 08:11.840
499
+ we've got about 400,000 cars on the road
500
+
501
+ 08:11.840 --> 08:13.840
502
+ that have that level of data.
503
+
504
+ 08:13.840 --> 08:15.840
505
+ I think you keep quite close track of it, actually.
506
+
507
+ 08:15.840 --> 08:16.840
508
+ Yes.
509
+
510
+ 08:16.840 --> 08:19.840
511
+ So we're approaching half a million cars
512
+
513
+ 08:19.840 --> 08:22.840
514
+ on the road that have the full sensor suite.
515
+
516
+ 08:22.840 --> 08:26.840
517
+ So this is, I'm not sure how many other cars
518
+
519
+ 08:26.840 --> 08:28.840
520
+ on the road have this sensor suite,
521
+
522
+ 08:28.840 --> 08:31.840
523
+ but I'd be surprised if it's more than 5,000,
524
+
525
+ 08:31.840 --> 08:35.840
526
+ which means that we have 99% of all the data.
527
+
528
+ 08:35.840 --> 08:37.840
529
+ So there's this huge inflow of data.
530
+
531
+ 08:37.840 --> 08:39.840
532
+ Absolutely, massive inflow of data.
533
+
534
+ 08:39.840 --> 08:43.840
535
+ And then it's taken about three years,
536
+
537
+ 08:43.840 --> 08:46.840
538
+ but now we've finally developed our full self driving computer,
539
+
540
+ 08:46.840 --> 08:51.840
541
+ which can process
542
+
543
+ 08:51.840 --> 08:54.840
544
+ an order of magnitude more than the NVIDIA system
545
+
546
+ 08:54.840 --> 08:56.840
547
+ that we currently have in the cars.
548
+
549
+ 08:56.840 --> 08:58.840
550
+ And it's really just to use it,
551
+
552
+ 08:58.840 --> 09:01.840
553
+ you unplug the NVIDIA computer and plug the Tesla computer in.
554
+
555
+ 09:01.840 --> 09:03.840
556
+ And that's it.
557
+
558
+ 09:03.840 --> 09:06.840
559
+ And it's, in fact, we're not even,
560
+
561
+ 09:06.840 --> 09:09.840
562
+ we're still exploring the boundaries of its capabilities,
563
+
564
+ 09:09.840 --> 09:11.840
565
+ but we're able to run the cameras at full frame rate,
566
+
567
+ 09:11.840 --> 09:14.840
568
+ full resolution, not even cropping the images,
569
+
570
+ 09:14.840 --> 09:19.840
571
+ and it's still got headroom, even on one of the systems.
572
+
573
+ 09:19.840 --> 09:22.840
574
+ The full self driving computer is really two computers,
575
+
576
+ 09:22.840 --> 09:25.840
577
+ two systems on a chip that are fully redundant.
578
+
579
+ 09:25.840 --> 09:28.840
580
+ So you could put a bolt through basically any part of that system
581
+
582
+ 09:28.840 --> 09:29.840
583
+ and it still works.
584
+
585
+ 09:29.840 --> 09:32.840
586
+ The redundancy, are they perfect copies of each other?
587
+
588
+ 09:32.840 --> 09:35.840
589
+ Or also it's purely for redundancy
590
+
591
+ 09:35.840 --> 09:37.840
592
+ as opposed to an arguing machine kind of architecture
593
+
594
+ 09:37.840 --> 09:39.840
595
+ where they're both making decisions.
596
+
597
+ 09:39.840 --> 09:41.840
598
+ This is purely for redundancy.
599
+
600
+ 09:41.840 --> 09:44.840
601
+ I think it's more like, if you have a twin engine aircraft,
602
+
603
+ 09:44.840 --> 09:46.840
604
+ commercial aircraft,
605
+
606
+ 09:46.840 --> 09:51.840
607
+ this system will operate best if both systems are operating,
608
+
609
+ 09:51.840 --> 09:55.840
610
+ but it's capable of operating safely on one.
611
+
612
+ 09:55.840 --> 09:59.840
613
+ So, but as it is right now, we can just run,
614
+
615
+ 09:59.840 --> 10:03.840
616
+ we haven't even hit the edge of performance,
617
+
618
+ 10:03.840 --> 10:08.840
619
+ so there's no need to actually distribute
620
+
621
+ 10:08.840 --> 10:12.840
622
+ functionality across both SoCs.
623
+
624
+ 10:12.840 --> 10:16.840
625
+ We can actually just run a full duplicate on each one.
626
+
627
+ 10:16.840 --> 10:20.840
628
+ You haven't really explored or hit the limit of the system?
629
+
630
+ 10:20.840 --> 10:21.840
631
+ Not yet hit the limit, no.
632
+
633
+ 10:21.840 --> 10:26.840
634
+ So the magic of deep learning is that it gets better with data.
635
+
636
+ 10:26.840 --> 10:28.840
637
+ You said there's a huge inflow of data,
638
+
639
+ 10:28.840 --> 10:33.840
640
+ but the thing about driving the really valuable data
641
+
642
+ 10:33.840 --> 10:35.840
643
+ to learn from is the edge cases.
644
+
645
+ 10:35.840 --> 10:42.840
646
+ So how do you, I mean, I've heard you talk somewhere about
647
+
648
+ 10:42.840 --> 10:46.840
649
+ autopilot disengagement as being an important moment of time to use.
650
+
651
+ 10:46.840 --> 10:51.840
652
+ Are there other edge cases, or perhaps can you speak to those edge cases,
653
+
654
+ 10:51.840 --> 10:53.840
655
+ what aspects of them might be valuable,
656
+
657
+ 10:53.840 --> 10:55.840
658
+ or if you have other ideas,
659
+
660
+ 10:55.840 --> 10:59.840
661
+ how to discover more and more and more edge cases in driving?
662
+
663
+ 10:59.840 --> 11:01.840
664
+ Well, there's a lot of things that are learned.
665
+
666
+ 11:01.840 --> 11:05.840
667
+ There are certainly edge cases where, say, somebody's on autopilot
668
+
669
+ 11:05.840 --> 11:07.840
670
+ and they take over.
671
+
672
+ 11:07.840 --> 11:12.840
673
+ And then, okay, that's a trigger that goes to a system that says,
674
+
675
+ 11:12.840 --> 11:14.840
676
+ okay, do they take over for convenience
677
+
678
+ 11:14.840 --> 11:18.840
679
+ or do they take over because the autopilot wasn't working properly?
680
+
681
+ 11:18.840 --> 11:21.840
682
+ There's also, like let's say we're trying to figure out
683
+
684
+ 11:21.840 --> 11:26.840
685
+ what is the optimal spline for traversing an intersection.
686
+
687
+ 11:26.840 --> 11:30.840
688
+ Then the ones where there are no interventions
689
+
690
+ 11:30.840 --> 11:32.840
691
+ are the right ones.
692
+
693
+ 11:32.840 --> 11:36.840
694
+ So you then say, okay, when it looks like this, do the following.
695
+
696
+ 11:36.840 --> 11:40.840
697
+ And then you get the optimal spline for a complex,
698
+
699
+ 11:40.840 --> 11:44.840
700
+ navigating a complex intersection.
701
+
702
+ 11:44.840 --> 11:48.840
703
+ So that's for, there's kind of the common case.
704
+
705
+ 11:48.840 --> 11:51.840
706
+ You're trying to capture a huge amount of samples
707
+
708
+ 11:51.840 --> 11:54.840
709
+ of a particular intersection, how one thing went right.
710
+
711
+ 11:54.840 --> 11:58.840
712
+ And then there's the edge case where, as you said,
713
+
714
+ 11:58.840 --> 12:01.840
715
+ not for convenience, but something didn't go exactly right.
716
+
717
+ 12:01.840 --> 12:04.840
718
+ Somebody took over, somebody asserted manual control from autopilot.
719
+
720
+ 12:04.840 --> 12:08.840
721
+ And really, like the way to look at this is view all input is error.
722
+
723
+ 12:08.840 --> 12:11.840
724
+ If the user had to do input, it does something.
725
+
726
+ 12:11.840 --> 12:13.840
727
+ All input is error.
728
+
729
+ 12:13.840 --> 12:15.840
730
+ That's a powerful line to think of it that way,
731
+
732
+ 12:15.840 --> 12:17.840
733
+ because it may very well be error.
734
+
735
+ 12:17.840 --> 12:19.840
736
+ But if you want to exit the highway,
737
+
738
+ 12:19.840 --> 12:22.840
739
+ or if you want to, it's a navigation decision
740
+
741
+ 12:22.840 --> 12:24.840
742
+ that all autopilot is not currently designed to do,
743
+
744
+ 12:24.840 --> 12:26.840
745
+ then the driver takes over.
746
+
747
+ 12:26.840 --> 12:28.840
748
+ How do you know the difference?
749
+
750
+ 12:28.840 --> 12:30.840
751
+ Yeah, that's going to change with Navigate on Autopilot,
752
+
753
+ 12:30.840 --> 12:33.840
754
+ which we've just released, and without stalk confirm.
755
+
756
+ 12:33.840 --> 12:36.840
757
+ So the navigation, like lane change based,
758
+
759
+ 12:36.840 --> 12:39.840
760
+ like asserting control in order to do a lane change,
761
+
762
+ 12:39.840 --> 12:43.840
763
+ or exit a freeway, or doing highway interchange,
764
+
765
+ 12:43.840 --> 12:47.840
766
+ the vast majority of that will go away with the release
767
+
768
+ 12:47.840 --> 12:49.840
769
+ that just went out.
770
+
771
+ 12:49.840 --> 12:52.840
772
+ Yeah, I don't think people quite understand
773
+
774
+ 12:52.840 --> 12:54.840
775
+ how big of a step that is.
776
+
777
+ 12:54.840 --> 12:55.840
778
+ Yeah, they don't.
779
+
780
+ 12:55.840 --> 12:57.840
781
+ If you drive the car, then you do.
782
+
783
+ 12:57.840 --> 12:59.840
784
+ So you still have to keep your hands on the steering wheel
785
+
786
+ 12:59.840 --> 13:02.840
787
+ currently when it does the automatic lane change?
788
+
789
+ 13:02.840 --> 13:04.840
790
+ What are...
791
+
792
+ 13:04.840 --> 13:07.840
793
+ So there's these big leaps through the development of autopilot
794
+
795
+ 13:07.840 --> 13:09.840
796
+ through its history,
797
+
798
+ 13:09.840 --> 13:12.840
799
+ and what stands out to you as the big leaps?
800
+
801
+ 13:12.840 --> 13:14.840
802
+ I would say this one,
803
+
804
+ 13:14.840 --> 13:19.840
805
+ Navigate on Autopilot without having to confirm,
806
+
807
+ 13:19.840 --> 13:20.840
808
+ is a huge leap.
809
+
810
+ 13:20.840 --> 13:21.840
811
+ It is a huge leap.
812
+
813
+ 13:21.840 --> 13:24.840
814
+ It also automatically overtakes slow cars.
815
+
816
+ 13:24.840 --> 13:30.840
817
+ So it's both navigation and seeking the fastest lane.
818
+
819
+ 13:30.840 --> 13:36.840
820
+ So it'll overtake slow cars and exit the freeway
821
+
822
+ 13:36.840 --> 13:39.840
823
+ and take highway interchanges.
824
+
825
+ 13:39.840 --> 13:46.840
826
+ And then we have traffic light recognition,
827
+
828
+ 13:46.840 --> 13:49.840
829
+ which is introduced initially as a warning.
830
+
831
+ 13:49.840 --> 13:51.840
832
+ I mean, on the development version that I'm driving,
833
+
834
+ 13:51.840 --> 13:55.840
835
+ the car fully stops and goes at traffic lights.
836
+
837
+ 13:55.840 --> 13:57.840
838
+ So those are the steps, right?
839
+
840
+ 13:57.840 --> 13:59.840
841
+ You just mentioned something sort of
842
+
843
+ 13:59.840 --> 14:02.840
844
+ including a step towards full autonomy.
845
+
846
+ 14:02.840 --> 14:07.840
847
+ What would you say are the biggest technological roadblocks
848
+
849
+ 14:07.840 --> 14:09.840
850
+ to full self driving?
851
+
852
+ 14:09.840 --> 14:10.840
853
+ Actually, I don't think...
854
+
855
+ 14:10.840 --> 14:11.840
856
+ I think we just...
857
+
858
+ 14:11.840 --> 14:13.840
859
+ the full self driving computer that we just...
860
+
861
+ 14:13.840 --> 14:14.840
862
+ that has a...
863
+
864
+ 14:14.840 --> 14:16.840
865
+ what we call the FSD computer.
866
+
867
+ 14:16.840 --> 14:20.840
868
+ That's now in production.
869
+
870
+ 14:20.840 --> 14:25.840
871
+ So if you order any Model S or X or any Model 3
872
+
873
+ 14:25.840 --> 14:28.840
874
+ that has the full self driving package,
875
+
876
+ 14:28.840 --> 14:31.840
877
+ you'll get the FSD computer.
878
+
879
+ 14:31.840 --> 14:36.840
880
+ That's important to have enough base computation.
881
+
882
+ 14:36.840 --> 14:40.840
883
+ Then refining the neural net and the control software.
884
+
885
+ 14:40.840 --> 14:44.840
886
+ But all of that can just be provided as an over the air update.
887
+
888
+ 14:44.840 --> 14:46.840
889
+ The thing that's really profound,
890
+
891
+ 14:46.840 --> 14:50.840
892
+ and where I'll be emphasizing at the...
893
+
894
+ 14:50.840 --> 14:52.840
895
+ that investor day that we're having focused on autonomy,
896
+
897
+ 14:52.840 --> 14:55.840
898
+ is that the cars currently being produced,
899
+
900
+ 14:55.840 --> 14:57.840
901
+ or the hardware currently being produced,
902
+
903
+ 14:57.840 --> 15:00.840
904
+ is capable of full self driving.
905
+
906
+ 15:00.840 --> 15:03.840
907
+ But capable is an interesting word because...
908
+
909
+ 15:03.840 --> 15:05.840
910
+ Like the hardware is.
911
+
912
+ 15:05.840 --> 15:08.840
913
+ And as we refine the software,
914
+
915
+ 15:08.840 --> 15:11.840
916
+ the capabilities will increase dramatically
917
+
918
+ 15:11.840 --> 15:13.840
919
+ and then the reliability will increase dramatically
920
+
921
+ 15:13.840 --> 15:15.840
922
+ and then it will receive regulatory approval.
923
+
924
+ 15:15.840 --> 15:18.840
925
+ So essentially buying a car today is an investment in the future.
926
+
927
+ 15:18.840 --> 15:21.840
928
+ You're essentially buying...
929
+
930
+ 15:21.840 --> 15:25.840
931
+ I think the most profound thing is that
932
+
933
+ 15:25.840 --> 15:27.840
934
+ if you buy a Tesla today,
935
+
936
+ 15:27.840 --> 15:29.840
937
+ I believe you are buying an appreciating asset,
938
+
939
+ 15:29.840 --> 15:32.840
940
+ not a depreciating asset.
941
+
942
+ 15:32.840 --> 15:34.840
943
+ So that's a really important statement there
944
+
945
+ 15:34.840 --> 15:36.840
946
+ because if hardware is capable enough,
947
+
948
+ 15:36.840 --> 15:39.840
949
+ that's the hard thing to upgrade usually.
950
+
951
+ 15:39.840 --> 15:40.840
952
+ Exactly.
953
+
954
+ 15:40.840 --> 15:43.840
955
+ So then the rest is a software problem.
956
+
957
+ 15:43.840 --> 15:47.840
958
+ Yes. Software has no marginal cost, really.
959
+
960
+ 15:47.840 --> 15:51.840
961
+ But what's your intuition on the software side?
962
+
963
+ 15:51.840 --> 15:55.840
964
+ How hard are the remaining steps
965
+
966
+ 15:55.840 --> 15:58.840
967
+ to get it to where...
968
+
969
+ 15:58.840 --> 16:02.840
970
+ you know, the experience,
971
+
972
+ 16:02.840 --> 16:05.840
973
+ not just the safety, but the full experience
974
+
975
+ 16:05.840 --> 16:08.840
976
+ is something that people would enjoy.
977
+
978
+ 16:08.840 --> 16:12.840
979
+ I think people would enjoy it very much on the highways.
980
+
981
+ 16:12.840 --> 16:16.840
982
+ It's a total game changer for quality of life,
983
+
984
+ 16:16.840 --> 16:20.840
985
+ for using Tesla autopilot on the highways.
986
+
987
+ 16:20.840 --> 16:24.840
988
+ So it's really just extending that functionality to city streets,
989
+
990
+ 16:24.840 --> 16:28.840
991
+ adding in the traffic light recognition,
992
+
993
+ 16:28.840 --> 16:31.840
994
+ navigating complex intersections,
995
+
996
+ 16:31.840 --> 16:36.840
997
+ and then being able to navigate complicated parking lots
998
+
999
+ 16:36.840 --> 16:39.840
1000
+ so the car can exit a parking space
1001
+
1002
+ 16:39.840 --> 16:45.840
1003
+ and come and find you even if it's in a complete maze of a parking lot.
1004
+
1005
+ 16:45.840 --> 16:51.840
1006
+ And then you can just drop you off and find a parking spot by itself.
1007
+
1008
+ 16:51.840 --> 16:53.840
1009
+ Yeah, in terms of enjoyability
1010
+
1011
+ 16:53.840 --> 16:57.840
1012
+ and something that people would actually find a lot of use from,
1013
+
1014
+ 16:57.840 --> 17:00.840
1015
+ the parking lot is a really...
1016
+
1017
+ 17:00.840 --> 17:03.840
1018
+ it's rich of annoyance when you have to do it manually,
1019
+
1020
+ 17:03.840 --> 17:07.840
1021
+ so there's a lot of benefit to be gained from automation there.
1022
+
1023
+ 17:07.840 --> 17:11.840
1024
+ So let me start injecting the human into this discussion a little bit.
1025
+
1026
+ 17:11.840 --> 17:14.840
1027
+ So let's talk about full autonomy.
1028
+
1029
+ 17:14.840 --> 17:17.840
1030
+ If you look at the current level four vehicles,
1031
+
1032
+ 17:17.840 --> 17:19.840
1033
+ being tested on road, like Waymo and so on,
1034
+
1035
+ 17:19.840 --> 17:22.840
1036
+ they're only technically autonomous.
1037
+
1038
+ 17:22.840 --> 17:25.840
1039
+ They're really level two systems
1040
+
1041
+ 17:25.840 --> 17:28.840
1042
+ with just a different design philosophy
1043
+
1044
+ 17:28.840 --> 17:31.840
1045
+ because there's always a safety driver in almost all cases
1046
+
1047
+ 17:31.840 --> 17:33.840
1048
+ and they're monitoring the system.
1049
+
1050
+ 17:33.840 --> 17:37.840
1051
+ Maybe Tesla's full self driving
1052
+
1053
+ 17:37.840 --> 17:41.840
1054
+ is still for a time to come,
1055
+
1056
+ 17:41.840 --> 17:44.840
1057
+ requiring supervision of the human being.
1058
+
1059
+ 17:44.840 --> 17:47.840
1060
+ So its capabilities are powerful enough to drive,
1061
+
1062
+ 17:47.840 --> 17:50.840
1063
+ but nevertheless requires the human to still be supervising
1064
+
1065
+ 17:50.840 --> 17:56.840
1066
+ just like a safety driver is in other fully autonomous vehicles.
1067
+
1068
+ 17:56.840 --> 18:01.840
1069
+ I think it will require detecting hands on wheel
1070
+
1071
+ 18:01.840 --> 18:08.840
1072
+ for at least six months or something like that from here.
1073
+
1074
+ 18:08.840 --> 18:11.840
1075
+ Really it's a question of like,
1076
+
1077
+ 18:11.840 --> 18:15.840
1078
+ from a regulatory standpoint,
1079
+
1080
+ 18:15.840 --> 18:19.840
1081
+ how much safer than a person does autopilot need to be
1082
+
1083
+ 18:19.840 --> 18:24.840
1084
+ for it to be okay to not monitor the car?
1085
+
1086
+ 18:24.840 --> 18:27.840
1087
+ And this is a debate that one can have.
1088
+
1089
+ 18:27.840 --> 18:31.840
1090
+ But you need a large amount of data
1091
+
1092
+ 18:31.840 --> 18:34.840
1093
+ so you can prove with high confidence,
1094
+
1095
+ 18:34.840 --> 18:36.840
1096
+ statistically speaking,
1097
+
1098
+ 18:36.840 --> 18:39.840
1099
+ that the car is dramatically safer than a person
1100
+
1101
+ 18:39.840 --> 18:42.840
1102
+ and that adding in the person monitoring
1103
+
1104
+ 18:42.840 --> 18:45.840
1105
+ does not materially affect the safety.
1106
+
1107
+ 18:45.840 --> 18:49.840
1108
+ So it might need to be like two or three hundred percent safer than a person.
1109
+
1110
+ 18:49.840 --> 18:51.840
1111
+ And how do you prove that?
1112
+
1113
+ 18:51.840 --> 18:53.840
1114
+ Incidents per mile.
1115
+
1116
+ 18:53.840 --> 18:56.840
1117
+ So crashes and fatalities.
1118
+
1119
+ 18:56.840 --> 18:58.840
1120
+ Yeah, fatalities would be a factor,
1121
+
1122
+ 18:58.840 --> 19:00.840
1123
+ but there are just not enough fatalities
1124
+
1125
+ 19:00.840 --> 19:03.840
1126
+ to be statistically significant at scale.
1127
+
1128
+ 19:03.840 --> 19:06.840
1129
+ But there are enough crashes,
1130
+
1131
+ 19:06.840 --> 19:10.840
1132
+ there are far more crashes than there are fatalities.
1133
+
1134
+ 19:10.840 --> 19:15.840
1135
+ So you can assess what is the probability of a crash,
1136
+
1137
+ 19:15.840 --> 19:19.840
1138
+ then there's another step which probability of injury
1139
+
1140
+ 19:19.840 --> 19:21.840
1141
+ and probability of permanent injury
1142
+
1143
+ 19:21.840 --> 19:23.840
1144
+ and probability of death.
1145
+
1146
+ 19:23.840 --> 19:27.840
1147
+ And all of those need to be much better than a person
1148
+
1149
+ 19:27.840 --> 19:32.840
1150
+ by at least perhaps two hundred percent.
1151
+
1152
+ 19:32.840 --> 19:36.840
1153
+ And you think there's the ability to have a healthy discourse
1154
+
1155
+ 19:36.840 --> 19:39.840
1156
+ with the regulatory bodies on this topic?
1157
+
1158
+ 19:39.840 --> 19:43.840
1159
+ I mean, there's no question that regulators pay
1160
+
1161
+ 19:43.840 --> 19:48.840
1162
+ disproportionate amount of attention to that which generates press.
1163
+
1164
+ 19:48.840 --> 19:50.840
1165
+ This is just an objective fact.
1166
+
1167
+ 19:50.840 --> 19:52.840
1168
+ And Tesla generates a lot of press.
1169
+
1170
+ 19:52.840 --> 19:56.840
1171
+ So that, you know, in the United States,
1172
+
1173
+ 19:56.840 --> 20:00.840
1174
+ there's I think almost 40,000 automotive deaths per year.
1175
+
1176
+ 20:00.840 --> 20:03.840
1177
+ But if there are four in Tesla,
1178
+
1179
+ 20:03.840 --> 20:06.840
1180
+ they'll probably receive a thousand times more press
1181
+
1182
+ 20:06.840 --> 20:08.840
1183
+ than anyone else.
1184
+
1185
+ 20:08.840 --> 20:10.840
1186
+ So the psychology of that is actually fascinating.
1187
+
1188
+ 20:10.840 --> 20:12.840
1189
+ I don't think we'll have enough time to talk about that,
1190
+
1191
+ 20:12.840 --> 20:16.840
1192
+ but I have to talk to you about the human side of things.
1193
+
1194
+ 20:16.840 --> 20:20.840
1195
+ So myself and our team at MIT recently released a paper
1196
+
1197
+ 20:20.840 --> 20:24.840
1198
+ on functional vigilance of drivers while using autopilot.
1199
+
1200
+ 20:24.840 --> 20:27.840
1201
+ This is work we've been doing since autopilot was first
1202
+
1203
+ 20:27.840 --> 20:30.840
1204
+ released publicly over three years ago,
1205
+
1206
+ 20:30.840 --> 20:34.840
1207
+ collecting video driver faces and driver body.
1208
+
1209
+ 20:34.840 --> 20:38.840
1210
+ So I saw that you tweeted a quote from the abstract
1211
+
1212
+ 20:38.840 --> 20:43.840
1213
+ so I can at least guess that you've glanced at it.
1214
+
1215
+ 20:43.840 --> 20:46.840
1216
+ Can I talk you through what we found?
1217
+
1218
+ 20:46.840 --> 20:51.840
1219
+ Okay, so it appears that in the data that we've collected
1220
+
1221
+ 20:51.840 --> 20:54.840
1222
+ that drivers are maintaining functional vigilance
1223
+
1224
+ 20:54.840 --> 20:57.840
1225
+ such that we're looking at 18,000 disengagements
1226
+
1227
+ 20:57.840 --> 21:02.840
1228
+ from autopilot, 18,900, and annotating whether they were able
1229
+
1230
+ 21:02.840 --> 21:05.840
1231
+ to take over control in a timely manner?
1232
+
1233
+ 21:05.840 --> 21:07.840
1234
+ So they were there present looking at the road
1235
+
1236
+ 21:07.840 --> 21:09.840
1237
+ to take over control.
1238
+
1239
+ 21:09.840 --> 21:14.840
1240
+ Okay, so this goes against what many would predict
1241
+
1242
+ 21:14.840 --> 21:18.840
1243
+ from the body of literature on vigilance with automation.
1244
+
1245
+ 21:18.840 --> 21:21.840
1246
+ Now the question is, do you think these results
1247
+
1248
+ 21:21.840 --> 21:23.840
1249
+ hold across the broader population?
1250
+
1251
+ 21:23.840 --> 21:26.840
1252
+ So ours is just a small subset.
1253
+
1254
+ 21:26.840 --> 21:30.840
1255
+ Do you think one of the criticism is that there's
1256
+
1257
+ 21:30.840 --> 21:34.840
1258
+ a small minority of drivers that may be highly responsible
1259
+
1260
+ 21:34.840 --> 21:37.840
1261
+ where their vigilance decrement would increase
1262
+
1263
+ 21:37.840 --> 21:39.840
1264
+ with autopilot use?
1265
+
1266
+ 21:39.840 --> 21:41.840
1267
+ I think this is all really going to be swept.
1268
+
1269
+ 21:41.840 --> 21:46.840
1270
+ I mean, the system's improving so much so fast
1271
+
1272
+ 21:46.840 --> 21:50.840
1273
+ that this is going to be a moot point very soon
1274
+
1275
+ 21:50.840 --> 21:56.840
1276
+ where vigilance is, if something's many times safer
1277
+
1278
+ 21:56.840 --> 22:00.840
1279
+ than a person, then adding a person does,
1280
+
1281
+ 22:00.840 --> 22:04.840
1282
+ the effect on safety is limited.
1283
+
1284
+ 22:04.840 --> 22:09.840
1285
+ And in fact, it could be negative.
1286
+
1287
+ 22:09.840 --> 22:11.840
1288
+ That's really interesting.
1289
+
1290
+ 22:11.840 --> 22:16.840
1291
+ So the fact that a human may, some percent of the population
1292
+
1293
+ 22:16.840 --> 22:20.840
1294
+ may exhibit a vigilance decrement will not affect
1295
+
1296
+ 22:20.840 --> 22:22.840
1297
+ overall statistics numbers of safety.
1298
+
1299
+ 22:22.840 --> 22:27.840
1300
+ No, in fact, I think it will become very, very quickly,
1301
+
1302
+ 22:27.840 --> 22:29.840
1303
+ maybe even towards the end of this year,
1304
+
1305
+ 22:29.840 --> 22:32.840
1306
+ but I'd say I'd be shocked if it's not next year,
1307
+
1308
+ 22:32.840 --> 22:36.840
1309
+ at the latest, that having a human intervene
1310
+
1311
+ 22:36.840 --> 22:39.840
1312
+ will decrease safety.
1313
+
1314
+ 22:39.840 --> 22:40.840
1315
+ Decrease.
1316
+
1317
+ 22:40.840 --> 22:42.840
1318
+ I can imagine if you're in an elevator.
1319
+
1320
+ 22:42.840 --> 22:45.840
1321
+ Now, it used to be that there were elevator operators
1322
+
1323
+ 22:45.840 --> 22:47.840
1324
+ and you couldn't go on an elevator by yourself
1325
+
1326
+ 22:47.840 --> 22:51.840
1327
+ and work the lever to move between floors.
1328
+
1329
+ 22:51.840 --> 22:56.840
1330
+ And now, nobody wants an elevator operator
1331
+
1332
+ 22:56.840 --> 23:00.840
1333
+ because the automated elevator that stops the floors
1334
+
1335
+ 23:00.840 --> 23:03.840
1336
+ is much safer than the elevator operator.
1337
+
1338
+ 23:03.840 --> 23:05.840
1339
+ And in fact, it would be quite dangerous
1340
+
1341
+ 23:05.840 --> 23:07.840
1342
+ to have someone with a lever that can move
1343
+
1344
+ 23:07.840 --> 23:09.840
1345
+ the elevator between floors.
1346
+
1347
+ 23:09.840 --> 23:12.840
1348
+ So that's a really powerful statement
1349
+
1350
+ 23:12.840 --> 23:14.840
1351
+ and a really interesting one.
1352
+
1353
+ 23:14.840 --> 23:16.840
1354
+ But I also have to ask, from a user experience
1355
+
1356
+ 23:16.840 --> 23:18.840
1357
+ and from a safety perspective,
1358
+
1359
+ 23:18.840 --> 23:20.840
1360
+ one of the passions for me algorithmically
1361
+
1362
+ 23:20.840 --> 23:25.840
1363
+ is camera based detection of sensing the human,
1364
+
1365
+ 23:25.840 --> 23:27.840
1366
+ but detecting what the driver is looking at,
1367
+
1368
+ 23:27.840 --> 23:29.840
1369
+ cognitive load, body pose.
1370
+
1371
+ 23:29.840 --> 23:31.840
1372
+ On the computer vision side, that's a fascinating problem,
1373
+
1374
+ 23:31.840 --> 23:34.840
1375
+ but there's many in industry who believe
1376
+
1377
+ 23:34.840 --> 23:37.840
1378
+ you have to have camera based driver monitoring.
1379
+
1380
+ 23:37.840 --> 23:39.840
1381
+ Do you think this could be benefit gained
1382
+
1383
+ 23:39.840 --> 23:41.840
1384
+ from driver monitoring?
1385
+
1386
+ 23:41.840 --> 23:45.840
1387
+ If you have a system that's at or below
1388
+
1389
+ 23:45.840 --> 23:49.840
1390
+ human level reliability, then driver monitoring makes sense.
1391
+
1392
+ 23:49.840 --> 23:51.840
1393
+ But if your system is dramatically better,
1394
+
1395
+ 23:51.840 --> 23:53.840
1396
+ more reliable than a human,
1397
+
1398
+ 23:53.840 --> 23:58.840
1399
+ then driver monitoring does not help much.
1400
+
1401
+ 23:58.840 --> 24:03.840
1402
+ And like I said, you wouldn't want someone into...
1403
+
1404
+ 24:03.840 --> 24:05.840
1405
+ You wouldn't want someone in the elevator.
1406
+
1407
+ 24:05.840 --> 24:07.840
1408
+ If you're in an elevator, do you really want someone
1409
+
1410
+ 24:07.840 --> 24:09.840
1411
+ with a big lever, some random person operating
1412
+
1413
+ 24:09.840 --> 24:11.840
1414
+ the elevator between floors?
1415
+
1416
+ 24:11.840 --> 24:13.840
1417
+ I wouldn't trust that.
1418
+
1419
+ 24:13.840 --> 24:16.840
1420
+ I would rather have the buttons.
1421
+
1422
+ 24:16.840 --> 24:19.840
1423
+ Okay, you're optimistic about the pace
1424
+
1425
+ 24:19.840 --> 24:21.840
1426
+ of improvement of the system.
1427
+
1428
+ 24:21.840 --> 24:23.840
1429
+ From what you've seen with the full self driving car,
1430
+
1431
+ 24:23.840 --> 24:25.840
1432
+ computer.
1433
+
1434
+ 24:25.840 --> 24:27.840
1435
+ The rate of improvement is exponential.
1436
+
1437
+ 24:27.840 --> 24:30.840
1438
+ So one of the other very interesting design choices
1439
+
1440
+ 24:30.840 --> 24:34.840
1441
+ early on that connects to this is the operational
1442
+
1443
+ 24:34.840 --> 24:37.840
1444
+ design domain of autopilot.
1445
+
1446
+ 24:37.840 --> 24:41.840
1447
+ So where autopilot is able to be turned on.
1448
+
1449
+ 24:41.840 --> 24:46.840
1450
+ So contrast another vehicle system that we're studying
1451
+
1452
+ 24:46.840 --> 24:48.840
1453
+ is the Cadillac Super Cruise system.
1454
+
1455
+ 24:48.840 --> 24:51.840
1456
+ That's, in terms of ODD, very constrained to these particular
1457
+
1458
+ 24:51.840 --> 24:54.840
1459
+ kinds of highways, well mapped, tested,
1460
+
1461
+ 24:54.840 --> 24:58.840
1462
+ but it's much narrower than the ODD of Tesla vehicles.
1463
+
1464
+ 24:58.840 --> 25:00.840
1465
+ What's...
1466
+
1467
+ 25:00.840 --> 25:02.840
1468
+ It's like ADD.
1469
+
1470
+ 25:02.840 --> 25:04.840
1471
+ Yeah.
1472
+
1473
+ 25:04.840 --> 25:07.840
1474
+ That's good. That's a good line.
1475
+
1476
+ 25:07.840 --> 25:10.840
1477
+ What was the design decision
1478
+
1479
+ 25:10.840 --> 25:13.840
1480
+ in that different philosophy of thinking where...
1481
+
1482
+ 25:13.840 --> 25:15.840
1483
+ There's pros and cons.
1484
+
1485
+ 25:15.840 --> 25:20.840
1486
+ What we see with a wide ODD is Tesla drivers are able
1487
+
1488
+ 25:20.840 --> 25:23.840
1489
+ to explore more the limitations of the system,
1490
+
1491
+ 25:23.840 --> 25:26.840
1492
+ at least early on, and they understand together
1493
+
1494
+ 25:26.840 --> 25:28.840
1495
+ the instrument cluster display.
1496
+
1497
+ 25:28.840 --> 25:30.840
1498
+ They start to understand what are the capabilities.
1499
+
1500
+ 25:30.840 --> 25:32.840
1501
+ So that's a benefit.
1502
+
1503
+ 25:32.840 --> 25:37.840
1504
+ The con is you're letting drivers use it basically anywhere.
1505
+
1506
+ 25:37.840 --> 25:41.840
1507
+ Well, anywhere it could detect lanes with confidence.
1508
+
1509
+ 25:41.840 --> 25:46.840
1510
+ Was there a philosophy design decisions that were challenging
1511
+
1512
+ 25:46.840 --> 25:48.840
1513
+ that were being made there?
1514
+
1515
+ 25:48.840 --> 25:53.840
1516
+ Or from the very beginning, was that done on purpose
1517
+
1518
+ 25:53.840 --> 25:55.840
1519
+ with intent?
1520
+
1521
+ 25:55.840 --> 25:58.840
1522
+ Frankly, it's pretty crazy letting people drive
1523
+
1524
+ 25:58.840 --> 26:02.840
1525
+ a two ton death machine manually.
1526
+
1527
+ 26:02.840 --> 26:04.840
1528
+ That's crazy.
1529
+
1530
+ 26:04.840 --> 26:06.840
1531
+ In the future, people will be like,
1532
+
1533
+ 26:06.840 --> 26:09.840
1534
+ I can't believe anyone was just allowed to drive
1535
+
1536
+ 26:09.840 --> 26:12.840
1537
+ one of these two ton death machines
1538
+
1539
+ 26:12.840 --> 26:14.840
1540
+ and they just drive wherever they wanted,
1541
+
1542
+ 26:14.840 --> 26:16.840
1543
+ just like elevators.
1544
+
1545
+ 26:16.840 --> 26:18.840
1546
+ You just move the elevator with the lever wherever you want.
1547
+
1548
+ 26:18.840 --> 26:21.840
1549
+ It can stop at halfway between floors if you want.
1550
+
1551
+ 26:21.840 --> 26:24.840
1552
+ It's pretty crazy.
1553
+
1554
+ 26:24.840 --> 26:29.840
1555
+ So it's going to seem like a mad thing in the future
1556
+
1557
+ 26:29.840 --> 26:32.840
1558
+ that people were driving cars.
1559
+
1560
+ 26:32.840 --> 26:35.840
1561
+ So I have a bunch of questions about the human psychology,
1562
+
1563
+ 26:35.840 --> 26:37.840
1564
+ about behavior and so on.
1565
+
1566
+ 26:37.840 --> 26:39.840
1567
+ I don't know.
1568
+
1569
+ 26:39.840 --> 26:45.840
1570
+ Because you have faith in the AI system,
1571
+
1572
+ 26:45.840 --> 26:50.840
1573
+ not faith, but both on the hardware side
1574
+
1575
+ 26:50.840 --> 26:52.840
1576
+ and the deep learning approach of learning from data
1577
+
1578
+ 26:52.840 --> 26:55.840
1579
+ will make it just far safer than humans.
1580
+
1581
+ 26:55.840 --> 26:57.840
1582
+ Yeah, exactly.
1583
+
1584
+ 26:57.840 --> 27:00.840
1585
+ Recently, there are a few hackers who tricked autopilot
1586
+
1587
+ 27:00.840 --> 27:03.840
1588
+ to act in unexpected ways with adversarial examples.
1589
+
1590
+ 27:03.840 --> 27:06.840
1591
+ So we all know that neural network systems
1592
+
1593
+ 27:06.840 --> 27:08.840
1594
+ are very sensitive to minor disturbances
1595
+
1596
+ 27:08.840 --> 27:10.840
1597
+ to these adversarial examples on input.
1598
+
1599
+ 27:10.840 --> 27:13.840
1600
+ Do you think it's possible to defend against something like this
1601
+
1602
+ 27:13.840 --> 27:15.840
1603
+ for the industry?
1604
+
1605
+ 27:15.840 --> 27:17.840
1606
+ Sure.
1607
+
1608
+ 27:17.840 --> 27:22.840
1609
+ Can you elaborate on the confidence behind that answer?
1610
+
1611
+ 27:22.840 --> 27:27.840
1612
+ Well, a neural net is just like a basic bunch of matrix math.
1613
+
1614
+ 27:27.840 --> 27:30.840
1615
+ You have to be like a very sophisticated,
1616
+
1617
+ 27:30.840 --> 27:32.840
1618
+ somebody who really understands neural nets
1619
+
1620
+ 27:32.840 --> 27:37.840
1621
+ and basically reverse engineer how the matrix is being built
1622
+
1623
+ 27:37.840 --> 27:42.840
1624
+ and then create a little thing that just exactly causes
1625
+
1626
+ 27:42.840 --> 27:44.840
1627
+ the matrix math to be slightly off.
1628
+
1629
+ 27:44.840 --> 27:48.840
1630
+ But it's very easy to then block that by having
1631
+
1632
+ 27:48.840 --> 27:51.840
1633
+ basically anti negative recognition.
1634
+
1635
+ 27:51.840 --> 27:55.840
1636
+ It's like if the system sees something that looks like a matrix hack
1637
+
1638
+ 27:55.840 --> 28:01.840
1639
+ exclude it. It's such an easy thing to do.
1640
+
1641
+ 28:01.840 --> 28:05.840
1642
+ So learn both on the valid data and the invalid data.
1643
+
1644
+ 28:05.840 --> 28:07.840
1645
+ So basically learn on the adversarial examples
1646
+
1647
+ 28:07.840 --> 28:09.840
1648
+ to be able to exclude them.
1649
+
1650
+ 28:09.840 --> 28:12.840
1651
+ Yeah, you basically want to both know what is a car
1652
+
1653
+ 28:12.840 --> 28:15.840
1654
+ and what is definitely not a car.
1655
+
1656
+ 28:15.840 --> 28:18.840
1657
+ You train for this is a car and this is definitely not a car.
1658
+
1659
+ 28:18.840 --> 28:20.840
1660
+ Those are two different things.
1661
+
1662
+ 28:20.840 --> 28:23.840
1663
+ People have no idea neural nets really.
1664
+
1665
+ 28:23.840 --> 28:25.840
1666
+ They probably think neural nets involves like, you know,
1667
+
1668
+ 28:25.840 --> 28:28.840
1669
+ fishing net or something.
1670
+
1671
+ 28:28.840 --> 28:35.840
1672
+ So as you know, taking a step beyond just Tesla and autopilot,
1673
+
1674
+ 28:35.840 --> 28:39.840
1675
+ current deep learning approaches still seem in some ways
1676
+
1677
+ 28:39.840 --> 28:44.840
1678
+ to be far from general intelligence systems.
1679
+
1680
+ 28:44.840 --> 28:49.840
1681
+ Do you think the current approaches will take us to general intelligence
1682
+
1683
+ 28:49.840 --> 28:55.840
1684
+ or do totally new ideas need to be invented?
1685
+
1686
+ 28:55.840 --> 28:59.840
1687
+ I think we're missing a few key ideas for general intelligence,
1688
+
1689
+ 28:59.840 --> 29:04.840
1690
+ general, artificial general intelligence.
1691
+
1692
+ 29:04.840 --> 29:08.840
1693
+ But it's going to be upon us very quickly
1694
+
1695
+ 29:08.840 --> 29:11.840
1696
+ and then we'll need to figure out what shall we do
1697
+
1698
+ 29:11.840 --> 29:15.840
1699
+ if we even have that choice.
1700
+
1701
+ 29:15.840 --> 29:18.840
1702
+ But it's amazing how people can't differentiate between, say,
1703
+
1704
+ 29:18.840 --> 29:22.840
1705
+ the narrow AI that, you know, allows a car to figure out
1706
+
1707
+ 29:22.840 --> 29:25.840
1708
+ what a lane line is and, you know,
1709
+
1710
+ 29:25.840 --> 29:29.840
1711
+ and navigate streets versus general intelligence.
1712
+
1713
+ 29:29.840 --> 29:32.840
1714
+ Like these are just very different things.
1715
+
1716
+ 29:32.840 --> 29:35.840
1717
+ Like your toaster and your computer are both machines,
1718
+
1719
+ 29:35.840 --> 29:38.840
1720
+ but one's much more sophisticated than another.
1721
+
1722
+ 29:38.840 --> 29:43.840
1723
+ You're confident with Tesla you can create the world's best toaster.
1724
+
1725
+ 29:43.840 --> 29:45.840
1726
+ The world's best toaster, yes.
1727
+
1728
+ 29:45.840 --> 29:48.840
1729
+ The world's best self driving.
1730
+
1731
+ 29:48.840 --> 29:51.840
1732
+ I'm, yes.
1733
+
1734
+ 29:51.840 --> 29:54.840
1735
+ To me, right now, this seems game set match.
1736
+
1737
+ 29:54.840 --> 29:57.840
1738
+ I don't, I mean, that's, I don't want to be complacent or overconfident,
1739
+
1740
+ 29:57.840 --> 29:59.840
1741
+ but that's what it appears.
1742
+
1743
+ 29:59.840 --> 30:02.840
1744
+ That is just literally what it, how it appears right now.
1745
+
1746
+ 30:02.840 --> 30:06.840
1747
+ It could be wrong, but it appears to be the case
1748
+
1749
+ 30:06.840 --> 30:10.840
1750
+ that Tesla is vastly ahead of everyone.
1751
+
1752
+ 30:10.840 --> 30:13.840
1753
+ Do you think we will ever create an AI system
1754
+
1755
+ 30:13.840 --> 30:17.840
1756
+ that we can love and loves us back in a deep meaningful way
1757
+
1758
+ 30:17.840 --> 30:20.840
1759
+ like in the movie, Her?
1760
+
1761
+ 30:20.840 --> 30:23.840
1762
+ I think AI will be capable of convincing you
1763
+
1764
+ 30:23.840 --> 30:25.840
1765
+ to fall in love with it very well.
1766
+
1767
+ 30:25.840 --> 30:28.840
1768
+ And that's different than us humans?
1769
+
1770
+ 30:28.840 --> 30:31.840
1771
+ You know, we start getting into a metaphysical question
1772
+
1773
+ 30:31.840 --> 30:35.840
1774
+ and do emotions and thoughts exist in a different realm than the physical.
1775
+
1776
+ 30:35.840 --> 30:37.840
1777
+ And maybe they do, maybe they don't.
1778
+
1779
+ 30:37.840 --> 30:39.840
1780
+ I don't know, but from a physics standpoint,
1781
+
1782
+ 30:39.840 --> 30:43.840
1783
+ I tend to think of things, you know,
1784
+
1785
+ 30:43.840 --> 30:47.840
1786
+ like physics was my main sort of training.
1787
+
1788
+ 30:47.840 --> 30:50.840
1789
+ And from a physics standpoint,
1790
+
1791
+ 30:50.840 --> 30:52.840
1792
+ essentially, if it loves you in a way
1793
+
1794
+ 30:52.840 --> 30:57.840
1795
+ that you can't tell whether it's real or not, it is real.
1796
+
1797
+ 30:57.840 --> 30:59.840
1798
+ That's a physics view of love.
1799
+
1800
+ 30:59.840 --> 31:04.840
1801
+ If you cannot prove that it does not,
1802
+
1803
+ 31:04.840 --> 31:07.840
1804
+ if there's no test that you can apply
1805
+
1806
+ 31:07.840 --> 31:14.840
1807
+ that would make it allow you to tell the difference,
1808
+
1809
+ 31:14.840 --> 31:16.840
1810
+ then there is no difference.
1811
+
1812
+ 31:16.840 --> 31:20.840
1813
+ And it's similar to seeing our world as simulation.
1814
+
1815
+ 31:20.840 --> 31:22.840
1816
+ There may not be a test to tell the difference
1817
+
1818
+ 31:22.840 --> 31:24.840
1819
+ between what the real world and the simulation.
1820
+
1821
+ 31:24.840 --> 31:26.840
1822
+ And therefore, from a physics perspective,
1823
+
1824
+ 31:26.840 --> 31:28.840
1825
+ it might as well be the same thing.
1826
+
1827
+ 31:28.840 --> 31:29.840
1828
+ Yes.
1829
+
1830
+ 31:29.840 --> 31:32.840
1831
+ There may be ways to test whether it's a simulation.
1832
+
1833
+ 31:32.840 --> 31:35.840
1834
+ There might be, I'm not saying there aren't,
1835
+
1836
+ 31:35.840 --> 31:38.840
1837
+ but you could certainly imagine that a simulation could correct
1838
+
1839
+ 31:38.840 --> 31:40.840
1840
+ that once an entity in the simulation
1841
+
1842
+ 31:40.840 --> 31:42.840
1843
+ found a way to detect the simulation,
1844
+
1845
+ 31:42.840 --> 31:44.840
1846
+ it could either restart, you know,
1847
+
1848
+ 31:44.840 --> 31:47.840
1849
+ pause the simulation, start a new simulation,
1850
+
1851
+ 31:47.840 --> 31:52.840
1852
+ or do one of many other things that then corrects for that error.
1853
+
1854
+ 31:52.840 --> 31:58.840
1855
+ So when maybe you or somebody else creates an AGI system
1856
+
1857
+ 31:58.840 --> 32:02.840
1858
+ and you get to ask her one question,
1859
+
1860
+ 32:02.840 --> 32:16.840
1861
+ what would that question be?
1862
+
1863
+ 32:16.840 --> 32:21.840
1864
+ What's outside the simulation?
1865
+
1866
+ 32:21.840 --> 32:23.840
1867
+ Elon, thank you so much for talking today.
1868
+
1869
+ 32:23.840 --> 32:52.840
1870
+ All right, thank you.
1871
+
vtt/episode_019_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_020_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_021_large.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_021_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_022_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_023_small.vtt ADDED
@@ -0,0 +1,2177 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:04.720
4
+ The following is a conversation with Gavin Miller, he's the head of Adobe Research.
5
+
6
+ 00:04.720 --> 00:08.960
7
+ Adobe has empowered artists, designers, and creative minds from all professions,
8
+
9
+ 00:08.960 --> 00:14.320
10
+ working in the digital medium for over 30 years with software such as Photoshop, Illustrator,
11
+
12
+ 00:14.320 --> 00:20.560
13
+ Premiere, After Effects, InDesign, Audition, software that works with images, video, and audio.
14
+
15
+ 00:21.200 --> 00:25.920
16
+ Adobe Research is working to define the future evolution of these products in a way
17
+
18
+ 00:25.920 --> 00:31.360
19
+ that makes the life of creatives easier, automates the tedious tasks, and gives more and more time
20
+
21
+ 00:31.360 --> 00:36.880
22
+ to operate in the idea space instead of pixel space. This is where the cutting edge, deep
23
+
24
+ 00:36.880 --> 00:41.360
25
+ learning methods of the past decade can really shine more than perhaps any other application.
26
+
27
+ 00:42.240 --> 00:47.840
28
+ Gavin is the embodiment of combining tech and creativity. Outside of Adobe Research,
29
+
30
+ 00:47.840 --> 00:53.600
31
+ he writes poetry and builds robots, both things that are near and dear to my heart as well.
32
+
33
+ 00:53.600 --> 00:59.200
34
+ This conversation is part of the Artificial Intelligence Podcast. If you enjoy it, subscribe
35
+
36
+ 00:59.200 --> 01:05.360
37
+ on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D.
38
+
39
+ 01:06.000 --> 01:09.600
40
+ And now here's my conversation with Gavin Miller.
41
+
42
+ 01:11.120 --> 01:15.920
43
+ You're head of Adobe Research, leading a lot of innovative efforts and applications of AI,
44
+
45
+ 01:15.920 --> 01:23.200
46
+ creating images, video, audio, language, but you're also yourself an artist, a poet,
47
+
48
+ 01:23.200 --> 01:28.640
49
+ a writer, and even a roboticist. So while I promise to everyone listening,
50
+
51
+ 01:28.640 --> 01:32.880
52
+ that I will not spend the entire time we have together reading your poetry, which I love.
53
+
54
+ 01:33.440 --> 01:39.200
55
+ I have to sprinkle it in at least a little bit. So some of them are pretty deep and profound,
56
+
57
+ 01:39.200 --> 01:43.520
58
+ and some are light and silly. Let's start with a few lines from the silly variety.
59
+
60
+ 01:43.520 --> 01:56.800
61
+ You write in a beautiful parody of both Edith Piaf's Non, je ne regrette rien and My Way by Frank Sinatra.
62
+
63
+ 01:56.800 --> 02:06.400
64
+ So it opens with, and now dessert is near. It's time to pay the final total. I've tried to slim
65
+
66
+ 02:06.400 --> 02:14.800
67
+ all year, but my diets have been anecdotal. So where does that love for poetry come from
68
+
69
+ 02:14.800 --> 02:20.880
70
+ for you? And if we dissect your mind, how does it all fit together in the bigger puzzle of Dr.
71
+
72
+ 02:20.880 --> 02:27.440
73
+ Gavin Miller? Well, interesting you chose that one. That was a poem I wrote when I'd been to
74
+
75
+ 02:27.440 --> 02:32.400
76
+ my doctor and he said, you really need to lose some weight and go on a diet. And whilst the
77
+
78
+ 02:32.400 --> 02:37.200
79
+ rational part of my brain wanted to do that, the irrational part of my brain was protesting and
80
+
81
+ 02:37.200 --> 02:42.400
82
+ sort of embraced the opposite idea. I regret nothing, hence. Yes, exactly. Taken to an extreme,
83
+
84
+ 02:42.400 --> 02:49.600
85
+ I thought it would be funny. Obviously, it's a serious topic for some people. But I think,
86
+
87
+ 02:49.600 --> 02:53.920
88
+ for me, I've always been interested in writing since I was in high school, as well as doing
89
+
90
+ 02:53.920 --> 02:58.960
91
+ technology and invention. And sometimes the parallel strands in your life that carry on,
92
+
93
+ 02:58.960 --> 03:05.120
94
+ and one is more about your private life and one's more about your technological career.
95
+
96
+ 03:05.680 --> 03:10.640
97
+ And then at sort of happy moments along the way, sometimes the two things touch, one idea informs
98
+
99
+ 03:10.640 --> 03:17.040
100
+ the other. And we can talk about that as we go. Do you think you're writing the art, the poetry
101
+
102
+ 03:17.040 --> 03:23.440
103
+ contribute indirectly or directly to your research, to your work in Adobe? Well, sometimes it does if
104
+
105
+ 03:23.440 --> 03:30.000
106
+ I say, imagine a future in a science fiction kind of way. And then once it exists on paper,
107
+
108
+ 03:30.000 --> 03:37.520
109
+ I think, well, why shouldn't I just build that? There was an example where when realistic voice
110
+
111
+ 03:37.520 --> 03:42.800
112
+ synthesis first started in the 90s at Apple, where I worked in research. It was done by a friend of mine.
113
+
114
+ 03:44.000 --> 03:48.640
115
+ I sort of sat down and started writing a poem which each line I would enter into the voice
116
+
117
+ 03:48.640 --> 03:54.240
118
+ synthesizer and see how it sounded and sort of wrote it for that voice. And at the time,
119
+
120
+ 03:55.040 --> 04:00.160
121
+ the agents weren't very sophisticated. So they'd sort of add random intonation. And I kind of made
122
+
123
+ 04:00.160 --> 04:06.560
124
+ up the poem to sort of match the tone of the voice. And it sounded slightly sad and depressed. So I
125
+
126
+ 04:06.560 --> 04:12.720
127
+ pretended it was a poem written by an intelligent agent, sort of telling the user to go home and
128
+
129
+ 04:12.720 --> 04:16.560
130
+ leave them alone. But at the same time, they were lonely and wanted to have company and learn from
131
+
132
+ 04:16.560 --> 04:21.760
133
+ what the user was saying. And at the time, it was way beyond anything that AI could possibly do.
134
+
135
+ 04:21.760 --> 04:27.520
136
+ But, you know, since then, it's becoming more within the bounds of possibility.
137
+
138
+ 04:29.040 --> 04:34.800
139
+ And then at the same time, I had a project at home where I did sort of a smart home. This was
140
+
141
+ 04:34.800 --> 04:40.960
142
+ probably 93, 94. And I had the talking voice who'd remind me when I walked in the door of what
143
+
144
+ 04:40.960 --> 04:45.600
145
+ things I had to do. I had buttons on my washing machine because I was a bachelor and I'd leave
146
+
147
+ 04:45.600 --> 04:49.920
148
+ the clothes in there for three days and they'd go moldy. So as I got up in the morning, I would say,
149
+
150
+ 04:49.920 --> 04:56.640
151
+ don't forget the washing and so on. I made photographic photo albums that used light
152
+
153
+ 04:56.640 --> 05:01.040
154
+ sensors to know which page you were looking at would send that over wireless radio to the agent
155
+
156
+ 05:01.040 --> 05:05.760
157
+ who would then play sounds that matched the image you were looking at in the book. So I was kind of
158
+
159
+ 05:05.760 --> 05:10.480
160
+ in love with this idea of magical realism and whether it was possible to do that with technology.
161
+
162
+ 05:10.480 --> 05:16.080
163
+ So that was a case where the sort of the agent sort of intrigued me from a literary point of
164
+
165
+ 05:16.080 --> 05:22.880
166
+ view and became a personality. I think more recently, I've also written plays and when
167
+
168
+ 05:22.880 --> 05:27.440
169
+ plays you write dialogue and obviously you write a fixed set of dialogue that follows a linear
170
+
171
+ 05:27.440 --> 05:33.360
172
+ narrative. But with modern agents, as you design a personality or a capability for conversation,
173
+
174
+ 05:33.360 --> 05:37.760
175
+ you're sort of thinking of, I kind of have imaginary dialogue in my head. And then I think,
176
+
177
+ 05:37.760 --> 05:43.680
178
+ what would it take not only to have that be real, but for it to really know what it's talking about.
179
+
180
+ 05:44.240 --> 05:49.440
181
+ So it's easy to fall into the uncanny valley with AI where it says something it doesn't really
182
+
183
+ 05:49.440 --> 05:54.560
184
+ understand, but it sounds good to the person. But you rapidly realize that it's kind of just
185
+
186
+ 05:55.520 --> 06:00.000
187
+ stimulus response. It doesn't really have real world knowledge about the thing it's describing.
188
+
189
+ 06:00.640 --> 06:06.320
190
+ And so when you get to that point, it really needs to have multiple ways of talking about
191
+
192
+ 06:06.320 --> 06:10.560
193
+ the same concept. So it sounds as though it really understands it. Now, what really understanding
194
+
195
+ 06:10.560 --> 06:16.160
196
+ means is in the eye of the beholder, right? But if it only has one way of referring to something,
197
+
198
+ 06:16.160 --> 06:21.200
199
+ it feels like it's a canned response. But if it can reason about it, or you can go at it from
200
+
201
+ 06:21.200 --> 06:25.600
202
+ multiple angles and give a similar kind of response that people would, then it starts to
203
+
204
+ 06:26.400 --> 06:30.480
205
+ seem more like there's something there that's sentient.
206
+
207
+ 06:31.040 --> 06:35.600
208
+ You can say the same thing, multiple things from different perspectives. I mean, with the
209
+
210
+ 06:35.600 --> 06:40.000
211
+ automatic image captioning that I've seen the work that you're doing, there's elements of that,
212
+
213
+ 06:40.000 --> 06:46.000
214
+ right? Being able to generate different kinds of... Right. So one in my team, there's a lot of work on
215
+
216
+ 06:46.640 --> 06:52.000
217
+ turning a medium from one form to another, whether it's auto tagging imagery or making up full
218
+
219
+ 06:52.000 --> 06:57.840
220
+ sentences about what's in the image, then changing the sentence, finding another image that matches
221
+
222
+ 06:57.840 --> 07:04.720
223
+ the new sentence or vice versa. And in the modern world of GANs, you sort of give it a description
224
+
225
+ 07:04.720 --> 07:11.360
226
+ and it synthesizes an asset that matches the description. So I've sort of gone on a journey.
227
+
228
+ 07:11.360 --> 07:16.560
229
+ My early days in my career were about 3D computer graphics, the sort of pioneering work sort of
230
+
231
+ 07:16.560 --> 07:22.400
232
+ before movies had special effects done with 3D graphics and sort of rode that revolution. And
233
+
234
+ 07:22.400 --> 07:26.720
235
+ that was very much like the renaissance where people would model light and color and shape
236
+
237
+ 07:26.720 --> 07:32.160
238
+ and everything. And now we're kind of in another wave where it's more impressionistic and it's
239
+
240
+ 07:32.160 --> 07:38.240
241
+ sort of the idea of something can be used to generate an image directly, which is sort of the
242
+
243
+ 07:38.240 --> 07:45.520
244
+ new frontier in computer image generation using AI algorithms. So the creative process is more in
245
+
246
+ 07:45.520 --> 07:49.840
247
+ the space of ideas or becoming more in the space of ideas versus in the raw pixels?
248
+
249
+ 07:50.720 --> 07:55.280
250
+ Well, it's interesting. It depends. I think at Adobe, we really want to span the entire range
251
+
252
+ 07:55.280 --> 08:01.040
253
+ from really, really good, what you might call low level tools by low level, as close to say analog
254
+
255
+ 08:01.040 --> 08:07.040
256
+ workflows as possible. So what we do there is we make up systems that do really realistic oil
257
+
258
+ 08:07.040 --> 08:11.600
259
+ paint and watercolor simulation. So if you want every bristle to behave as it would in the real
260
+
261
+ 08:11.600 --> 08:17.760
262
+ world and leave a beautiful analog trail of water and then flow after you've made the brushstroke,
263
+
264
+ 08:17.760 --> 08:21.600
265
+ you can do that. And that's really important for people who want to create something
266
+
267
+ 08:22.720 --> 08:28.160
268
+ really expressive or really novel because they have complete control. And then certain other
269
+
270
+ 08:28.160 --> 08:35.600
271
+ tasks become automated. It frees the artists up to focus on the inspiration and less of the perspiration.
272
+
273
+ 08:35.600 --> 08:43.920
274
+ So thinking about different ideas, obviously, once you finish the design, there's a lot of work to
275
+
276
+ 08:43.920 --> 08:49.840
277
+ say do it for all the different aspect ratio of phones or websites and so on. And that used to
278
+
279
+ 08:49.840 --> 08:55.040
280
+ take up an awful lot of time for artists. It still does for many, what we call content velocity.
281
+
282
+ 08:55.040 --> 09:01.360
283
+ And one of the targets of AI is actually to reason about from the first example of what are the
284
+
285
+ 09:01.360 --> 09:06.960
286
+ likely intent for these other formats, maybe if you change the language to German and the words
287
+
288
+ 09:06.960 --> 09:12.160
289
+ are longer, how do you reflow everything so that it looks nicely artistic in that way.
290
+
291
+ 09:12.160 --> 09:17.360
292
+ And so the person can focus on the really creative bit in the middle, which is what is the look and
293
+
294
+ 09:17.360 --> 09:21.200
295
+ style and feel and what's the message and what's the story and the human element.
296
+
297
+ 09:21.200 --> 09:27.920
298
+ So I think creativity is changing. So that's one way in which we're trying to just make it easier
299
+
300
+ 09:27.920 --> 09:32.880
301
+ and faster and cheaper to do so that there can be more of it, more demand, because it's less
302
+
303
+ 09:32.880 --> 09:39.200
304
+ expensive. So everyone wants beautiful artwork for everything from a school website to Hollywood movie.
305
+
306
+ 09:40.800 --> 09:46.480
307
+ On the other side, as some of these things have automatic versions of them, people will
308
+
309
+ 09:46.480 --> 09:52.480
310
+ possibly change role from being the hands on artist and to being either the art director or
311
+
312
+ 09:52.480 --> 09:57.920
313
+ the conceptual artist. And then the computer will be a partner to help create polished examples of
314
+
315
+ 09:57.920 --> 10:02.720
316
+ the idea that they're exploring. Let's talk about Adobe products versus AI and Adobe products.
317
+
318
+ 10:04.000 --> 10:11.200
319
+ Just so you know where I'm coming from, I'm a huge fan of Photoshop for images, Premiere for video,
320
+
321
+ 10:11.200 --> 10:17.200
322
+ Audition for audio. I'll probably use Photoshop to create the thumbnail for this video, Premiere
323
+
324
+ 10:17.200 --> 10:24.800
325
+ to edit the video, Audition to do the audio. That said, everything I do is really manually. And I
326
+
327
+ 10:24.800 --> 10:30.640
328
+ set up, I use this old school Kinesis keyboard and I have AutoHotkey that just, it's really about
329
+
330
+ 10:30.640 --> 10:37.280
331
+ optimizing the flow of just making sure there's as few clicks as possible. So just being extremely
332
+
333
+ 10:37.280 --> 10:43.920
334
+ efficient. It's something you started to speak to. So before we get into the fun, sort of awesome
335
+
336
+ 10:43.920 --> 10:48.880
337
+ deep learning things, where does AI, if you could speak a little more to it AI or just
338
+
339
+ 10:48.880 --> 10:57.280
340
+ automation in general, do you see in the coming months and years or in general prior in 2018
341
+
342
+ 10:58.560 --> 11:03.840
343
+ fitting into making the life, the low level pixel work flow easier?
344
+
345
+ 11:03.840 --> 11:10.160
346
+ Yeah, that's a great question. So we have a very rich array of algorithms already in Photoshop,
347
+
348
+ 11:10.160 --> 11:17.600
349
+ just classical procedural algorithms as well as ones based on data. In some cases, they end up
350
+
351
+ 11:17.600 --> 11:22.560
352
+ with a large number of sliders and degrees of freedom. So one way in which AI can help is just
353
+
354
+ 11:22.560 --> 11:27.520
355
+ an auto button which comes up with default settings based on the content itself rather than
356
+
357
+ 11:27.520 --> 11:34.080
358
+ default values for the tool. At that point, you then start tweaking. So that's that's a very kind of
359
+
360
+ 11:34.080 --> 11:39.600
361
+ make life easier for people whilst making use of common sense from other example images.
362
+
363
+ 11:39.600 --> 11:40.960
364
+ So like smart defaults.
365
+
366
+ 11:40.960 --> 11:47.760
367
+ Smart defaults, absolutely. Another one is something we've spent a lot of work over the last
368
+
369
+ 11:47.760 --> 11:53.360
370
+ 20 years. I've been at Adobe 19 thinking about selection, for instance, where
371
+
372
+ 11:53.360 --> 11:58.720
373
+ you know, with a quick select, you would look at color boundaries and figure out how to sort of
374
+
375
+ 11:58.720 --> 12:02.480
376
+ flood fill into regions that you thought were physically connected in the real world.
377
+
378
+ 12:03.360 --> 12:08.080
379
+ But that algorithm had no visual common sense about what a cat looks like or a dog. It would just do
380
+
381
+ 12:08.080 --> 12:14.080
382
+ it based on rules of thumb, which were applied to graph theory. And it was a big improvement over
383
+
384
+ 12:14.080 --> 12:19.600
385
+ the previous work we had sort of almost click every everything by hand or if it just did similar
386
+
387
+ 12:19.600 --> 12:24.880
388
+ colors, it would do little tiny regions that wouldn't be connected. But in the future,
389
+
390
+ 12:24.880 --> 12:31.120
391
+ using neural nets to actually do a great job with say a single click or even in the case of
392
+
393
+ 12:31.120 --> 12:36.160
394
+ well known categories like people or animals, no click, where you just say select the object and
395
+
396
+ 12:36.160 --> 12:41.440
397
+ it just knows the dominant object as a person in the middle of the photograph. Those kinds of things
398
+
399
+ 12:41.440 --> 12:47.520
400
+ are really valuable if they can be robust enough to give you good quality results.
401
+
402
+ 12:47.520 --> 12:50.720
403
+ Or they can be a great start for like tweaking it.
404
+
405
+ 12:50.720 --> 12:56.080
406
+ So for example, background removal, like one thing I'll, in a thumbnail,
407
+
408
+ 12:56.080 --> 13:00.240
409
+ I'll take a picture of you right now and essentially remove the background behind you.
410
+
411
+ 13:00.240 --> 13:06.480
412
+ And I want to make that as easy as possible. You don't have flowing hair, like rich at the
413
+
414
+ 13:06.480 --> 13:11.040
415
+ moment. Rich sort of. I had it in the past, it may come again in the future, but for now.
416
+
417
+ 13:12.320 --> 13:16.160
418
+ So that sometimes makes it a little more challenging to remove the background.
419
+
420
+ 13:16.160 --> 13:23.680
421
+ How difficult do you think is that problem for AI for basically making the quick selection tool
422
+
423
+ 13:23.680 --> 13:26.960
424
+ smarter and smarter and smarter? Well, we have a lot of research on that already.
425
+
426
+ 13:28.400 --> 13:34.160
427
+ If you want a sort of quick, cheap and cheerful, look, I'm pretending I'm in Hawaii,
428
+
429
+ 13:34.160 --> 13:38.400
430
+ but it's sort of a joke, then you don't need perfect boundaries. And you can do that today
431
+
432
+ 13:38.400 --> 13:44.960
433
+ with a single click for the algorithms we have. We have other algorithms where with a little bit
434
+
435
+ 13:44.960 --> 13:48.560
436
+ more guidance on the boundaries, like you might need to touch it up a little bit.
437
+
438
+ 13:49.920 --> 13:56.480
439
+ We have other algorithms that can pull a nice matte from a crude selection. So we have combinations
440
+
441
+ 13:56.480 --> 14:04.000
442
+ of tools that can do all of that. And at our recent MAX conference, Adobe MAX, we demonstrated how
443
+
444
+ 14:04.880 --> 14:09.680
445
+ very quickly just by drawing a simple polygon around the object of interest, we could not
446
+
447
+ 14:09.680 --> 14:17.920
448
+ only do it for a single still, but we could pull at least a selection mask from a moving target,
449
+
450
+ 14:17.920 --> 14:23.360
451
+ like a person dancing in front of a brick wall or something. And so it's going from hours to
452
+
453
+ 14:23.360 --> 14:29.360
454
+ a few seconds for workflows that are really nice. And then you might go in and touch up a little.
455
+
456
+ 14:30.240 --> 14:33.040
457
+ So that's a really interesting question. You mentioned the word robust.
458
+
459
+ 14:33.040 --> 14:40.000
460
+ You know, there's like a journey for an idea, right? And what you presented probably at Max
461
+
462
+ 14:41.360 --> 14:46.320
463
+ has elements of just sort of it inspires the concept, it can work pretty well in a majority
464
+
465
+ 14:46.320 --> 14:51.520
466
+ of cases. But how do you make something that works well in a majority of cases? How do you make
467
+
468
+ 14:51.520 --> 14:56.480
469
+ something that works, maybe in all cases, or it becomes a robust tool?
470
+
471
+ 14:56.480 --> 15:01.760
472
+ There are a couple of things. So that really touches on the difference between academic research
473
+
474
+ 15:01.760 --> 15:06.640
475
+ and industrial research. So in academic research, it's really about who's the person to have the
476
+
477
+ 15:06.640 --> 15:12.480
478
+ great new idea that shows promise. And we certainly love to be those people too. But
479
+
480
+ 15:13.120 --> 15:17.840
481
+ we have sort of two forms of publishing. One is academic peer review, which we do a lot of,
482
+
483
+ 15:17.840 --> 15:24.720
484
+ and we have great success there as much as some universities. But then we also have shipping,
485
+
486
+ 15:24.720 --> 15:30.160
487
+ which is a different type of, and then we get customer review, as well as, you know, product
488
+
489
+ 15:30.160 --> 15:37.840
490
+ critics. And that might be a case where it's not about being perfect every single time, but
491
+
492
+ 15:37.840 --> 15:43.280
493
+ perfect enough of the time, plus a mechanism to intervene and recover where you do have mistakes.
494
+
495
+ 15:43.280 --> 15:47.120
496
+ So we have the luxury of very talented customers. We don't want them to be
497
+
498
+ 15:48.720 --> 15:55.200
499
+ overly taxed doing it every time. But if they can go in and just take it from 99 to 100,
500
+
501
+ 15:55.200 --> 16:01.440
502
+ with the touch of a mouse or something, then for the professional end, that's something
503
+
504
+ 16:01.440 --> 16:06.320
505
+ that we definitely want to support as well. And for them, it went from having to do that
506
+
507
+ 16:06.320 --> 16:13.360
508
+ tedious task all the time to much less often. So I think that gives us an out. If it had to be
509
+
510
+ 16:13.360 --> 16:18.640
511
+ 100% automatic all the time, then that would delay the time at which we could get to market.
512
+
513
+ 16:18.640 --> 16:26.000
514
+ So on that thread, maybe you can untangle something. Again, I'm sort of just speaking to
515
+
516
+ 16:26.000 --> 16:36.000
517
+ my own experience. Maybe that is the most useful idea. So I think Photoshop, as an example, or Premiere,
518
+
519
+ 16:37.680 --> 16:44.240
520
+ has a lot of amazing features that I haven't touched. And so what's the, in terms of AI,
521
+
522
+ 16:44.240 --> 16:54.080
523
+ helping make my life or the life of creatives easier? How this collaboration between human
524
+
525
+ 16:54.080 --> 16:58.960
526
+ and machine, how do you learn to collaborate better? How do you learn the new algorithms?
527
+
528
+ 17:00.000 --> 17:04.240
529
+ Is it something that where you have to watch tutorials and you have to watch videos and so
530
+
531
+ 17:04.240 --> 17:10.400
532
+ on? Or do you ever think, do you think about the experience itself through exploration being
533
+
534
+ 17:10.400 --> 17:18.320
535
+ the teacher? We absolutely do. So I'm glad that you brought this up. We sort of think about
536
+
537
+ 17:18.320 --> 17:22.080
538
+ two things. One is helping the person in the moment to do the task that they need to do. But
539
+
540
+ 17:22.080 --> 17:26.960
541
+ the other is thinking more holistically about their journey learning a tool. And when it's like,
542
+
543
+ 17:26.960 --> 17:31.120
544
+ think of it as Adobe University, where you use the tool long enough, you become an expert.
545
+
546
+ 17:31.120 --> 17:34.960
547
+ And not necessarily an expert in everything. It's like living in a city. You don't necessarily
548
+
549
+ 17:34.960 --> 17:40.160
550
+ know every street, but you know, the important ones you need to get to. So we have projects in
551
+
552
+ 17:40.160 --> 17:45.360
553
+ research, which actually look at the thousands of hours of tutorials online and try to understand
554
+
555
+ 17:45.360 --> 17:51.040
556
+ what's being taught in them. And then we had one publication at CHI where it was looking at,
557
+
558
+ 17:52.560 --> 17:57.360
559
+ given the last three or four actions you did, what did other people in tutorials do next?
560
+
561
+ 17:57.360 --> 18:01.680
562
+ So if you want some inspiration for what you might do next, or you just want to watch the
563
+
564
+ 18:01.680 --> 18:06.800
565
+ tutorial and see, learn from people who are doing similar workflows to you, you can without having
566
+
567
+ 18:06.800 --> 18:13.360
568
+ to go and search on keywords and everything. So really trying to use the context of your use of
569
+
570
+ 18:13.360 --> 18:18.640
571
+ the app to make intelligent suggestions, either about choices that you might make,
572
+
573
+ 18:20.800 --> 18:26.480
574
+ or in a more assistive way where it could say, if you did this next, we could show you. And that's
575
+
576
+ 18:26.480 --> 18:31.360
577
+ basically the frontier that we're exploring now, which is, if we really deeply understand the
578
+
579
+ 18:31.360 --> 18:38.480
580
+ domain in which designers and creative people work, can we combine that with AI and pattern
581
+
582
+ 18:38.480 --> 18:47.520
583
+ matching of behavior to make intelligent suggestions, either through verbal possibilities or just
584
+
585
+ 18:47.520 --> 18:53.840
586
+ showing the results of if you try this. And that's really the sort of, I was in a meeting today
587
+
588
+ 18:53.840 --> 18:58.880
589
+ thinking about these things. So it's still a grand challenge. We'd all love
590
+
591
+ 18:58.880 --> 19:05.920
592
+ an artist over one shoulder and a teacher over the other, right? And we hope to get there. And
593
+
594
+ 19:05.920 --> 19:10.640
595
+ the right thing to do is to give enough at each stage that it's useful in itself, but it builds
596
+
597
+ 19:10.640 --> 19:19.120
598
+ a foundation for the next level of expectation. Are you aware of this gigantic medium of YouTube
599
+
600
+ 19:19.120 --> 19:26.240
601
+ that's creating just a bunch of creative people, both artists and teachers of different kinds?
602
+
603
+ 19:26.240 --> 19:31.440
604
+ Absolutely. And the more we can understand those media types, both visually and in terms of
605
+
606
+ 19:31.440 --> 19:37.200
607
+ transcripts and words, the more we can bring the wisdom that they embody into the guidance that's
608
+
609
+ 19:37.200 --> 19:42.960
610
+ embedded in the tool. That would be brilliant to remove the barrier from having to yourself type
611
+
612
+ 19:42.960 --> 19:49.600
613
+ in the keywords, searching, and so on. Absolutely. And then in the longer term, an interesting
614
+
615
+ 19:49.600 --> 19:54.000
616
+ discussion is, does it ultimately not just assist with learning the interface we have,
617
+
618
+ 19:54.000 --> 19:59.200
619
+ but does it modify the interface to be simpler? Or do you fragment into a variety of tools,
620
+
621
+ 19:59.200 --> 20:05.600
622
+ each of which has a different level of visibility of the functionality? I like to say that if you
623
+
624
+ 20:05.600 --> 20:12.640
625
+ add a feature to a GUI, you have to have yet more visual complexity confronting the new user.
626
+
627
+ 20:12.640 --> 20:17.120
628
+ Whereas if you have an assistant with a new skill, if you know they have it, so you know
629
+
630
+ 20:17.120 --> 20:23.280
631
+ to ask for it, then it's sort of additive without being more intimidating. So we definitely think
632
+
633
+ 20:23.280 --> 20:29.120
634
+ about new users and how to onboard them. Many actually value the idea of being able to master
635
+
636
+ 20:29.120 --> 20:34.720
637
+ that complex interface and keyboard shortcuts, like you were talking about earlier, because
638
+
639
+ 20:35.520 --> 20:39.680
640
+ with great familiarity, it becomes a musical instrument for expressing your visual ideas.
641
+
642
+ 20:40.480 --> 20:45.840
643
+ And other people just want to get something done quickly in the simplest way possible,
644
+
645
+ 20:45.840 --> 20:50.400
646
+ and that's where a more assistive version of the same technology might be useful,
647
+
648
+ 20:50.400 --> 20:54.560
649
+ maybe on a different class of device, which is more in context for capture, say,
650
+
651
+ 20:55.920 --> 21:01.680
652
+ whereas somebody who's in a deep post production workflow may want to be on a laptop or a big
653
+
654
+ 21:01.680 --> 21:10.560
655
+ screen desktop and have more knobs and dials to really express the subtlety of what they want to do.
656
+
657
+ 21:12.160 --> 21:16.320
658
+ So there's so many exciting applications of computer vision and machine learning
659
+
660
+ 21:16.320 --> 21:21.920
661
+ that Adobe is working on, like scene stitching, sky replacement, foreground,
662
+
663
+ 21:21.920 --> 21:26.880
664
+ background removal, spatial object based image search, automatic image captioning,
665
+
666
+ 21:26.880 --> 21:31.280
667
+ like we mentioned, Project Cloak, Project Deep Fill filling in parts of the images,
668
+
669
+ 21:31.920 --> 21:38.640
670
+ Project Scribbler, style transfer video, style transfer faces and video with Project Puppetron,
671
+
672
+ 21:38.640 --> 21:49.040
673
+ best name ever. Can you talk through a favorite or some of them or examples that pop in mind?
674
+
675
+ 21:49.040 --> 21:54.800
676
+ I'm sure I'll be able to provide links to other ones we don't talk about because there's visual
677
+
678
+ 21:54.800 --> 22:00.640
679
+ elements to all of them that are exciting. Why they're interesting for different reasons might
680
+
681
+ 22:00.640 --> 22:06.640
682
+ be a good way to go. So I think sky replace is interesting because we talked about selection
683
+
684
+ 22:06.640 --> 22:11.440
685
+ being sort of an atomic operation. It's almost like a, if you think of an assembly language,
686
+
687
+ 22:11.440 --> 22:17.840
688
+ it's like a single instruction. Whereas sky replace is a compound action where you automatically
689
+
690
+ 22:17.840 --> 22:22.720
691
+ select the sky, you look for stock content that matches the geometry of the scene.
692
+
693
+ 22:24.000 --> 22:27.600
694
+ You try to have variety in your choices so that you do coverage of different moods.
695
+
696
+ 22:28.160 --> 22:35.600
697
+ It then mattes in the sky behind the foreground, but then importantly it uses the foreground
698
+
699
+ 22:35.600 --> 22:40.560
700
+ of the other image that you just searched on to recolor the foreground of the image that
701
+
702
+ 22:40.560 --> 22:47.840
703
+ you're editing. So if you say go from a midday sky to an evening sky, it will actually add
704
+
705
+ 22:47.840 --> 22:53.760
706
+ sort of an orange glow to the foreground objects as well. I was a big fan in college of Magritte
707
+
708
+ 22:53.760 --> 22:59.600
709
+ and he has a number of paintings where it's surrealism because he'll like do a composite,
710
+
711
+ 22:59.600 --> 23:03.440
712
+ but the foreground building will be at night and the sky will be during the day. There's one
713
+
714
+ 23:03.440 --> 23:09.120
715
+ called The Empire of Light which was on my wall in college and we're trying not to do surrealism.
716
+
717
+ 23:09.120 --> 23:15.680
718
+ It can be a choice, but we'd rather have it be natural by default rather than it looking fake
719
+
720
+ 23:15.680 --> 23:19.760
721
+ and then you have to do a whole bunch of post production to fix it. So that's a case where
722
+
723
+ 23:19.760 --> 23:25.120
724
+ we're kind of capturing an entire workflow into a single action and doing it in about a second
725
+
726
+ 23:25.120 --> 23:30.720
727
+ rather than a minute or two. And when you do that, you can not just do it once, but you can do it
728
+
729
+ 23:30.720 --> 23:36.960
730
+ for say like 10 different backgrounds and then you're almost back to this inspiration idea of
731
+
732
+ 23:36.960 --> 23:41.840
733
+ I don't know quite what I want, but I'll know it when I see it. And you can just explore the
734
+
735
+ 23:41.840 --> 23:47.200
736
+ design space as close to final production value as possible. And then when you really pick one,
737
+
738
+ 23:47.200 --> 23:51.120
739
+ you might go back and slightly tweak the selection mask just to make it perfect and
740
+
741
+ 23:51.120 --> 23:54.320
742
+ do that kind of polish that professionals like to bring to their work.
743
+
744
+ 23:54.320 --> 24:01.040
745
+ So then there's this idea of, as you mentioned, the sky replacing it to different stock images of
746
+
747
+ 24:01.040 --> 24:07.040
748
+ the sky. In general, you have this idea or it could be on your disk or whatever. But making even
749
+
750
+ 24:07.040 --> 24:12.400
751
+ more intelligent choices about ways to search stock images which is really interesting. It's
752
+
753
+ 24:12.400 --> 24:19.120
754
+ kind of spatial. Absolutely. Right. So that was something we called Concept Canvas. So normally
755
+
756
+ 24:19.120 --> 24:26.240
757
+ when you do say an image search, I assume it's just based on text. You would give the keywords
758
+
759
+ 24:26.240 --> 24:30.080
760
+ of the things you want to be in the image and it would find the nearest one that had those tags.
761
+
762
+ 24:32.720 --> 24:37.200
763
+ For many tasks, you really want to be able to say I want a big person in the middle or in a
764
+
765
+ 24:37.200 --> 24:41.280
766
+ dog to the right and umbrella above the left because you want to leave space for the text or
767
+
768
+ 24:41.280 --> 24:48.640
769
+ whatever. And so Concept Canvas lets you assign spatial regions to the keywords. And then we've
770
+
771
+ 24:48.640 --> 24:54.560
772
+ already preindexed the images to know where the important concepts are in the picture. So we then
773
+
774
+ 24:54.560 --> 25:01.200
775
+ go through that index matching to assets. And even though it's just another form of search,
776
+
777
+ 25:01.200 --> 25:06.480
778
+ because you're doing spatial design or layout, it starts to feel like design. You sort of feel
779
+
780
+ 25:06.480 --> 25:13.120
781
+ oddly responsible for the image that comes back as if you invented it a little bit. So it's a good
782
+
783
+ 25:13.120 --> 25:18.960
784
+ example where giving enough control starts to make people have a sense of ownership over the
785
+
786
+ 25:18.960 --> 25:23.280
787
+ outcome of the event. And then we also have technologies in Photoshop where you physically
788
+
789
+ 25:23.280 --> 25:29.440
790
+ can move the dog in post as well. But for Concept Canvas, it was just a very fast way to sort of
791
+
792
+ 25:29.440 --> 25:38.560
793
+ loop through and be able to lay things out. In terms of being able to remove objects from a scene
794
+
795
+ 25:38.560 --> 25:45.920
796
+ and fill in the background automatically. So that's extremely exciting. And that's
797
+
798
+ 25:45.920 --> 25:51.200
799
+ so neural networks are stepping in there. I just talked this week with Ian Goodfellow.
800
+
801
+ 25:51.200 --> 25:55.360
802
+ So the GANs for doing that is definitely one approach.
803
+
804
+ 25:55.360 --> 25:59.440
805
+ So is that a really difficult problem? Is it as difficult as it looks,
806
+
807
+ 25:59.440 --> 26:06.080
808
+ again, to take it to a robust product level? Well, there are certain classes of image for
809
+
810
+ 26:06.080 --> 26:10.800
811
+ which the traditional algorithms like Content Aware Fill work really well. Like if you have
812
+
813
+ 26:10.800 --> 26:15.200
814
+ a naturalistic texture like a gravel path or something, because it's patch based, it will
815
+
816
+ 26:15.200 --> 26:19.840
817
+ make up a very plausible looking intermediate thing and fill in the hole. And then we use some
818
+
819
+ 26:20.960 --> 26:25.200
820
+ algorithms to sort of smooth out the lighting so you don't see any brightness contrasts in that
821
+
822
+ 26:25.200 --> 26:29.680
823
+ region or you've gradually ramped from dark to light if it straddles the boundary.
824
+
825
+ 26:29.680 --> 26:37.600
826
+ Where it gets complicated is if you have to infer invisible structure behind the person in front.
827
+
828
+ 26:37.600 --> 26:41.920
829
+ And that really requires a common sense knowledge of the world to know what,
830
+
831
+ 26:42.480 --> 26:47.040
832
+ if I see three quarters of a house, do I have a rough sense of what the rest of the house looks
833
+
834
+ 26:47.040 --> 26:51.840
835
+ like? If you just fill it in with patches, it can end up sort of doing things that make sense
836
+
837
+ 26:51.840 --> 26:56.480
838
+ locally. But you look at the global structure and it looks like it's just sort of crumpled or messed
839
+
840
+ 26:56.480 --> 27:02.800
841
+ up. And so what GANs and neural nets bring to the table is this common sense learned from the
842
+
843
+ 27:02.800 --> 27:10.640
844
+ training set. And the challenge right now is that the generative methods that can make up
845
+
846
+ 27:10.640 --> 27:14.960
847
+ missing holes using that kind of technology are still only stable at low resolutions.
848
+
849
+ 27:15.520 --> 27:19.680
850
+ And so you either need to then go from a low resolution to a high resolution using some other
851
+
852
+ 27:19.680 --> 27:24.720
853
+ algorithm or we need to push the state of the art and it's still in research to get to that point.
854
+
855
+ 27:24.720 --> 27:30.720
856
+ Right. Of course, if you show it something, say it's trained on houses and then you show it an
857
+
858
+ 27:30.720 --> 27:37.360
859
+ octopus, it's not going to do a very good job of showing common sense about octopuses. So
860
+
861
+ 27:39.360 --> 27:45.600
862
+ again, you're asking about how you know that it's ready for prime time. You really need a very
863
+
864
+ 27:45.600 --> 27:52.880
865
+ diverse training set of images. And ultimately, that may be a case where you put it out there
866
+
867
+ 27:52.880 --> 28:00.400
868
+ with some guard rails where you might do a detector which looks at the image and
869
+
870
+ 28:00.960 --> 28:05.280
871
+ sort of estimates its own competence of how well a job could this algorithm do.
872
+
873
+ 28:05.920 --> 28:10.400
874
+ So eventually, there may be this idea of what we call an ensemble of experts where
875
+
876
+ 28:11.120 --> 28:15.440
877
+ any particular expert is specialized in certain things and then there's sort of a
878
+
879
+ 28:15.440 --> 28:19.360
880
+ either they vote to say how confident they are about what to do. This is sort of more future
881
+
882
+ 28:19.360 --> 28:24.080
883
+ looking or there's some dispatcher which says you're good at houses, you're good at trees.
884
+
885
+ 28:27.120 --> 28:31.520
886
+ So I mean, all this adds up to a lot of work because each of those models will be a whole
887
+
888
+ 28:31.520 --> 28:38.320
889
+ bunch of work. But I think over time, you'd gradually fill out the set and initially focus
890
+
891
+ 28:38.320 --> 28:41.520
892
+ on certain workflows and then sort of branch out as you get more capable.
893
+
894
+ 28:41.520 --> 28:48.640
895
+ So you mentioned workflows and have you considered maybe looking far into the future?
896
+
897
+ 28:50.000 --> 28:57.680
898
+ First of all, using the fact that there is a huge amount of people that use Photoshop,
899
+
900
+ 28:57.680 --> 29:03.520
901
+ for example, they have certain workflows, being able to collect the information by which
902
+
903
+ 29:04.880 --> 29:09.440
904
+ they basically get information about their workflows, about what they need,
905
+
906
+ 29:09.440 --> 29:15.120
907
+ what can the ways to help them, whether it is houses or octopus that people work on more.
908
+
909
+ 29:16.000 --> 29:23.440
910
+ Basically getting a beat on what kind of data is needed to be annotated and collected for people
911
+
912
+ 29:23.440 --> 29:26.320
913
+ to build tools that actually work well for people.
914
+
915
+ 29:26.320 --> 29:31.680
916
+ Absolutely. And this is a big topic and the whole world of AI is what data can you gather and why.
917
+
918
+ 29:33.200 --> 29:39.120
919
+ At one level, the way to think about it is we not only want to train our customers in how to use
920
+
921
+ 29:39.120 --> 29:44.160
922
+ our products, but we want them to teach us what's important and what's useful. At the same time,
923
+
924
+ 29:44.160 --> 29:51.120
925
+ we want to respect their privacy and obviously we wouldn't do things without their explicit permission.
926
+
927
+ 29:52.800 --> 29:57.440
928
+ And I think the modern spirit of the age around this is you have to demonstrate to somebody
929
+
930
+ 29:57.440 --> 30:02.720
931
+ how they're benefiting from sharing their data with the tool. Either it's helping in the short
932
+
933
+ 30:02.720 --> 30:07.120
934
+ term to understand their intent so you can make better recommendations or if they're
935
+
936
+ 30:07.120 --> 30:11.840
937
+ friendly to your cause or your tool, or they want to help you evolve quickly because
938
+
939
+ 30:11.840 --> 30:16.320
940
+ they depend on you for their livelihood, they may be willing to share some of their
941
+
942
+ 30:17.360 --> 30:23.360
943
+ workflows or choices with the dataset to be then trained.
944
+
945
+ 30:24.720 --> 30:30.560
946
+ There are technologies for looking at learning without necessarily storing all the information
947
+
948
+ 30:30.560 --> 30:36.080
949
+ permanently so that you can learn on the fly but not keep a record of what somebody did.
950
+
951
+ 30:36.080 --> 30:38.720
952
+ So, we're definitely exploring all of those possibilities.
953
+
954
+ 30:38.720 --> 30:45.520
955
+ And I think Adobe exists in a space where Photoshop, if I look at the data I've created
956
+
957
+ 30:45.520 --> 30:51.840
958
+ and own, I'm less comfortable sharing data with social networks than I am with Adobe because
959
+
960
+ 30:51.840 --> 31:01.360
961
+ there's just exactly as you said, there's an obvious benefit for sharing the data that I use
962
+
963
+ 31:01.360 --> 31:05.440
964
+ to create in Photoshop because it's helping improve the workflow in the future.
965
+
966
+ 31:05.440 --> 31:06.080
967
+ Right.
968
+
969
+ 31:06.080 --> 31:09.440
970
+ As opposed to it's not clear what the benefit is in social networks.
971
+
972
+ 31:10.000 --> 31:14.000
973
+ It's nice of you to say that. I mean, I think there are some professional workflows where
974
+
975
+ 31:14.000 --> 31:17.360
976
+ people might be very protective of what they're doing such as if I was preparing
977
+
978
+ 31:18.240 --> 31:22.640
979
+ evidence for a legal case, I wouldn't want any of that, you know,
980
+
981
+ 31:22.640 --> 31:25.440
982
+ phoning home to help train the algorithm or anything.
983
+
984
+ 31:26.560 --> 31:30.720
985
+ There may be other cases where people say having a trial version or they're doing some,
986
+
987
+ 31:30.720 --> 31:35.280
988
+ I'm not saying we're doing this today, but there's a future scenario where somebody has a more
989
+
990
+ 31:35.280 --> 31:40.400
991
+ permissive relationship with Adobe where they explicitly say, I'm fine, I'm only doing hobby
992
+
993
+ 31:40.400 --> 31:48.000
994
+ projects or things which are non confidential and in exchange for some benefit tangible or
995
+
996
+ 31:48.000 --> 31:51.200
997
+ otherwise, I'm willing to share very fine grain data.
998
+
999
+ 31:51.840 --> 31:57.920
1000
+ So, another possible scenario is to capture relatively crude high level things from more
1001
+
1002
+ 31:57.920 --> 32:02.160
1003
+ people and then more detailed knowledge from people who are willing to participate.
1004
+
1005
+ 32:02.160 --> 32:07.280
1006
+ We do that today with explicit customer studies where, you know, we go and visit somebody and
1007
+
1008
+ 32:07.280 --> 32:10.640
1009
+ ask them to try the tool and we human observe what they're doing.
1010
+
1011
+ 32:12.000 --> 32:15.760
1012
+ In the future, to be able to do that enough to be able to train an algorithm,
1013
+
1014
+ 32:16.320 --> 32:20.240
1015
+ we'd need a more systematic process, but we'd have to do it very consciously because
1016
+
1017
+ 32:21.200 --> 32:24.560
1018
+ one of the things people treasure about Adobe is a sense of trust
1019
+
1020
+ 32:24.560 --> 32:28.880
1021
+ and we don't want to endanger that through overly aggressive data collection.
1022
+
1023
+ 32:28.880 --> 32:35.520
1024
+ So, we have a Chief Privacy Officer and it's definitely front and center of thinking about AI
1025
+
1026
+ 32:35.520 --> 32:39.920
1027
+ rather than an afterthought. Well, when you start that program, sign me up.
1028
+
1029
+ 32:39.920 --> 32:41.040
1030
+ Okay, happy to.
1031
+
1032
+ 32:42.800 --> 32:48.640
1033
+ Is there other projects that you wanted to mention that I didn't perhaps that pop into mind?
1034
+
1035
+ 32:48.640 --> 32:51.840
1036
+ Well, you covered the number. I think you mentioned Project Puppetron.
1037
+
1038
+ 32:51.840 --> 32:58.480
1039
+ I think that one is interesting because you might think of Adobe as only thinking in 2D
1040
+
1041
+ 32:59.760 --> 33:04.800
1042
+ and that's a good example where we're actually thinking more three dimensionally about how to
1043
+
1044
+ 33:04.800 --> 33:09.440
1045
+ assign features to faces so that we can, you know, if you take, so what Puppetron does,
1046
+
1047
+ 33:09.440 --> 33:16.320
1048
+ it takes either a still or a video of a person talking and then it can take a painting of somebody
1049
+
1050
+ 33:16.320 --> 33:20.160
1051
+ else and then apply the style of the painting to the person who's talking in the video.
1052
+
1053
+ 33:20.160 --> 33:29.600
1054
+ And it's unlike a sort of screen door post filter effect that you sometimes see online.
1055
+
1056
+ 33:30.320 --> 33:36.080
1057
+ It really looks as though it's sort of somehow attached or reflecting the motion of the face.
1058
+
1059
+ 33:36.080 --> 33:40.320
1060
+ And so that's the case where even to do a 2D workflow like stylization,
1061
+
1062
+ 33:40.320 --> 33:44.160
1063
+ you really need to infer more about the 3D structure of the world.
1064
+
1065
+ 33:44.160 --> 33:48.560
1066
+ And I think as 3D computer vision algorithms get better,
1067
+
1068
+ 33:48.560 --> 33:52.960
1069
+ initially they'll focus on particular domains like faces where you have a lot of
1070
+
1071
+ 33:52.960 --> 33:57.680
1072
+ prior knowledge about structure and you can maybe have a parameterized template that you fit to the
1073
+
1074
+ 33:57.680 --> 34:03.600
1075
+ image. But over time, this should be possible for more general content. And it might even be
1076
+
1077
+ 34:03.600 --> 34:09.360
1078
+ invisible to the user that you're doing 3D reconstruction under the hood, but it might
1079
+
1080
+ 34:09.360 --> 34:15.200
1081
+ then let you do edits much more reliably or correctly than you would otherwise.
1082
+
1083
+ 34:15.200 --> 34:20.800
1084
+ And you know, the face is a very important application, right?
1085
+
1086
+ 34:20.800 --> 34:22.480
1087
+ So making things work.
1088
+
1089
+ 34:22.480 --> 34:26.640
1090
+ And a very sensitive one. If you do something uncanny, it's very disturbing.
1091
+
1092
+ 34:26.640 --> 34:30.080
1093
+ That's right. You have to get it. You have to get it right.
1094
+
1095
+ 34:30.080 --> 34:39.040
1096
+ So in the space of augmented reality and virtual reality, what do you think is the role of AR and
1097
+
1098
+ 34:39.040 --> 34:45.360
1099
+ VR and in the content we consume as people as consumers and the content we create as creators?
1100
+
1101
+ 34:45.360 --> 34:47.920
1102
+ No, that's a great question. I think about this a lot too.
1103
+
1104
+ 34:48.720 --> 34:55.360
1105
+ So I think VR and AR serve slightly different purposes. So VR can really transport you to an
1106
+
1107
+ 34:55.360 --> 35:02.400
1108
+ entire immersive world, no matter what your personal situation is. To that extent, it's a bit like
1109
+
1110
+ 35:02.400 --> 35:06.720
1111
+ a really, really widescreen television where it sort of snaps you out of your context and puts you
1112
+
1113
+ 35:06.720 --> 35:13.360
1114
+ in a new one. And I think it's still evolving in terms of the hardware I actually worked on,
1115
+
1116
+ 35:13.360 --> 35:17.680
1117
+ VR in the 90s, trying to solve the latency and sort of nausea problem, which we did,
1118
+
1119
+ 35:17.680 --> 35:23.040
1120
+ but it was very expensive and a bit early. There's a new wave of that now. I think
1121
+
1122
+ 35:23.600 --> 35:27.600
1123
+ and increasingly those devices are becoming all in one rather than something that's tethered to a
1124
+
1125
+ 35:27.600 --> 35:34.480
1126
+ box. I think the market seems to be bifurcating into things for consumers and things for professional
1127
+
1128
+ 35:34.480 --> 35:40.080
1129
+ use cases, like for architects and people designing where your product is a building and you really
1130
+
1131
+ 35:40.080 --> 35:45.200
1132
+ want to experience it better than looking at a scale model or a drawing, I think,
1133
+
1134
+ 35:45.920 --> 35:50.320
1135
+ or even than a video. So I think for that, where you need a sense of scale and spatial
1136
+
1137
+ 35:50.320 --> 35:59.600
1138
+ relationships, it's great. I think AR holds the promise of sort of taking digital assets off the
1139
+
1140
+ 35:59.600 --> 36:03.600
1141
+ screen and putting them in context in the real world on the table in front of you on the wall
1142
+
1143
+ 36:03.600 --> 36:10.400
1144
+ behind you. And that has the corresponding need that the assets need to adapt to the physical
1145
+
1146
+ 36:10.400 --> 36:15.680
1147
+ context in which they're being placed. I mean, it's a bit like having a live theater troupe come
1148
+
1149
+ 36:15.680 --> 36:20.880
1150
+ to your house and put on Hamlet. My mother had a friend who used to do this at Stately Homes in
1151
+
1152
+ 36:20.880 --> 36:26.640
1153
+ England for the National Trust. And they would adapt the scenes and even they'd walk the audience
1154
+
1155
+ 36:26.640 --> 36:32.880
1156
+ through the rooms to see the action based on the country house they found themselves in for two
1157
+
1158
+ 36:32.880 --> 36:38.720
1159
+ days. And I think AR will have the same issue that if you have a tiny table in a big living room
1160
+
1161
+ 36:38.720 --> 36:44.960
1162
+ or something, it'll try to figure out what can you change and what's fixed. And there's a little
1163
+
1164
+ 36:44.960 --> 36:52.560
1165
+ bit of a tension between fidelity, where if you captured Senior Eye of doing a fantastic ballet,
1166
+
1167
+ 36:52.560 --> 36:56.640
1168
+ you'd want it to be sort of exactly reproduced. And maybe all you could do is scale it down.
1169
+
1170
+ 36:56.640 --> 37:03.760
1171
+ Whereas somebody telling you a story might be walking around the room doing some gestures
1172
+
1173
+ 37:03.760 --> 37:07.040
1174
+ and that could adapt to the room in which they were telling the story.
1175
+
1176
+ 37:07.760 --> 37:12.800
1177
+ And do you think fidelity is that important in that space or is it more about the storytelling?
1178
+
1179
+ 37:12.800 --> 37:17.840
1180
+ I think it may depend on the characteristic of the media. If it's a famous celebrity,
1181
+
1182
+ 37:17.840 --> 37:22.480
1183
+ then it may be that you want to catch every nuance and they don't want to be reanimated by some
1184
+
1185
+ 37:22.480 --> 37:30.800
1186
+ algorithm. It could be that if it's really a loveable frog telling you a story and it's
1187
+
1188
+ 37:30.800 --> 37:34.320
1189
+ about a princess and a frog, then it doesn't matter if the frog moves in a different way.
1190
+
1191
+ 37:35.520 --> 37:38.640
1192
+ I think a lot of the ideas that have sort of grown up in the game world will
1193
+
1194
+ 37:39.520 --> 37:45.120
1195
+ now come into the broader commercial sphere once they're needing adaptive characters in AR.
1196
+
1197
+ 37:45.120 --> 37:51.920
1198
+ Are you thinking of engineering tools that allow creators to create in the augmented world,
1199
+
1200
+ 37:52.480 --> 37:56.000
1201
+ basically making a Photoshop for the augmented world?
1202
+
1203
+ 37:57.360 --> 38:02.560
1204
+ Well, we have shown a few demos of sort of taking a Photoshop layer stack and then expanding it into
1205
+
1206
+ 38:02.560 --> 38:08.640
1207
+ 3D. That's actually been shown publicly as one example in AR. Where we're particularly excited
1208
+
1209
+ 38:08.640 --> 38:17.120
1210
+ at the moment is in 3D. 3D design is still a very challenging space. We believe that it's
1211
+
1212
+ 38:17.120 --> 38:22.800
1213
+ a worthwhile experiment to try to figure out if AR or immersive makes 3D design more spontaneous.
1214
+
1215
+ 38:23.360 --> 38:26.960
1216
+ Can you give me an example of 3D design just like applications?
1217
+
1218
+ 38:26.960 --> 38:32.080
1219
+ Well, literally, a simple one would be laying out objects, right? On a conventional screen,
1220
+
1221
+ 38:32.080 --> 38:35.680
1222
+ you'd sort of have a plan view and a side view and a perspective view and you sort of be dragging
1223
+
1224
+ 38:35.680 --> 38:39.440
1225
+ it around with the mouse and if you're not careful, it would go through the wall and all that.
1226
+
1227
+ 38:39.440 --> 38:46.400
1228
+ Whereas if you were really laying out objects, say in a VR headset, you could literally move
1229
+
1230
+ 38:46.400 --> 38:50.720
1231
+ your head to see a different viewpoint. They'd be in stereo, so you'd have a sense of depth
1232
+
1233
+ 38:50.720 --> 38:57.040
1234
+ because you're already wearing the depth glasses, right? So it would be those sort of big gross
1235
+
1236
+ 38:57.040 --> 39:01.120
1237
+ motor, move things around, kind of skills seem much more spontaneous just like they are in the
1238
+
1239
+ 39:01.120 --> 39:08.320
1240
+ real world. The frontier for us, I think, is whether that same medium can be used to do fine
1241
+
1242
+ 39:08.320 --> 39:14.880
1243
+ grain design tasks, like very accurate constraints on, say, a CAD model or something. That may be
1244
+
1245
+ 39:14.880 --> 39:19.120
1246
+ better done on a desktop, but it may just be a matter of inventing the right UI.
1247
+
1248
+ 39:20.160 --> 39:27.600
1249
+ So we're hopeful that because there will be this potential explosion of demand for 3D assets
1250
+
1251
+ 39:27.600 --> 39:32.560
1252
+ that's driven by AR and more real time animation on conventional screens,
1253
+
1254
+ 39:34.880 --> 39:40.800
1255
+ those tools will also help with, or those devices will help with designing the content as well.
1256
+
1257
+ 39:40.800 --> 39:46.480
1258
+ You've mentioned quite a few interesting sort of new ideas. At the same time, there's old
1259
+
1260
+ 39:46.480 --> 39:51.280
1261
+ timers like me that are stuck in their old ways. I think I'm the old timer.
1262
+
1263
+ 39:51.280 --> 39:58.560
1264
+ Okay. All right. But they oppose all change at all costs. When you're thinking about
1265
+
1266
+ 39:58.560 --> 40:05.440
1267
+ creating new interfaces, do you feel the burden of just this giant user base that loves the
1268
+
1269
+ 40:05.440 --> 40:13.760
1270
+ current product? So anything new you do that any new idea comes at a cost that you'll be resisted?
1271
+
1272
+ 40:13.760 --> 40:21.200
1273
+ Well, I think if you have to trade off control for convenience, then our existing user base
1274
+
1275
+ 40:21.200 --> 40:27.680
1276
+ would definitely be offended by that. I think if there are some things where you have more convenience
1277
+
1278
+ 40:27.680 --> 40:34.160
1279
+ and just as much control, that may be more welcome. We do think about not breaking well known
1280
+
1281
+ 40:34.160 --> 40:40.640
1282
+ metaphors for things. So things should sort of make sense. Photoshop has never been a static
1283
+
1284
+ 40:40.640 --> 40:46.640
1285
+ target. It's always been evolving and growing. And to some extent, there's been a lot of brilliant
1286
+
1287
+ 40:46.640 --> 40:50.640
1288
+ thought along the way of how it works today. So we don't want to just throw all that out.
1289
+
1290
+ 40:51.840 --> 40:55.600
1291
+ If there's a fundamental breakthrough, like a single click is good enough to select an object
1292
+
1293
+ 40:55.600 --> 41:01.760
1294
+ rather than having to do lots of strokes, that actually fits in quite nicely to the existing
1295
+
1296
+ 41:01.760 --> 41:07.840
1297
+ tool set, either as an optional mode or as a starting point. I think where we're looking at
1298
+
1299
+ 41:07.840 --> 41:13.440
1300
+ radical simplicity where you could encapsulate an entire workflow with a much simpler UI,
1301
+
1302
+ 41:14.000 --> 41:18.640
1303
+ then sometimes that's easier to do in the context of either a different device like a
1304
+
1305
+ 41:18.640 --> 41:25.120
1306
+ mobile device where the affordances are naturally different or in a tool that's targeted at a
1307
+
1308
+ 41:25.120 --> 41:31.040
1309
+ different workflow where it's about spontaneity and velocity rather than precision. And we have
1310
+
1311
+ 41:31.040 --> 41:36.720
1312
+ projects like Rush, which can let you do professional quality video editing for a certain class of
1313
+
1314
+ 41:36.720 --> 41:47.440
1315
+ media output that is targeted very differently in terms of users and the experience. And ideally,
1316
+
1317
+ 41:47.440 --> 41:54.160
1318
+ people would go, if I'm feeling like doing Premiere, big project, I'm doing a four part
1319
+
1320
+ 41:54.160 --> 41:59.200
1321
+ television series. That's definitely a premier thing. But if I want to do something to show my
1322
+
1323
+ 41:59.200 --> 42:05.200
1324
+ recent vacation, maybe I'll just use Rush because I can do it in the half an hour I have free at
1325
+
1326
+ 42:05.200 --> 42:12.480
1327
+ home rather than the four hours I need to do it at work. And for the use cases which we can do well,
1328
+
1329
+ 42:12.480 --> 42:16.960
1330
+ it really is much faster to get the same output. But the more professional tools obviously have
1331
+
1332
+ 42:16.960 --> 42:21.520
1333
+ a much richer toolkit and more flexibility in what they can do.
1334
+
1335
+ 42:21.520 --> 42:26.400
1336
+ And then at the same time, with the flexibility and control, I like this idea of smart defaults,
1337
+
1338
+ 42:27.040 --> 42:33.520
1339
+ of using AI to coach you, like what Google has, the I'm Feeling Lucky button.
1340
+
1341
+ 42:33.520 --> 42:37.360
1342
+ Right. Or one button kind of gives you a pretty good
1343
+
1344
+ 42:38.160 --> 42:41.440
1345
+ set of settings. And then you almost, that's almost an educational tool.
1346
+
1347
+ 42:42.000 --> 42:43.360
1348
+ Absolutely. Yeah.
1349
+
1350
+ 42:43.360 --> 42:49.920
1351
+ To show, because sometimes when you have all this control, you're not sure about the
1352
+
1353
+ 42:51.040 --> 42:55.600
1354
+ correlation between the different bars that control different elements of the image and so on.
1355
+
1356
+ 42:55.600 --> 43:00.400
1357
+ And sometimes there's a degree of, you don't know what the optimal is.
1358
+
1359
+ 43:00.400 --> 43:05.360
1360
+ And then some things are sort of on demand like help, right?
1361
+
1362
+ 43:05.360 --> 43:06.160
1363
+ Help, yeah.
1364
+
1365
+ 43:06.160 --> 43:09.600
1366
+ I'm stuck. I need to know what to look for. I'm not quite sure what it's called.
1367
+
1368
+ 43:10.400 --> 43:13.920
1369
+ And something that was proactively making helpful suggestions or,
1370
+
1371
+ 43:14.800 --> 43:20.640
1372
+ you know, you could imagine a make a suggestion button where you'd use all of that knowledge
1373
+
1374
+ 43:20.640 --> 43:25.120
1375
+ of workflows and everything to maybe suggest something to go and learn about or just to try
1376
+
1377
+ 43:25.120 --> 43:31.280
1378
+ or show the answer. And maybe it's not one intelligent default, but it's like a variety
1379
+
1380
+ 43:31.280 --> 43:33.760
1381
+ of defaults. And then you go, oh, I like that one.
1382
+
1383
+ 43:33.760 --> 43:34.960
1384
+ Yeah. Yeah.
1385
+
1386
+ 43:34.960 --> 43:35.680
1387
+ Several options.
1388
+
1389
+ 43:37.200 --> 43:39.520
1390
+ So back to poetry.
1391
+
1392
+ 43:39.520 --> 43:40.640
1393
+ Ah, yes.
1394
+
1395
+ 43:40.640 --> 43:42.400
1396
+ We're going to interleave.
1397
+
1398
+ 43:43.440 --> 43:48.000
1399
+ So first few lines of a recent poem of yours before I ask the next question.
1400
+
1401
+ 43:48.000 --> 43:55.840
1402
+ Yeah. This is about the smartphone. Today I left my phone at home and went down to the sea.
1403
+
1404
+ 43:57.120 --> 44:01.600
1405
+ The sand was soft, the ocean glass, but I was still just me.
1406
+
1407
+ 44:02.480 --> 44:08.240
1408
+ So this is a poem about you leaving your phone behind and feeling quite liberated because of it.
1409
+
1410
+ 44:08.960 --> 44:14.800
1411
+ So this is kind of a difficult topic and let's see if we can talk about it, figure it out.
1412
+
1413
+ 44:14.800 --> 44:21.120
1414
+ But so with the help of AI, more and more, we can create versions of ourselves, versions of
1415
+
1416
+ 44:21.120 --> 44:31.920
1417
+ reality that are in some ways more beautiful than actual reality. And some of the creative effort
1418
+
1419
+ 44:31.920 --> 44:38.880
1420
+ there is part of creating this illusion. So of course, this is inevitable, but how do you think
1421
+
1422
+ 44:38.880 --> 44:44.320
1423
+ we should adjust as human beings to live in this digital world that's partly artificial,
1424
+
1425
+ 44:44.320 --> 44:51.520
1426
+ that's better than the world that we lived in a hundred years ago when you didn't have
1427
+
1428
+ 44:51.520 --> 44:55.840
1429
+ Instagram and Facebook versions of ourselves and the online.
1430
+
1431
+ 44:55.840 --> 44:58.880
1432
+ Oh, this is sort of showing off better versions of ourselves.
1433
+
1434
+ 44:58.880 --> 45:04.880
1435
+ We're using the tooling of modifying the images or even with artificial intelligence
1436
+
1437
+ 45:04.880 --> 45:12.880
1438
+ ideas of deep fakes and creating adjusted or fake versions of ourselves and reality.
1439
+
1440
+ 45:14.080 --> 45:17.600
1441
+ I think it's an interesting question. You're all sort of historical bent on this.
1442
+
1443
+ 45:19.360 --> 45:24.720
1444
+ I actually wonder if 18th century aristocrats who commissioned famous painters to paint portraits
1445
+
1446
+ 45:24.720 --> 45:28.960
1447
+ of them had portraits that were slightly nicer than they actually looked in practice.
1448
+
1449
+ 45:28.960 --> 45:29.680
1450
+ Well played, sir.
1451
+
1452
+ 45:29.680 --> 45:34.720
1453
+ So human desire to put your best foot forward has always been true.
1454
+
1455
+ 45:37.440 --> 45:42.240
1456
+ I think it's interesting. You sort of framed it in two ways. One is if we can imagine alternate
1457
+
1458
+ 45:42.240 --> 45:47.280
1459
+ realities and visualize them, is that a good or bad thing? In the old days, you do it with
1460
+
1461
+ 45:47.280 --> 45:52.560
1462
+ storytelling and words and poetry, which still resides sometimes on websites. But
1463
+
1464
+ 45:52.560 --> 46:00.640
1465
+ we've become a very visual culture in particular. In the 19th century, we were very much a text
1466
+
1467
+ 46:00.640 --> 46:07.280
1468
+ based culture. People would read long tracks. Political speeches were very long. Nowadays,
1469
+
1470
+ 46:07.280 --> 46:15.280
1471
+ everything's very kind of quick and visual and snappy. I think it depends on how harmless your
1472
+
1473
+ 46:15.280 --> 46:23.120
1474
+ intent. A lot of it's about intent. So if you have a somewhat flattering photo that you pick
1475
+
1476
+ 46:23.120 --> 46:30.960
1477
+ out of the photos that you have in your inbox to say, this is what I look like, it's probably fine.
1478
+
1479
+ 46:32.160 --> 46:37.040
1480
+ If someone's going to judge you by how you look, then they'll decide soon enough when they meet
1481
+
1482
+ 46:37.040 --> 46:45.120
1483
+ you whether the reality. I think where it can be harmful is if people hold themselves up to an
1484
+
1485
+ 46:45.120 --> 46:49.520
1486
+ impossible standard, which they then feel bad about themselves for not meeting. I think that's
1487
+
1488
+ 46:49.520 --> 46:58.400
1489
+ definitely can be an issue. But I think the ability to imagine and visualize an alternate
1490
+
1491
+ 46:58.400 --> 47:04.800
1492
+ reality, which sometimes which you then go off and build later, can be a wonderful thing too.
1493
+
1494
+ 47:04.800 --> 47:10.720
1495
+ People can imagine architectural styles, which they then have a startup, make a fortune and then
1496
+
1497
+ 47:10.720 --> 47:17.680
1498
+ build a house that looks like their favorite video game. Is that a terrible thing? I think
1499
+
1500
+ 47:18.720 --> 47:24.560
1501
+ I used to worry about exploration actually, that part of the joy of going to the moon
1502
+
1503
+ 47:24.560 --> 47:30.320
1504
+ when I was a tiny child, I remember it, and grainy black and white, was to know what it would look
1505
+
1506
+ 47:30.320 --> 47:35.120
1507
+ like when you got there. And I think now we have such good graphics for knowing, for visualizing
1508
+
1509
+ 47:35.120 --> 47:40.960
1510
+ experience before it happens, that I slightly worry that it may take the edge off actually wanting
1511
+
1512
+ 47:40.960 --> 47:46.160
1513
+ to go. Because we've seen it on TV, we kind of, oh, by the time we finally get to Mars,
1514
+
1515
+ 47:46.160 --> 47:53.200
1516
+ we're going, oh yeah, it's Mars, that's what it looks like. But then the outer exploration,
1517
+
1518
+ 47:53.200 --> 47:58.480
1519
+ I mean, I think Pluto was a fantastic recent discovery where nobody had any idea what it
1520
+
1521
+ 47:58.480 --> 48:04.800
1522
+ looked like and it was just breathtakingly varied and beautiful. So I think expanding
1523
+
1524
+ 48:04.800 --> 48:10.800
1525
+ the ability of the human toolkit to imagine and communicate on balance is a good thing.
1526
+
1527
+ 48:10.800 --> 48:15.920
1528
+ I think there are abuses, we definitely take them seriously and try to discourage them.
1529
+
1530
+ 48:17.440 --> 48:22.960
1531
+ I think there's a parallel side where the public needs to know what's possible through events like
1532
+
1533
+ 48:22.960 --> 48:30.720
1534
+ this, right? So that you don't believe everything you read in print anymore, and it may over time
1535
+
1536
+ 48:30.720 --> 48:35.440
1537
+ become true of images as well. Or you need multiple sets of evidence to really believe
1538
+
1539
+ 48:35.440 --> 48:40.240
1540
+ something rather than a single media asset. So I think it's a constantly evolving thing.
1541
+
1542
+ 48:40.240 --> 48:45.680
1543
+ It's been true forever. There's a famous story about Anne of Cleves and Henry VIII where,
1544
+
1545
+ 48:47.760 --> 48:52.720
1546
+ luckily for Anne, they didn't get married, right? So, or they got married and
1547
+
1548
+ 48:53.840 --> 48:54.560
1549
+ What's the story?
1550
+
1551
+ 48:54.560 --> 48:59.200
1552
+ Oh, so Holbein went and painted a picture and then Henry VIII wasn't pleased and, you know,
1553
+
1554
+ 48:59.200 --> 49:03.520
1555
+ history doesn't record whether Anne was pleased, but I think she was pleased not
1556
+
1557
+ 49:03.520 --> 49:07.920
1558
+ to be married more than a day or something. So I mean, this has gone on for a long time,
1559
+
1560
+ 49:07.920 --> 49:13.280
1561
+ but I think it's just part of the magnification of human capability.
1562
+
1563
+ 49:14.640 --> 49:20.560
1564
+ You've kind of built up an amazing research environment here, research culture, research
1565
+
1566
+ 49:20.560 --> 49:25.600
1567
+ lab, and you've written that the secret to a thriving research lab is interns. Can you unpack
1568
+
1569
+ 49:25.600 --> 49:29.520
1570
+ that a little bit? Oh, absolutely. So a couple of reasons.
1571
+
1572
+ 49:31.360 --> 49:36.080
1573
+ As you see, looking at my personal history, there are certain ideas you bond with at a certain
1574
+
1575
+ 49:36.080 --> 49:40.880
1576
+ stage of your career and you tend to keep revisiting them through time. If you're lucky,
1577
+
1578
+ 49:40.880 --> 49:44.880
1579
+ you pick one that doesn't just get solved in the next five years, and then you're sort of out of
1580
+
1581
+ 49:44.880 --> 49:49.840
1582
+ luck. So I think a constant influx of new people brings new ideas with it.
1583
+
1584
+ 49:49.840 --> 49:56.800
1585
+ From the point of view of industrial research, because a big part of what we do is really taking
1586
+
1587
+ 49:56.800 --> 50:03.360
1588
+ those ideas to the point where they can ship as very robust features, you end up investing a lot
1589
+
1590
+ 50:03.360 --> 50:08.640
1591
+ in a particular idea. And if you're not careful, people can get too conservative in what they
1592
+
1593
+ 50:08.640 --> 50:15.280
1594
+ choose to do next, knowing that the product teams will want it. And interns let you explore the more
1595
+
1596
+ 50:15.280 --> 50:22.160
1597
+ fanciful or unproven ideas in a relatively lightweight way, ideally leading to new publications for
1598
+
1599
+ 50:22.160 --> 50:28.080
1600
+ the intern and for the researcher. And it gives you then a portfolio from which to draw which idea
1601
+
1602
+ 50:28.080 --> 50:32.720
1603
+ am I going to then try to take all the way through to being robust in the next year or two to ship.
1604
+
1605
+ 50:34.000 --> 50:37.520
1606
+ So it sort of becomes part of the funnel. It's also a great way for us to
1607
+
1608
+ 50:38.160 --> 50:42.640
1609
+ identify future full time researchers, many of our greatest researchers were former interns.
1610
+
1611
+ 50:42.640 --> 50:48.800
1612
+ It builds a bridge to university departments so we can get to know and build an enduring relationship
1613
+
1614
+ 50:48.800 --> 50:53.840
1615
+ with the professors and we often do academic gift funds as well, as an acknowledgement of the
1616
+
1617
+ 50:53.840 --> 51:01.360
1618
+ value the interns add and their own collaborations. So it's sort of a virtuous cycle. And then the
1619
+
1620
+ 51:01.360 --> 51:06.960
1621
+ long term legacy of a great research lab hopefully will be not only the people who stay but the ones
1622
+
1623
+ 51:06.960 --> 51:11.520
1624
+ who move through and then go off and carry that same model to other companies.
1625
+
1626
+ 51:11.520 --> 51:17.440
1627
+ And so we believe strongly in industrial research and how it can complement academia and
1628
+
1629
+ 51:17.440 --> 51:21.920
1630
+ we hope that this model will continue to propagate and be invested in by other companies,
1631
+
1632
+ 51:21.920 --> 51:26.800
1633
+ which makes it harder for us to recruit, of course, but you know, that's the sign of success
1634
+
1635
+ 51:26.800 --> 51:30.320
1636
+ and a rising tide lifts all ships in that sense.
1637
+
1638
+ 51:31.040 --> 51:38.080
1639
+ And where's the idea born with the interns? Is there brainstorming? Is there discussions
1640
+
1641
+ 51:38.080 --> 51:46.240
1642
+ about, you know, like where the ideas come from? Yeah, as I'm asking the question, I
1643
+
1644
+ 51:46.240 --> 51:51.760
1645
+ realized how dumb it is, but I'm hoping you have a better answer. That's a question I ask at the
1646
+
1647
+ 51:51.760 --> 51:59.280
1648
+ beginning of every summer. So what will happen is we'll send out a call for interns. They'll
1649
+
1650
+ 52:00.000 --> 52:04.240
1651
+ we'll have a number of resumes come in, people will contact the candidates, talk to them about
1652
+
1653
+ 52:04.240 --> 52:09.920
1654
+ their interests. They'll usually try to find somebody who has a reasonably good match to what
1655
+
1656
+ 52:09.920 --> 52:14.320
1657
+ they're already doing, or just has a really interesting domain that they've been pursuing in
1658
+
1659
+ 52:14.320 --> 52:20.640
1660
+ their PhD. And we think we'd love to do one of those projects too. And then the intern stays in
1661
+
1662
+ 52:20.640 --> 52:27.360
1663
+ touch with the mentors, we call them. And then they come, and at the end of the first two weeks,
1664
+
1665
+ 52:27.360 --> 52:32.000
1666
+ they have to decide. So they'll often have a general sense by the time they arrive.
1667
+
1668
+ 52:32.000 --> 52:36.080
1669
+ And we'll have internal discussions about what are all the general
1670
+
1671
+ 52:36.800 --> 52:41.120
1672
+ ideas that we're wanting to pursue to see whether two people have the same idea and maybe they
1673
+
1674
+ 52:41.120 --> 52:47.040
1675
+ should talk and all that. But then once the intern actually arrives, sometimes the idea goes linearly
1676
+
1677
+ 52:47.040 --> 52:51.120
1678
+ and sometimes it takes a giant left turn and we go, that sounded good. But when we thought about
1679
+
1680
+ 52:51.120 --> 52:54.880
1681
+ it, there's this other project or it's already been done and we found this paper that we were
1682
+
1683
+ 52:54.880 --> 53:02.240
1684
+ scooped. But we have this other great idea. So it's pretty flexible at the beginning. One of the
1685
+
1686
+ 53:02.240 --> 53:08.080
1687
+ questions for research labs is who's deciding what to do, and then who's to blame if it goes
1688
+
1689
+ 53:08.080 --> 53:15.600
1690
+ wrong, who gets the credit if it goes right. And so in Adobe, we push the needle very much towards
1691
+
1692
+ 53:15.600 --> 53:22.960
1693
+ freedom of choice of projects by the researchers and the interns. But then we reward people based
1694
+
1695
+ 53:22.960 --> 53:28.000
1696
+ on impact. So if the projects ultimately end up impacting the products and having papers and so
1697
+
1698
+ 53:28.000 --> 53:34.400
1699
+ on. And so your alternative model just to be clear is that you have one lab director who thinks he's
1700
+
1701
+ 53:34.400 --> 53:38.720
1702
+ a genius and tells everybody what to do, takes all the credit if it goes well, blames everybody
1703
+
1704
+ 53:38.720 --> 53:44.800
1705
+ else if it goes badly. So we don't want that model. And this helps new ideas percolate up.
1706
+
1707
+ 53:45.440 --> 53:49.840
1708
+ The art of running such a lab is that there are strategic priorities for the company
1709
+
1710
+ 53:49.840 --> 53:55.520
1711
+ and there are areas where we do want to invest in pressing problems. And so it's a little bit of a
1712
+
1713
+ 53:55.520 --> 54:01.360
1714
+ trickle down and filter up meets in the middle. And so you don't tell people you have to do X,
1715
+
1716
+ 54:01.360 --> 54:07.360
1717
+ but you say X would be particularly appreciated this year. And then people reinterpret X through
1718
+
1719
+ 54:07.360 --> 54:12.720
1720
+ the filter of things they want to do and they're interested in. And miraculously, it usually comes
1721
+
1722
+ 54:12.720 --> 54:18.640
1723
+ together very well. One thing that really helps is Adobe has a really broad portfolio of products.
1724
+
1725
+ 54:18.640 --> 54:24.960
1726
+ So if we have a good idea, there's usually a product team that is intrigued or interested.
1727
+
1728
+ 54:26.000 --> 54:31.520
1729
+ So it means we don't have to qualify things too much ahead of time. Once in a while, the product
1730
+
1731
+ 54:31.520 --> 54:36.880
1732
+ teams sponsor an extra intern because they have a particular problem that they really care about,
1733
+
1734
+ 54:36.880 --> 54:41.440
1735
+ in which case it's a little bit more, we really need one of these. And then we sort of say,
1736
+
1737
+ 54:41.440 --> 54:45.920
1738
+ great, I get an extra intern. We find an intern who thinks that's a great problem. But that's not
1739
+
1740
+ 54:45.920 --> 54:49.520
1741
+ the typical model. That's sort of the icing on the cake as far as the budget's concerned.
1742
+
1743
+ 54:51.440 --> 54:55.920
1744
+ And all of the above end up being important. It's really hard to predict at the beginning of the
1745
+
1746
+ 54:55.920 --> 55:01.200
1747
+ summer, which we all have high hopes of all of the intern projects. But ultimately, some of them
1748
+
1749
+ 55:01.200 --> 55:06.480
1750
+ pay off and some of them sort of are a nice paper, but don't turn into a feature. Others turn out
1751
+
1752
+ 55:06.480 --> 55:12.080
1753
+ not to be as novel as we thought, but they'd be a great feature, but not a paper. And then others,
1754
+
1755
+ 55:12.080 --> 55:16.400
1756
+ we make a little bit of progress and we realize how much we don't know. And maybe we revisit that
1757
+
1758
+ 55:16.400 --> 55:22.320
1759
+ problem several years in a row until finally we have a breakthrough. And then it becomes more
1760
+
1761
+ 55:22.320 --> 55:30.320
1762
+ on track to impact a product. Jumping back to a big overall view of Adobe Research, what are you
1763
+
1764
+ 55:30.320 --> 55:37.360
1765
+ looking forward to in 2019 and beyond? What is, you mentioned there's a giant suite of products,
1766
+
1767
+ 55:37.360 --> 55:45.920
1768
+ giant suite of ideas, new interns, a large team of researchers.
1769
+
1770
+ 55:46.960 --> 55:54.400
1771
+ Where do you think the future holds? In terms of the technological breakthroughs?
1772
+
1773
+ 55:54.400 --> 56:00.960
1774
+ Technological breakthroughs, especially ones that will make it into product will get to
1775
+
1776
+ 56:00.960 --> 56:05.920
1777
+ impact the world. So I think the creative or the analytics assistance that we talked about where
1778
+
1779
+ 56:05.920 --> 56:10.320
1780
+ they're constantly trying to figure out what you're trying to do and how can they be helpful and make
1781
+
1782
+ 56:10.320 --> 56:16.160
1783
+ useful suggestions is a really hot topic. And it's very unpredictable as to when it'll be ready,
1784
+
1785
+ 56:16.160 --> 56:20.720
1786
+ but I'm really looking forward to seeing how much progress we make against that. I think
1787
+
1788
+ 56:22.480 --> 56:28.640
1789
+ some of the core technologies like generative adversarial networks are immensely promising
1790
+
1791
+ 56:28.640 --> 56:34.720
1792
+ and seeing how quickly those become practical for mainstream use cases at high resolution with
1793
+
1794
+ 56:34.720 --> 56:39.840
1795
+ really good quality is also exciting. And they also have this sort of strange way of even the
1796
+
1797
+ 56:39.840 --> 56:45.120
1798
+ things they do oddly are odd in an interesting way. So it can look like dreaming or something.
1799
+
1800
+ 56:46.160 --> 56:55.040
1801
+ So that's fascinating. I think internally we have a Sensei platform, which is a way in which
1802
+
1803
+ 56:55.040 --> 57:01.760
1804
+ we're pooling our neural net and other intelligence models into a central platform, which can then be
1805
+
1806
+ 57:01.760 --> 57:07.040
1807
+ leveraged by multiple product teams at once. So we're in the middle of transitioning from a,
1808
+
1809
+ 57:07.040 --> 57:11.040
1810
+ you know, once you have a good idea, you pick a product team to work with and you sort of hand
1811
+
1812
+ 57:11.040 --> 57:17.520
1813
+ design it for that use case to a more sort of Henry Ford, stand it up in a standard way, which
1814
+
1815
+ 57:17.520 --> 57:22.880
1816
+ can be accessed in a standard way, which should mean that the time between a good idea and impacting
1817
+
1818
+ 57:22.880 --> 57:28.960
1819
+ our products will be greatly shortened. And when one product has a good idea, many of the other
1820
+
1821
+ 57:28.960 --> 57:34.240
1822
+ products can just leverage it too. So it's sort of an economy of scale. So that's more about the
1823
+
1824
+ 57:34.240 --> 57:39.680
1825
+ how than the what, but that combination of this sort of renaissance in AI, there's a comparable
1826
+
1827
+ 57:39.680 --> 57:45.280
1828
+ one in graphics with real time ray tracing and other really exciting emerging technologies.
1829
+
1830
+ 57:45.280 --> 57:49.920
1831
+ And when these all come together, you'll sort of basically be dancing with light, right? Where
1832
+
1833
+ 57:49.920 --> 57:56.080
1834
+ you'll have real time shadows, reflections, and as if it's a real world in front of you, but then
1835
+
1836
+ 57:56.080 --> 58:00.880
1837
+ with all these magical properties brought by AI where it sort of anticipates or modifies itself
1838
+
1839
+ 58:00.880 --> 58:05.040
1840
+ in ways that make sense based on how it understands the creative task you're trying to do.
1841
+
1842
+ 58:06.320 --> 58:12.320
1843
+ That's a really exciting future for creatives, for myself too, the creator. So first of all,
1844
+
1845
+ 58:12.320 --> 58:17.760
1846
+ I work in autonomous vehicles. I'm a roboticist. I love robots. And I think you have a fascination
1847
+
1848
+ 58:17.760 --> 58:23.920
1849
+ with snakes, both natural and artificial robots. I share your fascination. I mean, their movement
1850
+
1851
+ 58:23.920 --> 58:31.760
1852
+ is beautiful, adaptable. The adaptability is fascinating. There are, I looked it up, 2900
1853
+
1854
+ 58:31.760 --> 58:37.200
1855
+ species of snakes in the world. Wow. 175 are venomous, some are tiny, some are huge.
1856
+
1857
+ 58:38.880 --> 58:44.560
1858
+ Saw that there's one that's 25 feet in some cases. So what's the most interesting thing
1859
+
1860
+ 58:44.560 --> 58:52.000
1861
+ that you connect with in terms of snakes, both natural and artificial? Why, what was the connection
1862
+
1863
+ 58:52.000 --> 58:58.000
1864
+ with robotics AI in this particular form of a robot? Well, it actually came out of my work
1865
+
1866
+ 58:58.000 --> 59:02.880
1867
+ in the 80s on computer animation, where I started doing things like cloth simulation and
1868
+
1869
+ 59:02.880 --> 59:07.360
1870
+ other kind of soft body simulation. And you'd sort of drop it and it would bounce,
1871
+
1872
+ 59:07.360 --> 59:10.880
1873
+ then it would just sort of stop moving. And I thought, well, what if you animate the spring
1874
+
1875
+ 59:10.880 --> 59:16.160
1876
+ lengths and simulate muscles? And the simplest object I could do that for was an earthworm.
1877
+
1878
+ 59:16.160 --> 59:21.680
1879
+ So I actually did a paper in 1988 called The Motion Dynamics of Snakes and Worms. And I
1880
+
1881
+ 59:21.680 --> 59:27.760
1882
+ read the physiology literature on how both snakes and worms move and then did some of the early
1883
+
1884
+ 59:27.760 --> 59:34.640
1885
+ computer animation examples of that. So your interest in robotics started with graphics?
1886
+
1887
+ 59:34.640 --> 59:40.960
1888
+ Came out of simulation and graphics. When I moved from Alias to Apple, we actually did a
1889
+
1890
+ 59:40.960 --> 59:46.160
1891
+ movie called Her Majesty's Secret Serpent, which is about a secret agent snake that parachutes in
1892
+
1893
+ 59:46.160 --> 59:50.320
1894
+ and captures a film canister from a satellite, which tells you how old fashioned we were thinking
1895
+
1896
+ 59:50.320 --> 59:57.760
1897
+ back then, sort of a classic 1950s or 60s Bond movie kind of thing. And at the same time,
1898
+
1899
+ 59:57.760 --> 1:00:03.120
1900
+ I'd always made radio control ships when I was a child and from scratch. And I thought, well,
1901
+
1902
+ 1:00:03.120 --> 1:00:08.800
1903
+ how hard can it be to build a real one? And so then started what turned out to be like a 15 year
1904
+
1905
+ 1:00:08.800 --> 1:00:14.320
1906
+ obsession with trying to build better snake robots. And the first one that I built just sort of
1907
+
1908
+ 1:00:14.320 --> 1:00:19.520
1909
+ slithered sideways, but didn't actually go forward, then added wheels and building things in real
1910
+
1911
+ 1:00:19.520 --> 1:00:25.840
1912
+ life makes you honest about the friction. The thing that appeals to me is I love creating the
1913
+
1914
+ 1:00:25.840 --> 1:00:30.800
1915
+ illusion of life, which is what drove me to animation. And if you have a robot with
1916
+
1917
+ 1:00:30.800 --> 1:00:36.320
1918
+ enough degrees of coordinated freedom that move in a kind of biological way, then it starts to
1919
+
1920
+ 1:00:36.320 --> 1:00:42.080
1921
+ cross the uncanny valley and seem like a creature rather than a thing. And I certainly got
1922
+
1923
+ 1:00:42.080 --> 1:00:50.320
1924
+ that with the early snakes by S3, I had it able to sidewind as well as go directly forward. My
1925
+
1926
+ 1:00:50.320 --> 1:00:54.240
1927
+ wife to be suggested that it would be the ring bearer at our wedding. So it actually went down
1928
+
1929
+ 1:00:54.240 --> 1:00:59.360
1930
+ the aisle carrying the rings and got in the local paper for that, which was really fun.
1931
+
1932
+ 1:01:00.160 --> 1:01:07.200
1933
+ And this was all done as a hobby. And then, at the time, the onboard compute was incredibly
1934
+
1935
+ 1:01:07.200 --> 1:01:11.840
1936
+ limited. It was sort of, yes, you should explain that with these snakes, the whole idea is that
1937
+
1938
+ 1:01:11.840 --> 1:01:18.640
1939
+ you're trying to run it autonomously. Autonomously, on board right. And so
1940
+
1941
+ 1:01:19.760 --> 1:01:25.280
1942
+ the very first one, I actually built the controller from discrete logic, because I used to do LSI,
1943
+
1944
+ 1:01:25.280 --> 1:01:30.640
1945
+ you know, circuits and things when I was a teenager. And then the second and third one,
1946
+
1947
+ 1:01:30.640 --> 1:01:35.120
1948
+ the 8 bit microprocessors were available with like a whole 256 bytes of RAM,
1949
+
1950
+ 1:01:36.000 --> 1:01:39.840
1951
+ which you could just about squeeze in. So they were radio controlled rather than autonomous.
1952
+
1953
+ 1:01:39.840 --> 1:01:44.480
1954
+ And really, they were more about the physicality and coordinated motion.
1955
+
1956
+ 1:01:46.560 --> 1:01:51.520
1957
+ I've occasionally taken a side step into if only I could make it cheaply enough,
1958
+
1959
+ 1:01:51.520 --> 1:01:59.040
1960
+ it would make a great toy, which has been a lesson in how clockwork is its own magical realm that
1961
+
1962
+ 1:01:59.040 --> 1:02:03.680
1963
+ you venture into and learn things about backlash and other things you don't take into account as
1964
+
1965
+ 1:02:03.680 --> 1:02:07.600
1966
+ a computer scientist, which is why what seemed like a good idea doesn't work. So it's quite
1967
+
1968
+ 1:02:07.600 --> 1:02:14.160
1969
+ humbling. And then more recently, I've been building S9, which is a much better engineered
1970
+
1971
+ 1:02:14.160 --> 1:02:17.760
1972
+ version of S3 where the motors wore out and it doesn't work anymore. And you can't buy
1973
+
1974
+ 1:02:17.760 --> 1:02:24.640
1975
+ replacements, which is sad given that it was such a meaningful one. S5 was about twice as long and
1976
+
1977
+ 1:02:25.760 --> 1:02:32.960
1978
+ looked much more biologically inspired. I, unlike the typical roboticist, taper my snakes.
1979
+
1980
+ 1:02:33.520 --> 1:02:37.040
1981
+ There are good mechanical reasons to do that, but it also makes them look more biological,
1982
+
1983
+ 1:02:37.040 --> 1:02:43.280
1984
+ although it means every segment's unique rather than a repetition, which is why most engineers
1985
+
1986
+ 1:02:43.280 --> 1:02:49.840
1987
+ don't do it. It actually saves weight and leverage and everything. And that one is currently on
1988
+
1989
+ 1:02:49.840 --> 1:02:54.480
1990
+ display at the International Spy Museum in Washington, DC. None of it has done any spying.
1991
+
1992
+ 1:02:56.080 --> 1:03:00.000
1993
+ It was on YouTube and it got its own conspiracy theory where people thought that it wasn't real
1994
+
1995
+ 1:03:00.000 --> 1:03:04.160
1996
+ because I work at Adobe, it must be fake graphics. And people would write to me, tell me
1997
+
1998
+ 1:03:04.160 --> 1:03:11.200
1999
+ it's real. They say the background doesn't move and it's like, it's on a tripod. So that one,
2000
+
2001
+ 1:03:11.200 --> 1:03:16.960
2002
+ but you can see the real thing. So it really is true. And then the latest one is the first one
2003
+
2004
+ 1:03:16.960 --> 1:03:21.280
2005
+ where I could put a Raspberry Pi, which leads to all sorts of terrible jokes about pythons and
2006
+
2007
+ 1:03:21.280 --> 1:03:29.920
2008
+ things. But this one can have onboard compute. And then where my hobby work and my work are
2009
+
2010
+ 1:03:29.920 --> 1:03:36.400
2011
+ converging is you can now add vision accelerator chips, which can evaluate neural nets and do
2012
+
2013
+ 1:03:36.400 --> 1:03:41.600
2014
+ object recognition and everything. So both for the snakes and more recently for the spider that
2015
+
2016
+ 1:03:41.600 --> 1:03:48.640
2017
+ I've been working on, having desktop level compute is now opening up a whole world of
2018
+
2019
+ 1:03:49.200 --> 1:03:54.880
2020
+ true autonomy with onboard compute, onboard batteries, and still having that sort of
2021
+
2022
+ 1:03:54.880 --> 1:04:01.680
2023
+ biomimetic quality that appeals to children in particular. They are really drawn to them and
2024
+
2025
+ 1:04:01.680 --> 1:04:08.960
2026
+ adults think they look creepy, but children actually think they look charming. And I gave a
2027
+
2028
+ 1:04:08.960 --> 1:04:14.880
2029
+ series of lectures at Girls Who Code to encourage people to take an interest in technology. And
2030
+
2031
+ 1:04:14.880 --> 1:04:19.200
2032
+ at the moment, I'd say they're still more expensive than the value that they add,
2033
+
2034
+ 1:04:19.200 --> 1:04:22.560
2035
+ which is why they're a great hobby for me, but they're not really a great product.
2036
+
2037
+ 1:04:22.560 --> 1:04:30.000
2038
+ It makes me think about doing that very early thing I did at Alias with changing the muscle
2039
+
2040
+ 1:04:30.000 --> 1:04:35.760
2041
+ rest lengths. If I could do that with a real artificial muscle material, then the next snake
2042
+
2043
+ 1:04:35.760 --> 1:04:40.960
2044
+ ideally would use that rather than motors and gearboxes and everything. It would be lighter,
2045
+
2046
+ 1:04:40.960 --> 1:04:49.200
2047
+ much stronger, and more continuous and smooth. So I like to say being in research is a license
2048
+
2049
+ 1:04:49.200 --> 1:04:54.640
2050
+ to be curious, and I have the same feeling with my hobby yet. It forced me to read biology and
2051
+
2052
+ 1:04:54.640 --> 1:04:59.680
2053
+ be curious about things that otherwise would have just been a National Geographic special.
2054
+
2055
+ 1:04:59.680 --> 1:05:04.320
2056
+ Suddenly, I'm thinking, how does that snake move? Can I copy it? I look at the trails that
2057
+
2058
+ 1:05:04.320 --> 1:05:08.640
2059
+ side winding snakes leave in sand and see if my snake robots would do the same thing.
2060
+
2061
+ 1:05:10.240 --> 1:05:13.840
2062
+ Out of something inanimate, I like how you try to bring life into it and beauty.
2063
+
2064
+ 1:05:13.840 --> 1:05:18.240
2065
+ Absolutely. And then ultimately, give it a personality, which is where the intelligent
2066
+
2067
+ 1:05:18.240 --> 1:05:25.040
2068
+ agent research will converge with the vision and voice synthesis to give it a sense of having
2069
+
2070
+ 1:05:25.040 --> 1:05:29.840
2071
+ not necessarily human level intelligence. I think the Turing test is such a high bar, it's
2072
+
2073
+ 1:05:30.480 --> 1:05:36.080
2074
+ a little bit self defeating, but having one that you can have a meaningful conversation with,
2075
+
2076
+ 1:05:36.880 --> 1:05:43.040
2077
+ especially if you have a reasonably good sense of what you can say. So not trying to have it so
2078
+
2079
+ 1:05:43.040 --> 1:05:50.080
2080
+ a stranger could walk up and have one, but so as a pet owner or a robot pet owner, you could
2081
+
2082
+ 1:05:50.080 --> 1:05:53.200
2083
+ know what it thinks about and what it can reason about.
2084
+
2085
+ 1:05:53.200 --> 1:05:58.800
2086
+ Or sometimes just meaningful interaction. If you have the kind of interaction you have with a dog,
2087
+
2088
+ 1:05:58.800 --> 1:06:02.080
2089
+ sometimes you might have a conversation, but it's usually one way.
2090
+
2091
+ 1:06:02.080 --> 1:06:02.640
2092
+ Absolutely.
2093
+
2094
+ 1:06:02.640 --> 1:06:06.000
2095
+ And nevertheless, it feels like a meaningful connection.
2096
+
2097
+ 1:06:06.800 --> 1:06:12.720
2098
+ And one of the things that I'm trying to do in the sample audio that will play you is beginning
2099
+
2100
+ 1:06:12.720 --> 1:06:17.680
2101
+ to get towards the point where the reasoning system can explain why it knows something or
2102
+
2103
+ 1:06:17.680 --> 1:06:22.320
2104
+ why it thinks something. And that again, creates the sense that it really does know what it's
2105
+
2106
+ 1:06:22.320 --> 1:06:29.840
2107
+ talking about, but also for debugging. As you get more and more elaborate behavior, it's like,
2108
+
2109
+ 1:06:29.840 --> 1:06:37.120
2110
+ why did you decide to do that? How do you know that? I think the robot's really
2111
+
2112
+ 1:06:37.120 --> 1:06:42.560
2113
+ my muse for helping me think about the future of AI and what to invent next.
2114
+
2115
+ 1:06:42.560 --> 1:06:47.040
2116
+ So even at Adobe, that's mostly operating in the digital world.
2117
+
2118
+ 1:06:47.040 --> 1:06:47.440
2119
+ Correct.
2120
+
2121
+ 1:06:47.440 --> 1:06:54.480
2122
+ Do you ever, do you see a future where Adobe even expands into the more physical world perhaps?
2123
+
2124
+ 1:06:54.480 --> 1:07:02.080
2125
+ So bringing life not into animations, but bringing life into physical objects with whether it's,
2126
+
2127
+ 1:07:02.080 --> 1:07:08.640
2128
+ well, I'd have to say at the moment it's a twinkle in my eye. I think the more likely thing is that
2129
+
2130
+ 1:07:08.640 --> 1:07:14.240
2131
+ we will bring virtual objects into the physical world through augmented reality.
2132
+
2133
+ 1:07:14.240 --> 1:07:21.360
2134
+ And many of the ideas that might take five years to build a robot to do, you can do in a few weeks
2135
+
2136
+ 1:07:21.360 --> 1:07:29.120
2137
+ with digital assets. So I think when really intelligent robots finally become commonplace,
2138
+
2139
+ 1:07:29.120 --> 1:07:34.000
2140
+ they won't be that surprising because we'll have been living with those personalities in the virtual
2141
+
2142
+ 1:07:34.000 --> 1:07:38.640
2143
+ sphere for a long time. And then they'll just say, oh, it's Siri with legs or Alexa,
2144
+
2145
+ 1:07:39.520 --> 1:07:47.760
2146
+ Alexa on hooves or something. So I can see that welcoming. And for now, it's still an adventure
2147
+
2148
+ 1:07:47.760 --> 1:07:53.760
2149
+ and we don't know quite what the experience will be like. And it's really exciting to sort of see
2150
+
2151
+ 1:07:53.760 --> 1:07:59.680
2152
+ all of these different strands of my career converge. Yeah, in interesting ways. And it is
2153
+
2154
+ 1:07:59.680 --> 1:08:07.920
2155
+ definitely a fun adventure. So let me end with my favorite poem, the last few lines of my favorite
2156
+
2157
+ 1:08:07.920 --> 1:08:14.560
2158
+ poem of yours that ponders mortality. And in some sense, immortality, as our ideas live through
2159
+
2160
+ 1:08:14.560 --> 1:08:21.120
2161
+ the ideas of others through the work of others, it ends with, do not weep or mourn. It was enough
2162
+
2163
+ 1:08:21.120 --> 1:08:28.000
2164
+ the little atomies permitted just a single dance. Scatter them as deep as your eyes can see. I'm
2165
+
2166
+ 1:08:28.000 --> 1:08:34.320
2167
+ content they'll have another chance, sweeping more centered parts along to join a jostling,
2168
+
2169
+ 1:08:34.320 --> 1:08:41.440
2170
+ lifting throng as others danced in me. Beautiful poem. Beautiful way to end it. Gavin, thank you
2171
+
2172
+ 1:08:41.440 --> 1:08:45.920
2173
+ so much for talking today. And thank you for inspiring and empowering millions of people
2174
+
2175
+ 1:08:45.920 --> 1:08:50.960
2176
+ like myself for creating amazing stuff. Oh, thank you. It's a great conversation.
2177
+
vtt/episode_024_small.vtt ADDED
@@ -0,0 +1,2675 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.880
4
+ The following is a conversation with Rosalind Picard.
5
+
6
+ 00:02.880 --> 00:04.560
7
+ She's a professor at MIT,
8
+
9
+ 00:04.560 --> 00:06.880
10
+ director of the Affective Computing Research Group
11
+
12
+ 00:06.880 --> 00:08.360
13
+ at the MIT Media Lab,
14
+
15
+ 00:08.360 --> 00:12.440
16
+ and cofounder of two companies, Affectiva and Empatica.
17
+
18
+ 00:12.440 --> 00:13.560
19
+ Over two decades ago,
20
+
21
+ 00:13.560 --> 00:15.440
22
+ she launched the field of affective computing
23
+
24
+ 00:15.440 --> 00:17.560
25
+ with her book of the same name.
26
+
27
+ 00:17.560 --> 00:20.040
28
+ This book described the importance of emotion
29
+
30
+ 00:20.040 --> 00:23.040
31
+ in artificial and natural intelligence.
32
+
33
+ 00:23.040 --> 00:25.320
34
+ The vital role that emotional communication
35
+
36
+ 00:25.320 --> 00:28.520
37
+ has in the relationship between people in general
38
+
39
+ 00:28.520 --> 00:30.920
40
+ and human robot interaction.
41
+
42
+ 00:30.920 --> 00:34.040
43
+ I really enjoy talking with Ros over so many topics,
44
+
45
+ 00:34.040 --> 00:36.440
46
+ including emotion, ethics, privacy,
47
+
48
+ 00:36.440 --> 00:39.720
49
+ wearable computing and her recent research in epilepsy
50
+
51
+ 00:39.720 --> 00:42.600
52
+ and even love and meaning.
53
+
54
+ 00:42.600 --> 00:44.000
55
+ This conversation is part
56
+
57
+ 00:44.000 --> 00:46.040
58
+ of the Artificial Intelligence Podcast.
59
+
60
+ 00:46.040 --> 00:48.200
61
+ If you enjoy it, subscribe on YouTube,
62
+
63
+ 00:48.200 --> 00:50.760
64
+ iTunes or simply connect with me on Twitter
65
+
66
+ 00:50.760 --> 00:54.000
67
+ at Lex Fridman, spelled F R I D.
68
+
69
+ 00:54.000 --> 00:58.000
70
+ And now here's my conversation with Rosalind Picard.
71
+
72
+ 00:59.520 --> 01:00.760
73
+ More than 20 years ago,
74
+
75
+ 01:00.760 --> 01:03.360
76
+ you coined the term Affective Computing
77
+
78
+ 01:03.360 --> 01:05.400
79
+ and led a lot of research in this area.
80
+
81
+ 01:05.400 --> 01:08.800
82
+ Since then, as I understand the goal is to make the machine
83
+
84
+ 01:08.800 --> 01:12.400
85
+ detect and interpret the emotional state of a human being
86
+
87
+ 01:12.400 --> 01:14.240
88
+ and adapt the behavior of the machine
89
+
90
+ 01:14.240 --> 01:16.160
91
+ based on the emotional state.
92
+
93
+ 01:16.160 --> 01:19.680
94
+ So how has your understanding of the problems
95
+
96
+ 01:19.680 --> 01:24.680
97
+ defined by affective computing changed in the past 24 years?
98
+
99
+ 01:25.400 --> 01:28.960
100
+ So it's the scope, the applications, the challenges,
101
+
102
+ 01:28.960 --> 01:32.160
103
+ what's involved, how has that evolved over the years?
104
+
105
+ 01:32.160 --> 01:35.400
106
+ Yeah, actually, originally when I defined the term
107
+
108
+ 01:35.400 --> 01:38.080
109
+ Affective Computing, it was a bit broader
110
+
111
+ 01:38.080 --> 01:41.320
112
+ than just recognizing and responding intelligently
113
+
114
+ 01:41.320 --> 01:44.560
115
+ to human emotion, although those are probably the two pieces
116
+
117
+ 01:44.560 --> 01:47.160
118
+ that we've worked on the hardest.
119
+
120
+ 01:47.160 --> 01:50.680
121
+ The original concept also encompassed machines
122
+
123
+ 01:50.680 --> 01:53.520
124
+ that would have mechanisms that functioned
125
+
126
+ 01:53.520 --> 01:55.680
127
+ like human emotion does inside them.
128
+
129
+ 01:55.680 --> 01:59.000
130
+ It would be any computing that relates to, arises from,
131
+
132
+ 01:59.000 --> 02:01.600
133
+ or deliberately influences human emotion.
134
+
135
+ 02:02.600 --> 02:05.160
136
+ So the human computer interaction part
137
+
138
+ 02:05.160 --> 02:07.880
139
+ is the part that people tend to see.
140
+
141
+ 02:07.880 --> 02:11.000
142
+ Like if I'm really ticked off at my computer
143
+
144
+ 02:11.000 --> 02:13.480
145
+ and I'm scowling at it and I'm cursing at it
146
+
147
+ 02:13.480 --> 02:15.720
148
+ and it just keeps acting smiling and happy
149
+
150
+ 02:15.720 --> 02:17.480
151
+ like that little paperclip used to do,
152
+
153
+ 02:17.480 --> 02:22.160
154
+ like dancing, winking, that kind of thing
155
+
156
+ 02:22.160 --> 02:24.640
157
+ just makes you even more frustrated.
158
+
159
+ 02:24.640 --> 02:29.080
160
+ And I thought that stupid thing needs to see my affect
161
+
162
+ 02:29.080 --> 02:30.600
163
+ and if it's gonna be intelligent,
164
+
165
+ 02:30.600 --> 02:33.000
166
+ which Microsoft researchers had worked really hard on,
167
+
168
+ 02:33.000 --> 02:35.120
169
+ it actually had some of the most sophisticated AI in it
170
+
171
+ 02:35.120 --> 02:37.960
172
+ at the time, that thing's gonna actually be smart.
173
+
174
+ 02:37.960 --> 02:41.560
175
+ It needs to respond to me and you
176
+
177
+ 02:41.560 --> 02:45.320
178
+ and we can send it very different signals.
179
+
180
+ 02:45.320 --> 02:47.160
181
+ So by the way, just a quick interruption.
182
+
183
+ 02:47.160 --> 02:52.160
184
+ The Clippy, maybe it's in Word 95 and 98,
185
+
186
+ 02:52.360 --> 02:54.360
187
+ I don't remember when it was born,
188
+
189
+ 02:54.360 --> 02:58.280
190
+ but many people, do you find yourself with that reference
191
+
192
+ 02:58.280 --> 03:00.720
193
+ that people recognize what you're talking about still
194
+
195
+ 03:00.720 --> 03:01.640
196
+ to this point?
197
+
198
+ 03:01.640 --> 03:05.160
199
+ I don't expect the newest students to these days,
200
+
201
+ 03:05.160 --> 03:07.160
202
+ but I've mentioned it to a lot of audiences
203
+
204
+ 03:07.160 --> 03:09.200
205
+ like how many of you know this Clippy thing
206
+
207
+ 03:09.200 --> 03:11.680
208
+ and still the majority of people seem to know it.
209
+
210
+ 03:11.680 --> 03:15.280
211
+ So Clippy kind of looks at maybe natural language processing
212
+
213
+ 03:15.280 --> 03:19.200
214
+ where you are typing and tries to help you complete, I think.
215
+
216
+ 03:19.200 --> 03:22.440
217
+ I don't even remember what Clippy was except annoying.
218
+
219
+ 03:22.440 --> 03:25.760
220
+ Yeah, some people actually liked it.
221
+
222
+ 03:25.760 --> 03:27.480
223
+ I would hear those stories.
224
+
225
+ 03:27.480 --> 03:28.400
226
+ You miss it?
227
+
228
+ 03:28.400 --> 03:31.240
229
+ Well, I miss the annoyance.
230
+
231
+ 03:31.240 --> 03:35.680
232
+ They felt like there's an element, somebody was there
233
+
234
+ 03:35.680 --> 03:37.560
235
+ and we're in it together and they were annoying.
236
+
237
+ 03:37.560 --> 03:40.800
238
+ It's like a puppy that just doesn't get it.
239
+
240
+ 03:40.800 --> 03:42.120
241
+ They keep ripping up the couch kind of thing.
242
+
243
+ 03:42.120 --> 03:44.920
244
+ And in fact, they could have done it smarter like a puppy.
245
+
246
+ 03:44.920 --> 03:47.960
247
+ If they had done, like if when you yelled at it
248
+
249
+ 03:47.960 --> 03:50.760
250
+ or cursed at it, if it had put its little ears back
251
+
252
+ 03:50.760 --> 03:52.880
253
+ and its tail down and shirked off,
254
+
255
+ 03:52.880 --> 03:55.840
256
+ probably people would have wanted it back, right?
257
+
258
+ 03:55.840 --> 03:58.560
259
+ But instead when you yelled at it, what did it do?
260
+
261
+ 03:58.560 --> 04:01.200
262
+ It smiled, it winked, it danced, right?
263
+
264
+ 04:01.200 --> 04:03.160
265
+ If somebody comes to my office and I yell at them,
266
+
267
+ 04:03.160 --> 04:04.720
268
+ they started smiling, winking and dancing.
269
+
270
+ 04:04.720 --> 04:06.720
271
+ I'm like, I never want to see you again.
272
+
273
+ 04:06.720 --> 04:08.480
274
+ So Bill Gates got a standing ovation
275
+
276
+ 04:08.480 --> 04:10.080
277
+ when he said it was going away
278
+
279
+ 04:10.080 --> 04:12.320
280
+ because people were so ticked.
281
+
282
+ 04:12.320 --> 04:14.960
283
+ It was so emotionally unintelligent, right?
284
+
285
+ 04:14.960 --> 04:18.080
286
+ It was intelligent about whether you were writing a letter
287
+
288
+ 04:18.080 --> 04:20.800
289
+ or what kind of help you needed for that context.
290
+
291
+ 04:20.800 --> 04:23.360
292
+ It was completely unintelligent about,
293
+
294
+ 04:23.360 --> 04:25.720
295
+ hey, if you're annoying your customer,
296
+
297
+ 04:25.720 --> 04:28.320
298
+ don't smile in their face when you do it.
299
+
300
+ 04:28.320 --> 04:32.280
301
+ So that kind of mismatch was something
302
+
303
+ 04:32.280 --> 04:35.000
304
+ the developers just didn't think about.
305
+
306
+ 04:35.000 --> 04:39.440
307
+ And intelligence at the time was really all about math
308
+
309
+ 04:39.440 --> 04:44.920
310
+ and language and chess and games,
311
+
312
+ 04:44.920 --> 04:47.880
313
+ problems that could be pretty well defined.
314
+
315
+ 04:47.880 --> 04:51.440
316
+ Social emotional interaction is much more complex than chess
317
+
318
+ 04:51.440 --> 04:55.960
319
+ or Go or any of the games that people are trying to solve.
320
+
321
+ 04:55.960 --> 04:58.640
322
+ And in order to understand that required skills
323
+
324
+ 04:58.640 --> 05:00.240
325
+ that most people in computer science
326
+
327
+ 05:00.240 --> 05:02.520
328
+ actually were lacking personally.
329
+
330
+ 05:02.520 --> 05:03.760
331
+ Well, let's talk about computer science
332
+
333
+ 05:03.760 --> 05:06.320
334
+ have things gotten better since the work,
335
+
336
+ 05:06.320 --> 05:09.480
337
+ since the message, since you've really launched a field
338
+
339
+ 05:09.480 --> 05:11.280
340
+ with a lot of research work in the space,
341
+
342
+ 05:11.280 --> 05:14.080
343
+ I still find as a person like yourself
344
+
345
+ 05:14.080 --> 05:16.640
346
+ who's deeply passionate about human beings
347
+
348
+ 05:16.640 --> 05:18.840
349
+ and yet in computer science,
350
+
351
+ 05:18.840 --> 05:23.040
352
+ there still seems to be a lack of, sorry to say,
353
+
354
+ 05:23.040 --> 05:26.760
355
+ empathy in us computer scientists.
356
+
357
+ 05:26.760 --> 05:28.800
358
+ Yeah, well, or hasn't gotten better.
359
+
360
+ 05:28.800 --> 05:30.680
361
+ Let's just say there's a lot more variety
362
+
363
+ 05:30.680 --> 05:32.320
364
+ among computer scientists these days.
365
+
366
+ 05:32.320 --> 05:34.960
367
+ Computer scientists are a much more diverse group today
368
+
369
+ 05:34.960 --> 05:37.600
370
+ than they were 25 years ago.
371
+
372
+ 05:37.600 --> 05:39.000
373
+ And that's good.
374
+
375
+ 05:39.000 --> 05:41.760
376
+ We need all kinds of people to become computer scientists
377
+
378
+ 05:41.760 --> 05:45.600
379
+ so that computer science reflects more what society needs.
380
+
381
+ 05:45.600 --> 05:49.080
382
+ And there's brilliance among every personality type.
383
+
384
+ 05:49.080 --> 05:52.000
385
+ So it need not be limited to people
386
+
387
+ 05:52.000 --> 05:54.120
388
+ who prefer computers to other people.
389
+
390
+ 05:54.120 --> 05:55.400
391
+ How hard do you think it is?
392
+
393
+ 05:55.400 --> 05:58.600
394
+ How your view of how difficult it is to recognize emotion
395
+
396
+ 05:58.600 --> 06:03.960
397
+ or to create a deeply emotionally intelligent interaction
398
+
399
+ 06:03.960 --> 06:07.440
400
+ has it gotten easier or harder as you've explored it further?
401
+
402
+ 06:07.440 --> 06:10.040
403
+ And how far away are we from cracking this?
404
+
405
+ 06:12.440 --> 06:16.040
406
+ If you think of the Turing test as solving intelligence,
407
+
408
+ 06:16.040 --> 06:18.760
409
+ looking at the Turing test for emotional intelligence?
410
+
411
+ 06:20.720 --> 06:25.600
412
+ I think it is as difficult as I thought it was gonna be.
413
+
414
+ 06:25.600 --> 06:29.280
415
+ I think my prediction of its difficulty is spot on.
416
+
417
+ 06:29.280 --> 06:33.160
418
+ I think the time estimates are always hard
419
+
420
+ 06:33.160 --> 06:37.360
421
+ because they're always a function of society's love
422
+
423
+ 06:37.360 --> 06:39.560
424
+ and hate of a particular topic.
425
+
426
+ 06:39.560 --> 06:44.560
427
+ If society gets excited and you get thousands
428
+
429
+ 06:44.560 --> 06:49.080
430
+ of researchers working on it for a certain application,
431
+
432
+ 06:49.080 --> 06:52.080
433
+ that application gets solved really quickly.
434
+
435
+ 06:52.080 --> 06:54.360
436
+ The general intelligence,
437
+
438
+ 06:54.360 --> 06:58.200
439
+ the computer's complete lack of ability
440
+
441
+ 06:58.200 --> 07:03.200
442
+ to have awareness of what it's doing,
443
+
444
+ 07:03.560 --> 07:05.560
445
+ the fact that it's not conscious,
446
+
447
+ 07:05.560 --> 07:08.640
448
+ the fact that there's no signs of it becoming conscious,
449
+
450
+ 07:08.640 --> 07:11.880
451
+ the fact that it doesn't read between the lines,
452
+
453
+ 07:11.880 --> 07:15.080
454
+ those kinds of things that we have to teach it explicitly,
455
+
456
+ 07:15.080 --> 07:17.520
457
+ what other people pick up implicitly.
458
+
459
+ 07:17.520 --> 07:20.440
460
+ We don't see that changing yet.
461
+
462
+ 07:20.440 --> 07:23.600
463
+ There aren't breakthroughs yet that lead us to believe
464
+
465
+ 07:23.600 --> 07:25.360
466
+ that that's gonna go any faster,
467
+
468
+ 07:25.360 --> 07:28.680
469
+ which means that it's still gonna be kind of stuck
470
+
471
+ 07:28.680 --> 07:31.320
472
+ with a lot of limitations
473
+
474
+ 07:31.320 --> 07:34.080
475
+ where it's probably only gonna do the right thing
476
+
477
+ 07:34.080 --> 07:37.200
478
+ in very limited, narrow, prespecified contexts
479
+
480
+ 07:37.200 --> 07:40.960
481
+ where we can prescribe pretty much
482
+
483
+ 07:40.960 --> 07:42.880
484
+ what's gonna happen there.
485
+
486
+ 07:42.880 --> 07:44.920
487
+ So I don't see the,
488
+
489
+ 07:47.000 --> 07:48.040
490
+ it's hard to predict the date
491
+
492
+ 07:48.040 --> 07:51.800
493
+ because when people don't work on it, it's infinite.
494
+
495
+ 07:51.800 --> 07:55.240
496
+ When everybody works on it, you get a nice piece of it
497
+
498
+ 07:55.240 --> 07:58.600
499
+ well solved in a short amount of time.
500
+
501
+ 07:58.600 --> 08:01.560
502
+ I actually think there's a more important issue right now
503
+
504
+ 08:01.560 --> 08:04.560
505
+ than the difficulty of it.
506
+
507
+ 08:04.560 --> 08:07.440
508
+ And that's causing some of us to put the brakes on a little bit.
509
+
510
+ 08:07.440 --> 08:09.360
511
+ Usually we're all just like step on the gas,
512
+
513
+ 08:09.360 --> 08:11.160
514
+ let's go faster.
515
+
516
+ 08:11.160 --> 08:14.200
517
+ This is causing us to pull back and put the brakes on.
518
+
519
+ 08:14.200 --> 08:18.680
520
+ And that's the way that some of this technology
521
+
522
+ 08:18.680 --> 08:21.200
523
+ is being used in places like China right now.
524
+
525
+ 08:21.200 --> 08:24.520
526
+ And that worries me so deeply
527
+
528
+ 08:24.520 --> 08:27.800
529
+ that it's causing me to pull back myself
530
+
531
+ 08:27.800 --> 08:30.080
532
+ on a lot of the things that we could be doing
533
+
534
+ 08:30.080 --> 08:33.680
535
+ and try to get the community to think a little bit more
536
+
537
+ 08:33.680 --> 08:36.040
538
+ about, okay, if we're gonna go forward with that,
539
+
540
+ 08:36.040 --> 08:39.320
541
+ how can we do it in a way that puts in place safeguards
542
+
543
+ 08:39.320 --> 08:41.120
544
+ that protects people?
545
+
546
+ 08:41.120 --> 08:43.560
547
+ So the technology we're referring to is
548
+
549
+ 08:43.560 --> 08:46.400
550
+ just when a computer senses the human being,
551
+
552
+ 08:46.400 --> 08:47.720
553
+ like the human face.
554
+
555
+ 08:47.720 --> 08:48.560
556
+ Yeah, yeah.
557
+
558
+ 08:48.560 --> 08:51.840
559
+ And so there's a lot of exciting things there
560
+
561
+ 08:51.840 --> 08:53.960
562
+ like forming a deep connection with the human being.
563
+
564
+ 08:53.960 --> 08:55.440
565
+ So what are your worries?
566
+
567
+ 08:55.440 --> 08:56.880
568
+ How that could go wrong?
569
+
570
+ 08:58.000 --> 08:59.520
571
+ Is it in terms of privacy?
572
+
573
+ 08:59.520 --> 09:03.000
574
+ Is it in terms of other kinds of more subtle things?
575
+
576
+ 09:03.000 --> 09:04.320
577
+ Let's dig into privacy.
578
+
579
+ 09:04.320 --> 09:07.800
580
+ So here in the US, if I'm watching a video
581
+
582
+ 09:07.800 --> 09:11.920
583
+ of say a political leader and in the US
584
+
585
+ 09:11.920 --> 09:13.640
586
+ we're quite free as we all know
587
+
588
+ 09:13.640 --> 09:17.920
589
+ to even criticize the president of the United States, right?
590
+
591
+ 09:17.920 --> 09:19.440
592
+ Here that's not a shocking thing.
593
+
594
+ 09:19.440 --> 09:22.720
595
+ It happens about every five seconds, right?
596
+
597
+ 09:22.720 --> 09:27.720
598
+ But in China, what happens if you criticize
599
+
600
+ 09:27.880 --> 09:31.080
601
+ the leader of the government, right?
602
+
603
+ 09:31.080 --> 09:34.320
604
+ And so people are very careful not to do that.
605
+
606
+ 09:34.320 --> 09:37.880
607
+ However, what happens if you're simply watching a video
608
+
609
+ 09:37.880 --> 09:41.000
610
+ and you make a facial expression
611
+
612
+ 09:41.000 --> 09:45.280
613
+ that shows a little bit of skepticism, right?
614
+
615
+ 09:45.280 --> 09:48.200
616
+ Well, and here we're completely free to do that.
617
+
618
+ 09:48.200 --> 09:50.680
619
+ In fact, we're free to fly off the handle
620
+
621
+ 09:50.680 --> 09:54.720
622
+ and say anything we want, usually.
623
+
624
+ 09:54.720 --> 09:56.560
625
+ I mean, there are some restrictions
626
+
627
+ 09:56.560 --> 10:00.840
628
+ when the athlete does this as part of the national broadcast
629
+
630
+ 10:00.840 --> 10:03.840
631
+ maybe the teams get a little unhappy
632
+
633
+ 10:03.840 --> 10:05.880
634
+ about picking that forum to do it, right?
635
+
636
+ 10:05.880 --> 10:08.720
637
+ But that's more a question of judgment.
638
+
639
+ 10:08.720 --> 10:12.760
640
+ We have these freedoms and in places
641
+
642
+ 10:12.760 --> 10:14.160
643
+ that don't have those freedoms,
644
+
645
+ 10:14.160 --> 10:17.080
646
+ what if our technology can read
647
+
648
+ 10:17.080 --> 10:19.600
649
+ your underlying affective state?
650
+
651
+ 10:19.600 --> 10:22.440
652
+ What if our technology can read it even noncontact?
653
+
654
+ 10:22.440 --> 10:24.440
655
+ What if our technology can read it
656
+
657
+ 10:24.440 --> 10:28.840
658
+ without your prior consent?
659
+
660
+ 10:28.840 --> 10:31.400
661
+ And here in the US, in my first company,
662
+
663
+ 10:31.400 --> 10:32.960
664
+ we started Affectiva.
665
+
666
+ 10:32.960 --> 10:35.600
667
+ We have worked super hard to turn away money
668
+
669
+ 10:35.600 --> 10:38.440
670
+ and opportunities that try to read people's affect
671
+
672
+ 10:38.440 --> 10:41.360
673
+ without their prior informed consent.
674
+
675
+ 10:41.360 --> 10:45.160
676
+ And even the software that is licensable,
677
+
678
+ 10:45.160 --> 10:47.680
679
+ you have to sign things saying you will only use it
680
+
681
+ 10:47.680 --> 10:51.800
682
+ in certain ways, which essentially is get people's buy in,
683
+
684
+ 10:51.800 --> 10:52.640
685
+ right?
686
+
687
+ 10:52.640 --> 10:55.400
688
+ Don't do this without people agreeing to it.
689
+
690
+ 10:56.840 --> 10:58.640
691
+ There are other countries where they're not interested
692
+
693
+ 10:58.640 --> 10:59.560
694
+ in people's buy in.
695
+
696
+ 10:59.560 --> 11:01.440
697
+ They're just gonna use it.
698
+
699
+ 11:01.440 --> 11:03.080
700
+ They're gonna inflict it on you.
701
+
702
+ 11:03.080 --> 11:04.480
703
+ And if you don't like it,
704
+
705
+ 11:04.480 --> 11:08.480
706
+ you better not scowl in the direction of any sensors.
707
+
708
+ 11:08.480 --> 11:11.440
709
+ So one, let me just comment on a small tangent.
710
+
711
+ 11:11.440 --> 11:16.000
712
+ Do you know what the idea of adversarial examples
713
+
714
+ 11:16.000 --> 11:20.880
715
+ and deep fakes and so on, what you bring up is actually,
716
+
717
+ 11:20.880 --> 11:25.800
718
+ in one sense, deep fakes provide a comforting protection
719
+
720
+ 11:25.800 --> 11:30.640
721
+ that you can no longer really trust
722
+
723
+ 11:30.640 --> 11:34.560
724
+ that the video of your face was legitimate.
725
+
726
+ 11:34.560 --> 11:37.080
727
+ And therefore you always have an escape clause
728
+
729
+ 11:37.080 --> 11:38.480
730
+ if a government is trying,
731
+
732
+ 11:38.480 --> 11:43.480
733
+ if a stable balanced ethical government
734
+
735
+ 11:43.480 --> 11:46.200
736
+ is trying to accuse you of something,
737
+
738
+ 11:46.200 --> 11:47.040
739
+ at least you have protection.
740
+
741
+ 11:47.040 --> 11:48.720
742
+ You can say it was fake news,
743
+
744
+ 11:48.720 --> 11:50.560
745
+ as is a popular term now.
746
+
747
+ 11:50.560 --> 11:52.320
748
+ Yeah, that's the general thinking of it.
749
+
750
+ 11:52.320 --> 11:54.320
751
+ We know how to go into the video
752
+
753
+ 11:54.320 --> 11:58.320
754
+ and see, for example, your heart rate and respiration
755
+
756
+ 11:58.320 --> 12:02.160
757
+ and whether or not they've been tampered with.
758
+
759
+ 12:02.160 --> 12:05.480
760
+ And we also can put fake heart rate and respiration
761
+
762
+ 12:05.480 --> 12:06.320
763
+ in your video.
764
+
765
+ 12:06.320 --> 12:09.120
766
+ Now too, we decided we needed to do that
767
+
768
+ 12:09.120 --> 12:12.600
769
+ after we developed a way to extract it,
770
+
771
+ 12:12.600 --> 12:15.960
772
+ but we decided we also needed a way to jam it.
773
+
774
+ 12:15.960 --> 12:18.320
775
+ And so the fact that we took time
776
+
777
+ 12:18.320 --> 12:20.920
778
+ to do that other step too,
779
+
780
+ 12:20.920 --> 12:22.560
781
+ that was time that I wasn't spending
782
+
783
+ 12:22.560 --> 12:25.280
784
+ making the machine more affectively intelligent.
785
+
786
+ 12:25.280 --> 12:28.520
787
+ And there's a choice in how we spend our time,
788
+
789
+ 12:28.520 --> 12:32.440
790
+ which is now being swayed a little bit less by this goal
791
+
792
+ 12:32.440 --> 12:34.360
793
+ and a little bit more like by concern
794
+
795
+ 12:34.360 --> 12:36.600
796
+ about what's happening in society
797
+
798
+ 12:36.600 --> 12:38.880
799
+ and what kind of future do we wanna build.
800
+
801
+ 12:38.880 --> 12:41.680
802
+ And as we step back and say,
803
+
804
+ 12:41.680 --> 12:44.600
805
+ okay, we don't just build AI to build AI
806
+
807
+ 12:44.600 --> 12:46.520
808
+ to make Elon Musk more money
809
+
810
+ 12:46.520 --> 12:48.760
811
+ or to make Amazon Jeff Bezos more money.
812
+
813
+ 12:48.760 --> 12:52.920
814
+ You go, gosh, that's the wrong ethic.
815
+
816
+ 12:52.920 --> 12:54.160
817
+ Why are we building it?
818
+
819
+ 12:54.160 --> 12:57.240
820
+ What is the point of building AI?
821
+
822
+ 12:57.240 --> 13:01.560
823
+ It used to be, it was driven by researchers in academia
824
+
825
+ 13:01.560 --> 13:04.200
826
+ to get papers published and to make a career for themselves
827
+
828
+ 13:04.200 --> 13:06.080
829
+ and to do something cool, right?
830
+
831
+ 13:06.080 --> 13:07.680
832
+ Cause maybe it could be done.
833
+
834
+ 13:07.680 --> 13:12.480
835
+ Now we realize that this is enabling rich people
836
+
837
+ 13:12.480 --> 13:14.280
838
+ to get vastly richer.
839
+
840
+ 13:15.560 --> 13:19.800
841
+ The poor are, the divide is even larger.
842
+
843
+ 13:19.800 --> 13:22.880
844
+ And is that the kind of future that we want?
845
+
846
+ 13:22.880 --> 13:25.920
847
+ Maybe we wanna think about, maybe we wanna rethink AI.
848
+
849
+ 13:25.920 --> 13:29.120
850
+ Maybe we wanna rethink the problems in society
851
+
852
+ 13:29.120 --> 13:32.760
853
+ that are causing the greatest inequity
854
+
855
+ 13:32.760 --> 13:35.040
856
+ and rethink how to build AI
857
+
858
+ 13:35.040 --> 13:36.760
859
+ that's not about a general intelligence,
860
+
861
+ 13:36.760 --> 13:39.320
862
+ but that's about extending the intelligence
863
+
864
+ 13:39.320 --> 13:41.240
865
+ and capability of the have nots
866
+
867
+ 13:41.240 --> 13:43.800
868
+ so that we close these gaps in society.
869
+
870
+ 13:43.800 --> 13:46.640
871
+ Do you hope that kind of stepping on the brake
872
+
873
+ 13:46.640 --> 13:48.000
874
+ happens organically?
875
+
876
+ 13:48.000 --> 13:51.240
877
+ Because I think still majority of the force behind AI
878
+
879
+ 13:51.240 --> 13:52.800
880
+ is the desire to publish papers,
881
+
882
+ 13:52.800 --> 13:55.800
883
+ is to make money without thinking about the why.
884
+
885
+ 13:55.800 --> 13:57.280
886
+ Do you hope it happens organically?
887
+
888
+ 13:57.280 --> 13:59.040
889
+ Is there a room for regulation?
890
+
891
+ 14:01.120 --> 14:02.960
892
+ Yeah, yeah, yeah, great questions.
893
+
894
+ 14:02.960 --> 14:07.360
895
+ I prefer the, they talk about the carrot versus the stick.
896
+
897
+ 14:07.360 --> 14:09.160
898
+ I definitely prefer the carrot to the stick.
899
+
900
+ 14:09.160 --> 14:14.160
901
+ And in our free world, there's only so much stick, right?
902
+
903
+ 14:14.920 --> 14:17.280
904
+ You're gonna find a way around it.
905
+
906
+ 14:17.280 --> 14:21.160
907
+ I generally think less regulation is better.
908
+
909
+ 14:21.160 --> 14:24.440
910
+ That said, even though my position is classically carrot,
911
+
912
+ 14:24.440 --> 14:26.280
913
+ no stick, no regulation,
914
+
915
+ 14:26.280 --> 14:29.080
916
+ I think we do need some regulations in this space.
917
+
918
+ 14:29.080 --> 14:30.720
919
+ I do think we need regulations
920
+
921
+ 14:30.720 --> 14:33.640
922
+ around protecting people with their data,
923
+
924
+ 14:33.640 --> 14:38.200
925
+ that you own your data, not Amazon, not Google.
926
+
927
+ 14:38.200 --> 14:40.800
928
+ I would like to see people own their own data.
929
+
930
+ 14:40.800 --> 14:42.520
931
+ I would also like to see the regulations
932
+
933
+ 14:42.520 --> 14:44.520
934
+ that we have right now around lie detection
935
+
936
+ 14:44.520 --> 14:48.200
937
+ being extended to emotion recognition in general.
938
+
939
+ 14:48.200 --> 14:51.040
940
+ That right now you can't use a lie detector on an employee
941
+
942
+ 14:51.040 --> 14:52.760
943
+ or on a candidate,
944
+
945
+ 14:52.760 --> 14:54.720
946
+ when you're interviewing them for a job.
947
+
948
+ 14:54.720 --> 14:57.800
949
+ I think similarly, we need to put in place protection
950
+
951
+ 14:57.800 --> 15:00.720
952
+ around reading people's emotions without their consent
953
+
954
+ 15:00.720 --> 15:02.200
955
+ and in certain cases,
956
+
957
+ 15:02.200 --> 15:06.160
958
+ like characterizing them for a job and other opportunities.
959
+
960
+ 15:06.160 --> 15:09.200
961
+ So I also think that when we're reading emotion
962
+
963
+ 15:09.200 --> 15:11.760
964
+ that's predictive around mental health,
965
+
966
+ 15:11.760 --> 15:14.240
967
+ that that should, even though it's not medical data,
968
+
969
+ 15:14.240 --> 15:16.160
970
+ that that should get the kinds of protections
971
+
972
+ 15:16.160 --> 15:18.520
973
+ that our medical data gets.
974
+
975
+ 15:18.520 --> 15:20.080
976
+ What most people don't know yet
977
+
978
+ 15:20.080 --> 15:22.640
979
+ is right now with your smartphone use,
980
+
981
+ 15:22.640 --> 15:25.280
982
+ and if you're wearing a sensor
983
+
984
+ 15:25.280 --> 15:27.760
985
+ and you wanna learn about your stress and your sleep,
986
+
987
+ 15:27.760 --> 15:29.040
988
+ and your physical activity,
989
+
990
+ 15:29.040 --> 15:30.840
991
+ and how much you're using your phone
992
+
993
+ 15:30.840 --> 15:32.640
994
+ and your social interaction,
995
+
996
+ 15:32.640 --> 15:34.960
997
+ all of that non medical data,
998
+
999
+ 15:34.960 --> 15:37.960
1000
+ when we put it together with machine learning,
1001
+
1002
+ 15:37.960 --> 15:40.160
1003
+ now called AI, even though the founders of AI
1004
+
1005
+ 15:40.160 --> 15:41.720
1006
+ wouldn't have called it that,
1007
+
1008
+ 15:42.960 --> 15:47.960
1009
+ that capability can not only tell that you're calm right now,
1010
+
1011
+ 15:48.440 --> 15:50.840
1012
+ or that you're getting a little stressed,
1013
+
1014
+ 15:50.840 --> 15:53.920
1015
+ but it can also predict how you're likely to be tomorrow.
1016
+
1017
+ 15:53.920 --> 15:55.880
1018
+ If you're likely to be sick or healthy,
1019
+
1020
+ 15:55.880 --> 15:58.720
1021
+ happy or sad, stressed or calm.
1022
+
1023
+ 15:58.720 --> 16:00.640
1024
+ Especially when you're tracking data over time.
1025
+
1026
+ 16:00.640 --> 16:03.760
1027
+ Especially when we're tracking a week of your data or more.
1028
+
1029
+ 16:03.760 --> 16:06.000
1030
+ Do you have an optimism towards,
1031
+
1032
+ 16:06.000 --> 16:07.800
1033
+ that a lot of people on our phones
1034
+
1035
+ 16:07.800 --> 16:10.360
1036
+ are worried about this camera that's looking at us?
1037
+
1038
+ 16:10.360 --> 16:12.520
1039
+ For the most part, on balance,
1040
+
1041
+ 16:12.520 --> 16:16.120
1042
+ are you optimistic about the benefits
1043
+
1044
+ 16:16.120 --> 16:17.480
1045
+ that can be brought from that camera
1046
+
1047
+ 16:17.480 --> 16:19.640
1048
+ that's looking at billions of us,
1049
+
1050
+ 16:19.640 --> 16:22.080
1051
+ or should we be more worried?
1052
+
1053
+ 16:22.080 --> 16:27.080
1054
+ I think we should be a little bit more worried
1055
+
1056
+ 16:28.840 --> 16:32.480
1057
+ about who's looking at us and listening to us.
1058
+
1059
+ 16:32.480 --> 16:36.680
1060
+ The device sitting on your countertop in your kitchen,
1061
+
1062
+ 16:36.680 --> 16:41.680
1063
+ whether it's Alexa or Google Home or Apple, Siri,
1064
+
1065
+ 16:42.120 --> 16:46.440
1066
+ these devices want to listen,
1067
+
1068
+ 16:47.480 --> 16:49.640
1069
+ while they say ostensibly to help us.
1070
+
1071
+ 16:49.640 --> 16:52.080
1072
+ And I think there are great people in these companies
1073
+
1074
+ 16:52.080 --> 16:54.160
1075
+ who do want to help people.
1076
+
1077
+ 16:54.160 --> 16:56.160
1078
+ Let me not brand them all bad.
1079
+
1080
+ 16:56.160 --> 16:59.320
1081
+ I'm a user of products from all of these companies.
1082
+
1083
+ 16:59.320 --> 17:04.320
1084
+ I'm naming all the A companies, Alphabet, Apple, Amazon.
1085
+
1086
+ 17:04.360 --> 17:09.120
1087
+ They are awfully big companies, right?
1088
+
1089
+ 17:09.120 --> 17:11.520
1090
+ They have incredible power.
1091
+
1092
+ 17:11.520 --> 17:16.520
1093
+ And what if China were to buy them, right?
1094
+
1095
+ 17:16.520 --> 17:19.320
1096
+ And suddenly, all of that data
1097
+
1098
+ 17:19.320 --> 17:21.880
1099
+ were not part of free America,
1100
+
1101
+ 17:21.880 --> 17:23.800
1102
+ but all of that data were part of somebody
1103
+
1104
+ 17:23.800 --> 17:26.040
1105
+ who just wants to take over the world
1106
+
1107
+ 17:26.040 --> 17:27.480
1108
+ and you submit to them.
1109
+
1110
+ 17:27.480 --> 17:31.560
1111
+ And guess what happens if you so much as smirk the wrong way
1112
+
1113
+ 17:31.560 --> 17:34.000
1114
+ when they say something that you don't like?
1115
+
1116
+ 17:34.000 --> 17:36.840
1117
+ Well, they have reeducation camps, right?
1118
+
1119
+ 17:36.840 --> 17:38.360
1120
+ That's a nice word for them.
1121
+
1122
+ 17:38.360 --> 17:40.800
1123
+ By the way, they have a surplus of organs
1124
+
1125
+ 17:40.800 --> 17:42.720
1126
+ for people who have surgery these days.
1127
+
1128
+ 17:42.720 --> 17:44.440
1129
+ They don't have an organ donation problem,
1130
+
1131
+ 17:44.440 --> 17:45.640
1132
+ because they take your blood.
1133
+
1134
+ 17:45.640 --> 17:47.480
1135
+ And they know you're a match.
1136
+
1137
+ 17:47.480 --> 17:51.640
1138
+ And the doctors are on record of taking organs from people
1139
+
1140
+ 17:51.640 --> 17:54.680
1141
+ who are perfectly healthy and not prisoners.
1142
+
1143
+ 17:54.680 --> 17:57.960
1144
+ They're just simply not the favored ones of the government.
1145
+
1146
+ 17:58.960 --> 18:03.960
1147
+ And, you know, that's a pretty freaky evil society.
1148
+
1149
+ 18:03.960 --> 18:05.920
1150
+ And we can use the word evil there.
1151
+
1152
+ 18:05.920 --> 18:07.280
1153
+ I was born in the Soviet Union.
1154
+
1155
+ 18:07.280 --> 18:11.480
1156
+ I can certainly connect to the worry
1157
+
1158
+ 18:11.480 --> 18:12.520
1159
+ that you're expressing.
1160
+
1161
+ 18:12.520 --> 18:14.920
1162
+ At the same time, probably both you and I
1163
+
1164
+ 18:14.920 --> 18:19.000
1165
+ and you're very much so, you know,
1166
+
1167
+ 18:19.000 --> 18:22.440
1168
+ there's an exciting possibility
1169
+
1170
+ 18:22.440 --> 18:27.000
1171
+ that you can have a deep connection with the machine.
1172
+
1173
+ 18:27.000 --> 18:28.000
1174
+ Yeah, yeah.
1175
+
1176
+ 18:28.000 --> 18:29.000
1177
+ Right, so.
1178
+
1179
+ 18:29.000 --> 18:35.000
1180
+ Those of us, I've admitted students who say that they,
1181
+
1182
+ 18:35.000 --> 18:37.400
1183
+ you know, when you list, like, who do you most wish
1184
+
1185
+ 18:37.400 --> 18:40.640
1186
+ you could have lunch with or dinner with, right?
1187
+
1188
+ 18:40.640 --> 18:42.680
1189
+ And they'll write, like, I don't like people.
1190
+
1191
+ 18:42.680 --> 18:44.040
1192
+ I just like computers.
1193
+
1194
+ 18:44.040 --> 18:45.760
1195
+ And one of them said to me once
1196
+
1197
+ 18:45.760 --> 18:48.480
1198
+ when I had this party at my house,
1199
+
1200
+ 18:48.480 --> 18:52.560
1201
+ I want you to know this is my only social event of the year,
1202
+
1203
+ 18:52.560 --> 18:54.840
1204
+ my one social event of the year.
1205
+
1206
+ 18:54.840 --> 18:58.080
1207
+ Like, okay, now this is a brilliant machine learning person,
1208
+
1209
+ 18:58.080 --> 18:59.080
1210
+ right?
1211
+
1212
+ 18:59.080 --> 19:01.320
1213
+ And we need that kind of brilliance and machine learning.
1214
+
1215
+ 19:01.320 --> 19:04.040
1216
+ And I love that Computer Science welcomes people
1217
+
1218
+ 19:04.040 --> 19:07.040
1219
+ who love people and people who are very awkward around people.
1220
+
1221
+ 19:07.040 --> 19:12.040
1222
+ I love that this is a field that anybody could join.
1223
+
1224
+ 19:12.040 --> 19:15.040
1225
+ We need all kinds of people.
1226
+
1227
+ 19:15.040 --> 19:17.040
1228
+ And you don't need to be a social person.
1229
+
1230
+ 19:17.040 --> 19:19.040
1231
+ I'm not trying to force people who don't like people
1232
+
1233
+ 19:19.040 --> 19:21.040
1234
+ to suddenly become social.
1235
+
1236
+ 19:21.040 --> 19:26.040
1237
+ At the same time, if most of the people building the AIs
1238
+
1239
+ 19:26.040 --> 19:29.040
1240
+ of the future are the kind of people who don't like people,
1241
+
1242
+ 19:29.040 --> 19:31.040
1243
+ we've got a little bit of a problem.
1244
+
1245
+ 19:31.040 --> 19:32.040
1246
+ Hold on a second.
1247
+
1248
+ 19:32.040 --> 19:34.040
1249
+ So let me, let me push back on that.
1250
+
1251
+ 19:34.040 --> 19:39.040
1252
+ So don't you think a large percentage of the world can,
1253
+
1254
+ 19:39.040 --> 19:41.040
1255
+ you know, there's loneliness.
1256
+
1257
+ 19:41.040 --> 19:44.040
1258
+ There is a huge problem with loneliness and it's growing.
1259
+
1260
+ 19:44.040 --> 19:47.040
1261
+ And so there's a longing for connection.
1262
+
1263
+ 19:47.040 --> 19:51.040
1264
+ If you're lonely, you're part of a big and growing group.
1265
+
1266
+ 19:51.040 --> 19:52.040
1267
+ Yes.
1268
+
1269
+ 19:52.040 --> 19:54.040
1270
+ So we're in it together, I guess.
1271
+
1272
+ 19:54.040 --> 19:56.040
1273
+ If you're lonely, join the group.
1274
+
1275
+ 19:56.040 --> 19:57.040
1276
+ You're not alone.
1277
+
1278
+ 19:57.040 --> 19:58.040
1279
+ You're not alone.
1280
+
1281
+ 19:58.040 --> 20:00.040
1282
+ That's a good line.
1283
+
1284
+ 20:00.040 --> 20:04.040
1285
+ But do you think there's a, you talked about some worry,
1286
+
1287
+ 20:04.040 --> 20:07.040
1288
+ but do you think there's an exciting possibility
1289
+
1290
+ 20:07.040 --> 20:11.040
1291
+ that something like Alexa, when these kinds of tools
1292
+
1293
+ 20:11.040 --> 20:16.040
1294
+ can alleviate that loneliness in a way that other humans can't?
1295
+
1296
+ 20:16.040 --> 20:17.040
1297
+ Yeah.
1298
+
1299
+ 20:17.040 --> 20:18.040
1300
+ Yeah, definitely.
1301
+
1302
+ 20:18.040 --> 20:21.040
1303
+ I mean, a great book can kind of alleviate loneliness.
1304
+
1305
+ 20:21.040 --> 20:22.040
1306
+ Right.
1307
+
1308
+ 20:22.040 --> 20:23.040
1309
+ Exactly.
1310
+
1311
+ 20:23.040 --> 20:25.040
1312
+ Because you just get sucked into this amazing story
1313
+
1314
+ 20:25.040 --> 20:27.040
1315
+ and you can't wait to go spend time with that character.
1316
+
1317
+ 20:27.040 --> 20:28.040
1318
+ Right.
1319
+
1320
+ 20:28.040 --> 20:30.040
1321
+ And they're not a human character.
1322
+
1323
+ 20:30.040 --> 20:32.040
1324
+ There is a human behind it.
1325
+
1326
+ 20:32.040 --> 20:35.040
1327
+ But yeah, it can be an incredibly delightful way
1328
+
1329
+ 20:35.040 --> 20:37.040
1330
+ to pass the hours.
1331
+
1332
+ 20:37.040 --> 20:39.040
1333
+ And it can meet needs.
1334
+
1335
+ 20:39.040 --> 20:43.040
1336
+ Even, you know, I don't read those trashy romance books,
1337
+
1338
+ 20:43.040 --> 20:44.040
1339
+ but somebody does, right?
1340
+
1341
+ 20:44.040 --> 20:46.040
1342
+ And what are they getting from this?
1343
+
1344
+ 20:46.040 --> 20:50.040
1345
+ Well, probably some of that feeling of being there, right?
1346
+
1347
+ 20:50.040 --> 20:54.040
1348
+ Being there in that social moment, that romantic moment,
1349
+
1350
+ 20:54.040 --> 20:56.040
1351
+ or connecting with somebody.
1352
+
1353
+ 20:56.040 --> 20:59.040
1354
+ I've had a similar experience reading some science fiction books, right?
1355
+
1356
+ 20:59.040 --> 21:01.040
1357
+ And connecting with the character, Orson Scott Card.
1358
+
1359
+ 21:01.040 --> 21:05.040
1360
+ And, you know, just amazing writing and Inder's Game
1361
+
1362
+ 21:05.040 --> 21:07.040
1363
+ and Speaker for the Dead, terrible title.
1364
+
1365
+ 21:07.040 --> 21:10.040
1366
+ But those kind of books that pull you into a character,
1367
+
1368
+ 21:10.040 --> 21:13.040
1369
+ and you feel like you're, you feel very social.
1370
+
1371
+ 21:13.040 --> 21:17.040
1372
+ It's very connected, even though it's not responding to you.
1373
+
1374
+ 21:17.040 --> 21:19.040
1375
+ And a computer, of course, can respond to you.
1376
+
1377
+ 21:19.040 --> 21:20.040
1378
+ Right.
1379
+
1380
+ 21:20.040 --> 21:21.040
1381
+ So it can deepen it, right?
1382
+
1383
+ 21:21.040 --> 21:25.040
1384
+ You can have a very deep connection,
1385
+
1386
+ 21:25.040 --> 21:29.040
1387
+ much more than the movie Her, you know, plays up, right?
1388
+
1389
+ 21:29.040 --> 21:30.040
1390
+ Well, much more.
1391
+
1392
+ 21:30.040 --> 21:34.040
1393
+ I mean, movie Her is already a pretty deep connection, right?
1394
+
1395
+ 21:34.040 --> 21:36.040
1396
+ Well, but it's just a movie, right?
1397
+
1398
+ 21:36.040 --> 21:37.040
1399
+ It's scripted.
1400
+
1401
+ 21:37.040 --> 21:38.040
1402
+ It's just, you know.
1403
+
1404
+ 21:38.040 --> 21:42.040
1405
+ But I mean, like, there can be a real interaction
1406
+
1407
+ 21:42.040 --> 21:46.040
1408
+ where the character can learn and you can learn.
1409
+
1410
+ 21:46.040 --> 21:49.040
1411
+ You could imagine it not just being you and one character.
1412
+
1413
+ 21:49.040 --> 21:51.040
1414
+ You can imagine a group of characters.
1415
+
1416
+ 21:51.040 --> 21:53.040
1417
+ You can imagine a group of people and characters,
1418
+
1419
+ 21:53.040 --> 21:56.040
1420
+ human and AI connecting.
1421
+
1422
+ 21:56.040 --> 22:01.040
1423
+ Where maybe a few people can't sort of be friends with everybody,
1424
+
1425
+ 22:01.040 --> 22:07.040
1426
+ but the few people and their AIs can befriend more people.
1427
+
1428
+ 22:07.040 --> 22:10.040
1429
+ There can be an extended human intelligence in there
1430
+
1431
+ 22:10.040 --> 22:14.040
1432
+ where each human can connect with more people that way.
1433
+
1434
+ 22:14.040 --> 22:18.040
1435
+ But it's still very limited.
1436
+
1437
+ 22:18.040 --> 22:21.040
1438
+ But there are just, what I mean is there are many more possibilities
1439
+
1440
+ 22:21.040 --> 22:22.040
1441
+ than what's in that movie.
1442
+
1443
+ 22:22.040 --> 22:24.040
1444
+ So there's a tension here.
1445
+
1446
+ 22:24.040 --> 22:28.040
1447
+ So one, you expressed a really serious concern about privacy,
1448
+
1449
+ 22:28.040 --> 22:31.040
1450
+ about how governments can misuse the information.
1451
+
1452
+ 22:31.040 --> 22:34.040
1453
+ And there's the possibility of this connection.
1454
+
1455
+ 22:34.040 --> 22:36.040
1456
+ So let's look at Alexa.
1457
+
1458
+ 22:36.040 --> 22:38.040
1459
+ So personal assistants.
1460
+
1461
+ 22:38.040 --> 22:42.040
1462
+ For the most part, as far as I'm aware, they ignore your emotion.
1463
+
1464
+ 22:42.040 --> 22:47.040
1465
+ They ignore even the context or the existence of you,
1466
+
1467
+ 22:47.040 --> 22:51.040
1468
+ the intricate, beautiful, complex aspects of who you are,
1469
+
1470
+ 22:51.040 --> 22:56.040
1471
+ except maybe aspects of your voice that help it recognize
1472
+
1473
+ 22:56.040 --> 22:58.040
1474
+ speech recognition.
1475
+
1476
+ 22:58.040 --> 23:03.040
1477
+ Do you think they should move towards trying to understand your emotion?
1478
+
1479
+ 23:03.040 --> 23:07.040
1480
+ All of these companies are very interested in understanding human emotion.
1481
+
1482
+ 23:07.040 --> 23:13.040
1483
+ More people are telling Siri every day they want to kill themselves.
1484
+
1485
+ 23:13.040 --> 23:17.040
1486
+ Apple wants to know the difference between if a person is really suicidal
1487
+
1488
+ 23:17.040 --> 23:21.040
1489
+ versus if a person is just kind of fooling around with Siri.
1490
+
1491
+ 23:21.040 --> 23:25.040
1492
+ The words may be the same, the tone of voice,
1493
+
1494
+ 23:25.040 --> 23:31.040
1495
+ and what surrounds those words is pivotal to understand
1496
+
1497
+ 23:31.040 --> 23:35.040
1498
+ if they should respond in a very serious way, bring help to that person,
1499
+
1500
+ 23:35.040 --> 23:40.040
1501
+ or if they should kind of jokingly tease back,
1502
+
1503
+ 23:40.040 --> 23:45.040
1504
+ you just want to sell me for something else.
1505
+
1506
+ 23:45.040 --> 23:48.040
1507
+ Like, how do you respond when somebody says that?
1508
+
1509
+ 23:48.040 --> 23:53.040
1510
+ Well, you do want to err on the side of being careful and taking it seriously.
1511
+
1512
+ 23:53.040 --> 23:59.040
1513
+ People want to know if the person is happy or stressed.
1514
+
1515
+ 23:59.040 --> 24:03.040
1516
+ In part, well, so let me give you an altruistic reason
1517
+
1518
+ 24:03.040 --> 24:08.040
1519
+ and a business profit motivated reason,
1520
+
1521
+ 24:08.040 --> 24:13.040
1522
+ and there are people in companies that operate on both principles.
1523
+
1524
+ 24:13.040 --> 24:17.040
1525
+ Altruistic people really care about their customers
1526
+
1527
+ 24:17.040 --> 24:20.040
1528
+ and really care about helping you feel a little better at the end of the day,
1529
+
1530
+ 24:20.040 --> 24:24.040
1531
+ and it would just make those people happy if they knew that they made your life better.
1532
+
1533
+ 24:24.040 --> 24:29.040
1534
+ If you came home stressed and after talking with their product, you felt better.
1535
+
1536
+ 24:29.040 --> 24:35.040
1537
+ There are other people who maybe have studied the way affect affects decision making
1538
+
1539
+ 24:35.040 --> 24:39.040
1540
+ and prices people pay, and they know, I don't know if I should tell you,
1541
+
1542
+ 24:39.040 --> 24:44.040
1543
+ the work of Jen Lerner on heartstrings and purse strings.
1544
+
1545
+ 24:44.040 --> 24:50.040
1546
+ If we manipulate you into a slightly sadder mood, you'll pay more.
1547
+
1548
+ 24:50.040 --> 24:54.040
1549
+ You'll pay more to change your situation.
1550
+
1551
+ 24:54.040 --> 24:58.040
1552
+ You'll pay more for something you don't even need to make yourself feel better.
1553
+
1554
+ 24:58.040 --> 25:01.040
1555
+ If they sound a little sad, maybe I don't want to cheer them up.
1556
+
1557
+ 25:01.040 --> 25:05.040
1558
+ Maybe first I want to help them get something,
1559
+
1560
+ 25:05.040 --> 25:09.040
1561
+ a little shopping therapy that helps them.
1562
+
1563
+ 25:09.040 --> 25:13.040
1564
+ Which is really difficult for a company that's primarily funded on advertisements,
1565
+
1566
+ 25:13.040 --> 25:16.040
1567
+ so they're encouraged to get you to...
1568
+
1569
+ 25:16.040 --> 25:20.040
1570
+ To offer you products or Amazon that's primarily funded on you buying things from their store.
1571
+
1572
+ 25:20.040 --> 25:25.040
1573
+ Maybe we need regulation in the future to put a little bit of a wall
1574
+
1575
+ 25:25.040 --> 25:29.040
1576
+ between these agents that have access to our emotion
1577
+
1578
+ 25:29.040 --> 25:32.040
1579
+ and agents that want to sell us stuff.
1580
+
1581
+ 25:32.040 --> 25:38.040
1582
+ Maybe there needs to be a little bit more of a firewall in between those.
1583
+
1584
+ 25:38.040 --> 25:42.040
1585
+ So maybe digging in a little bit on the interaction with Alexa,
1586
+
1587
+ 25:42.040 --> 25:45.040
1588
+ you mentioned, of course, a really serious concern about,
1589
+
1590
+ 25:45.040 --> 25:50.040
1591
+ like recognizing emotion if somebody is speaking of suicide or depression and so on,
1592
+
1593
+ 25:50.040 --> 25:54.040
1594
+ but what about the actual interaction itself?
1595
+
1596
+ 25:54.040 --> 25:57.040
1597
+ Do you think...
1598
+
1599
+ 25:57.040 --> 26:01.040
1600
+ You mentioned Clippy and being annoying.
1601
+
1602
+ 26:01.040 --> 26:04.040
1603
+ What is the objective function we're trying to optimize?
1604
+
1605
+ 26:04.040 --> 26:09.040
1606
+ Is it minimize annoyingness or maximize happiness?
1607
+
1608
+ 26:09.040 --> 26:12.040
1609
+ Or if we look at human relations,
1610
+
1611
+ 26:12.040 --> 26:16.040
1612
+ I think that push and pull, the tension, the dance,
1613
+
1614
+ 26:16.040 --> 26:20.040
1615
+ the annoying, the flaws, that's what makes it fun.
1616
+
1617
+ 26:20.040 --> 26:23.040
1618
+ So is there a room for...
1619
+
1620
+ 26:23.040 --> 26:25.040
1621
+ What is the objective function?
1622
+
1623
+ 26:25.040 --> 26:29.040
1624
+ There are times when you want to have a little push and pull. Think of kids sparring.
1625
+
1626
+ 26:29.040 --> 26:34.040
1627
+ You know, I see my sons and one of them wants to provoke the other to be upset.
1628
+
1629
+ 26:34.040 --> 26:35.040
1630
+ And that's fun.
1631
+
1632
+ 26:35.040 --> 26:38.040
1633
+ And it's actually healthy to learn where your limits are,
1634
+
1635
+ 26:38.040 --> 26:40.040
1636
+ to learn how to self regulate.
1637
+
1638
+ 26:40.040 --> 26:43.040
1639
+ You can imagine a game where it's trying to make you mad
1640
+
1641
+ 26:43.040 --> 26:45.040
1642
+ and you're trying to show self control.
1643
+
1644
+ 26:45.040 --> 26:48.040
1645
+ And so if we're doing an AI human interaction
1646
+
1647
+ 26:48.040 --> 26:51.040
1648
+ that's helping build resilience and self control,
1649
+
1650
+ 26:51.040 --> 26:54.040
1651
+ whether it's to learn how to not be a bully
1652
+
1653
+ 26:54.040 --> 26:58.040
1654
+ or how to turn the other cheek or how to deal with an abusive person in your life,
1655
+
1656
+ 26:58.040 --> 27:03.040
1657
+ then you might need an AI that pushes your buttons, right?
1658
+
1659
+ 27:03.040 --> 27:07.040
1660
+ But in general, do you want an AI that pushes your buttons?
1661
+
1662
+ 27:07.040 --> 27:11.040
1663
+ Probably depends on your personality.
1664
+
1665
+ 27:11.040 --> 27:12.040
1666
+ I don't.
1667
+
1668
+ 27:12.040 --> 27:17.040
1669
+ I want one that's respectful, that is there to serve me
1670
+
1671
+ 27:17.040 --> 27:22.040
1672
+ and that is there to extend my ability to do things.
1673
+
1674
+ 27:22.040 --> 27:24.040
1675
+ I'm not looking for a rival.
1676
+
1677
+ 27:24.040 --> 27:26.040
1678
+ I'm looking for a helper.
1679
+
1680
+ 27:26.040 --> 27:29.040
1681
+ And that's the kind of AI I'd put my money on.
1682
+
1683
+ 27:29.040 --> 27:32.040
1684
+ Your sense is, for the majority of people in the world,
1685
+
1686
+ 27:32.040 --> 27:34.040
1687
+ in order to have a rich experience,
1688
+
1689
+ 27:34.040 --> 27:36.040
1690
+ that's what they're looking for as well.
1691
+
1692
+ 27:36.040 --> 27:40.040
1693
+ So if you look at the movie Her, spoiler alert,
1694
+
1695
+ 27:40.040 --> 27:45.040
1696
+ I believe the program, the woman in the movie Her,
1697
+
1698
+ 27:45.040 --> 27:50.040
1699
+ leaves the person for somebody else.
1700
+
1701
+ 27:50.040 --> 27:53.040
1702
+ Says they don't want to be dating anymore.
1703
+
1704
+ 27:53.040 --> 27:56.040
1705
+ Right.
1706
+
1707
+ 27:56.040 --> 27:59.040
1708
+ Your sense is, if Alexa said, you know what?
1709
+
1710
+ 27:59.040 --> 28:03.040
1711
+ I actually had enough of you for a while,
1712
+
1713
+ 28:03.040 --> 28:05.040
1714
+ so I'm going to shut myself off.
1715
+
1716
+ 28:05.040 --> 28:07.040
1717
+ You don't see that as...
1718
+
1719
+ 28:07.040 --> 28:10.040
1720
+ I'd say you're trash because I paid for you, right?
1721
+
1722
+ 28:10.040 --> 28:14.040
1723
+ We've got to remember,
1724
+
1725
+ 28:14.040 --> 28:18.040
1726
+ and this is where this blending human AI
1727
+
1728
+ 28:18.040 --> 28:22.040
1729
+ as if we're equals is really deceptive,
1730
+
1731
+ 28:22.040 --> 28:26.040
1732
+ because AI is something at the end of the day
1733
+
1734
+ 28:26.040 --> 28:29.040
1735
+ that my students and I are making in the lab,
1736
+
1737
+ 28:29.040 --> 28:33.040
1738
+ and we're choosing what it's allowed to say,
1739
+
1740
+ 28:33.040 --> 28:36.040
1741
+ when it's allowed to speak, what it's allowed to listen to,
1742
+
1743
+ 28:36.040 --> 28:39.040
1744
+ what it's allowed to act on,
1745
+
1746
+ 28:39.040 --> 28:43.040
1747
+ given the inputs that we choose to expose it to,
1748
+
1749
+ 28:43.040 --> 28:45.040
1750
+ what outputs it's allowed to have.
1751
+
1752
+ 28:45.040 --> 28:49.040
1753
+ It's all something made by a human,
1754
+
1755
+ 28:49.040 --> 28:52.040
1756
+ and if we want to make something that makes our lives miserable,
1757
+
1758
+ 28:52.040 --> 28:55.040
1759
+ fine, I wouldn't invest in it as a business,
1760
+
1761
+ 28:55.040 --> 28:59.040
1762
+ unless it's just there for self regulation training.
1763
+
1764
+ 28:59.040 --> 29:02.040
1765
+ But I think we need to think about what kind of future we want,
1766
+
1767
+ 29:02.040 --> 29:05.040
1768
+ and actually your question, I really like the...
1769
+
1770
+ 29:05.040 --> 29:07.040
1771
+ What is the objective function?
1772
+
1773
+ 29:07.040 --> 29:10.040
1774
+ Is it to calm people down sometimes?
1775
+
1776
+ 29:10.040 --> 29:14.040
1777
+ Is it to always make people happy and calm them down?
1778
+
1779
+ 29:14.040 --> 29:16.040
1780
+ Well, there was a book about that, right?
1781
+
1782
+ 29:16.040 --> 29:19.040
1783
+ The Brave New World, you know, make everybody happy,
1784
+
1785
+ 29:19.040 --> 29:22.040
1786
+ take your Soma if you're unhappy, take your happy pill,
1787
+
1788
+ 29:22.040 --> 29:24.040
1789
+ and if you refuse to take your happy pill,
1790
+
1791
+ 29:24.040 --> 29:29.040
1792
+ well, we'll threaten you by sending you to Iceland to live there.
1793
+
1794
+ 29:29.040 --> 29:32.040
1795
+ I lived in Iceland three years. It's a great place.
1796
+
1797
+ 29:32.040 --> 29:35.040
1798
+ Don't take your Soma, then go to Iceland.
1799
+
1800
+ 29:35.040 --> 29:37.040
1801
+ A little TV commercial there.
1802
+
1803
+ 29:37.040 --> 29:40.040
1804
+ I was a child there for a few years. It's a wonderful place.
1805
+
1806
+ 29:40.040 --> 29:43.040
1807
+ So that part of the book never scared me.
1808
+
1809
+ 29:43.040 --> 29:48.040
1810
+ But really, do we want AI to manipulate us into submission,
1811
+
1812
+ 29:48.040 --> 29:49.040
1813
+ into making us happy?
1814
+
1815
+ 29:49.040 --> 29:56.040
1816
+ Well, if you are a power obsessed, sick dictator, individual
1817
+
1818
+ 29:56.040 --> 29:59.040
1819
+ who only wants to control other people to get your jollies in life,
1820
+
1821
+ 29:59.040 --> 30:04.040
1822
+ then yeah, you want to use AI to extend your power in your scale
1823
+
1824
+ 30:04.040 --> 30:07.040
1825
+ to force people into submission.
1826
+
1827
+ 30:07.040 --> 30:11.040
1828
+ If you believe that the human race is better off being given freedom
1829
+
1830
+ 30:11.040 --> 30:15.040
1831
+ and the opportunity to do things that might surprise you,
1832
+
1833
+ 30:15.040 --> 30:20.040
1834
+ then you want to use AI to extend people's ability to build,
1835
+
1836
+ 30:20.040 --> 30:23.040
1837
+ you want to build AI that extends human intelligence,
1838
+
1839
+ 30:23.040 --> 30:27.040
1840
+ that empowers the weak and helps balance the power
1841
+
1842
+ 30:27.040 --> 30:29.040
1843
+ between the weak and the strong,
1844
+
1845
+ 30:29.040 --> 30:32.040
1846
+ not that gives more power to the strong.
1847
+
1848
+ 30:32.040 --> 30:38.040
1849
+ So in this process of empowering people and sensing people
1850
+
1851
+ 30:38.040 --> 30:43.040
1852
+ and what is your sense on emotion in terms of recognizing emotion?
1853
+
1854
+ 30:43.040 --> 30:47.040
1855
+ The difference between emotion that is shown and emotion that is felt.
1856
+
1857
+ 30:47.040 --> 30:54.040
1858
+ So yeah, emotion that is expressed on the surface through your face,
1859
+
1860
+ 30:54.040 --> 30:57.040
1861
+ your body, and various other things,
1862
+
1863
+ 30:57.040 --> 31:00.040
1864
+ and what's actually going on deep inside on the biological level,
1865
+
1866
+ 31:00.040 --> 31:04.040
1867
+ on the neuroscience level, or some kind of cognitive level.
1868
+
1869
+ 31:04.040 --> 31:06.040
1870
+ Yeah, yeah.
1871
+
1872
+ 31:06.040 --> 31:08.040
1873
+ So no easy questions here.
1874
+
1875
+ 31:08.040 --> 31:11.040
1876
+ Yeah, I'm sure there's no definitive answer,
1877
+
1878
+ 31:11.040 --> 31:16.040
1879
+ but what's your sense, how far can we get by just looking at the face?
1880
+
1881
+ 31:16.040 --> 31:18.040
1882
+ We're very limited when we just look at the face,
1883
+
1884
+ 31:18.040 --> 31:22.040
1885
+ but we can get further than most people think we can get.
1886
+
1887
+ 31:22.040 --> 31:26.040
1888
+ People think, hey, I have a great poker face,
1889
+
1890
+ 31:26.040 --> 31:28.040
1891
+ therefore all you're ever going to get from me is neutral.
1892
+
1893
+ 31:28.040 --> 31:30.040
1894
+ Well, that's naive.
1895
+
1896
+ 31:30.040 --> 31:35.040
1897
+ We can read with the ordinary camera on your laptop or on your phone.
1898
+
1899
+ 31:35.040 --> 31:39.040
1900
+ We can read from a neutral face if your heart is racing.
1901
+
1902
+ 31:39.040 --> 31:45.040
1903
+ We can read from a neutral face if your breathing is becoming irregular
1904
+
1905
+ 31:45.040 --> 31:47.040
1906
+ and showing signs of stress.
1907
+
1908
+ 31:47.040 --> 31:53.040
1909
+ We can read under some conditions that maybe I won't give you details on,
1910
+
1911
+ 31:53.040 --> 31:57.040
1912
+ how your heart rate variability power is changing.
1913
+
1914
+ 31:57.040 --> 32:03.040
1915
+ That could be a sign of stress even when your heart rate is not necessarily accelerating.
1916
+
1917
+ 32:03.040 --> 32:06.040
1918
+ Sorry, from physiosensors or from the face?
1919
+
1920
+ 32:06.040 --> 32:09.040
1921
+ From the color changes that you cannot even see,
1922
+
1923
+ 32:09.040 --> 32:11.040
1924
+ but the camera can see.
1925
+
1926
+ 32:11.040 --> 32:13.040
1927
+ That's amazing.
1928
+
1929
+ 32:13.040 --> 32:15.040
1930
+ So you can get a lot of signal.
1931
+
1932
+ 32:15.040 --> 32:18.040
1933
+ So we get things people can't see using a regular camera,
1934
+
1935
+ 32:18.040 --> 32:22.040
1936
+ and from that we can tell things about your stress.
1937
+
1938
+ 32:22.040 --> 32:25.040
1939
+ So if you were just sitting there with a blank face
1940
+
1941
+ 32:25.040 --> 32:30.040
1942
+ thinking nobody can read my emotion, well, you're wrong.
1943
+
1944
+ 32:30.040 --> 32:34.040
1945
+ So that's really interesting, but that's from visual information from the face.
1946
+
1947
+ 32:34.040 --> 32:39.040
1948
+ That's almost like cheating your way to the physiological state of the body
1949
+
1950
+ 32:39.040 --> 32:43.040
1951
+ by being very clever with what you can do with vision.
1952
+
1953
+ 32:43.040 --> 32:44.040
1954
+ With signal processing.
1955
+
1956
+ 32:44.040 --> 32:49.040
1957
+ So that's really impressive, but if you just look at the stuff we humans can see,
1958
+
1959
+ 32:49.040 --> 32:54.040
1960
+ the smile, the smirks, the subtle, all the facial actions.
1961
+
1962
+ 32:54.040 --> 32:57.040
1963
+ So then you can hide that on your face for a limited amount of time.
1964
+
1965
+ 32:57.040 --> 33:01.040
1966
+ Now, if you're just going in for a brief interview and you're hiding it,
1967
+
1968
+ 33:01.040 --> 33:03.040
1969
+ that's pretty easy for most people.
1970
+
1971
+ 33:03.040 --> 33:08.040
1972
+ If you are, however, surveilled constantly everywhere you go,
1973
+
1974
+ 33:08.040 --> 33:13.040
1975
+ then it's going to say, gee, you know, Lex used to smile a lot,
1976
+
1977
+ 33:13.040 --> 33:15.040
1978
+ and now I'm not seeing so many smiles.
1979
+
1980
+ 33:15.040 --> 33:22.040
1981
+ And Roz used to, you know, laugh a lot and smile a lot very spontaneously.
1982
+
1983
+ 33:22.040 --> 33:26.040
1984
+ And now I'm only seeing these not so spontaneous looking smiles
1985
+
1986
+ 33:26.040 --> 33:28.040
1987
+ and only when she's asked these questions.
1988
+
1989
+ 33:28.040 --> 33:32.040
1990
+ You know, that's probably not getting enough sleep.
1991
+
1992
+ 33:32.040 --> 33:35.040
1993
+ We could look at that too.
1994
+
1995
+ 33:35.040 --> 33:37.040
1996
+ So now I have to be a little careful too.
1997
+
1998
+ 33:37.040 --> 33:42.040
1999
+ When I say we, you think we can't read your emotion and we can, it's not that binary.
2000
+
2001
+ 33:42.040 --> 33:48.040
2002
+ What we're reading is more some physiological changes that relate to your activation.
2003
+
2004
+ 33:48.040 --> 33:52.040
2005
+ Now, that doesn't mean that we know everything about how you feel.
2006
+
2007
+ 33:52.040 --> 33:54.040
2008
+ In fact, we still know very little about how you feel.
2009
+
2010
+ 33:54.040 --> 33:56.040
2011
+ Your thoughts are still private.
2012
+
2013
+ 33:56.040 --> 34:00.040
2014
+ Your nuanced feelings are still completely private.
2015
+
2016
+ 34:00.040 --> 34:02.040
2017
+ We can't read any of that.
2018
+
2019
+ 34:02.040 --> 34:06.040
2020
+ So there's some relief that we can't read that.
2021
+
2022
+ 34:06.040 --> 34:09.040
2023
+ Even brain imaging can't read that.
2024
+
2025
+ 34:09.040 --> 34:11.040
2026
+ Wearables can't read that.
2027
+
2028
+ 34:11.040 --> 34:18.040
2029
+ However, as we read your body state changes and we know what's going on in your environment,
2030
+
2031
+ 34:18.040 --> 34:21.040
2032
+ and we look at patterns of those over time,
2033
+
2034
+ 34:21.040 --> 34:27.040
2035
+ we can start to make some inferences about what you might be feeling.
2036
+
2037
+ 34:27.040 --> 34:31.040
2038
+ And that is where it's not just the momentary feeling,
2039
+
2040
+ 34:31.040 --> 34:34.040
2041
+ but it's more your stance toward things.
2042
+
2043
+ 34:34.040 --> 34:41.040
2044
+ And that could actually be a little bit more scary with certain kinds of governmental
2045
+
2046
+ 34:41.040 --> 34:48.040
2047
+ control of free people who want to know more about are you on their team or are you not.
2048
+
2049
+ 34:48.040 --> 34:51.040
2050
+ And getting that information through over time.
2051
+
2052
+ 34:51.040 --> 34:54.040
2053
+ So you're saying there's a lot of signal by looking at the change over time.
2054
+
2055
+ 34:54.040 --> 35:00.040
2056
+ So you've done a lot of exciting work both in computer vision and physiological sensing, sounds like wearables.
2057
+
2058
+ 35:00.040 --> 35:03.040
2059
+ What do you think is the best modality for,
2060
+
2061
+ 35:03.040 --> 35:08.040
2062
+ what's the best window into the emotional soul?
2063
+
2064
+ 35:08.040 --> 35:10.040
2065
+ Is it the face? Is it the voice?
2066
+
2067
+ 35:10.040 --> 35:12.040
2068
+ Depends what you want to know.
2069
+
2070
+ 35:12.040 --> 35:14.040
2071
+ It depends what you want to know.
2072
+
2073
+ 35:14.040 --> 35:16.040
2074
+ Everything is informative.
2075
+
2076
+ 35:16.040 --> 35:18.040
2077
+ Everything we do is informative.
2078
+
2079
+ 35:18.040 --> 35:20.040
2080
+ So for health and well being and things like that,
2081
+
2082
+ 35:20.040 --> 35:29.040
2083
+ do you find the wearable, measuring physiological signals is the best for health based stuff?
2084
+
2085
+ 35:29.040 --> 35:35.040
2086
+ So here I'm going to answer empirically with data and studies we've been doing.
2087
+
2088
+ 35:35.040 --> 35:37.040
2089
+ We've been doing studies now.
2090
+
2091
+ 35:37.040 --> 35:40.040
2092
+ These are currently running with lots of different kinds of people,
2093
+
2094
+ 35:40.040 --> 35:44.040
2095
+ but where we've published data, and I can speak publicly to it,
2096
+
2097
+ 35:44.040 --> 35:47.040
2098
+ the data are limited right now to New England college students.
2099
+
2100
+ 35:47.040 --> 35:50.040
2101
+ So that's a small group.
2102
+
2103
+ 35:50.040 --> 35:52.040
2104
+ Among New England college students,
2105
+
2106
+ 35:52.040 --> 36:01.040
2107
+ when they are wearing a wearable like the Empatica Embrace here that's measuring skin conductance, movement, temperature,
2108
+
2109
+ 36:01.040 --> 36:10.040
2110
+ and when they are using a smartphone that is collecting their time of day of when they're texting,
2111
+
2112
+ 36:10.040 --> 36:17.040
2113
+ who they're texting, their movement around it, their GPS, the weather information based upon their location,
2114
+
2115
+ 36:17.040 --> 36:22.040
2116
+ and when it's using machine learning and putting all of that together and looking not just at right now,
2117
+
2118
+ 36:22.040 --> 36:28.040
2119
+ but looking at your rhythm of behaviors over about a week.
2120
+
2121
+ 36:28.040 --> 36:38.040
2122
+ When we look at that, we are very accurate at forecasting tomorrow's stress, mood, happy or sad, and health.
2123
+
2124
+ 36:38.040 --> 36:43.040
2125
+ And when we look at which pieces of that are most useful,
2126
+
2127
+ 36:43.040 --> 36:48.040
2128
+ first of all, if you have all the pieces, you get the best results.
2129
+
2130
+ 36:48.040 --> 36:53.040
2131
+ If you have only the wearable, you get the next best results.
2132
+
2133
+ 36:53.040 --> 37:00.040
2134
+ And that's still better than 80% accurate at forecasting tomorrow's levels.
2135
+
2136
+ 37:00.040 --> 37:12.040
2137
+ Isn't that exciting because the wearable stuff with physiological information, it feels like it violates privacy less than the noncontact face based methods.
2138
+
2139
+ 37:12.040 --> 37:14.040
2140
+ Yeah, it's interesting.
2141
+
2142
+ 37:14.040 --> 37:18.040
2143
+ I think what people sometimes don't, you know, it's fine.
2144
+
2145
+ 37:18.040 --> 37:22.040
2146
+ The early days people would say, oh, wearing something or giving blood is invasive, right?
2147
+
2148
+ 37:22.040 --> 37:26.040
2149
+ Whereas a camera is less invasive because it's not touching you.
2150
+
2151
+ 37:26.040 --> 37:33.040
2152
+ I think on the contrary, the things that are not touching you are maybe the scariest because you don't know when they're on or off.
2153
+
2154
+ 37:33.040 --> 37:39.040
2155
+ And you don't know when, and you don't know who's behind it, right?
2156
+
2157
+ 37:39.040 --> 37:52.040
2158
+ A wearable, depending upon what's happening to the data on it, if it's just stored locally or if it's streaming and what it is being attached to.
2159
+
2160
+ 37:52.040 --> 37:59.040
2161
+ In a sense, you have the most control over it because it's also very easy to just take it off, right?
2162
+
2163
+ 37:59.040 --> 38:01.040
2164
+ Now it's not sensing me.
2165
+
2166
+ 38:01.040 --> 38:07.040
2167
+ So if I'm uncomfortable with what it's sensing, now I'm free, right?
2168
+
2169
+ 38:07.040 --> 38:13.040
2170
+ If I'm comfortable with what it's sensing, then, and I happen to know everything about this one and what it's doing with it.
2171
+
2172
+ 38:13.040 --> 38:15.040
2173
+ So I'm quite comfortable with it.
2174
+
2175
+ 38:15.040 --> 38:20.040
2176
+ Then I'm, you know, I have control, I'm comfortable.
2177
+
2178
+ 38:20.040 --> 38:26.040
2179
+ Control is one of the biggest factors for an individual in reducing their stress.
2180
+
2181
+ 38:26.040 --> 38:32.040
2182
+ If I have control over it, if I know all there is to know about it, then my stress is a lot lower.
2183
+
2184
+ 38:32.040 --> 38:38.040
2185
+ And I'm making an informed choice about whether to wear it or not, or when to wear it or not.
2186
+
2187
+ 38:38.040 --> 38:40.040
2188
+ I want to wear it sometimes, maybe not others.
2189
+
2190
+ 38:40.040 --> 38:43.040
2191
+ Right. So that control, yeah, I'm with you.
2192
+
2193
+ 38:43.040 --> 38:49.040
2194
+ That control, even if, yeah, the ability to turn it off is a really empowering thing.
2195
+
2196
+ 38:49.040 --> 38:50.040
2197
+ It's huge.
2198
+
2199
+ 38:50.040 --> 39:00.040
2200
+ And we need to maybe, you know, if there's regulations, maybe that's number one to protect: people's ability to opt out as easily as to opt in.
2201
+
2202
+ 39:00.040 --> 39:01.040
2203
+ Right.
2204
+
2205
+ 39:01.040 --> 39:04.040
2206
+ So you've studied a bit of neuroscience as well.
2207
+
2208
+ 39:04.040 --> 39:16.040
2209
+ How have looking at our own minds, sort of the biological stuff or the neurobiological, the neuroscience look at the signals in our brain,
2210
+
2211
+ 39:16.040 --> 39:20.040
2212
+ helped you understand the problem and the approach of effective computing.
2213
+
2214
+ 39:20.040 --> 39:28.040
2215
+ So originally I was a computer architect and I was building hardware and computer designs and I wanted to build ones that work like the brain.
2216
+
2217
+ 39:28.040 --> 39:33.040
2218
+ So I've been studying the brain as long as I've been studying how to build computers.
2219
+
2220
+ 39:33.040 --> 39:35.040
2221
+ Have you figured out anything yet?
2222
+
2223
+ 39:35.040 --> 39:37.040
2224
+ Very little.
2225
+
2226
+ 39:37.040 --> 39:39.040
2227
+ It's so amazing.
2228
+
2229
+ 39:39.040 --> 39:46.040
2230
+ You know, they used to think like, oh, if you remove this chunk of the brain and you find this function goes away, well, that's the part of the brain that did it.
2231
+
2232
+ 39:46.040 --> 39:53.040
2233
+ And then later they realize if you remove this other chunk of the brain, that function comes back and oh no, we really don't understand it.
2234
+
2235
+ 39:53.040 --> 40:02.040
2236
+ Brains are so interesting and changing all the time and able to change in ways that will probably continue to surprise us.
2237
+
2238
+ 40:02.040 --> 40:14.040
2239
+ When we were measuring stress, you may know the story where we found an unusually big skin conductance pattern on one wrist in one of our kids with autism.
2240
+
2241
+ 40:14.040 --> 40:20.040
2242
+ And in trying to figure out how on earth you could be stressed on one wrist and not the other, like how can you get sweaty on one wrist, right?
2243
+
2244
+ 40:20.040 --> 40:26.040
2245
+ When you get stressed with that sympathetic fight or flight response, like you kind of should sweat more in some places than others,
2246
+
2247
+ 40:26.040 --> 40:28.040
2248
+ but not more on one wrist than the other.
2249
+
2250
+ 40:28.040 --> 40:31.040
2251
+ That didn't make any sense.
2252
+
2253
+ 40:31.040 --> 40:37.040
2254
+ We learned that what had actually happened was a part of his brain had unusual electrical activity.
2255
+
2256
+ 40:37.040 --> 40:44.040
2257
+ And that caused an unusually large sweat response on one wrist and not the other.
2258
+
2259
+ 40:44.040 --> 40:49.040
2260
+ And since then we've learned that seizures cause this unusual electrical activity.
2261
+
2262
+ 40:49.040 --> 40:58.040
2263
+ And depending where the seizure is, if it's in one place and it's staying there, you can have a big electrical response we can pick up with a wearable at one part of the body.
2264
+
2265
+ 40:58.040 --> 41:07.040
2266
+ You can also have a seizure that spreads over the whole brain, a generalized grand mal seizure, and that response spreads and we can pick it up pretty much anywhere.
2267
+
2268
+ 41:07.040 --> 41:13.040
2269
+ As we learned this and then later built Embrace that's now FDA cleared for seizure detection.
2270
+
2271
+ 41:13.040 --> 41:23.040
2272
+ We have also built relationships with some of the most amazing doctors in the world who not only help people with unusual brain activity or epilepsy,
2273
+
2274
+ 41:23.040 --> 41:35.040
2275
+ but some of them are also surgeons and they're going in and they're implanting electrodes, not just to momentarily read the strange patterns of brain activity that we'd like to see return to normal,
2276
+
2277
+ 41:35.040 --> 41:42.040
2278
+ but also to read out continuously what's happening in some of these deep regions of the brain during most of life when these patients are not seizing.
2279
+
2280
+ 41:42.040 --> 41:45.040
2281
+ Most of the time they're not seizing. Most of the time they're fine.
2282
+
2283
+ 41:45.040 --> 41:58.040
2284
+ And so we are now working on mapping those deep brain regions that you can't even usually get with EEG scalp electrodes because the changes deep inside don't reach the surface.
2285
+
2286
+ 41:58.040 --> 42:04.040
2287
+ But interesting when some of those regions are activated, we see a big skin conductance response.
2288
+
2289
+ 42:04.040 --> 42:08.040
2290
+ Who would have thunk it, right? Like nothing here, but something here.
2291
+
2292
+ 42:08.040 --> 42:17.040
2293
+ In fact, right after seizures that we think are the most dangerous ones that precede what's called SUDEP, sudden unexpected death in epilepsy.
2294
+
2295
+ 42:17.040 --> 42:23.040
2296
+ There's a period where the brain waves go flat and it looks like the person's brain has stopped, but it hasn't.
2297
+
2298
+ 42:23.040 --> 42:32.040
2299
+ The activity has gone deep into a region that can make the cortical activity look flat, like a quick shutdown signal here.
2300
+
2301
+ 42:32.040 --> 42:38.040
2302
+ It can unfortunately cause breathing to stop if it progresses long enough.
2303
+
2304
+ 42:38.040 --> 42:43.040
2305
+ Before that happens, we see a big skin conductance response in the data that we have.
2306
+
2307
+ 42:43.040 --> 42:46.040
2308
+ The longer this flattening, the bigger our response here.
2309
+
2310
+ 42:46.040 --> 42:52.040
2311
+ So we have been trying to learn, you know, initially like, why are we getting a big response here when there's nothing here?
2312
+
2313
+ 42:52.040 --> 42:55.040
2314
+ Well, it turns out there's something much deeper.
2315
+
2316
+ 42:55.040 --> 43:05.040
2317
+ So we can now go inside the brains of some of these individuals, fabulous people who usually aren't seizing and get this data and start to map it.
2318
+
2319
+ 43:05.040 --> 43:09.040
2320
+ So that's active research that we're doing right now with top medical partners.
2321
+
2322
+ 43:09.040 --> 43:18.040
2323
+ So this wearable sensor that's looking at skin conductance can capture sort of the ripples of the complexity of what's going on in our brain.
2324
+
2325
+ 43:18.040 --> 43:27.040
2326
+ So this little device, you have a hope that you can start to get the signal from the interesting things happening in the brain.
2327
+
2328
+ 43:27.040 --> 43:35.040
2329
+ Yeah, we've already published the strong correlations between the size of this response and the flattening that happens afterwards.
2330
+
2331
+ 43:35.040 --> 43:42.040
2332
+ And unfortunately also in a real SUDEP case where the patient died because the, well, we don't know why.
2333
+
2334
+ 43:42.040 --> 43:48.040
2335
+ We don't know whether, if somebody had been there, it would have definitely prevented it, but we know that most SUDEPs happen when the person's alone.
2336
+
2337
+ 43:48.040 --> 44:01.040
2338
+ And in this case, SUDEP is an acronym, S U D E P, and it stands for the number two cause of years of life lost actually among all neurological disorders.
2339
+
2340
+ 44:01.040 --> 44:06.040
2341
+ Stroke is number one, SUDEP is number two, but most people haven't heard of it.
2342
+
2343
+ 44:06.040 --> 44:11.040
2344
+ Actually, I'll plug my TED talk, it's on the front page of TED right now, that talks about this.
2345
+
2346
+ 44:11.040 --> 44:14.040
2347
+ And we hope to change that.
2348
+
2349
+ 44:14.040 --> 44:21.040
2350
+ I hope everybody who's heard of SIDS and stroke will now hear of SUDEP because we think in most cases it's preventable.
2351
+
2352
+ 44:21.040 --> 44:27.040
2353
+ If people take their meds and aren't alone when they have a seizure, not guaranteed to be preventable.
2354
+
2355
+ 44:27.040 --> 44:31.040
2356
+ There are some exceptions, but we think most cases probably are.
2357
+
2358
+ 44:31.040 --> 44:41.040
2359
+ So you have this Embrace now in the version two wristband, right, for epilepsy management. That's the one that's FDA approved.
2360
+
2361
+ 44:41.040 --> 44:42.040
2362
+ Yes.
2363
+
2364
+ 44:42.040 --> 44:44.040
2365
+ Which is kind of cleared.
2366
+
2367
+ 44:44.040 --> 44:45.040
2368
+ They say.
2369
+
2370
+ 44:45.040 --> 44:46.040
2371
+ Sorry.
2372
+
2373
+ 44:46.040 --> 44:47.040
2374
+ No, it's okay.
2375
+
2376
+ 44:47.040 --> 44:49.040
2377
+ It essentially means it's approved for marketing.
2378
+
2379
+ 44:49.040 --> 44:50.040
2380
+ Got it.
2381
+
2382
+ 44:50.040 --> 44:53.040
2383
+ Just a side note, how difficult is that to do?
2384
+
2385
+ 44:53.040 --> 44:57.040
2386
+ It's essentially getting FDA approval for computer science technology.
2387
+
2388
+ 44:57.040 --> 45:04.040
2389
+ It's so agonizing. It's much harder than publishing multiple papers in top medical journals.
2390
+
2391
+ 45:04.040 --> 45:11.040
2392
+ Yeah, we published peer reviewed, top medical journal, neurology, best results, and that's not good enough for the FDA.
2393
+
2394
+ 45:11.040 --> 45:19.040
2395
+ Is that system, so if we look at the peer review of medical journals, there's flaws, there's strengths, is the FDA approval process?
2396
+
2397
+ 45:19.040 --> 45:22.040
2398
+ How does it compare to the peer review process?
2399
+
2400
+ 45:22.040 --> 45:24.040
2401
+ Does it have its strength?
2402
+
2403
+ 45:24.040 --> 45:26.040
2404
+ I take peer review over FDA any day.
2405
+
2406
+ 45:26.040 --> 45:28.040
2407
+ Is that a good thing? Is that a good thing for FDA?
2408
+
2409
+ 45:28.040 --> 45:32.040
2410
+ You're saying, does it stop some amazing technology from getting through?
2411
+
2412
+ 45:32.040 --> 45:33.040
2413
+ Yeah, it does.
2414
+
2415
+ 45:33.040 --> 45:37.040
2416
+ The FDA performs a very important good role in keeping people safe.
2417
+
2418
+ 45:37.040 --> 45:44.040
2419
+ They keep things, they put you through tons of safety testing and that's wonderful and that's great.
2420
+
2421
+ 45:44.040 --> 45:48.040
2422
+ I'm all in favor of the safety testing.
2423
+
2424
+ 45:48.040 --> 45:54.040
2425
+ Sometimes they put you through additional testing that they don't have to explain why they put you through it.
2426
+
2427
+ 45:54.040 --> 46:00.040
2428
+ You don't understand why you're going through it and it doesn't make sense and that's very frustrating.
2429
+
2430
+ 46:00.040 --> 46:09.040
2431
+ Maybe they have really good reasons and it would do people a service to articulate those reasons.
2432
+
2433
+ 46:09.040 --> 46:10.040
2434
+ Be more transparent.
2435
+
2436
+ 46:10.040 --> 46:12.040
2437
+ Be more transparent.
2438
+
2439
+ 46:12.040 --> 46:27.040
2440
+ As part of Empatica, we have sensors, so what kind of problems can we crack? What kind of things from seizures to autism to, I think I've heard you mention depression.
2441
+
2442
+ 46:27.040 --> 46:35.040
2443
+ What kind of things can we alleviate? Can we detect? What's your hope of how we can make the world a better place with this wearable tech?
2444
+
2445
+ 46:35.040 --> 46:48.040
2446
+ I would really like to see my fellow brilliant researchers step back and say, what are the really hard problems that we don't know how to solve
2447
+
2448
+ 46:48.040 --> 46:57.040
2449
+ that come from people maybe we don't even see in our normal life because they're living in the poorer places, they're stuck on the bus,
2450
+
2451
+ 46:57.040 --> 47:05.040
2452
+ they can't even afford the Uber or the Lyft or the data plan or all these other wonderful things we have that we keep improving on.
2453
+
2454
+ 47:05.040 --> 47:14.040
2455
+ Meanwhile, there's all these folks left behind in the world and they're struggling with horrible diseases, with depression, with epilepsy, with diabetes,
2456
+
2457
+ 47:14.040 --> 47:25.040
2458
+ with just awful stuff that maybe a little more time and attention hanging out with them and learning what are their challenges in life, what are their needs?
2459
+
2460
+ 47:25.040 --> 47:36.040
2461
+ How do we help them have job skills? How do we help them have a hope and a future and a chance to have the great life that so many of us building technology have?
2462
+
2463
+ 47:36.040 --> 47:43.040
2464
+ And then how would that reshape the kinds of AI that we build? How would that reshape the new apps that we build?
2465
+
2466
+ 47:43.040 --> 47:57.040
2467
+ Or maybe we need to focus on how to make things more low cost and green instead of $1,000 phones. I mean, come on. Why can't we be thinking more about things that do more with less for these folks?
2468
+
2469
+ 47:57.040 --> 48:09.040
2470
+ Quality of life is not related to the cost of your phone. It's been shown that above, what, about $75,000 of income, happiness is the same.
2471
+
2472
+ 48:09.040 --> 48:16.040
2473
+ However, I can tell you, you get a lot of happiness from helping other people. You get a lot more than $75,000 buys.
2474
+
2475
+ 48:16.040 --> 48:30.040
2476
+ So how do we connect up the people who have real needs with the people who have the ability to build the future and build the kind of future that truly improves the lives of all the people that are currently being left behind?
2477
+
2478
+ 48:30.040 --> 48:53.040
2479
+ So let me return just briefly to a point, maybe in the movie Her. So do you think if we look farther into the future, you said so much of the benefit from making our technology more empathetic to us human beings would make them better tools, empower us, make our lives better.
2480
+
2481
+ 48:53.040 --> 49:08.040
2482
+ If we look farther into the future, do you think we'll ever create an AI system that we can fall in love with and loves us back on a level that is similar to human to human interaction, like in the movie her or beyond?
2483
+
2484
+ 49:08.040 --> 49:25.040
2485
+ I think we can simulate it in ways that could, you know, sustain engagement for a while. Would it be as good as another person? I don't think so for if you're used to like good people.
2486
+
2487
+ 49:25.040 --> 49:39.040
2488
+ If you've just grown up with nothing but abuse and you can't stand human beings, can we do something that helps you there that gives you something through a machine? Yeah, but that's pretty low bar, right? If you've only encountered pretty awful people.
2489
+
2490
+ 49:39.040 --> 50:00.040
2491
+ If you've encountered wonderful, amazing people, we're nowhere near building anything like that. And I'm, I would not bet on building it. I would bet instead on building the kinds of AI that helps all helps kind of raise all boats that helps all people be better people helps all
2492
+
2493
+ 50:00.040 --> 50:14.040
2494
+ people figure out if they're getting sick tomorrow and helps give them what they need to stay well tomorrow. That's the kind of AI I want to build that improves human lives, not the kind of AI that just walks on the Tonight Show and people go, wow, look how smart that is.
2495
+
2496
+ 50:14.040 --> 50:18.040
2497
+ You know, really, like, and then it goes back in a box, you know.
2498
+
2499
+ 50:18.040 --> 50:40.040
2500
+ So on that point, if we continue looking a little bit into the future, do you think an AI that's empathetic and does improve our lives need to have a physical presence, a body, and even let me cautiously say the C word consciousness and even fear of mortality.
2501
+
2502
+ 50:40.040 --> 50:59.040
2503
+ So some of those human characteristics, do you think it needs to have those aspects? Or can it remain simply machine learning tool that learns from data of behavior that that learns to make us based on previous patterns feel better?
2504
+
2505
+ 50:59.040 --> 51:02.040
2506
+ Or does it need those elements of consciousness?
2507
+
2508
+ 51:02.040 --> 51:13.040
2509
+ It depends on your goals. If you're making a movie, it needs a body, it needs a gorgeous body, it needs to act like it has consciousness, it needs to act like it has emotion, right, because that's what sells.
2510
+
2511
+ 51:13.040 --> 51:17.040
2512
+ That's what's going to get me to show up and enjoy the movie. Okay.
2513
+
2514
+ 51:17.040 --> 51:34.040
2515
+ In real life, does it need all that? Well, if you've read Orson Scott Card, Ender's Game, Speaker for the Dead, you know, it could just be like a little voice in your earring, right, and you could have an intimate relationship and it could get to know you, and it doesn't need to be a robot.
2516
+
2517
+ 51:34.040 --> 51:43.040
2518
+ But that doesn't make as compelling of a movie, right? I mean, we already think it's kind of weird when a guy looks like he's talking to himself on the train, you know, even though it's earbuds.
2519
+
2520
+ 51:43.040 --> 51:57.040
2521
+ So we have these, embodied is more powerful. When you compare interactions with an embodied robot versus a video of a robot versus no robot,
2522
+
2523
+ 51:57.040 --> 52:08.040
2524
+ The robot is more engaging, the robot gets our attention more, the robot when you walk in your house is more likely to get you to remember to do the things that you asked it to do because it's kind of got a physical presence.
2525
+
2526
+ 52:08.040 --> 52:21.040
2527
+ You can avoid it if you don't like it, it could see you're avoiding it. There's a lot of power to being embodied. There will be embodied AIs. They have great power and opportunity and potential.
2528
+
2529
+ 52:21.040 --> 52:32.040
2530
+ There will also be AIs that aren't embodied that just our little software assistants that help us with different things that may get to know things about us.
2531
+
2532
+ 52:32.040 --> 52:42.040
2533
+ Will they be conscious? There will be attempts to program them to make them appear to be conscious. We can already write programs that make it look like, what do you mean?
2534
+
2535
+ 52:42.040 --> 52:52.040
2536
+ Of course, I'm aware that you're there, right? I mean, it's trivial to say stuff like that. It's easy to fool people. But does it actually have conscious experience like we do?
2537
+
2538
+ 52:52.040 --> 53:01.040
2539
+ Nobody has a clue how to do that yet. That seems to be something that is beyond what any of us knows how to build now.
2540
+
2541
+ 53:01.040 --> 53:11.040
2542
+ Will it have to have that? I think you can get pretty far with a lot of stuff without it. Will we accord it rights?
2543
+
2544
+ 53:11.040 --> 53:16.040
2545
+ Well, that's more a political game that it is a question of real consciousness.
2546
+
2547
+ 53:16.040 --> 53:25.040
2548
+ Can you go to jail for turning off Alexa? It's a question for an election maybe a few decades from now.
2549
+
2550
+ 53:25.040 --> 53:34.040
2551
+ Sophia Robot's already been given rights as a citizen in Saudi Arabia, right? Even before women have full rights.
2552
+
2553
+ 53:34.040 --> 53:42.040
2554
+ Then the robot was still put back in the box to be shipped to the next place where it would get a paid appearance, right?
2555
+
2556
+ 53:42.040 --> 53:50.040
2557
+ Yeah, it's dark and almost comedic, if not absurd.
2558
+
2559
+ 53:50.040 --> 54:03.040
2560
+ So I've heard you speak about your journey in finding faith and how you discovered some wisdoms about life and beyond from reading the Bible.
2561
+
2562
+ 54:03.040 --> 54:11.040
2563
+ And I've also heard you say that scientists too often assume that nothing exists beyond what can be currently measured.
2564
+
2565
+ 54:11.040 --> 54:28.040
2566
+ Materialism and scientism. In some sense, this assumption enables the near term scientific method, assuming that we can uncover the mysteries of this world by the mechanisms of measurement that we currently have.
2567
+
2568
+ 54:28.040 --> 54:38.040
2569
+ But we easily forget that we've made this assumption. So what do you think we missed out on by making that assumption?
2570
+
2571
+ 54:38.040 --> 54:49.040
2572
+ It's fine to limit the scientific method to things we can measure and reason about and reproduce. That's fine.
2573
+
2574
+ 54:49.040 --> 54:57.040
2575
+ I think we have to recognize that sometimes we scientists also believe in things that happen historically, you know, like I believe the Holocaust happened.
2576
+
2577
+ 54:57.040 --> 55:11.040
2578
+ I can't prove events from past history scientifically. You prove them with historical evidence, right, with the impact they had on people with eyewitness testimony and things like that.
2579
+
2580
+ 55:11.040 --> 55:21.040
2581
+ So a good thinker recognizes that science is one of many ways to get knowledge. It's not the only way.
2582
+
2583
+ 55:21.040 --> 55:31.040
2584
+ And there's been some really bad philosophy and bad thinking recently called scientism where people say science is the only way to get to truth.
2585
+
2586
+ 55:31.040 --> 55:43.040
2587
+ And it's not. It just isn't. There are other ways that work also, like knowledge of love with someone. You don't prove your love through science, right?
2588
+
2589
+ 55:43.040 --> 56:01.040
2590
+ So history, philosophy, love, a lot of other things in life show us that there's more ways to gain knowledge and truth if you're willing to believe there is such a thing, and I believe there is, than science.
2591
+
2592
+ 56:01.040 --> 56:14.040
2593
+ I am a scientist, however, and in my science, I do limit my science to the things that the scientific method can do. But I recognize that it's myopic to say that that's all there is.
2594
+
2595
+ 56:14.040 --> 56:28.040
2596
+ Right. Just like you listed, there's all the why questions. And really, we know if we're being honest with ourselves, the percent of what we really know is basically zero relative to the full mystery of those.
2597
+
2598
+ 56:28.040 --> 56:34.040
2599
+ Measure theory: a set of measure zero, if I have a finite amount of knowledge, which I do.
2600
+
2601
+ 56:34.040 --> 56:45.040
2602
+ So you said that you believe in truth. So let me ask that old question. What do you think this thing is all about? Life on earth?
2603
+
2604
+ 56:45.040 --> 56:53.040
2605
+ Life, the universe, and everything. I can't top Douglas Adams' 42. My favorite number. By the way, that's my street address.
2606
+
2607
+ 56:53.040 --> 57:00.040
2608
+ My husband and I guessed the exact same number for our house. We got to pick it. And there's a reason we picked 42. Yeah.
2609
+
2610
+ 57:00.040 --> 57:05.040
2611
+ So is it just 42 or do you have other words that you can put around it?
2612
+
2613
+ 57:05.040 --> 57:15.040
2614
+ Well, I think there's a grand adventure and I think this life is a part of it. I think there's a lot more to it than meets the eye and the heart and the mind and the soul here.
2615
+
2616
+ 57:15.040 --> 57:28.040
2617
+ We see but through a glass dimly in this life; we see only a part of all there is to know. If people haven't read the Bible, they should, if they consider themselves educated.
2618
+
2619
+ 57:28.040 --> 57:36.040
2620
+ And you could read Proverbs and find tremendous wisdom in there that cannot be scientifically proven.
2621
+
2622
+ 57:36.040 --> 57:43.040
2623
+ But when you read it, there's something in you like a musician knows when the instruments played right and it's beautiful.
2624
+
2625
+ 57:43.040 --> 57:54.040
2626
+ There's something in you that comes alive and knows that there's a truth there, like your strings are being plucked by the master instead of by me playing when I pluck it.
2627
+
2628
+ 57:54.040 --> 57:57.040
2629
+ But probably when you play, it sounds spectacular.
2630
+
2631
+ 57:57.040 --> 58:11.040
2632
+ And when you encounter those truths, there's something in you that sings and knows that there is more than what I can prove mathematically or program a computer to do.
2633
+
2634
+ 58:11.040 --> 58:19.040
2635
+ Don't get me wrong. The math is gorgeous. The computer programming can be brilliant. It's inspiring, right? We want to do more.
2636
+
2637
+ 58:19.040 --> 58:26.040
2638
+ None of this squashes my desire to do science or to get knowledge through science. I'm not dissing the science at all.
2639
+
2640
+ 58:26.040 --> 58:33.040
2641
+ I grow even more in awe of what the science can do because I'm more in awe of all there is we don't know.
2642
+
2643
+ 58:33.040 --> 58:41.040
2644
+ And really at the heart of science, you have to have a belief that there's truth that there's something greater to be discovered.
2645
+
2646
+ 58:41.040 --> 58:48.040
2647
+ And some scientists may not want to use the faith word, but it's faith that drives us to do science.
2648
+
2649
+ 58:48.040 --> 59:00.040
2650
+ It's faith that there is truth, that there's something to know that we don't know, that it's worth knowing, that it's worth working hard, and that there is meaning, that there is such a thing as meaning,
2651
+
2652
+ 59:00.040 --> 59:07.040
2653
+ which by the way, science can't prove either. We have to kind of start with some assumptions that there's things like truth and meaning.
2654
+
2655
+ 59:07.040 --> 59:14.040
2656
+ And these are really questions philosophers own, right? This is the space of philosophers and theologians, at some level.
2657
+
2658
+ 59:14.040 --> 59:24.040
2659
+ So these are things science, you know... when people claim that science will tell you all truth, there's a name for that.
2660
+
2661
+ 59:24.040 --> 59:29.040
2662
+ It's its own kind of faith. It's scientism and it's very myopic.
2663
+
2664
+ 59:29.040 --> 59:37.040
2665
+ Yeah, there's a much bigger world out there to be explored in ways that science may not, at least for now, allow us to explore.
2666
+
2667
+ 59:37.040 --> 59:45.040
2668
+ Yeah. And there's meaning and purpose and hope and joy and love and all these awesome things that make it all worthwhile too.
2669
+
2670
+ 59:45.040 --> 59:49.040
2671
+ I don't think there's a better way to end it, Roz. Thank you so much for talking today.
2672
+
2673
+ 59:49.040 --> 1:00:11.040
2674
+ Thanks, Lex. What a pleasure. Great questions.
2675
+
vtt/episode_025_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_026_large.vtt ADDED
@@ -0,0 +1,2534 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.700
4
+ The following is a conversation with Sean Carroll.
5
+
6
+ 00:02.700 --> 00:04.900
7
+ He's a theoretical physicist at Caltech
8
+
9
+ 00:04.900 --> 00:08.780
10
+ specializing in quantum mechanics, gravity, and cosmology.
11
+
12
+ 00:08.780 --> 00:11.620
13
+ He's the author of several popular books,
14
+
15
+ 00:11.620 --> 00:15.340
16
+ one on the arrow of time called From Eternity to Here,
17
+
18
+ 00:15.340 --> 00:17.820
19
+ one on the Higgs boson called Particle
20
+
21
+ 00:17.820 --> 00:19.140
22
+ at the End of the Universe,
23
+
24
+ 00:19.140 --> 00:22.540
25
+ and one on science and philosophy called The Big Picture
26
+
27
+ 00:22.540 --> 00:26.340
28
+ on the Origins of Life, Meaning, and the Universe Itself.
29
+
30
+ 00:26.340 --> 00:28.700
31
+ He has an upcoming book on quantum mechanics
32
+
33
+ 00:28.700 --> 00:32.660
34
+ that you can preorder now called Something Deeply Hidden.
35
+
36
+ 00:32.660 --> 00:36.060
37
+ He writes one of my favorite blogs on his website,
38
+
39
+ 00:36.060 --> 00:37.980
40
+ preposterousuniverse.com.
41
+
42
+ 00:37.980 --> 00:40.460
43
+ I recommend clicking on the Greatest Hits link
44
+
45
+ 00:40.460 --> 00:43.340
46
+ that lists accessible, interesting posts
47
+
48
+ 00:43.340 --> 00:45.660
49
+ on the arrow of time, dark matter, dark energy,
50
+
51
+ 00:45.660 --> 00:47.620
52
+ the Big Bang, general relativity,
53
+
54
+ 00:47.620 --> 00:49.580
55
+ string theory, quantum mechanics,
56
+
57
+ 00:49.580 --> 00:53.180
58
+ and the big meta questions about the philosophy of science,
59
+
60
+ 00:53.180 --> 00:57.620
61
+ God, ethics, politics, academia, and much, much more.
62
+
63
+ 00:57.620 --> 01:00.300
64
+ Finally, and perhaps most famously,
65
+
66
+ 01:00.300 --> 01:03.660
67
+ he's the host of a podcast called Mindscape
68
+
69
+ 01:03.660 --> 01:06.940
70
+ that you should subscribe to and support on Patreon.
71
+
72
+ 01:06.940 --> 01:08.820
73
+ Along with The Joe Rogan Experience,
74
+
75
+ 01:08.820 --> 01:10.500
76
+ Sam Harris's Making Sense,
77
+
78
+ 01:10.500 --> 01:13.100
79
+ and Dan Carlin's Hardcore History,
80
+
81
+ 01:13.100 --> 01:15.860
82
+ Sean's Mindscape podcast is one of my favorite ways
83
+
84
+ 01:15.860 --> 01:18.820
85
+ to learn new ideas or explore different perspectives
86
+
87
+ 01:18.820 --> 01:22.140
88
+ and ideas that I thought I understood.
89
+
90
+ 01:22.140 --> 01:24.660
91
+ It was truly an honor to meet
92
+
93
+ 01:24.660 --> 01:27.240
94
+ and spend a couple hours with Sean.
95
+
96
+ 01:27.240 --> 01:28.980
97
+ It's a bit heartbreaking to say
98
+
99
+ 01:28.980 --> 01:30.540
100
+ that for the first time ever,
101
+
102
+ 01:30.540 --> 01:32.500
103
+ the audio recorder for this podcast
104
+
105
+ 01:32.500 --> 01:34.940
106
+ died in the middle of our conversation.
107
+
108
+ 01:34.940 --> 01:36.320
109
+ There's technical reasons for this,
110
+
111
+ 01:36.320 --> 01:38.380
112
+ having to do with phantom power
113
+
114
+ 01:38.380 --> 01:41.060
115
+ that I now understand and will avoid.
116
+
117
+ 01:41.060 --> 01:44.220
118
+ It took me one hour to notice and fix the problem.
119
+
120
+ 01:44.220 --> 01:48.340
121
+ So, much like the universe is 68% dark energy,
122
+
123
+ 01:48.340 --> 01:51.340
124
+ roughly the same amount from this conversation was lost,
125
+
126
+ 01:51.340 --> 01:54.220
127
+ except in the memories of the two people involved
128
+
129
+ 01:54.220 --> 01:56.320
130
+ and in my notes.
131
+
132
+ 01:56.320 --> 01:59.940
133
+ I'm sure we'll talk again and continue this conversation
134
+
135
+ 01:59.940 --> 02:02.420
136
+ on this podcast or on Sean's.
137
+
138
+ 02:02.420 --> 02:05.300
139
+ And of course, I look forward to it.
140
+
141
+ 02:05.300 --> 02:07.820
142
+ This is the Artificial Intelligence podcast.
143
+
144
+ 02:07.820 --> 02:11.060
145
+ If you enjoy it, subscribe on YouTube, iTunes,
146
+
147
+ 02:11.060 --> 02:12.520
148
+ support it on Patreon,
149
+
150
+ 02:12.520 --> 02:16.660
151
+ or simply connect with me on Twitter at Lex Fridman.
152
+
153
+ 02:16.660 --> 02:21.380
154
+ And now, here's my conversation with Sean Carroll.
155
+
156
+ 02:21.380 --> 02:23.540
157
+ What do you think is more interesting and impactful,
158
+
159
+ 02:23.540 --> 02:26.860
160
+ understanding how the universe works at a fundamental level
161
+
162
+ 02:26.860 --> 02:29.180
163
+ or understanding how the human mind works?
164
+
165
+ 02:29.180 --> 02:31.960
166
+ You know, of course this is a crazy,
167
+
168
+ 02:31.960 --> 02:33.940
169
+ meaningless, unanswerable question in some sense,
170
+
171
+ 02:33.940 --> 02:35.140
172
+ because they're both very interesting
173
+
174
+ 02:35.140 --> 02:37.500
175
+ and there's no absolute scale of interestingness
176
+
177
+ 02:37.500 --> 02:39.180
178
+ that we can rate them on.
179
+
180
+ 02:39.180 --> 02:41.140
181
+ There's a glib answer that says the human brain
182
+
183
+ 02:41.140 --> 02:43.060
184
+ is part of the universe, right?
185
+
186
+ 02:43.060 --> 02:44.420
187
+ And therefore, understanding the universe
188
+
189
+ 02:44.420 --> 02:47.020
190
+ is more fundamental than understanding the human brain.
191
+
192
+ 02:47.020 --> 02:49.580
193
+ But do you really believe that once we understand
194
+
195
+ 02:49.580 --> 02:51.500
196
+ the fundamental way the universe works
197
+
198
+ 02:51.500 --> 02:53.740
199
+ at the particle level, the forces,
200
+
201
+ 02:53.740 --> 02:55.820
202
+ we would be able to understand how the mind works?
203
+
204
+ 02:55.820 --> 02:56.660
205
+ No, certainly not.
206
+
207
+ 02:56.660 --> 02:58.740
208
+ We cannot understand how ice cream works
209
+
210
+ 02:58.740 --> 03:01.060
211
+ just from understanding how particles work, right?
212
+
213
+ 03:01.060 --> 03:02.740
214
+ So I'm a big believer in emergence.
215
+
216
+ 03:02.740 --> 03:05.300
217
+ I'm a big believer that there are different ways
218
+
219
+ 03:05.300 --> 03:06.660
220
+ of talking about the world
221
+
222
+ 03:07.900 --> 03:11.180
223
+ beyond just the most fundamental microscopic one.
224
+
225
+ 03:11.180 --> 03:13.860
226
+ You know, when we talk about tables and chairs
227
+
228
+ 03:13.860 --> 03:15.120
229
+ and planets and people,
230
+
231
+ 03:15.120 --> 03:17.300
232
+ we're not talking the language of particle physics
233
+
234
+ 03:17.300 --> 03:18.380
235
+ and cosmology.
236
+
237
+ 03:18.380 --> 03:20.860
238
+ So, but understanding the universe,
239
+
240
+ 03:20.860 --> 03:24.060
241
+ you didn't say just at the most fundamental level, right?
242
+
243
+ 03:24.060 --> 03:26.740
244
+ So understanding the universe at all levels
245
+
246
+ 03:26.740 --> 03:28.200
247
+ is part of that.
248
+
249
+ 03:28.200 --> 03:29.940
250
+ I do think, you know, to be a little bit more fair
251
+
252
+ 03:29.940 --> 03:33.980
253
+ to the question, there probably are general principles
254
+
255
+ 03:33.980 --> 03:38.500
256
+ of complexity, biology, information processing,
257
+
258
+ 03:38.500 --> 03:41.820
259
+ memory, knowledge, creativity
260
+
261
+ 03:41.820 --> 03:45.620
262
+ that go beyond just the human brain, right?
263
+
264
+ 03:45.620 --> 03:47.800
265
+ And maybe one could count understanding those
266
+
267
+ 03:47.800 --> 03:49.140
268
+ as part of understanding the universe.
269
+
270
+ 03:49.140 --> 03:50.480
271
+ The human brain, as far as we know,
272
+
273
+ 03:50.480 --> 03:54.300
274
+ is the most complex thing in the universe.
275
+
276
+ 03:54.300 --> 03:57.420
277
+ So there's, it's certainly absurd to think
278
+
279
+ 03:57.420 --> 03:58.860
280
+ that by understanding the fundamental laws
281
+
282
+ 03:58.860 --> 04:00.340
283
+ of particle physics,
284
+
285
+ 04:00.340 --> 04:02.860
286
+ you get any direct insight on how the brain works.
287
+
288
+ 04:02.860 --> 04:05.940
289
+ But then there's this step from the fundamentals
290
+
291
+ 04:05.940 --> 04:08.700
292
+ of particle physics to information processing,
293
+
294
+ 04:08.700 --> 04:10.820
295
+ which a lot of physicists and philosophers
296
+
297
+ 04:10.820 --> 04:12.460
298
+ may be a little bit carelessly take
299
+
300
+ 04:12.460 --> 04:14.620
301
+ when they talk about artificial intelligence.
302
+
303
+ 04:14.620 --> 04:18.020
304
+ Do you think of the universe
305
+
306
+ 04:18.020 --> 04:21.300
307
+ as a kind of a computational device?
308
+
309
+ 04:21.300 --> 04:24.140
310
+ No, to be like, the honest answer there is no.
311
+
312
+ 04:24.140 --> 04:26.300
313
+ There's a sense in which the universe
314
+
315
+ 04:26.300 --> 04:29.140
316
+ processes information, clearly.
317
+
318
+ 04:29.140 --> 04:30.700
319
+ There's a sense in which the universe
320
+
321
+ 04:30.700 --> 04:33.880
322
+ is like a computer, clearly.
323
+
324
+ 04:33.880 --> 04:36.500
325
+ But in some sense, I think,
326
+
327
+ 04:36.500 --> 04:38.540
328
+ I tried to say this once on my blog
329
+
330
+ 04:38.540 --> 04:39.360
331
+ and no one agreed with me,
332
+
333
+ 04:39.360 --> 04:42.360
334
+ but the universe is more like a computation
335
+
336
+ 04:42.360 --> 04:45.060
337
+ than a computer because the universe happens once.
338
+
339
+ 04:45.060 --> 04:46.900
340
+ A computer is a general purpose machine, right?
341
+
342
+ 04:46.900 --> 04:48.700
343
+ That you can ask it different questions,
344
+
345
+ 04:48.700 --> 04:50.140
346
+ even a pocket calculator, right?
347
+
348
+ 04:50.140 --> 04:52.980
349
+ And it's set up to answer certain kinds of questions.
350
+
351
+ 04:52.980 --> 04:54.340
352
+ The universe isn't that.
353
+
354
+ 04:54.340 --> 04:57.360
355
+ So information processing happens in the universe,
356
+
357
+ 04:57.360 --> 04:59.220
358
+ but it's not what the universe is.
359
+
360
+ 04:59.220 --> 05:01.580
361
+ And I know your MIT colleague, Seth Lloyd,
362
+
363
+ 05:01.580 --> 05:03.820
364
+ feels very differently about this, right?
365
+
366
+ 05:03.820 --> 05:07.220
367
+ Well, you're thinking of the universe as a closed system.
368
+
369
+ 05:07.220 --> 05:08.060
370
+ I am.
371
+
372
+ 05:08.060 --> 05:11.780
373
+ So what makes a computer more like a PC,
374
+
375
+ 05:12.980 --> 05:15.500
376
+ like a computing machine is that there's a human
377
+
378
+ 05:15.500 --> 05:19.100
379
+ that every once in a while comes up to it and moves the mouse around.
380
+
381
+ 05:19.100 --> 05:19.940
382
+ So input.
383
+
384
+ 05:19.940 --> 05:20.760
385
+ Gives it input.
386
+
387
+ 05:20.760 --> 05:21.600
388
+ Gives it input.
389
+
390
+ 05:23.500 --> 05:26.300
391
+ And that's why you're saying it's just a computation,
392
+
393
+ 05:26.300 --> 05:29.260
394
+ a deterministic thing that's just unrolling.
395
+
396
+ 05:29.260 --> 05:32.220
397
+ But the immense complexity of it
398
+
399
+ 05:32.220 --> 05:34.420
400
+ is nevertheless like processing.
401
+
402
+ 05:34.420 --> 05:39.420
403
+ There's a state and then it changes with good rules.
404
+
405
+ 05:40.140 --> 05:41.660
406
+ And there's a sense for a lot of people
407
+
408
+ 05:41.660 --> 05:44.420
409
+ that if the brain operates,
410
+
411
+ 05:44.420 --> 05:46.460
412
+ the human brain operates within that world,
413
+
414
+ 05:46.460 --> 05:49.340
415
+ then it's simply just a small subset of that.
416
+
417
+ 05:49.340 --> 05:52.500
418
+ And so there's no reason we can't build
419
+
420
+ 05:52.500 --> 05:55.560
421
+ arbitrarily great intelligences.
422
+
423
+ 05:55.560 --> 05:56.400
424
+ Yeah.
425
+
426
+ 05:56.400 --> 05:58.660
427
+ Do you think of intelligence in this way?
428
+
429
+ 05:58.660 --> 05:59.580
430
+ Intelligence is tricky.
431
+
432
+ 05:59.580 --> 06:01.660
433
+ I don't have a definition of it offhand.
434
+
435
+ 06:01.660 --> 06:05.460
436
+ So I remember this panel discussion that I saw on YouTube.
437
+
438
+ 06:05.460 --> 06:07.620
439
+ I wasn't there, but Seth Lloyd was on the panel.
440
+
441
+ 06:07.620 --> 06:10.540
442
+ And so was Martin Rees, the famous astrophysicist.
443
+
444
+ 06:10.540 --> 06:13.780
445
+ And Seth gave his shtick for why the universe is a computer
446
+
447
+ 06:13.780 --> 06:14.820
448
+ and explained this.
449
+
450
+ 06:14.820 --> 06:19.140
451
+ And Martin Rees said, so what is not a computer?
452
+
453
+ 06:19.140 --> 06:22.000
454
+ And Seth was like, oh, that's a good question.
455
+
456
+ 06:22.000 --> 06:22.840
457
+ I'm not sure.
458
+
459
+ 06:22.840 --> 06:24.960
460
+ Because if you have a sufficiently broad definition
461
+
462
+ 06:24.960 --> 06:28.360
463
+ of what a computer is, then everything is, right?
464
+
465
+ 06:28.360 --> 06:32.140
466
+ And the simile or the analogy gains force
467
+
468
+ 06:32.140 --> 06:34.380
469
+ when it excludes some things.
470
+
471
+ 06:34.380 --> 06:36.260
472
+ You know, is the moon going around the earth
473
+
474
+ 06:36.260 --> 06:38.620
475
+ performing a computation?
476
+
477
+ 06:38.620 --> 06:41.320
478
+ I can come up with definitions in which the answer is yes,
479
+
480
+ 06:41.320 --> 06:43.820
481
+ but it's not a very useful computation.
482
+
483
+ 06:43.820 --> 06:46.140
484
+ I think that it's absolutely helpful
485
+
486
+ 06:46.140 --> 06:49.620
487
+ to think about the universe in certain situations,
488
+
489
+ 06:49.620 --> 06:53.020
490
+ certain contexts, as an information processing device.
491
+
492
+ 06:53.020 --> 06:54.860
493
+ I'm even guilty of writing a paper
494
+
495
+ 06:54.860 --> 06:56.820
496
+ called Quantum Circuit Cosmology,
497
+
498
+ 06:56.820 --> 06:59.260
499
+ where we modeled the whole universe as a quantum circuit.
500
+
501
+ 06:59.260 --> 07:00.100
502
+ As a circuit.
503
+
504
+ 07:00.100 --> 07:01.340
505
+ As a circuit, yeah.
506
+
507
+ 07:01.340 --> 07:02.860
508
+ With qubits kind of thing?
509
+
510
+ 07:02.860 --> 07:05.040
511
+ With qubits basically, right, yeah.
512
+
513
+ 07:05.040 --> 07:07.440
514
+ So, and qubits becoming more and more entangled.
515
+
516
+ 07:07.440 --> 07:09.660
517
+ So do we wanna digress a little bit?
518
+
519
+ 07:09.660 --> 07:10.500
520
+ Let's do it.
521
+
522
+ 07:10.500 --> 07:11.340
523
+ It's kind of fun.
524
+
525
+ 07:11.340 --> 07:13.700
526
+ So here's a mystery about the universe
527
+
528
+ 07:13.700 --> 07:16.880
529
+ that is so deep and profound that nobody talks about it.
530
+
531
+ 07:16.880 --> 07:19.080
532
+ Space expands, right?
533
+
534
+ 07:19.080 --> 07:21.940
535
+ And we talk about, in a certain region of space,
536
+
537
+ 07:21.940 --> 07:23.620
538
+ a certain number of degrees of freedom,
539
+
540
+ 07:23.620 --> 07:25.540
541
+ a certain number of ways that the quantum fields
542
+
543
+ 07:25.540 --> 07:28.800
544
+ and the particles in that region can arrange themselves.
545
+
546
+ 07:28.800 --> 07:32.220
547
+ That number of degrees of freedom in a region of space
548
+
549
+ 07:32.220 --> 07:33.820
550
+ is arguably finite.
551
+
552
+ 07:33.820 --> 07:36.660
553
+ We actually don't know how many there are,
554
+
555
+ 07:36.660 --> 07:37.820
556
+ but there's a very good argument
557
+
558
+ 07:37.820 --> 07:39.420
559
+ that says it's a finite number.
560
+
561
+ 07:39.420 --> 07:43.540
562
+ So as the universe expands and space gets bigger,
563
+
564
+ 07:44.900 --> 07:46.780
565
+ are there more degrees of freedom?
566
+
567
+ 07:46.780 --> 07:48.540
568
+ If it's an infinite number, it doesn't really matter.
569
+
570
+ 07:48.540 --> 07:49.980
571
+ Infinity times two is still infinity.
572
+
573
+ 07:49.980 --> 07:53.480
574
+ But if it's a finite number, then there's more space,
575
+
576
+ 07:53.480 --> 07:54.420
577
+ so there's more degrees of freedom.
578
+
579
+ 07:54.420 --> 07:55.740
580
+ So where did they come from?
581
+
582
+ 07:55.740 --> 07:58.020
583
+ That would mean the universe is not a closed system.
584
+
585
+ 07:58.020 --> 08:01.500
586
+ There's more degrees of freedom popping into existence.
587
+
588
+ 08:01.500 --> 08:03.460
589
+ So what we suggested was
590
+
591
+ 08:03.460 --> 08:05.340
592
+ that there are more degrees of freedom,
593
+
594
+ 08:05.340 --> 08:07.980
595
+ and it's not that they're not there to start,
596
+
597
+ 08:07.980 --> 08:10.860
598
+ but they're not entangled to start.
599
+
600
+ 08:10.860 --> 08:12.820
601
+ So the universe that you and I know of,
602
+
603
+ 08:12.820 --> 08:15.440
604
+ the three dimensions around us that we see,
605
+
606
+ 08:15.440 --> 08:18.100
607
+ we said those are the entangled degrees of freedom
608
+
609
+ 08:18.100 --> 08:19.620
610
+ making up space time.
611
+
612
+ 08:19.620 --> 08:20.920
613
+ And as the universe expands,
614
+
615
+ 08:20.920 --> 08:25.180
616
+ there are a whole bunch of qubits in their zero state
617
+
618
+ 08:25.180 --> 08:28.140
619
+ that become entangled with the rest of space time
620
+
621
+ 08:28.140 --> 08:31.180
622
+ through the action of these quantum circuits.
623
+
624
+ 08:31.180 --> 08:35.580
625
+ So what does it mean that there's now more
626
+
627
+ 08:35.580 --> 08:39.300
628
+ degrees of freedom as they become more entangled?
629
+
630
+ 08:39.300 --> 08:40.300
631
+ Yeah, so.
632
+
633
+ 08:40.300 --> 08:41.660
634
+ As the universe expands.
635
+
636
+ 08:41.660 --> 08:43.300
637
+ That's right, so there's more and more degrees of freedom
638
+
639
+ 08:43.300 --> 08:46.420
640
+ that are entangled, that are playing part,
641
+
642
+ 08:46.420 --> 08:47.360
643
+ playing the role of part
644
+
645
+ 08:47.360 --> 08:49.600
646
+ of the entangled space time structure.
647
+
648
+ 08:49.600 --> 08:51.980
649
+ So the basic, the underlying philosophy is
650
+
651
+ 08:51.980 --> 08:54.620
652
+ that space time itself arises from the entanglement
653
+
654
+ 08:54.620 --> 08:57.560
655
+ of some fundamental quantum degrees of freedom.
656
+
657
+ 08:57.560 --> 09:00.820
658
+ Wow, okay, so at which point
659
+
660
+ 09:00.820 --> 09:05.260
661
+ is most of the entanglement happening?
662
+
663
+ 09:05.260 --> 09:07.460
664
+ Are we talking about close to the Big Bang?
665
+
666
+ 09:07.460 --> 09:11.820
667
+ Are we talking about throughout the time of the life?
668
+
669
+ 09:11.820 --> 09:12.660
670
+ Throughout history, yeah.
671
+
672
+ 09:12.660 --> 09:15.140
673
+ So the idea is that at the Big Bang,
674
+
675
+ 09:15.140 --> 09:16.780
676
+ almost all the degrees of freedom
677
+
678
+ 09:16.780 --> 09:19.700
679
+ that the universe could have were there,
680
+
681
+ 09:19.700 --> 09:22.420
682
+ but they were unentangled with anything else.
683
+
684
+ 09:22.420 --> 09:23.900
685
+ And that's a reflection of the fact
686
+
687
+ 09:23.900 --> 09:25.620
688
+ that the Big Bang had a low entropy.
689
+
690
+ 09:25.620 --> 09:28.020
691
+ It was a very simple, very small place.
692
+
693
+ 09:28.020 --> 09:31.420
694
+ And as space expands, more and more degrees of freedom
695
+
696
+ 09:31.420 --> 09:34.300
697
+ become entangled with the rest of the world.
698
+
699
+ 09:34.300 --> 09:35.940
700
+ Well, I have to ask, Sean Carroll,
701
+
702
+ 09:35.940 --> 09:37.880
703
+ what do you think of the thought experiment
704
+
705
+ 09:37.880 --> 09:41.580
706
+ from Nick Bostrom that we're living in a simulation?
707
+
708
+ 09:41.580 --> 09:44.980
709
+ So I think, let me contextualize that a little bit more.
710
+
711
+ 09:44.980 --> 09:48.340
712
+ I think people don't actually take this thought experiment seriously.
713
+
714
+ 09:48.340 --> 09:50.460
715
+ I think it's quite interesting.
716
+
717
+ 09:50.460 --> 09:52.900
718
+ It's not very useful, but it's quite interesting.
719
+
720
+ 09:52.900 --> 09:54.500
721
+ From the perspective of AI,
722
+
723
+ 09:54.500 --> 09:58.020
724
+ a lot of the learning that can be done usually happens
725
+
726
+ 09:58.020 --> 10:00.580
727
+ in simulation from artificial examples.
728
+
729
+ 10:00.580 --> 10:03.840
730
+ And so it's a constructive question to ask,
731
+
732
+ 10:04.900 --> 10:08.240
733
+ how difficult is our real world to simulate?
734
+
735
+ 10:08.240 --> 10:09.360
736
+ Right.
737
+
738
+ 10:09.360 --> 10:12.180
739
+ Which is kind of a dual part of,
740
+
741
+ 10:12.180 --> 10:14.100
742
+ if we're living in a simulation
743
+
744
+ 10:14.100 --> 10:16.420
745
+ and somebody built that simulation,
746
+
747
+ 10:16.420 --> 10:18.860
748
+ if you were to try to do it yourself, how hard would it be?
749
+
750
+ 10:18.860 --> 10:21.100
751
+ So obviously we could be living in a simulation.
752
+
753
+ 10:21.100 --> 10:23.000
754
+ If you just want the physical possibility,
755
+
756
+ 10:23.000 --> 10:25.420
757
+ then I completely agree that it's physically possible.
758
+
759
+ 10:25.420 --> 10:27.380
760
+ I don't think that we actually are.
761
+
762
+ 10:27.380 --> 10:30.300
763
+ So take this one piece of data into consideration.
764
+
765
+ 10:30.300 --> 10:33.960
766
+ You know, we live in a big universe, okay?
767
+
768
+ 10:35.140 --> 10:38.500
769
+ There's two trillion galaxies in our observable universe
770
+
771
+ 10:38.500 --> 10:41.660
772
+ with 200 billion stars in each galaxy, et cetera.
773
+
774
+ 10:41.660 --> 10:44.940
775
+ It would seem to be a waste of resources
776
+
777
+ 10:44.940 --> 10:46.540
778
+ to have a universe that big going on
779
+
780
+ 10:46.540 --> 10:47.540
781
+ just to do a simulation.
782
+
783
+ 10:47.540 --> 10:50.140
784
+ So in other words, I want to be a good Bayesian.
785
+
786
+ 10:50.140 --> 10:52.940
787
+ I want to ask under this hypothesis,
788
+
789
+ 10:52.940 --> 10:54.960
790
+ what do I expect to see?
791
+
792
+ 10:54.960 --> 10:56.780
793
+ So the first thing I would say is I wouldn't expect
794
+
795
+ 10:56.780 --> 11:00.340
796
+ to see a universe that was that big, okay?
797
+
798
+ 11:00.340 --> 11:02.540
799
+ The second thing is I wouldn't expect the resolution
800
+
801
+ 11:02.540 --> 11:05.020
802
+ of the universe to be as good as it is.
803
+
804
+ 11:05.020 --> 11:08.740
805
+ So it's always possible that if our superhuman simulators
806
+
807
+ 11:08.740 --> 11:09.900
808
+ only have finite resources,
809
+
810
+ 11:09.900 --> 11:12.420
811
+ that they don't render the entire universe, right?
812
+
813
+ 11:12.420 --> 11:14.340
814
+ That the part that is out there,
815
+
816
+ 11:14.340 --> 11:16.300
817
+ the two trillion galaxies,
818
+
819
+ 11:16.300 --> 11:19.640
820
+ isn't actually being simulated fully, okay?
821
+
822
+ 11:19.640 --> 11:22.740
823
+ But then the obvious extrapolation of that
824
+
825
+ 11:22.740 --> 11:25.500
826
+ is that only I am being simulated fully.
827
+
828
+ 11:25.500 --> 11:29.220
829
+ Like the rest of you are just non player characters, right?
830
+
831
+ 11:29.220 --> 11:30.520
832
+ I'm the only thing that is real.
833
+
834
+ 11:30.520 --> 11:32.780
835
+ The rest of you are just chat bots.
836
+
837
+ 11:32.780 --> 11:34.320
838
+ Beyond this wall, I see the wall,
839
+
840
+ 11:34.320 --> 11:36.020
841
+ but there is literally nothing
842
+
843
+ 11:36.020 --> 11:37.360
844
+ on the other side of the wall.
845
+
846
+ 11:37.360 --> 11:39.300
847
+ That is sort of the Bayesian prediction.
848
+
849
+ 11:39.300 --> 11:40.180
850
+ That's what it would be like
851
+
852
+ 11:40.180 --> 11:42.240
853
+ to do an efficient simulation of me.
854
+
855
+ 11:42.240 --> 11:45.700
856
+ So like none of that seems quite realistic.
857
+
858
+ 11:45.700 --> 11:50.700
859
+ I don't see, I hear the argument that it's just possible
860
+
861
+ 11:50.900 --> 11:53.300
862
+ and easy to simulate lots of things.
863
+
864
+ 11:53.300 --> 11:55.780
865
+ I don't see any evidence from what we know
866
+
867
+ 11:55.780 --> 11:59.340
868
+ about our universe that we look like a simulated universe.
869
+
870
+ 11:59.340 --> 12:00.180
871
+ Now, maybe you can say,
872
+
873
+ 12:00.180 --> 12:01.980
874
+ well, we don't know what it would look like,
875
+
876
+ 12:01.980 --> 12:04.520
877
+ but that's just abandoning your Bayesian responsibilities.
878
+
879
+ 12:04.520 --> 12:07.660
880
+ Like your job is to say under this theory,
881
+
882
+ 12:07.660 --> 12:09.500
883
+ here's what you would expect to see.
884
+
885
+ 12:09.500 --> 12:11.660
886
+ Yeah, so certainly if you think about simulation
887
+
888
+ 12:11.660 --> 12:14.340
889
+ as a thing that's like a video game
890
+
891
+ 12:14.340 --> 12:17.740
892
+ where only a small subset is being rendered.
893
+
894
+ 12:17.740 --> 12:22.740
895
+ But say the entire, all the laws of physics,
896
+
897
+ 12:22.740 --> 12:26.540
898
+ the entire closed system of the quote unquote universe,
899
+
900
+ 12:26.540 --> 12:27.780
901
+ it had a creator.
902
+
903
+ 12:27.780 --> 12:29.320
904
+ Yeah, it's always possible.
905
+
906
+ 12:29.320 --> 12:32.280
907
+ Right, so that's not useful to think about
908
+
909
+ 12:32.280 --> 12:34.020
910
+ when you're thinking about physics.
911
+
912
+ 12:34.020 --> 12:36.220
913
+ The way Nick Bostrom phrases it,
914
+
915
+ 12:36.220 --> 12:39.100
916
+ if it's possible to simulate a universe,
917
+
918
+ 12:39.100 --> 12:40.500
919
+ eventually we'll do it.
920
+
921
+ 12:40.500 --> 12:41.340
922
+ Right.
923
+
924
+ 12:42.700 --> 12:44.860
925
+ You can use that by the way for a lot of things.
926
+
927
+ 12:44.860 --> 12:45.700
928
+ Well, yeah.
929
+
930
+ 12:45.700 --> 12:48.540
931
+ But I guess the question is,
932
+
933
+ 12:48.540 --> 12:52.340
934
+ how hard is it to create a universe?
935
+
936
+ 12:52.340 --> 12:53.820
937
+ I wrote a little blog post about this
938
+
939
+ 12:53.820 --> 12:55.460
940
+ and maybe I'm missing something,
941
+
942
+ 12:55.460 --> 12:57.680
943
+ but there's an argument that says not only
944
+
945
+ 12:57.680 --> 13:00.500
946
+ that it might be possible to simulate a universe,
947
+
948
+ 13:00.500 --> 13:05.500
949
+ but probably if you imagine that you actually attribute
950
+
951
+ 13:05.980 --> 13:08.860
952
+ consciousness and agency to the little things
953
+
954
+ 13:08.860 --> 13:12.020
955
+ that we're simulating, to our little artificial beings,
956
+
957
+ 13:12.020 --> 13:13.420
958
+ there's probably a lot more of them
959
+
960
+ 13:13.420 --> 13:15.500
961
+ than there are ordinary organic beings in the universe
962
+
963
+ 13:15.500 --> 13:17.420
964
+ or there will be in the future, right?
965
+
966
+ 13:17.420 --> 13:18.500
967
+ So there's an argument that not only
968
+
969
+ 13:18.500 --> 13:20.760
970
+ is being a simulation possible,
971
+
972
+ 13:20.760 --> 13:23.560
973
+ it's probable because in the space
974
+
975
+ 13:23.560 --> 13:24.960
976
+ of all living consciousnesses,
977
+
978
+ 13:24.960 --> 13:26.620
979
+ most of them are being simulated, right?
980
+
981
+ 13:26.620 --> 13:28.860
982
+ Most of them are not at the top level.
983
+
984
+ 13:28.860 --> 13:30.540
985
+ I think that argument must be wrong
986
+
987
+ 13:30.540 --> 13:33.100
988
+ because it follows from that argument that,
989
+
990
+ 13:33.100 --> 13:36.920
991
+ if we're simulated, but we can also simulate other things,
992
+
993
+ 13:36.920 --> 13:38.840
994
+ well, but if we can simulate other things,
995
+
996
+ 13:38.840 --> 13:41.840
997
+ they can simulate other things, right?
998
+
999
+ 13:41.840 --> 13:44.320
1000
+ If we give them enough power and resolution
1001
+
1002
+ 13:44.320 --> 13:45.980
1003
+ and ultimately we'll reach a bottom
1004
+
1005
+ 13:45.980 --> 13:49.140
1006
+ because the laws of physics in our universe have a bottom,
1007
+
1008
+ 13:49.140 --> 13:51.000
1009
+ we're made of atoms and so forth,
1010
+
1011
+ 13:51.000 --> 13:55.100
1012
+ so there will be the cheapest possible simulations.
1013
+
1014
+ 13:55.100 --> 13:57.700
1015
+ And if you believe the original argument,
1016
+
1017
+ 13:57.700 --> 13:59.340
1018
+ you should conclude that we should be
1019
+
1020
+ 13:59.340 --> 14:00.940
1021
+ in the cheapest possible simulation
1022
+
1023
+ 14:00.940 --> 14:02.660
1024
+ because that's where most people are.
1025
+
1026
+ 14:02.660 --> 14:03.620
1027
+ But we don't look like that.
1028
+
1029
+ 14:03.620 --> 14:06.860
1030
+ It doesn't look at all like we're at the edge of resolution,
1031
+
1032
+ 14:06.860 --> 14:09.540
1033
+ that we're 16 bit things.
1034
+
1035
+ 14:09.540 --> 14:13.020
1036
+ It seems much easier to make much lower level things
1037
+
1038
+ 14:13.020 --> 14:13.860
1039
+ than we are.
1040
+
1041
+ 14:14.980 --> 14:18.220
1042
+ And also, I question the whole approach
1043
+
1044
+ 14:18.220 --> 14:19.460
1045
+ to the anthropic principle
1046
+
1047
+ 14:19.460 --> 14:22.340
1048
+ that says we are typical observers in the universe.
1049
+
1050
+ 14:22.340 --> 14:23.660
1051
+ I think that that's not actually,
1052
+
1053
+ 14:23.660 --> 14:27.340
1054
+ I think that there's a lot of selection that we can do
1055
+
1056
+ 14:27.340 --> 14:30.180
1057
+ that we're typical within things we already know,
1058
+
1059
+ 14:30.180 --> 14:32.280
1060
+ but not typical within all of the universe.
1061
+
1062
+ 14:32.280 --> 14:35.800
1063
+ So do you think there's intelligent life,
1064
+
1065
+ 14:35.800 --> 14:37.860
1066
+ however you would like to define intelligent life,
1067
+
1068
+ 14:37.860 --> 14:39.940
1069
+ out there in the universe?
1070
+
1071
+ 14:39.940 --> 14:44.660
1072
+ My guess is that there is not intelligent life
1073
+
1074
+ 14:44.660 --> 14:48.820
1075
+ in the observable universe other than us, simply
1076
+
1077
+ 14:48.820 --> 14:52.540
1078
+ on the basis of the fact that the likely number
1079
+
1080
+ 14:52.540 --> 14:56.340
1081
+ of other intelligent species in the observable universe,
1082
+
1083
+ 14:56.340 --> 15:00.320
1084
+ there's two likely numbers, zero or billions.
1085
+
1086
+ 15:01.500 --> 15:02.580
1087
+ And if there had been billions,
1088
+
1089
+ 15:02.580 --> 15:04.140
1090
+ you would have noticed already.
1091
+
1092
+ 15:05.300 --> 15:07.340
1093
+ For there to be literally like a small number,
1094
+
1095
+ 15:07.340 --> 15:09.380
1096
+ like, you know, Star Trek,
1097
+
1098
+ 15:09.380 --> 15:13.300
1099
+ there's a dozen intelligent civilizations in our galaxy,
1100
+
1101
+ 15:13.300 --> 15:17.340
1102
+ but not a billion, that's weird.
1103
+
1104
+ 15:17.340 --> 15:18.500
1105
+ That's sort of bizarre to me.
1106
+
1107
+ 15:18.500 --> 15:21.020
1108
+ It's easy for me to imagine that there are zero others
1109
+
1110
+ 15:21.020 --> 15:22.620
1111
+ because there's just a big bottleneck
1112
+
1113
+ 15:22.620 --> 15:24.980
1114
+ to making multicellular life
1115
+
1116
+ 15:24.980 --> 15:27.020
1117
+ or technological life or whatever.
1118
+
1119
+ 15:27.020 --> 15:28.580
1120
+ It's very hard for me to imagine
1121
+
1122
+ 15:28.580 --> 15:30.140
1123
+ that there's a whole bunch out there
1124
+
1125
+ 15:30.140 --> 15:32.300
1126
+ that have somehow remained hidden from us.
1127
+
1128
+ 15:32.300 --> 15:34.700
1129
+ The question I'd like to ask
1130
+
1131
+ 15:34.700 --> 15:36.820
1132
+ is what would intelligent life look like?
1133
+
1134
+ 15:38.140 --> 15:40.500
1135
+ What I mean by that question and where it's going
1136
+
1137
+ 15:40.500 --> 15:45.500
1138
+ is what if intelligent life is just in some very big ways
1139
+
1140
+ 15:47.260 --> 15:51.500
1141
+ different than the one that we have on Earth?
1142
+
1143
+ 15:51.500 --> 15:53.900
1144
+ That there's all kinds of intelligent life
1145
+
1146
+ 15:53.900 --> 15:55.420
1147
+ that operates at different scales
1148
+
1149
+ 15:55.420 --> 15:57.300
1150
+ of both size and time.
1151
+
1152
+ 15:57.300 --> 15:59.300
1153
+ Right, that's a great possibility
1154
+
1155
+ 15:59.300 --> 16:00.800
1156
+ because I think we should be humble
1157
+
1158
+ 16:00.800 --> 16:02.640
1159
+ about what intelligence is, what life is.
1160
+
1161
+ 16:02.640 --> 16:04.020
1162
+ We don't even agree on what life is,
1163
+
1164
+ 16:04.020 --> 16:07.020
1165
+ much less what intelligent life is, right?
1166
+
1167
+ 16:07.020 --> 16:08.980
1168
+ So that's an argument for humility,
1169
+
1170
+ 16:08.980 --> 16:10.860
1171
+ saying there could be intelligent life
1172
+
1173
+ 16:10.860 --> 16:13.620
1174
+ of a very different character, right?
1175
+
1176
+ 16:13.620 --> 16:18.060
1177
+ Like you could imagine the dolphins are intelligent
1178
+
1179
+ 16:18.060 --> 16:20.500
1180
+ but never invent space travel
1181
+
1182
+ 16:20.500 --> 16:21.460
1183
+ because they live in the ocean
1184
+
1185
+ 16:21.460 --> 16:23.220
1186
+ and they don't have thumbs, right?
1187
+
1188
+ 16:24.180 --> 16:27.860
1189
+ So they never invent technology, they never invent smelting.
1190
+
1191
+ 16:27.860 --> 16:32.020
1192
+ Maybe the universe is full of intelligent species
1193
+
1194
+ 16:32.020 --> 16:34.060
1195
+ that just don't make technology, right?
1196
+
1197
+ 16:34.060 --> 16:36.320
1198
+ That's compatible with the data, I think.
1199
+
1200
+ 16:36.320 --> 16:39.840
1201
+ And I think maybe what you're pointing at
1202
+
1203
+ 16:39.840 --> 16:44.440
1204
+ is even more out there versions of intelligence,
1205
+
1206
+ 16:44.440 --> 16:47.560
1207
+ intelligence in intermolecular clouds
1208
+
1209
+ 16:47.560 --> 16:49.440
1210
+ or on the surface of a neutron star
1211
+
1212
+ 16:49.440 --> 16:51.760
1213
+ or in between the galaxies in giant things
1214
+
1215
+ 16:51.760 --> 16:54.560
1216
+ where the equivalent of a heartbeat is 100 million years.
1217
+
1218
+ 16:56.440 --> 16:58.080
1219
+ On the one hand, yes,
1220
+
1221
+ 16:58.080 --> 16:59.860
1222
+ we should be very open minded about those things.
1223
+
1224
+ 16:59.860 --> 17:04.860
1225
+ On the other hand, all of us share the same laws of physics.
1226
+
1227
+ 17:04.860 --> 17:08.240
1228
+ There might be something about the laws of physics,
1229
+
1230
+ 17:08.240 --> 17:09.400
1231
+ even though we don't currently know
1232
+
1233
+ 17:09.400 --> 17:10.920
1234
+ exactly what that thing would be,
1235
+
1236
+ 17:10.920 --> 17:15.920
1237
+ that makes meters and years
1238
+
1239
+ 17:16.160 --> 17:18.920
1240
+ the right length and timescales for intelligent life.
1241
+
1242
+ 17:19.880 --> 17:22.240
1243
+ Maybe not, but we're made of atoms,
1244
+
1245
+ 17:22.240 --> 17:23.780
1246
+ atoms have a certain size,
1247
+
1248
+ 17:23.780 --> 17:27.280
1249
+ we orbit stars or stars have a certain lifetime.
1250
+
1251
+ 17:27.280 --> 17:30.300
1252
+ It's not impossible to me that there's a sweet spot
1253
+
1254
+ 17:30.300 --> 17:32.200
1255
+ for intelligent life that we find ourselves in.
1256
+
1257
+ 17:32.200 --> 17:33.800
1258
+ So I'm open minded either way,
1259
+
1260
+ 17:33.800 --> 17:35.280
1261
+ I'm open minded either being humble
1262
+
1263
+ 17:35.280 --> 17:37.080
1264
+ and there's all sorts of different kinds of life
1265
+
1266
+ 17:37.080 --> 17:39.280
1267
+ or no, there's a reason we just don't know it yet
1268
+
1269
+ 17:39.280 --> 17:42.080
1270
+ why life like ours is the kind of life that's out there.
1271
+
1272
+ 17:42.080 --> 17:43.320
1273
+ Yeah, I'm of two minds too,
1274
+
1275
+ 17:43.320 --> 17:47.200
1276
+ but I often wonder if our brains are just designed
1277
+
1278
+ 17:47.200 --> 17:52.200
1279
+ to quite obviously operate and see the world
1280
+
1281
+ 17:52.720 --> 17:56.360
1282
+ in these timescales and we're almost blind
1283
+
1284
+ 17:56.360 --> 18:01.200
1285
+ and the tools we've created for detecting things are blind
1286
+
1287
+ 18:01.200 --> 18:02.760
1288
+ to the kind of observation needed
1289
+
1290
+ 18:02.760 --> 18:04.920
1291
+ to see intelligent life at other scales.
1292
+
1293
+ 18:04.920 --> 18:07.040
1294
+ Well, I'm totally open to that,
1295
+
1296
+ 18:07.040 --> 18:09.240
1297
+ but so here's another argument I would make,
1298
+
1299
+ 18:09.240 --> 18:11.520
1300
+ we have looked for intelligent life,
1301
+
1302
+ 18:11.520 --> 18:14.120
1303
+ but we've looked for it in the dumbest way we can,
1304
+
1305
+ 18:14.120 --> 18:16.600
1306
+ by turning radio telescopes to the sky.
1307
+
1308
+ 18:16.600 --> 18:21.040
1309
+ And why in the world would a super advanced civilization
1310
+
1311
+ 18:21.040 --> 18:24.040
1312
+ randomly beam out radio signals wastefully
1313
+
1314
+ 18:24.040 --> 18:25.440
1315
+ in all directions into the universe?
1316
+
1317
+ 18:25.440 --> 18:27.280
1318
+ That just doesn't make any sense,
1319
+
1320
+ 18:27.280 --> 18:29.100
1321
+ especially because in order to think
1322
+
1323
+ 18:29.100 --> 18:32.020
1324
+ that you would actually contact another civilization,
1325
+
1326
+ 18:32.020 --> 18:33.840
1327
+ you would have to do it forever,
1328
+
1329
+ 18:33.840 --> 18:35.840
1330
+ you have to keep doing it for millions of years,
1331
+
1332
+ 18:35.840 --> 18:38.280
1333
+ that sounds like a waste of resources.
1334
+
1335
+ 18:38.280 --> 18:43.120
1336
+ If you thought that there were other solar systems
1337
+
1338
+ 18:43.120 --> 18:44.520
1339
+ with planets around them,
1340
+
1341
+ 18:44.520 --> 18:47.000
1342
+ where maybe intelligent life didn't yet exist,
1343
+
1344
+ 18:47.000 --> 18:48.600
1345
+ but might someday,
1346
+
1347
+ 18:48.600 --> 18:51.380
1348
+ you wouldn't try to talk to it with radio waves,
1349
+
1350
+ 18:51.380 --> 18:53.600
1351
+ you would send a spacecraft out there
1352
+
1353
+ 18:53.600 --> 18:55.560
1354
+ and you would park it around there
1355
+
1356
+ 18:55.560 --> 18:57.360
1357
+ and it would be like, from our point of view,
1358
+
1359
+ 18:57.360 --> 19:00.700
1360
+ it'd be like 2001, where there was a monolith.
1361
+
1362
+ 19:00.700 --> 19:01.540
1363
+ Monolith.
1364
+
1365
+ 19:01.540 --> 19:02.380
1366
+ There could be an artifact,
1367
+
1368
+ 19:02.380 --> 19:04.520
1369
+ in fact, the other way works also, right?
1370
+
1371
+ 19:04.520 --> 19:07.360
1372
+ There could be artifacts in our solar system
1373
+
1374
+ 19:08.440 --> 19:10.480
1375
+ that have been put there
1376
+
1377
+ 19:10.480 --> 19:12.280
1378
+ by other technologically advanced civilizations
1379
+
1380
+ 19:12.280 --> 19:14.640
1381
+ and that's how we will eventually contact them.
1382
+
1383
+ 19:14.640 --> 19:16.840
1384
+ We just haven't explored the solar system well enough yet
1385
+
1386
+ 19:16.840 --> 19:17.680
1387
+ to find them.
1388
+
1389
+ 19:18.580 --> 19:20.000
1390
+ The reason why we don't think about that
1391
+
1392
+ 19:20.000 --> 19:21.520
1393
+ is because we're young and impatient, right?
1394
+
1395
+ 19:21.520 --> 19:24.000
1396
+ Like, it would take more than my lifetime
1397
+
1398
+ 19:24.000 --> 19:26.080
1399
+ to actually send something to another star system
1400
+
1401
+ 19:26.080 --> 19:27.800
1402
+ and wait for it and then come back.
1403
+
1404
+ 19:27.800 --> 19:30.800
1405
+ So, but if we start thinking on hundreds of thousands
1406
+
1407
+ 19:30.800 --> 19:32.720
1408
+ of years or million year time scales,
1409
+
1410
+ 19:32.720 --> 19:34.600
1411
+ that's clearly the right thing to do.
1412
+
1413
+ 19:34.600 --> 19:36.800
1414
+ Are you excited by the thing
1415
+
1416
+ 19:36.800 --> 19:39.360
1417
+ that Elon Musk is doing with SpaceX in general?
1418
+
1419
+ 19:39.360 --> 19:41.620
1420
+ Space, but the idea of space exploration,
1421
+
1422
+ 19:41.620 --> 19:45.360
1423
+ even though your, or your species is young and impatient?
1424
+
1425
+ 19:45.360 --> 19:46.200
1426
+ Yeah.
1427
+
1428
+ 19:46.200 --> 19:49.200
1429
+ No, I do think that space travel is crucially important,
1430
+
1431
+ 19:49.200 --> 19:50.800
1432
+ long term.
1433
+
1434
+ 19:50.800 --> 19:52.500
1435
+ Even to other star systems.
1436
+
1437
+ 19:52.500 --> 19:57.500
1438
+ And I think that many people overestimate the difficulty
1439
+
1440
+ 19:57.500 --> 20:00.940
1441
+ because they say, look, if you travel 1% the speed of light
1442
+
1443
+ 20:00.940 --> 20:02.020
1444
+ to another star system,
1445
+
1446
+ 20:02.020 --> 20:04.060
1447
+ we'll be dead before we get there, right?
1448
+
1449
+ 20:04.060 --> 20:06.180
1450
+ And I think that it's much easier.
1451
+
1452
+ 20:06.180 --> 20:08.120
1453
+ And therefore, when they write their science fiction stories,
1454
+
1455
+ 20:08.120 --> 20:09.580
1456
+ they imagine we'd go faster than the speed of light
1457
+
1458
+ 20:09.580 --> 20:11.700
1459
+ because otherwise they're too impatient, right?
1460
+
1461
+ 20:11.700 --> 20:13.600
1462
+ We're not gonna go faster than the speed of light,
1463
+
1464
+ 20:13.600 --> 20:16.020
1465
+ but we could easily imagine that the human lifespan
1466
+
1467
+ 20:16.020 --> 20:18.100
1468
+ gets extended to thousands of years.
1469
+
1470
+ 20:18.100 --> 20:19.140
1471
+ And once you do that,
1472
+
1473
+ 20:19.140 --> 20:21.180
1474
+ then the stars are much closer effectively, right?
1475
+
1476
+ 20:21.180 --> 20:23.260
1477
+ And then what's a hundred year trip, right?
1478
+
1479
+ 20:23.260 --> 20:25.820
1480
+ So I think that that's gonna be the future,
1481
+
1482
+ 20:25.820 --> 20:28.700
1483
+ the far future, not my lifetime once again,
1484
+
1485
+ 20:28.700 --> 20:30.380
1486
+ but baby steps.
1487
+
1488
+ 20:30.380 --> 20:32.420
1489
+ Unless your lifetime gets extended.
1490
+
1491
+ 20:32.420 --> 20:34.740
1492
+ Well, it's in a race against time, right?
1493
+
1494
+ 20:34.740 --> 20:37.340
1495
+ A friend of mine who actually thinks about these things
1496
+
1497
+ 20:37.340 --> 20:40.460
1498
+ said, you know, you and I are gonna die,
1499
+
1500
+ 20:40.460 --> 20:43.060
1501
+ but I don't know about our grandchildren.
1502
+
1503
+ 20:43.060 --> 20:45.940
1504
+ That's, I don't know, predicting the future is hard,
1505
+
1506
+ 20:45.940 --> 20:47.900
1507
+ but that's at least a plausible scenario.
1508
+
1509
+ 20:47.900 --> 20:51.820
1510
+ And so, yeah, no, I think that as we discussed earlier,
1511
+
1512
+ 20:51.820 --> 20:56.780
1513
+ there are threats to the earth, known and unknown, right?
1514
+
1515
+ 20:56.780 --> 21:01.780
1516
+ Having spread humanity and biology elsewhere
1517
+
1518
+ 21:02.580 --> 21:04.940
1519
+ is a really important longterm goal.
1520
+
1521
+ 21:04.940 --> 21:08.900
1522
+ What kind of questions can science not currently answer,
1523
+
1524
+ 21:08.900 --> 21:09.920
1525
+ but might soon?
1526
+
1527
+ 21:11.480 --> 21:13.860
1528
+ When you think about the problems and the mysteries
1529
+
1530
+ 21:13.860 --> 21:17.840
1531
+ before us that may be within reach of science.
1532
+
1533
+ 21:17.840 --> 21:20.300
1534
+ I think an obvious one is the origin of life.
1535
+
1536
+ 21:20.300 --> 21:22.780
1537
+ We don't know how that happened.
1538
+
1539
+ 21:22.780 --> 21:25.300
1540
+ There's a difficulty in knowing how it happened historically
1541
+
1542
+ 21:25.300 --> 21:27.240
1543
+ actually, you know, literally on earth,
1544
+
1545
+ 21:27.240 --> 21:30.500
1546
+ but starting life from non life is something
1547
+
1548
+ 21:30.500 --> 21:32.420
1549
+ I kind of think we're close to, right?
1550
+
1551
+ 21:32.420 --> 21:33.240
1552
+ We're really.
1553
+
1554
+ 21:33.240 --> 21:34.080
1555
+ You really think so?
1556
+
1557
+ 21:34.080 --> 21:36.740
1558
+ Like how difficult is it to start life?
1559
+
1560
+ 21:36.740 --> 21:39.260
1561
+ Well, I've talked to people,
1562
+
1563
+ 21:39.260 --> 21:41.780
1564
+ including on the podcast about this.
1565
+
1566
+ 21:41.780 --> 21:43.340
1567
+ You know, life requires three things.
1568
+
1569
+ 21:43.340 --> 21:44.220
1570
+ Life as we know it.
1571
+
1572
+ 21:44.220 --> 21:45.500
1573
+ So there's a difference between life,
1574
+
1575
+ 21:45.500 --> 21:47.060
1576
+ which who knows what it is,
1577
+
1578
+ 21:47.060 --> 21:48.140
1579
+ and life as we know it,
1580
+
1581
+ 21:48.140 --> 21:50.780
1582
+ which we can talk about with some intelligence.
1583
+
1584
+ 21:50.780 --> 21:53.840
1585
+ So life as we know it requires compartmentalization.
1586
+
1587
+ 21:53.840 --> 21:56.660
1588
+ You need like a little membrane around your cell.
1589
+
1590
+ 21:56.660 --> 21:58.980
1591
+ Metabolism, you need to take in food and eat it
1592
+
1593
+ 21:58.980 --> 22:01.020
1594
+ and let that make you do things.
1595
+
1596
+ 22:01.020 --> 22:02.620
1597
+ And then replication, okay?
1598
+
1599
+ 22:02.620 --> 22:04.620
1600
+ So you need to have some information about who you are
1601
+
1602
+ 22:04.620 --> 22:07.880
1603
+ that you pass down to future generations.
1604
+
1605
+ 22:07.880 --> 22:11.780
1606
+ In the lab, compartmentalization seems pretty easy.
1607
+
1608
+ 22:11.780 --> 22:13.780
1609
+ Not hard to make lipid bilayers
1610
+
1611
+ 22:13.780 --> 22:16.760
1612
+ that come into little cellular walls pretty easily.
1613
+
1614
+ 22:16.760 --> 22:19.260
1615
+ Metabolism and replication are hard,
1616
+
1617
+ 22:20.160 --> 22:21.900
1618
+ but replication we're close to.
1619
+
1620
+ 22:21.900 --> 22:24.960
1621
+ People have made RNA like molecules in the lab
1622
+
1623
+ 22:24.960 --> 22:28.840
1624
+ that I think the state of the art is,
1625
+
1626
+ 22:28.840 --> 22:30.660
1627
+ they're not able to make one molecule
1628
+
1629
+ 22:30.660 --> 22:32.060
1630
+ that reproduces itself,
1631
+
1632
+ 22:32.060 --> 22:33.600
1633
+ but they're able to make two molecules
1634
+
1635
+ 22:33.600 --> 22:35.260
1636
+ that reproduce each other.
1637
+
1638
+ 22:35.260 --> 22:36.100
1639
+ So that's okay.
1640
+
1641
+ 22:36.100 --> 22:37.100
1642
+ That's pretty close.
1643
+
1644
+ 22:38.060 --> 22:41.060
1645
+ Metabolism is harder, believe it or not,
1646
+
1647
+ 22:41.060 --> 22:42.900
1648
+ even though it's sort of the most obvious thing,
1649
+
1650
+ 22:42.900 --> 22:44.940
1651
+ but you want some sort of controlled metabolism
1652
+
1653
+ 22:44.940 --> 22:47.500
1654
+ and the actual cellular machinery in our bodies
1655
+
1656
+ 22:47.500 --> 22:48.660
1657
+ is quite complicated.
1658
+
1659
+ 22:48.660 --> 22:50.940
1660
+ It's hard to see it just popping into existence
1661
+
1662
+ 22:50.940 --> 22:51.780
1663
+ all by itself.
1664
+
1665
+ 22:51.780 --> 22:52.860
1666
+ It probably took a while,
1667
+
1668
+ 22:53.740 --> 22:56.100
1669
+ but we're making progress.
1670
+
1671
+ 22:56.100 --> 22:57.240
1672
+ And in fact, I don't think we're spending
1673
+
1674
+ 22:57.240 --> 22:58.580
1675
+ nearly enough money on it.
1676
+
1677
+ 22:58.580 --> 23:01.780
1678
+ If I were the NSF, I would flood this area with money
1679
+
1680
+ 23:01.780 --> 23:05.220
1681
+ because it would change our view of the world
1682
+
1683
+ 23:05.220 --> 23:06.780
1684
+ if we could actually make life in the lab
1685
+
1686
+ 23:06.780 --> 23:09.420
1687
+ and understand how it was made originally here on earth.
1688
+
1689
+ 23:09.420 --> 23:11.160
1690
+ And I'm sure it'd have some ripple effects
1691
+
1692
+ 23:11.160 --> 23:12.940
1693
+ that help cure disease and so on.
1694
+
1695
+ 23:12.940 --> 23:14.380
1696
+ I mean, just that understanding.
1697
+
1698
+ 23:14.380 --> 23:16.700
1699
+ So synthetic biology is a wonderful big frontier
1700
+
1701
+ 23:16.700 --> 23:17.980
1702
+ where we're making cells.
1703
+
1704
+ 23:18.940 --> 23:21.100
1705
+ Right now, the best way to do that
1706
+
1707
+ 23:21.100 --> 23:23.620
1708
+ is to borrow heavily from existing biology, right?
1709
+
1710
+ 23:23.620 --> 23:25.380
1711
+ Well, Craig Venter several years ago
1712
+
1713
+ 23:25.380 --> 23:28.220
1714
+ created an artificial cell, but all he did was,
1715
+
1716
+ 23:28.220 --> 23:29.860
1717
+ not all he did, it was a tremendous accomplishment,
1718
+
1719
+ 23:29.860 --> 23:33.180
1720
+ but all he did was take out the DNA from a cell
1721
+
1722
+ 23:33.180 --> 23:37.200
1723
+ and put in entirely new DNA and let it boot up and go.
1724
+
1725
+ 23:37.200 --> 23:42.200
1726
+ What about the leap to creating intelligent life on earth?
1727
+
1728
+ 23:43.420 --> 23:44.260
1729
+ Yeah.
1730
+
1731
+ 23:44.260 --> 23:45.860
1732
+ Again, we define intelligence, of course,
1733
+
1734
+ 23:45.860 --> 23:49.860
1735
+ but let's just even say Homo sapiens,
1736
+
1737
+ 23:49.860 --> 23:54.480
1738
+ the modern intelligence in our human brain.
1739
+
1740
+ 23:55.340 --> 23:58.660
1741
+ Do you have a sense of what's involved in that leap
1742
+
1743
+ 23:58.660 --> 24:00.420
1744
+ and how big of a leap that is?
1745
+
1746
+ 24:00.420 --> 24:03.300
1747
+ So AI would count in this, or do you really want life?
1748
+
1749
+ 24:03.300 --> 24:06.420
1750
+ Do you want really an organism in some sense?
1751
+
1752
+ 24:06.420 --> 24:07.540
1753
+ AI would count, I think.
1754
+
1755
+ 24:07.540 --> 24:08.980
1756
+ Okay.
1757
+
1758
+ 24:08.980 --> 24:11.020
1759
+ Yeah, of course, of course AI would count.
1760
+
1761
+ 24:11.020 --> 24:13.460
1762
+ Well, let's say artificial consciousness, right?
1763
+
1764
+ 24:13.460 --> 24:15.500
1765
+ So I do not think we are on the threshold
1766
+
1767
+ 24:15.500 --> 24:16.760
1768
+ of creating artificial consciousness.
1769
+
1770
+ 24:16.760 --> 24:18.180
1771
+ I think it's possible.
1772
+
1773
+ 24:18.180 --> 24:20.300
1774
+ I'm not, again, very educated about how close we are,
1775
+
1776
+ 24:20.300 --> 24:22.100
1777
+ but my impression is not that we're really close
1778
+
1779
+ 24:22.100 --> 24:24.820
1780
+ because we understand how little we understand
1781
+
1782
+ 24:24.820 --> 24:26.460
1783
+ of consciousness and what it is.
1784
+
1785
+ 24:26.460 --> 24:28.440
1786
+ So if we don't have any idea what it is,
1787
+
1788
+ 24:28.440 --> 24:29.780
1789
+ it's hard to imagine we're on the threshold
1790
+
1791
+ 24:29.780 --> 24:31.620
1792
+ of making it ourselves.
1793
+
1794
+ 24:32.500 --> 24:34.500
1795
+ But it's doable, it's possible.
1796
+
1797
+ 24:34.500 --> 24:35.960
1798
+ I don't see any obstacles in principle.
1799
+
1800
+ 24:35.960 --> 24:38.160
1801
+ So yeah, I would hold out some interest
1802
+
1803
+ 24:38.160 --> 24:40.220
1804
+ in that happening eventually.
1805
+
1806
+ 24:40.220 --> 24:42.700
1807
+ I think in general, consciousness,
1808
+
1809
+ 24:42.700 --> 24:44.420
1810
+ I think we would be just surprised
1811
+
1812
+ 24:44.420 --> 24:49.060
1813
+ how easy consciousness is once we create intelligence.
1814
+
1815
+ 24:49.060 --> 24:50.540
1816
+ I think consciousness is a thing
1817
+
1818
+ 24:50.540 --> 24:54.000
1819
+ that's just something we all fake.
1820
+
1821
+ 24:55.540 --> 24:56.380
1822
+ Well, good.
1823
+
1824
+ 24:56.380 --> 24:57.680
1825
+ No, actually, I like this idea that in fact,
1826
+
1827
+ 24:57.680 --> 25:00.500
1828
+ consciousness is way less mysterious than we think
1829
+
1830
+ 25:00.500 --> 25:02.620
1831
+ because we're all at every time, at every moment,
1832
+
1833
+ 25:02.620 --> 25:04.500
1834
+ less conscious than we think we are, right?
1835
+
1836
+ 25:04.500 --> 25:05.460
1837
+ We can fool things.
1838
+
1839
+ 25:05.460 --> 25:07.780
1840
+ And I think that plus the idea
1841
+
1842
+ 25:07.780 --> 25:11.180
1843
+ that you not only have artificial intelligent systems,
1844
+
1845
+ 25:11.180 --> 25:12.980
1846
+ but you put them in a body, right,
1847
+
1848
+ 25:12.980 --> 25:14.280
1849
+ give them a robot body,
1850
+
1851
+ 25:15.620 --> 25:18.460
1852
+ that will help the faking a lot.
1853
+
1854
+ 25:18.460 --> 25:20.980
1855
+ Yeah, I think creating consciousness
1856
+
1857
+ 25:20.980 --> 25:25.140
1858
+ in artificial consciousness is as simple
1859
+
1860
+ 25:25.140 --> 25:30.020
1861
+ as asking a Roomba to say, I'm conscious,
1862
+
1863
+ 25:30.020 --> 25:32.780
1864
+ and refusing to be talked out of it.
1865
+
1866
+ 25:32.780 --> 25:33.820
1867
+ Could be, it could be.
1868
+
1869
+ 25:33.820 --> 25:36.740
1870
+ And I mean, I'm almost being silly,
1871
+
1872
+ 25:36.740 --> 25:38.280
1873
+ but that's what we do.
1874
+
1875
+ 25:39.660 --> 25:40.940
1876
+ That's what we do with each other.
1877
+
1878
+ 25:40.940 --> 25:42.020
1879
+ This is the kind of,
1880
+
1881
+ 25:42.020 --> 25:44.500
1882
+ that consciousness is also a social construct.
1883
+
1884
+ 25:44.500 --> 25:47.860
1885
+ And a lot of our ideas of intelligence is a social construct.
1886
+
1887
+ 25:47.860 --> 25:52.820
1888
+ And so reaching that bar involves something that's beyond,
1889
+
1890
+ 25:52.820 --> 25:54.940
1891
+ that doesn't necessarily involve
1892
+
1893
+ 25:54.940 --> 25:57.720
1894
+ the fundamental understanding of how you go
1895
+
1896
+ 25:57.720 --> 26:02.500
1897
+ from electrons to neurons to cognition.
1898
+
1899
+ 26:02.500 --> 26:05.060
1900
+ No, actually, I think that is an extremely good point.
1901
+
1902
+ 26:05.060 --> 26:08.660
1903
+ And in fact, what it suggests is,
1904
+
1905
+ 26:08.660 --> 26:10.540
1906
+ so yeah, you referred to Kate Darling,
1907
+
1908
+ 26:10.540 --> 26:11.940
1909
+ who I had on the podcast,
1910
+
1911
+ 26:11.940 --> 26:16.440
1912
+ and who does these experiments with very simple robots,
1913
+
1914
+ 26:16.440 --> 26:18.060
1915
+ but they look like animals,
1916
+
1917
+ 26:18.060 --> 26:20.740
1918
+ and they can look like they're experiencing pain,
1919
+
1920
+ 26:20.740 --> 26:23.380
1921
+ and we human beings react very negatively
1922
+
1923
+ 26:23.380 --> 26:24.400
1924
+ to these little robots
1925
+
1926
+ 26:24.400 --> 26:26.300
1927
+ looking like they're experiencing pain.
1928
+
1929
+ 26:26.300 --> 26:29.980
1930
+ And what you wanna say is, yeah, but they're just robots.
1931
+
1932
+ 26:29.980 --> 26:31.700
1933
+ It's not really pain, right?
1934
+
1935
+ 26:31.700 --> 26:33.080
1936
+ It's just some electrons going around.
1937
+
1938
+ 26:33.080 --> 26:36.300
1939
+ But then you realize, you and I are just electrons
1940
+
1941
+ 26:36.300 --> 26:38.380
1942
+ going around, and that's what pain is also.
1943
+
1944
+ 26:38.380 --> 26:43.060
1945
+ And so what I would have an easy time imagining
1946
+
1947
+ 26:43.060 --> 26:44.740
1948
+ is that there is a spectrum
1949
+
1950
+ 26:44.740 --> 26:47.420
1951
+ between these simple little robots that Kate works with
1952
+
1953
+ 26:47.420 --> 26:49.420
1954
+ and a human being,
1955
+
1956
+ 26:49.420 --> 26:50.940
1957
+ where there are things that sort of
1958
+
1959
+ 26:50.940 --> 26:52.840
1960
+ by some strict definition,
1961
+
1962
+ 26:52.840 --> 26:55.460
1963
+ Turing test level thing are not conscious,
1964
+
1965
+ 26:55.460 --> 26:58.580
1966
+ but nevertheless walk and talk like they're conscious.
1967
+
1968
+ 26:58.580 --> 27:00.220
1969
+ And it could be that the future is,
1970
+
1971
+ 27:00.220 --> 27:02.460
1972
+ I mean, Siri is close, right?
1973
+
1974
+ 27:02.460 --> 27:04.540
1975
+ And so it might be the future
1976
+
1977
+ 27:04.540 --> 27:07.100
1978
+ has a lot more agents like that.
1979
+
1980
+ 27:07.100 --> 27:08.860
1981
+ And in fact, rather than someday going,
1982
+
1983
+ 27:08.860 --> 27:10.700
1984
+ aha, we have consciousness,
1985
+
1986
+ 27:10.700 --> 27:13.180
1987
+ we'll just creep up on it with more and more
1988
+
1989
+ 27:13.180 --> 27:15.220
1990
+ accurate reflections of what we expect.
1991
+
1992
+ 27:15.220 --> 27:18.320
1993
+ And in the future, maybe the present,
1994
+
1995
+ 27:18.320 --> 27:20.800
1996
+ for example, we haven't met before,
1997
+
1998
+ 27:20.800 --> 27:25.300
1999
+ and you're basically assuming that I'm human as it's a high
2000
+
2001
+ 27:25.300 --> 27:28.560
2002
+ probability at this time because the yeah,
2003
+
2004
+ 27:28.560 --> 27:30.200
2005
+ but in the future,
2006
+
2007
+ 27:30.200 --> 27:32.000
2008
+ there might be question marks around that, right?
2009
+
2010
+ 27:32.000 --> 27:33.340
2011
+ Yeah, no, absolutely.
2012
+
2013
+ 27:33.340 --> 27:35.740
2014
+ Certainly videos are almost to the point
2015
+
2016
+ 27:35.740 --> 27:36.740
2017
+ where you shouldn't trust them already.
2018
+
2019
+ 27:36.740 --> 27:39.060
2020
+ Photos you can't trust, right?
2021
+
2022
+ 27:39.060 --> 27:41.700
2023
+ Videos is easier to trust,
2024
+
2025
+ 27:41.700 --> 27:44.020
2026
+ but we're getting worse at that,
2027
+
2028
+ 27:44.020 --> 27:46.540
2029
+ we're getting better at faking them, right?
2030
+
2031
+ 27:46.540 --> 27:48.780
2032
+ Yeah, so physical embodied people,
2033
+
2034
+ 27:48.780 --> 27:51.020
2035
+ what's so hard about faking that?
2036
+
2037
+ 27:51.020 --> 27:51.980
2038
+ So this is very depressing,
2039
+
2040
+ 27:51.980 --> 27:53.420
2041
+ this conversation we're having right now.
2042
+
2043
+ 27:53.420 --> 27:54.340
2044
+ So I mean,
2045
+
2046
+ 27:54.340 --> 27:55.180
2047
+ To me, it's exciting.
2048
+
2049
+ 27:55.180 --> 27:56.300
2050
+ To me, you're doing it.
2051
+
2052
+ 27:56.300 --> 27:57.780
2053
+ So it's exciting to you,
2054
+
2055
+ 27:57.780 --> 27:59.060
2056
+ but it's a sobering thought.
2057
+
2058
+ 27:59.060 --> 28:00.420
2059
+ We're very bad, right?
2060
+
2061
+ 28:00.420 --> 28:02.820
2062
+ At imagining what the next 50 years are gonna be like
2063
+
2064
+ 28:02.820 --> 28:04.220
2065
+ when we're in the middle of a phase transition
2066
+
2067
+ 28:04.220 --> 28:05.260
2068
+ as we are right now.
2069
+
2070
+ 28:05.260 --> 28:06.740
2071
+ Yeah, and I, in general,
2072
+
2073
+ 28:06.740 --> 28:09.220
2074
+ I'm not blind to all the threats.
2075
+
2076
+ 28:09.220 --> 28:14.220
2077
+ I am excited by the power of technology to solve,
2078
+
2079
+ 28:14.540 --> 28:18.060
2080
+ to protect us against the threats as they evolve.
2081
+
2082
+ 28:18.060 --> 28:22.340
2083
+ I'm not as much as Steven Pinker optimistic about the world,
2084
+
2085
+ 28:22.340 --> 28:23.740
2086
+ but in everything I've seen,
2087
+
2088
+ 28:23.740 --> 28:27.300
2089
+ all of the brilliant people in the world that I've met
2090
+
2091
+ 28:27.300 --> 28:29.160
2092
+ are good people.
2093
+
2094
+ 28:29.160 --> 28:30.800
2095
+ So the army of the good
2096
+
2097
+ 28:30.800 --> 28:33.400
2098
+ in terms of the development of technology is large.
2099
+
2100
+ 28:33.400 --> 28:36.860
2101
+ Okay, you're way more optimistic than I am.
2102
+
2103
+ 28:37.820 --> 28:39.060
2104
+ I think that goodness and badness
2105
+
2106
+ 28:39.060 --> 28:40.900
2107
+ are equally distributed among intelligent
2108
+
2109
+ 28:40.900 --> 28:42.700
2110
+ and unintelligent people.
2111
+
2112
+ 28:42.700 --> 28:44.660
2113
+ I don't see much of a correlation there.
2114
+
2115
+ 28:44.660 --> 28:46.060
2116
+ Interesting.
2117
+
2118
+ 28:46.060 --> 28:47.300
2119
+ Neither of us have proof.
2120
+
2121
+ 28:47.300 --> 28:48.420
2122
+ Yeah, exactly.
2123
+
2124
+ 28:48.420 --> 28:50.660
2125
+ Again, opinions are free, right?
2126
+
2127
+ 28:50.660 --> 28:52.540
2128
+ Nor definitions of good and evil.
2129
+
2130
+ 28:52.540 --> 28:57.460
2131
+ We come without definitions or without data, opinions.
2132
+
2133
+ 28:57.460 --> 29:01.980
2134
+ So what kind of questions can science not currently answer
2135
+
2136
+ 29:01.980 --> 29:04.380
2137
+ and may never be able to answer in your view?
2138
+
2139
+ 29:04.380 --> 29:06.940
2140
+ Well, the obvious one is what is good and bad?
2141
+
2142
+ 29:06.940 --> 29:07.860
2143
+ What is right and wrong?
2144
+
2145
+ 29:07.860 --> 29:09.460
2146
+ I think that there are questions that,
2147
+
2148
+ 29:09.460 --> 29:11.300
2149
+ science tells us what happens,
2150
+
2151
+ 29:11.300 --> 29:13.260
2152
+ what the world is and what it does.
2153
+
2154
+ 29:13.260 --> 29:14.740
2155
+ It doesn't say what the world should do
2156
+
2157
+ 29:14.740 --> 29:15.580
2158
+ or what we should do,
2159
+
2160
+ 29:15.580 --> 29:17.800
2161
+ because we're part of the world.
2162
+
2163
+ 29:17.800 --> 29:19.200
2164
+ But we are part of the world
2165
+
2166
+ 29:19.200 --> 29:21.460
2167
+ and we have the ability to feel like something's right,
2168
+
2169
+ 29:21.460 --> 29:22.740
2170
+ something's wrong.
2171
+
2172
+ 29:22.740 --> 29:25.660
2173
+ And to make a very long story very short,
2174
+
2175
+ 29:25.660 --> 29:28.000
2176
+ I think that the idea of moral philosophy
2177
+
2178
+ 29:28.000 --> 29:30.100
2179
+ is systematizing our intuitions
2180
+
2181
+ 29:30.100 --> 29:31.700
2182
+ of what is right and what is wrong.
2183
+
2184
+ 29:31.700 --> 29:34.580
2185
+ And science might be able to predict ahead of time
2186
+
2187
+ 29:34.580 --> 29:36.180
2188
+ what we will do,
2189
+
2190
+ 29:36.180 --> 29:38.000
2191
+ but it won't ever be able to judge
2192
+
2193
+ 29:38.000 --> 29:39.600
2194
+ whether we should have done it or not.
2195
+
2196
+ 29:39.600 --> 29:43.620
2197
+ So, you're kind of unique in terms of scientists.
2198
+
2199
+ 29:43.620 --> 29:45.520
2200
+ Listen, it doesn't have to do with podcasts,
2201
+
2202
+ 29:45.520 --> 29:47.660
2203
+ but even just reaching out,
2204
+
2205
+ 29:47.660 --> 29:49.080
2206
+ I think you referred to as sort of
2207
+
2208
+ 29:49.080 --> 29:51.300
2209
+ doing interdisciplinary science.
2210
+
2211
+ 29:51.300 --> 29:54.100
2212
+ So you reach out and talk to people
2213
+
2214
+ 29:54.100 --> 29:55.980
2215
+ that are outside of your discipline,
2216
+
2217
+ 29:55.980 --> 30:00.140
2218
+ which I always hope that's what science was for.
2219
+
2220
+ 30:00.140 --> 30:02.300
2221
+ In fact, I was a little disillusioned
2222
+
2223
+ 30:02.300 --> 30:06.420
2224
+ when I realized that academia is very siloed.
2225
+
2226
+ 30:06.420 --> 30:07.260
2227
+ Yeah.
2228
+
2229
+ 30:07.260 --> 30:09.560
2230
+ And so the question is,
2231
+
2232
+ 30:10.700 --> 30:13.020
2233
+ how, at your own level,
2234
+
2235
+ 30:13.020 --> 30:15.380
2236
+ how do you prepare for these conversations?
2237
+
2238
+ 30:15.380 --> 30:16.900
2239
+ How do you think about these conversations?
2240
+
2241
+ 30:16.900 --> 30:18.300
2242
+ How do you open your mind enough
2243
+
2244
+ 30:18.300 --> 30:20.220
2245
+ to have these conversations?
2246
+
2247
+ 30:20.220 --> 30:21.940
2248
+ And it may be a little bit broader,
2249
+
2250
+ 30:21.940 --> 30:24.380
2251
+ how can you advise other scientists
2252
+
2253
+ 30:24.380 --> 30:26.260
2254
+ to have these kinds of conversations?
2255
+
2256
+ 30:26.260 --> 30:28.180
2257
+ Not at the podcast,
2258
+
2259
+ 30:28.180 --> 30:29.860
2260
+ the fact that you're doing a podcast is awesome,
2261
+
2262
+ 30:29.860 --> 30:31.380
2263
+ other people get to hear them,
2264
+
2265
+ 30:31.380 --> 30:34.700
2266
+ but it's also good to have it without mics in general.
2267
+
2268
+ 30:34.700 --> 30:37.460
2269
+ It's a good question, but a tough one to answer.
2270
+
2271
+ 30:37.460 --> 30:40.980
2272
+ I think about a guy I know who's a personal trainer,
2273
+
2274
+ 30:40.980 --> 30:43.240
2275
+ and he was asked on a podcast,
2276
+
2277
+ 30:43.240 --> 30:45.700
2278
+ how do we psych ourselves up to do a workout?
2279
+
2280
+ 30:45.700 --> 30:48.340
2281
+ How do we make that discipline to go and work out?
2282
+
2283
+ 30:48.340 --> 30:50.300
2284
+ And he's like, why are you asking me?
2285
+
2286
+ 30:50.300 --> 30:52.340
2287
+ I can't stop working out.
2288
+
2289
+ 30:52.340 --> 30:54.380
2290
+ I don't need to psych myself up.
2291
+
2292
+ 30:54.380 --> 30:57.340
2293
+ So, and likewise, he asked me,
2294
+
2295
+ 30:57.340 --> 30:59.740
2296
+ how do you get to have interdisciplinary conversations
2297
+
2298
+ 30:59.740 --> 31:00.700
2299
+ on all sorts of different things,
2300
+
2301
+ 31:00.700 --> 31:01.660
2302
+ all sorts of different people?
2303
+
2304
+ 31:01.660 --> 31:04.860
2305
+ I'm like, that's what makes me go, right?
2306
+
2307
+ 31:04.860 --> 31:07.380
2308
+ Like that's, I couldn't stop doing that.
2309
+
2310
+ 31:07.380 --> 31:09.660
2311
+ I did that long before any of them were recorded.
2312
+
2313
+ 31:09.660 --> 31:12.380
2314
+ In fact, a lot of the motivation for starting recording it
2315
+
2316
+ 31:12.380 --> 31:14.420
2317
+ was making sure I would read all these books
2318
+
2319
+ 31:14.420 --> 31:15.460
2320
+ that I had purchased, right?
2321
+
2322
+ 31:15.460 --> 31:17.700
2323
+ Like all these books I wanted to read,
2324
+
2325
+ 31:17.700 --> 31:18.900
2326
+ not enough time to read them.
2327
+
2328
+ 31:18.900 --> 31:20.700
2329
+ And now if I have the motivation,
2330
+
2331
+ 31:20.700 --> 31:23.220
2332
+ cause I'm gonna interview Pat Churchland,
2333
+
2334
+ 31:23.220 --> 31:25.180
2335
+ I'm gonna finally read her book.
2336
+
2337
+ 31:25.180 --> 31:29.460
2338
+ You know, and it's absolutely true
2339
+
2340
+ 31:29.460 --> 31:31.700
2341
+ that academia is extraordinarily siloed, right?
2342
+
2343
+ 31:31.700 --> 31:32.780
2344
+ We don't talk to people.
2345
+
2346
+ 31:32.780 --> 31:34.260
2347
+ We rarely do.
2348
+
2349
+ 31:34.260 --> 31:36.460
2350
+ And in fact, when we do, it's punished.
2351
+
2352
+ 31:36.460 --> 31:38.820
2353
+ You know, like the people who do it successfully
2354
+
2355
+ 31:38.820 --> 31:41.420
2356
+ generally first became very successful
2357
+
2358
+ 31:41.420 --> 31:43.100
2359
+ within their little siloed discipline.
2360
+
2361
+ 31:43.100 --> 31:46.380
2362
+ And only then did they start expanding out.
2363
+
2364
+ 31:46.380 --> 31:47.660
2365
+ If you're a young person, you know,
2366
+
2367
+ 31:47.660 --> 31:48.940
2368
+ I have graduate students.
2369
+
2370
+ 31:48.940 --> 31:52.980
2371
+ I try to be very, very candid with them about this,
2372
+
2373
+ 31:52.980 --> 31:55.580
2374
+ that it's, you know, most graduate students
2375
+
2376
+ 31:55.580 --> 31:57.420
2377
+ are not going to become faculty members, right?
2378
+
2379
+ 31:57.420 --> 31:59.020
2380
+ It's a tough road.
2381
+
2382
+ 31:59.020 --> 32:03.140
2383
+ And so live the life you wanna live,
2384
+
2385
+ 32:03.140 --> 32:04.620
2386
+ but do it with your eyes open
2387
+
2388
+ 32:04.620 --> 32:06.900
2389
+ about what it does to your job chances.
2390
+
2391
+ 32:06.900 --> 32:09.580
2392
+ And the more broad you are
2393
+
2394
+ 32:09.580 --> 32:12.900
2395
+ and the less time you spend hyper specializing
2396
+
2397
+ 32:12.900 --> 32:15.780
2398
+ in your field, the lower your job chances are.
2399
+
2400
+ 32:15.780 --> 32:17.060
2401
+ That's just an academic reality.
2402
+
2403
+ 32:17.060 --> 32:20.060
2404
+ It's terrible, I don't like it, but it's a reality.
2405
+
2406
+ 32:20.060 --> 32:22.540
2407
+ And for some people, that's fine.
2408
+
2409
+ 32:22.540 --> 32:24.660
2410
+ Like there's plenty of people who are wonderful scientists
2411
+
2412
+ 32:24.660 --> 32:27.140
2413
+ who have zero interest in branching out
2414
+
2415
+ 32:27.140 --> 32:30.740
2416
+ and talking to things, to anyone outside their field.
2417
+
2418
+ 32:30.740 --> 32:33.740
2419
+ But it is disillusioning to me.
2420
+
2421
+ 32:33.740 --> 32:36.180
2422
+ Some of the, you know, romantic notion I had
2423
+
2424
+ 32:36.180 --> 32:38.220
2425
+ of the intellectual academic life
2426
+
2427
+ 32:38.220 --> 32:39.940
2428
+ is belied by the reality of it.
2429
+
2430
+ 32:39.940 --> 32:43.500
2431
+ The idea that we should reach out beyond our discipline
2432
+
2433
+ 32:43.500 --> 32:48.500
2434
+ and that is a positive good is just so rare
2435
+
2436
+ 32:48.500 --> 32:53.500
2437
+ in universities that it may as well not exist at all.
2438
+
2439
+ 32:53.900 --> 32:57.660
2440
+ But that said, even though you're saying you're doing it
2441
+
2442
+ 32:57.660 --> 33:00.300
2443
+ like the personal trainer, because you just can't help it,
2444
+
2445
+ 33:00.300 --> 33:02.940
2446
+ you're also an inspiration to others.
2447
+
2448
+ 33:02.940 --> 33:04.980
2449
+ Like I could speak for myself.
2450
+
2451
+ 33:05.780 --> 33:09.540
2452
+ You know, I also have a career I'm thinking about, right?
2453
+
2454
+ 33:09.540 --> 33:12.060
2455
+ And without your podcast,
2456
+
2457
+ 33:12.060 --> 33:15.060
2458
+ I may not have been doing this at all, right?
2459
+
2460
+ 33:15.060 --> 33:19.540
2461
+ So it makes me realize that these kinds of conversations
2462
+
2463
+ 33:19.540 --> 33:23.340
2464
+ is kind of what science is about in many ways.
2465
+
2466
+ 33:23.340 --> 33:26.500
2467
+ The reason we write papers, this exchange of ideas,
2468
+
2469
+ 33:27.460 --> 33:30.540
2470
+ is it's much harder to do interdisciplinary papers,
2471
+
2472
+ 33:30.540 --> 33:31.380
2473
+ I would say.
2474
+
2475
+ 33:31.380 --> 33:35.140
2476
+ And conversations are easier.
2477
+
2478
+ 33:35.140 --> 33:36.820
2479
+ So conversations is the beginning.
2480
+
2481
+ 33:36.820 --> 33:41.180
2482
+ And in the field of AI, it's obvious
2483
+
2484
+ 33:41.180 --> 33:45.580
2485
+ that we should think outside of pure computer vision
2486
+
2487
+ 33:45.580 --> 33:47.540
2488
+ competitions on particular data sets.
2489
+
2490
+ 33:47.540 --> 33:49.660
2491
+ We should think about the broader impact
2492
+
2493
+ 33:49.660 --> 33:53.740
2494
+ of how this can be, you know, reaching out to physics,
2495
+
2496
+ 33:53.740 --> 33:57.220
2497
+ to psychology, to neuroscience and having these
2498
+
2499
+ 33:57.220 --> 34:00.580
2500
+ conversations so that you're an inspiration.
2501
+
2502
+ 34:00.580 --> 34:05.220
2503
+ And so, you never know how the world changes.
2504
+
2505
+ 34:05.220 --> 34:08.540
2506
+ I mean, the fact that this stuff is out there
2507
+
2508
+ 34:08.540 --> 34:12.300
2509
+ and I've a huge number of people come up to me,
2510
+
2511
+ 34:12.300 --> 34:16.100
2512
+ grad students, really loving the podcast, inspired by it.
2513
+
2514
+ 34:16.100 --> 34:18.660
2515
+ And they will probably have that,
2516
+
2517
+ 34:18.660 --> 34:20.740
2518
+ there'll be ripple effects when they become faculty
2519
+
2520
+ 34:20.740 --> 34:21.580
2521
+ and so on and so on.
2522
+
2523
+ 34:21.580 --> 34:25.300
2524
+ We can end on a balance between pessimism and optimism.
2525
+
2526
+ 34:25.300 --> 34:27.780
2527
+ And Sean, thank you so much for talking to me, it was awesome.
2528
+
2529
+ 34:27.780 --> 34:29.460
2530
+ No, Lex, thank you very much for this conversation.
2531
+
2532
+ 34:29.460 --> 34:49.460
2533
+ It was great.
2534
+
vtt/episode_026_small.vtt ADDED
@@ -0,0 +1,2567 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.760
4
+ The following is a conversation with Sean Carroll.
5
+
6
+ 00:02.760 --> 00:04.920
7
+ He's a theoretical physicist at Caltech,
8
+
9
+ 00:04.920 --> 00:08.800
10
+ specializing in quantum mechanics, gravity, and cosmology.
11
+
12
+ 00:08.800 --> 00:11.640
13
+ He's the author of several popular books,
14
+
15
+ 00:11.640 --> 00:15.360
16
+ one on the Arrow of Time called From Eternity to Here,
17
+
18
+ 00:15.360 --> 00:17.840
19
+ one on the Higgs Boson called Particle
20
+
21
+ 00:17.840 --> 00:19.160
22
+ at the End of the Universe,
23
+
24
+ 00:19.160 --> 00:22.560
25
+ and one on Science and Philosophy called The Big Picture,
26
+
27
+ 00:22.560 --> 00:26.360
28
+ on the origins of life, meaning, and the universe itself.
29
+
30
+ 00:26.360 --> 00:28.720
31
+ He has an upcoming book on quantum mechanics
32
+
33
+ 00:28.720 --> 00:32.760
34
+ that you can preorder now called Something Deeply Hidden.
35
+
36
+ 00:32.760 --> 00:36.040
37
+ He writes one of my favorite blogs on his website,
38
+
39
+ 00:36.040 --> 00:37.960
40
+ preposterousuniverse.com.
41
+
42
+ 00:37.960 --> 00:40.440
43
+ I recommend clicking on the greatest hits link
44
+
45
+ 00:40.440 --> 00:44.400
46
+ that lists accessible, interesting posts on the Arrow of Time,
47
+
48
+ 00:44.400 --> 00:47.600
49
+ dark matter, dark energy, the Big Bang, general relativity,
50
+
51
+ 00:47.600 --> 00:49.560
52
+ string theory, quantum mechanics,
53
+
54
+ 00:49.560 --> 00:53.160
55
+ and the big meta questions about the philosophy of science,
56
+
57
+ 00:53.160 --> 00:57.600
58
+ God, ethics, politics, academia, and much, much more.
59
+
60
+ 00:57.600 --> 01:00.280
61
+ Finally, and perhaps most famously,
62
+
63
+ 01:00.280 --> 01:03.640
64
+ he's the host of a podcast called Mindscape
65
+
66
+ 01:03.640 --> 01:06.920
67
+ that you should subscribe to and support on Patreon.
68
+
69
+ 01:06.920 --> 01:08.800
70
+ Along with The Joe Rogan Experience,
71
+
72
+ 01:08.800 --> 01:10.480
73
+ Sam Harris's Making Sense,
74
+
75
+ 01:10.480 --> 01:13.080
76
+ and Dan Carlin's Hardcore History.
77
+
78
+ 01:13.080 --> 01:15.840
79
+ Sean's Mindscape podcast is one of my favorite ways
80
+
81
+ 01:15.840 --> 01:18.800
82
+ to learn new ideas or explore different perspectives
83
+
84
+ 01:18.800 --> 01:22.120
85
+ and ideas that I thought I understood.
86
+
87
+ 01:22.120 --> 01:25.880
88
+ It was truly an honor to meet and spend a couple hours
89
+
90
+ 01:25.880 --> 01:27.200
91
+ with Sean.
92
+
93
+ 01:27.200 --> 01:30.480
94
+ It's a bit heartbreaking to say that for the first time ever,
95
+
96
+ 01:30.480 --> 01:32.760
97
+ the audio recorder for this podcast died
98
+
99
+ 01:32.760 --> 01:34.880
100
+ in the middle of our conversation.
101
+
102
+ 01:34.880 --> 01:36.280
103
+ There are technical reasons for this,
104
+
105
+ 01:36.280 --> 01:38.360
106
+ having to do with phantom power
107
+
108
+ 01:38.360 --> 01:41.040
109
+ that I now understand and will avoid.
110
+
111
+ 01:41.040 --> 01:44.200
112
+ It took me one hour to notice and fix the problem.
113
+
114
+ 01:44.200 --> 01:48.280
115
+ So, much like the universe's 68% dark energy,
116
+
117
+ 01:48.280 --> 01:51.280
118
+ roughly the same amount from this conversation was lost,
119
+
120
+ 01:51.280 --> 01:54.160
121
+ except in the memories of the two people involved
122
+
123
+ 01:54.160 --> 01:56.280
124
+ and in my notes.
125
+
126
+ 01:56.280 --> 01:59.920
127
+ I'm sure we'll talk again and continue this conversation
128
+
129
+ 01:59.920 --> 02:02.440
130
+ on this podcast or on Sean's.
131
+
132
+ 02:02.440 --> 02:05.320
133
+ And of course, I look forward to it.
134
+
135
+ 02:05.320 --> 02:07.840
136
+ This is the Artificial Intelligence podcast.
137
+
138
+ 02:07.840 --> 02:09.960
139
+ If you enjoy it, subscribe on YouTube,
140
+
141
+ 02:09.960 --> 02:12.520
142
+ iTunes, support on Patreon,
143
+
144
+ 02:12.520 --> 02:16.680
145
+ or simply connect with me on Twitter at Lex Fridman.
146
+
147
+ 02:16.680 --> 02:21.360
148
+ And now, here's my conversation with Sean Carroll.
149
+
150
+ 02:21.360 --> 02:23.520
151
+ What do you think is more interesting and impactful?
152
+
153
+ 02:23.520 --> 02:25.840
154
+ Understanding how the universe works
155
+
156
+ 02:25.840 --> 02:26.880
157
+ at a fundamental level
158
+
159
+ 02:26.880 --> 02:29.200
160
+ or understanding how the human mind works?
161
+
162
+ 02:29.200 --> 02:32.440
163
+ You know, of course this is a crazy meaningless
164
+
165
+ 02:32.440 --> 02:33.960
166
+ unanswerable question in some sense,
167
+
168
+ 02:33.960 --> 02:35.160
169
+ because they're both very interesting
170
+
171
+ 02:35.160 --> 02:37.520
172
+ and there's no absolute scale of interestingness
173
+
174
+ 02:37.520 --> 02:39.160
175
+ that we can rate them on.
176
+
177
+ 02:39.160 --> 02:41.160
178
+ There's a glib answer that says the human brain
179
+
180
+ 02:41.160 --> 02:43.080
181
+ is part of the universe, right?
182
+
183
+ 02:43.080 --> 02:44.400
184
+ And therefore, understanding the universe
185
+
186
+ 02:44.400 --> 02:47.000
187
+ is more fundamental than understanding the human brain.
188
+
189
+ 02:47.000 --> 02:49.600
190
+ But do you really believe that once we understand
191
+
192
+ 02:49.600 --> 02:51.520
193
+ the fundamental way the universe works
194
+
195
+ 02:51.520 --> 02:52.680
196
+ at the particle level,
197
+
198
+ 02:52.680 --> 02:55.800
199
+ the forces, we would be able to understand how the mind works?
200
+
201
+ 02:55.800 --> 02:56.640
202
+ No, certainly not.
203
+
204
+ 02:56.640 --> 02:58.760
205
+ We cannot understand how ice cream works
206
+
207
+ 02:58.760 --> 03:01.040
208
+ just from understanding how particles work, right?
209
+
210
+ 03:01.040 --> 03:02.760
211
+ So I'm a big believer in emergence.
212
+
213
+ 03:02.760 --> 03:05.320
214
+ I'm a big believer that there are different ways
215
+
216
+ 03:05.320 --> 03:06.640
217
+ of talking about the world
218
+
219
+ 03:07.880 --> 03:11.200
220
+ beyond just the most fundamental microscopic one.
221
+
222
+ 03:11.200 --> 03:13.880
223
+ You know, when we talk about tables and chairs
224
+
225
+ 03:13.880 --> 03:15.120
226
+ and planets and people,
227
+
228
+ 03:15.120 --> 03:16.400
229
+ we're not talking the language
230
+
231
+ 03:16.400 --> 03:18.360
232
+ of particle physics and cosmology.
233
+
234
+ 03:18.360 --> 03:20.880
235
+ So, but understanding the universe,
236
+
237
+ 03:20.880 --> 03:24.040
238
+ you didn't say just at the most fundamental level, right?
239
+
240
+ 03:24.040 --> 03:28.200
241
+ So understanding the universe at all levels is part of that.
242
+
243
+ 03:28.200 --> 03:29.960
244
+ I do think, you know, to be a little bit more fair
245
+
246
+ 03:29.960 --> 03:33.960
247
+ to the question, there probably are general principles
248
+
249
+ 03:33.960 --> 03:38.520
250
+ of complexity, biology, information processing,
251
+
252
+ 03:38.520 --> 03:41.840
253
+ memory, knowledge, creativity
254
+
255
+ 03:41.840 --> 03:45.600
256
+ that go beyond just the human brain, right?
257
+
258
+ 03:45.600 --> 03:47.800
259
+ And maybe one could count understanding those
260
+
261
+ 03:47.800 --> 03:49.120
262
+ as part of understanding the universe.
263
+
264
+ 03:49.120 --> 03:53.040
265
+ The human brain, as far as we know, is the most complex thing
266
+
267
+ 03:53.040 --> 03:54.320
268
+ in the universe.
269
+
270
+ 03:54.320 --> 03:57.440
271
+ So there's, it's certainly absurd to think
272
+
273
+ 03:57.440 --> 03:58.880
274
+ that by understanding the fundamental laws
275
+
276
+ 03:58.880 --> 04:00.400
277
+ of particle physics,
278
+
279
+ 04:00.400 --> 04:02.880
280
+ you get any direct insight on how the brain works.
281
+
282
+ 04:02.880 --> 04:04.360
283
+ But then there's this step
284
+
285
+ 04:04.360 --> 04:06.840
286
+ from the fundamentals of particle physics
287
+
288
+ 04:06.840 --> 04:08.680
289
+ to information processing,
290
+
291
+ 04:08.680 --> 04:10.840
292
+ which a lot of physicists and philosophers
293
+
294
+ 04:10.840 --> 04:12.520
295
+ may be a little bit carelessly take
296
+
297
+ 04:12.520 --> 04:14.680
298
+ when they talk about artificial intelligence.
299
+
300
+ 04:14.680 --> 04:18.080
301
+ Do you think of the universe
302
+
303
+ 04:18.080 --> 04:21.360
304
+ as a kind of a computational device?
305
+
306
+ 04:21.360 --> 04:22.200
307
+ No.
308
+
309
+ 04:22.200 --> 04:24.200
310
+ To be like the honest answer there is no.
311
+
312
+ 04:24.200 --> 04:27.600
313
+ There's a sense in which the universe processes information
314
+
315
+ 04:27.600 --> 04:29.200
316
+ clearly.
317
+
318
+ 04:29.200 --> 04:32.720
319
+ There's a sense in which the universe is like a computer,
320
+
321
+ 04:32.720 --> 04:33.920
322
+ clearly.
323
+
324
+ 04:33.920 --> 04:36.560
325
+ But in some sense, I think,
326
+
327
+ 04:36.560 --> 04:38.560
328
+ I tried to say this once on my blog
329
+
330
+ 04:38.560 --> 04:39.400
331
+ and no one agreed with me,
332
+
333
+ 04:39.400 --> 04:42.400
334
+ but the universe is more like a computation
335
+
336
+ 04:42.400 --> 04:45.080
337
+ than a computer because the universe happens once.
338
+
339
+ 04:45.080 --> 04:46.960
340
+ A computer is a general purpose machine, right?
341
+
342
+ 04:46.960 --> 04:48.680
343
+ You can ask it different questions,
344
+
345
+ 04:48.680 --> 04:50.120
346
+ even a pocket calculator, right?
347
+
348
+ 04:50.120 --> 04:52.960
349
+ And it's set up to answer certain kinds of questions.
350
+
351
+ 04:52.960 --> 04:54.320
352
+ The universe isn't that.
353
+
354
+ 04:54.320 --> 04:57.360
355
+ So information processing happens in the universe,
356
+
357
+ 04:57.360 --> 04:59.120
358
+ but it's not what the universe is.
359
+
360
+ 04:59.120 --> 05:01.560
361
+ And I know your MIT colleague, Seth Lloyd,
362
+
363
+ 05:01.560 --> 05:03.840
364
+ feels very differently about this, right?
365
+
366
+ 05:03.840 --> 05:07.240
367
+ Well, you're thinking of the universe as a closed system.
368
+
369
+ 05:07.240 --> 05:08.080
370
+ I am.
371
+
372
+ 05:08.080 --> 05:11.760
373
+ So what makes a computer more like a PC,
374
+
375
+ 05:13.000 --> 05:14.560
376
+ like a computing machine,
377
+
378
+ 05:14.560 --> 05:17.720
379
+ is that there's a human that comes up to it
380
+
381
+ 05:17.720 --> 05:19.120
382
+ and moves the mouse around,
383
+
384
+ 05:19.120 --> 05:21.680
385
+ so input gives it input.
386
+
387
+ 05:21.680 --> 05:26.320
388
+ And that's why you're saying it's just a computation,
389
+
390
+ 05:26.320 --> 05:29.280
391
+ a deterministic thing that's just unrolling.
392
+
393
+ 05:29.280 --> 05:32.240
394
+ But the immense complexity of it
395
+
396
+ 05:32.240 --> 05:34.440
397
+ is nevertheless like processing.
398
+
399
+ 05:34.440 --> 05:39.440
400
+ There's a state and it changes with rules.
401
+
402
+ 05:40.160 --> 05:41.680
403
+ And there's a sense for a lot of people
404
+
405
+ 05:41.680 --> 05:45.400
406
+ that if the brain operates, the human brain operates
407
+
408
+ 05:45.400 --> 05:46.520
409
+ within that world,
410
+
411
+ 05:46.520 --> 05:49.400
412
+ then it's simply just a small subset of that.
413
+
414
+ 05:49.400 --> 05:52.560
415
+ And so there's no reason we can't build
416
+
417
+ 05:52.560 --> 05:55.600
418
+ arbitrarily great intelligences.
419
+
420
+ 05:55.600 --> 05:56.440
421
+ Yeah.
422
+
423
+ 05:56.440 --> 05:58.720
424
+ Do you think of intelligence in this way?
425
+
426
+ 05:58.720 --> 05:59.640
427
+ Intelligence is tricky.
428
+
429
+ 05:59.640 --> 06:01.720
430
+ I don't have a definition of it offhand.
431
+
432
+ 06:01.720 --> 06:04.640
433
+ So I remember this panel discussion
434
+
435
+ 06:04.640 --> 06:06.240
436
+ that I saw on YouTube, I wasn't there,
437
+
438
+ 06:06.240 --> 06:07.720
439
+ but Seth Lloyd was on the panel.
440
+
441
+ 06:07.720 --> 06:10.560
442
+ And so was Martin Rees, the famous astrophysicist.
443
+
444
+ 06:10.560 --> 06:13.800
445
+ And Seth gave his shtick for why the universe is a computer
446
+
447
+ 06:13.800 --> 06:14.840
448
+ and explained this.
449
+
450
+ 06:14.840 --> 06:19.360
451
+ And Martin Rees said, so what is not a computer?
452
+
453
+ 06:19.360 --> 06:22.000
454
+ And Seth is like, oh, that's a good question.
455
+
456
+ 06:22.000 --> 06:22.840
457
+ I'm not sure.
458
+
459
+ 06:22.840 --> 06:24.960
460
+ Because if you have a sufficiently broad definition
461
+
462
+ 06:24.960 --> 06:28.360
463
+ of what a computer is, then everything is, right?
464
+
465
+ 06:28.360 --> 06:32.160
466
+ And similarly, or the analogy gains force
467
+
468
+ 06:32.160 --> 06:34.560
469
+ when it excludes some things.
470
+
471
+ 06:34.560 --> 06:38.640
472
+ Is the moon going around the earth performing a computation?
473
+
474
+ 06:38.640 --> 06:41.320
475
+ I can come up with definitions in which the answer is yes,
476
+
477
+ 06:41.320 --> 06:43.840
478
+ but it's not a very useful computation.
479
+
480
+ 06:43.840 --> 06:46.120
481
+ I think that it's absolutely helpful
482
+
483
+ 06:46.120 --> 06:49.600
484
+ to think about the universe in certain situations,
485
+
486
+ 06:49.600 --> 06:53.080
487
+ certain contexts, as an information processing device.
488
+
489
+ 06:53.080 --> 06:54.840
490
+ I'm even guilty of writing a paper
491
+
492
+ 06:54.840 --> 06:56.960
493
+ called Quantum Circuit Cosmology, where
494
+
495
+ 06:56.960 --> 06:59.280
496
+ we modeled the whole universe as a quantum circuit.
497
+
498
+ 06:59.280 --> 07:00.320
499
+ As a circuit.
500
+
501
+ 07:00.320 --> 07:01.440
502
+ As a circuit, yeah.
503
+
504
+ 07:01.440 --> 07:02.840
505
+ With qubits kind of thing.
506
+
507
+ 07:02.840 --> 07:05.000
508
+ With qubits, basically, right.
509
+
510
+ 07:05.000 --> 07:07.400
511
+ So in qubits, becoming more and more entangled.
512
+
513
+ 07:07.400 --> 07:09.640
514
+ So do we want to digress a little bit?
515
+
516
+ 07:09.640 --> 07:11.000
517
+ Because this is kind of fun.
518
+
519
+ 07:11.000 --> 07:13.680
520
+ So here's a mystery about the universe
521
+
522
+ 07:13.680 --> 07:16.840
523
+ that is so deep and profound that nobody talks about it.
524
+
525
+ 07:16.840 --> 07:19.040
526
+ Space expands, right?
527
+
528
+ 07:19.040 --> 07:21.880
529
+ And we talk about, in a certain region of space,
530
+
531
+ 07:21.880 --> 07:23.560
532
+ a certain number of degrees of freedom,
533
+
534
+ 07:23.560 --> 07:25.480
535
+ a certain number of ways that the quantum fields
536
+
537
+ 07:25.480 --> 07:28.800
538
+ and the particles in that region can arrange themselves.
539
+
540
+ 07:28.800 --> 07:32.200
541
+ That number of degrees of freedom in a region of space
542
+
543
+ 07:32.200 --> 07:33.800
544
+ is arguably finite.
545
+
546
+ 07:33.800 --> 07:36.640
547
+ We actually don't know how many there are,
548
+
549
+ 07:36.640 --> 07:39.440
550
+ but there's a very good argument that says it's a finite number.
551
+
552
+ 07:39.440 --> 07:44.920
553
+ So as the universe expands and space gets bigger,
554
+
555
+ 07:44.920 --> 07:46.560
556
+ are there more degrees of freedom?
557
+
558
+ 07:46.560 --> 07:48.520
559
+ If it's an infinite number, it doesn't really matter.
560
+
561
+ 07:48.520 --> 07:50.000
562
+ Infinity times 2 is still infinity.
563
+
564
+ 07:50.000 --> 07:53.160
565
+ But if it's a finite number, then there's more space,
566
+
567
+ 07:53.160 --> 07:54.480
568
+ so there's more degrees of freedom.
569
+
570
+ 07:54.480 --> 07:55.760
571
+ So where did they come from?
572
+
573
+ 07:55.760 --> 07:58.000
574
+ That would mean the universe is not a closed system.
575
+
576
+ 07:58.000 --> 08:01.520
577
+ There's more degrees of freedom popping into existence.
578
+
579
+ 08:01.520 --> 08:05.320
580
+ So what we suggested was that there are more degrees of freedom.
581
+
582
+ 08:05.320 --> 08:07.960
583
+ And it's not that they're not there to start,
584
+
585
+ 08:07.960 --> 08:10.880
586
+ but they're not entangled to start.
587
+
588
+ 08:10.880 --> 08:12.800
589
+ So the universe that you and I know of,
590
+
591
+ 08:12.800 --> 08:15.440
592
+ the three dimensions around us that we see,
593
+
594
+ 08:15.440 --> 08:18.080
595
+ we said those are the entangled degrees of freedom
596
+
597
+ 08:18.080 --> 08:19.640
598
+ making up space time.
599
+
600
+ 08:19.640 --> 08:22.640
601
+ As the universe expands, there are a whole bunch of qubits
602
+
603
+ 08:22.640 --> 08:26.840
604
+ in their zero state that become entangled
605
+
606
+ 08:26.840 --> 08:28.720
607
+ with the rest of space time through the action
608
+
609
+ 08:28.720 --> 08:31.200
610
+ of these quantum circuits.
611
+
612
+ 08:31.200 --> 08:37.080
613
+ So what does it mean that there's now more degrees of freedom
614
+
615
+ 08:37.080 --> 08:39.280
616
+ as they become more entangled?
617
+
618
+ 08:39.280 --> 08:40.280
619
+ Yeah.
620
+
621
+ 08:40.280 --> 08:41.640
622
+ As the universe expands.
623
+
624
+ 08:41.640 --> 08:41.960
625
+ That's right.
626
+
627
+ 08:41.960 --> 08:43.280
628
+ So there's more and more degrees of freedom
629
+
630
+ 08:43.280 --> 08:47.320
631
+ that are entangled, that are playing the role of part
632
+
633
+ 08:47.320 --> 08:49.600
634
+ of the entangled space time structure.
635
+
636
+ 08:49.600 --> 08:53.320
637
+ So the underlying philosophy is that space time itself
638
+
639
+ 08:53.320 --> 08:55.600
640
+ arises from the entanglement of some fundamental quantum
641
+
642
+ 08:55.600 --> 08:57.680
643
+ degrees of freedom.
644
+
645
+ 08:57.680 --> 08:58.280
646
+ Wow.
647
+
648
+ 08:58.280 --> 08:59.780
649
+ OK.
650
+
651
+ 08:59.780 --> 09:05.200
652
+ At which point is most of the entanglement happening?
653
+
654
+ 09:05.200 --> 09:07.400
655
+ Are we talking about close to the Big Bang?
656
+
657
+ 09:07.400 --> 09:11.840
658
+ Are we talking about throughout the time of the life of the
659
+
660
+ 09:11.840 --> 09:12.340
661
+ universe?
662
+
663
+ 09:12.340 --> 09:12.840
664
+ Yeah.
665
+
666
+ 09:12.840 --> 09:15.080
667
+ So the idea is that at the Big Bang,
668
+
669
+ 09:15.080 --> 09:17.760
670
+ almost all the degrees of freedom that the universe could
671
+
672
+ 09:17.760 --> 09:22.400
673
+ have were there, but they were unentangled with anything else.
674
+
675
+ 09:22.400 --> 09:23.840
676
+ And that's a reflection of the fact
677
+
678
+ 09:23.840 --> 09:25.560
679
+ that the Big Bang had a low entropy.
680
+
681
+ 09:25.560 --> 09:28.080
682
+ It was a very simple, very small place.
683
+
684
+ 09:28.080 --> 09:31.360
685
+ And as space expands, more and more degrees of freedom
686
+
687
+ 09:31.360 --> 09:34.240
688
+ become entangled with the rest of the world.
689
+
690
+ 09:34.240 --> 09:35.960
691
+ Well, I have to ask, Sean Carroll,
692
+
693
+ 09:35.960 --> 09:38.160
694
+ what do you think of the thought experiment from Nick
695
+
696
+ 09:38.160 --> 09:41.560
697
+ Bostrom that we're living in a simulation?
698
+
699
+ 09:41.560 --> 09:44.880
700
+ So I think let me contextualize that a little bit more.
701
+
702
+ 09:44.880 --> 09:48.320
703
+ I think people don't actually take this thought experiment.
704
+
705
+ 09:48.320 --> 09:50.360
706
+ I think it's quite interesting.
707
+
708
+ 09:50.360 --> 09:52.880
709
+ It's not very useful, but it's quite interesting.
710
+
711
+ 09:52.880 --> 09:55.440
712
+ From the perspective of AI, a lot of the learning
713
+
714
+ 09:55.440 --> 09:59.280
715
+ that can be done usually happens in simulation,
716
+
717
+ 09:59.280 --> 10:01.440
718
+ artificial examples.
719
+
720
+ 10:01.440 --> 10:03.040
721
+ And so it's a constructive question
722
+
723
+ 10:03.040 --> 10:09.360
724
+ to ask how difficult is our real world to simulate,
725
+
726
+ 10:09.360 --> 10:12.400
727
+ which is kind of a dual part of, if we're
728
+
729
+ 10:12.400 --> 10:16.400
730
+ living in a simulation and somebody built that simulation,
731
+
732
+ 10:16.400 --> 10:18.840
733
+ if you were to try to do it yourself, how hard would it be?
734
+
735
+ 10:18.840 --> 10:21.080
736
+ So obviously, we could be living in a simulation.
737
+
738
+ 10:21.080 --> 10:22.960
739
+ If you just want the physical possibility,
740
+
741
+ 10:22.960 --> 10:25.360
742
+ then I completely agree that it's physically possible.
743
+
744
+ 10:25.360 --> 10:27.360
745
+ I don't think that we actually are.
746
+
747
+ 10:27.360 --> 10:31.880
748
+ So take this one piece of data into consideration.
749
+
750
+ 10:31.880 --> 10:35.080
751
+ We live in a big universe.
752
+
753
+ 10:35.080 --> 10:38.480
754
+ There's two trillion galaxies in our observable universe
755
+
756
+ 10:38.480 --> 10:41.640
757
+ with 200 billion stars in each galaxy, et cetera.
758
+
759
+ 10:41.640 --> 10:44.920
760
+ It would seem to be a waste of resources
761
+
762
+ 10:44.920 --> 10:47.600
763
+ to have a universe that big going on just to do a simulation.
764
+
765
+ 10:47.600 --> 10:50.120
766
+ So in other words, I want to be a good Bayesian.
767
+
768
+ 10:50.120 --> 10:54.920
769
+ I want to ask, under this hypothesis, what do I expect to see?
770
+
771
+ 10:54.920 --> 10:56.080
772
+ So the first thing I would say is I
773
+
774
+ 10:56.080 --> 11:00.280
775
+ wouldn't expect to see a universe that was that big.
776
+
777
+ 11:00.280 --> 11:02.560
778
+ The second thing is I wouldn't expect the resolution
779
+
780
+ 11:02.560 --> 11:05.000
781
+ of the universe to be as good as it is.
782
+
783
+ 11:05.000 --> 11:08.960
784
+ So it's always possible that if our superhuman simulators only
785
+
786
+ 11:08.960 --> 11:10.840
787
+ have finite resources that they don't render
788
+
789
+ 11:10.840 --> 11:14.360
790
+ the entire universe, that the part that is out there,
791
+
792
+ 11:14.360 --> 11:17.040
793
+ the two trillion galaxies, isn't actually
794
+
795
+ 11:17.040 --> 11:19.600
796
+ being simulated fully.
797
+
798
+ 11:19.600 --> 11:22.720
799
+ But then the obvious extrapolation of that
800
+
801
+ 11:22.720 --> 11:25.640
802
+ is that only I am being simulated fully.
803
+
804
+ 11:25.640 --> 11:29.240
805
+ The rest of you are just nonplayer characters.
806
+
807
+ 11:29.240 --> 11:30.520
808
+ I'm the only thing that is real.
809
+
810
+ 11:30.520 --> 11:32.720
811
+ The rest of you are just chatbots.
812
+
813
+ 11:32.720 --> 11:34.320
814
+ Beyond this wall, I see the wall,
815
+
816
+ 11:34.320 --> 11:37.360
817
+ but there is literally nothing on the other side of the wall.
818
+
819
+ 11:37.360 --> 11:39.000
820
+ That is sort of the Bayesian prediction.
821
+
822
+ 11:39.000 --> 11:40.400
823
+ That's what it would be like to do
824
+
825
+ 11:40.400 --> 11:42.240
826
+ an efficient simulation of me.
827
+
828
+ 11:42.240 --> 11:45.760
829
+ So none of that seems quite realistic.
830
+
831
+ 11:45.760 --> 11:50.880
832
+ I don't see, I hear the argument that it's just possible
833
+
834
+ 11:50.880 --> 11:53.280
835
+ and easy to simulate lots of things.
836
+
837
+ 11:53.280 --> 11:57.280
838
+ I don't see any evidence from what we know about our universe
839
+
840
+ 11:57.280 --> 11:59.280
841
+ that we look like a simulated universe.
842
+
843
+ 11:59.280 --> 12:01.120
844
+ Now, maybe you can say, well, we don't know what it would
845
+
846
+ 12:01.120 --> 12:03.000
847
+ look like, but that's just abandoning
848
+
849
+ 12:03.000 --> 12:04.520
850
+ your Bayesian responsibilities.
851
+
852
+ 12:04.520 --> 12:07.680
853
+ Like your job is to say, under this theory,
854
+
855
+ 12:07.680 --> 12:09.480
856
+ here's what you would expect to see.
857
+
858
+ 12:09.480 --> 12:11.680
859
+ Yeah, so certainly if you think about a simulation
860
+
861
+ 12:11.680 --> 12:16.680
862
+ as a thing that's like a video game where only a small subset
863
+
864
+ 12:16.680 --> 12:22.880
865
+ is being readied, but say all the laws of physics,
866
+
867
+ 12:22.880 --> 12:26.560
868
+ the entire closed system of the quote unquote universe,
869
+
870
+ 12:26.560 --> 12:27.800
871
+ it had a creator.
872
+
873
+ 12:27.800 --> 12:29.640
874
+ Yeah, it's always possible.
875
+
876
+ 12:29.640 --> 12:32.280
877
+ So that's not useful to think about
878
+
879
+ 12:32.280 --> 12:34.040
880
+ when you're thinking about physics.
881
+
882
+ 12:34.040 --> 12:38.080
883
+ The way Nick Bostrom phrases it, if it's possible
884
+
885
+ 12:38.080 --> 12:40.520
886
+ to simulate a universe, eventually we'll do it.
887
+
888
+ 12:40.520 --> 12:42.720
889
+ Right.
890
+
891
+ 12:42.720 --> 12:45.560
892
+ You can use that, by the way, for a lot of things.
893
+
894
+ 12:45.560 --> 12:49.840
895
+ But I guess the question is, how hard is it
896
+
897
+ 12:49.840 --> 12:52.320
898
+ to create a universe?
899
+
900
+ 12:52.320 --> 12:53.800
901
+ I wrote a little blog post about this,
902
+
903
+ 12:53.800 --> 12:55.440
904
+ and maybe I'm missing something.
905
+
906
+ 12:55.440 --> 12:57.680
907
+ But there's an argument that says not only
908
+
909
+ 12:57.680 --> 13:00.480
910
+ that it might be possible to simulate a universe,
911
+
912
+ 13:00.480 --> 13:05.400
913
+ but probably, if you imagine that you actually
914
+
915
+ 13:05.400 --> 13:07.320
916
+ attribute consciousness and agency
917
+
918
+ 13:07.320 --> 13:09.920
919
+ to the little things that we're simulating,
920
+
921
+ 13:09.920 --> 13:12.400
922
+ to our little artificial beings, there's probably
923
+
924
+ 13:12.400 --> 13:15.000
925
+ a lot more of them than there are ordinary organic beings
926
+
927
+ 13:15.000 --> 13:17.400
928
+ in the universe, or there will be in the future.
929
+
930
+ 13:17.400 --> 13:19.600
931
+ So there's an argument that not only is being a simulation
932
+
933
+ 13:19.600 --> 13:23.520
934
+ possible, it's probable, because in the space
935
+
936
+ 13:23.520 --> 13:25.480
937
+ of all living consciousnesses, most of them
938
+
939
+ 13:25.480 --> 13:26.600
940
+ are being simulated.
941
+
942
+ 13:26.600 --> 13:28.840
943
+ Most of them are not at the top level.
944
+
945
+ 13:28.840 --> 13:30.520
946
+ I think that argument must be wrong,
947
+
948
+ 13:30.520 --> 13:34.080
949
+ because it follows from that argument that if we're simulated,
950
+
951
+ 13:34.080 --> 13:36.880
952
+ but we can also simulate other things.
953
+
954
+ 13:36.880 --> 13:38.800
955
+ Well, but if we can simulate other things,
956
+
957
+ 13:38.800 --> 13:41.800
958
+ they can simulate other things.
959
+
960
+ 13:41.800 --> 13:44.280
961
+ If we give them enough power and resolution,
962
+
963
+ 13:44.280 --> 13:46.000
964
+ and ultimately, we'll reach a bottom,
965
+
966
+ 13:46.000 --> 13:47.800
967
+ because the laws of physics in our universe
968
+
969
+ 13:47.800 --> 13:51.120
970
+ have a bottom, we're made of atoms and so forth.
971
+
972
+ 13:51.120 --> 13:55.080
973
+ So there will be the cheapest possible simulations.
974
+
975
+ 13:55.080 --> 13:57.680
976
+ And if you believe the original argument,
977
+
978
+ 13:57.680 --> 13:59.920
979
+ you should conclude that we should be in the cheapest
980
+
981
+ 13:59.920 --> 14:02.560
982
+ possible simulation, because that's where most people are.
983
+
984
+ 14:02.560 --> 14:03.640
985
+ But we don't look like that.
986
+
987
+ 14:03.640 --> 14:06.880
988
+ It doesn't look at all like we're at the edge of resolution,
989
+
990
+ 14:06.880 --> 14:09.960
991
+ that we're 16 bit things.
992
+
993
+ 14:09.960 --> 14:12.840
994
+ It seems much easier to make much lower level things
995
+
996
+ 14:12.840 --> 14:14.160
997
+ than we are.
998
+
999
+ 14:14.160 --> 14:18.200
1000
+ So, and also, I question the whole approach
1001
+
1002
+ 14:18.200 --> 14:19.840
1003
+ to the anthropic principle that says
1004
+
1005
+ 14:19.840 --> 14:22.320
1006
+ we are typical observers in the universe.
1007
+
1008
+ 14:22.320 --> 14:23.640
1009
+ I think that that's not actually,
1010
+
1011
+ 14:23.640 --> 14:27.320
1012
+ I think that there's a lot of selection that we can do
1013
+
1014
+ 14:27.320 --> 14:30.120
1015
+ that were typical within things we already know,
1016
+
1017
+ 14:30.120 --> 14:32.240
1018
+ but not typical within all the universe.
1019
+
1020
+ 14:32.240 --> 14:35.760
1021
+ So do you think there is intelligent life,
1022
+
1023
+ 14:35.760 --> 14:37.800
1024
+ however you would like to define intelligent life
1025
+
1026
+ 14:37.800 --> 14:39.920
1027
+ out there in the universe?
1028
+
1029
+ 14:39.920 --> 14:44.640
1030
+ My guess is that there is not intelligent life
1031
+
1032
+ 14:44.640 --> 14:46.840
1033
+ in the observable universe other than us.
1034
+
1035
+ 14:48.320 --> 14:52.480
1036
+ Simply on the basis of the fact that the likely number
1037
+
1038
+ 14:52.480 --> 14:56.320
1039
+ of other intelligent species in the observable universe,
1040
+
1041
+ 14:56.320 --> 15:00.280
1042
+ there's two likely numbers, zero or billions.
1043
+
1044
+ 15:01.480 --> 15:02.560
1045
+ And if there had been billions,
1046
+
1047
+ 15:02.560 --> 15:04.000
1048
+ you would have noticed already.
1049
+
1050
+ 15:05.040 --> 15:07.320
1051
+ For there to be literally like a small number,
1052
+
1053
+ 15:07.320 --> 15:12.320
1054
+ like Star Trek, there's a dozen intelligent civilizations
1055
+
1056
+ 15:12.440 --> 15:15.040
1057
+ in our galaxy, but not a billion.
1058
+
1059
+ 15:16.240 --> 15:18.480
1060
+ That's weird, that's sort of bizarre to me.
1061
+
1062
+ 15:18.480 --> 15:21.000
1063
+ It's easy for me to imagine that there are zero others
1064
+
1065
+ 15:21.000 --> 15:22.600
1066
+ because there's just a big bottleneck
1067
+
1068
+ 15:22.600 --> 15:24.960
1069
+ to making multicellular life
1070
+
1071
+ 15:24.960 --> 15:27.040
1072
+ or technological life or whatever.
1073
+
1074
+ 15:27.040 --> 15:28.560
1075
+ It's very hard for me to imagine
1076
+
1077
+ 15:28.560 --> 15:30.160
1078
+ that there's a whole bunch out there
1079
+
1080
+ 15:30.160 --> 15:32.280
1081
+ that have somehow remained hidden from us.
1082
+
1083
+ 15:32.280 --> 15:34.880
1084
+ The question I'd like to ask is,
1085
+
1086
+ 15:34.880 --> 15:37.240
1087
+ what would intelligent life look like?
1088
+
1089
+ 15:37.240 --> 15:41.120
1090
+ What I mean by that question and where it's going is,
1091
+
1092
+ 15:41.120 --> 15:45.120
1093
+ what if intelligent life is just fundamentally,
1094
+
1095
+ 15:45.120 --> 15:49.120
1096
+ in some very big ways, different than the one
1097
+
1098
+ 15:49.120 --> 15:51.480
1099
+ that has on Earth.
1100
+
1101
+ 15:51.480 --> 15:53.880
1102
+ That there's all kinds of intelligent life
1103
+
1104
+ 15:53.880 --> 15:57.560
1105
+ that operates at different scales of both size and temporal.
1106
+
1107
+ 15:57.560 --> 15:59.280
1108
+ That's a great possibility
1109
+
1110
+ 15:59.280 --> 16:00.800
1111
+ because I think we should be humble
1112
+
1113
+ 16:00.800 --> 16:02.640
1114
+ about what intelligence is, what life is.
1115
+
1116
+ 16:02.640 --> 16:04.040
1117
+ We don't even agree on what life is,
1118
+
1119
+ 16:04.040 --> 16:06.040
1120
+ much less what intelligent life is, right?
1121
+
1122
+ 16:06.040 --> 16:08.200
1123
+ So that's an argument for humility,
1124
+
1125
+ 16:08.200 --> 16:10.080
1126
+ saying there could be intelligent life
1127
+
1128
+ 16:10.080 --> 16:12.800
1129
+ of a very different character, right?
1130
+
1131
+ 16:12.800 --> 16:17.240
1132
+ You could imagine that dolphins are intelligent
1133
+
1134
+ 16:17.240 --> 16:19.760
1135
+ but never invent space travel
1136
+
1137
+ 16:19.760 --> 16:20.760
1138
+ because they live in the ocean
1139
+
1140
+ 16:20.760 --> 16:22.760
1141
+ and they don't have thumbs, right?
1142
+
1143
+ 16:22.760 --> 16:25.840
1144
+ So they never invent technology, they never invent smelting.
1145
+
1146
+ 16:26.840 --> 16:31.200
1147
+ Maybe the universe is full of intelligent species
1148
+
1149
+ 16:31.200 --> 16:33.200
1150
+ that just don't make technology, right?
1151
+
1152
+ 16:33.200 --> 16:35.440
1153
+ That's compatible with the data, I think.
1154
+
1155
+ 16:35.440 --> 16:38.560
1156
+ And I think maybe what you're pointing at
1157
+
1158
+ 16:38.560 --> 16:42.560
1159
+ is even more out there versions of intelligence,
1160
+
1161
+ 16:42.560 --> 16:46.240
1162
+ you know, intelligence in intermolecular clouds
1163
+
1164
+ 16:46.240 --> 16:48.160
1165
+ or on the surface of a neutron star
1166
+
1167
+ 16:48.160 --> 16:50.360
1168
+ or in between the galaxies in giant things
1169
+
1170
+ 16:50.360 --> 16:52.840
1171
+ where the equivalent of a heartbeat is 100 million years.
1172
+
1173
+ 16:54.840 --> 16:56.760
1174
+ On the one hand, yes,
1175
+
1176
+ 16:56.760 --> 16:58.560
1177
+ we should be very open minded about those things.
1178
+
1179
+ 16:58.560 --> 17:03.560
1180
+ On the other hand, we all of us share the same laws of physics.
1181
+
1182
+ 17:03.560 --> 17:07.040
1183
+ There might be something about the laws of physics
1184
+
1185
+ 17:07.040 --> 17:08.560
1186
+ even though we don't currently know exactly
1187
+
1188
+ 17:08.560 --> 17:12.560
1189
+ what that thing would be that makes meters
1190
+
1191
+ 17:12.560 --> 17:16.560
1192
+ and years the right length and time scales
1193
+
1194
+ 17:16.560 --> 17:19.560
1195
+ for intelligent life, maybe not.
1196
+
1197
+ 17:19.560 --> 17:22.560
1198
+ But we're made of atoms, atoms have a certain size,
1199
+
1200
+ 17:22.560 --> 17:25.560
1201
+ we orbit stars, our stars have a certain lifetime.
1202
+
1203
+ 17:25.560 --> 17:28.560
1204
+ It's not impossible to me that there's a sweet spot
1205
+
1206
+ 17:28.560 --> 17:30.560
1207
+ for intelligent life that we find ourselves in.
1208
+
1209
+ 17:30.560 --> 17:33.560
1210
+ So I'm open minded either way, I'm open minded either being humble
1211
+
1212
+ 17:33.560 --> 17:35.560
1213
+ and there's all sorts of different kinds of life
1214
+
1215
+ 17:35.560 --> 17:37.560
1216
+ or no, there's a reason we just don't know it yet
1217
+
1218
+ 17:37.560 --> 17:40.560
1219
+ why life like ours is the kind of life that's out there.
1220
+
1221
+ 17:40.560 --> 17:43.560
1222
+ Yeah, I'm of two minds too, but I often wonder
1223
+
1224
+ 17:43.560 --> 17:48.560
1225
+ if our brains are just designed to, quite obviously,
1226
+
1227
+ 17:48.560 --> 17:53.560
1228
+ to operate and see the world on these time scales.
1229
+
1230
+ 17:53.560 --> 17:57.560
1231
+ And we're almost blind and the tools we've created
1232
+
1233
+ 17:57.560 --> 18:01.560
1234
+ for detecting things are blind to the kind of observation
1235
+
1236
+ 18:01.560 --> 18:04.560
1237
+ needed to see intelligent life at other scales.
1238
+
1239
+ 18:04.560 --> 18:06.560
1240
+ Well, I'm totally open to that,
1241
+
1242
+ 18:06.560 --> 18:08.560
1243
+ but so here's another argument I would make.
1244
+
1245
+ 18:08.560 --> 18:10.560
1246
+ We have looked for intelligent life,
1247
+
1248
+ 18:10.560 --> 18:13.560
1249
+ but we've looked for it in the dumbest way we can
1250
+
1251
+ 18:13.560 --> 18:15.560
1252
+ by turning radio telescopes to the sky.
1253
+
1254
+ 18:15.560 --> 18:20.560
1255
+ And why in the world would a super advanced civilization
1256
+
1257
+ 18:20.560 --> 18:23.560
1258
+ randomly beam out radio signals wastefully
1259
+
1260
+ 18:23.560 --> 18:25.560
1261
+ in all directions into the universe?
1262
+
1263
+ 18:25.560 --> 18:28.560
1264
+ It just doesn't make any sense, especially because
1265
+
1266
+ 18:28.560 --> 18:30.560
1267
+ in order to think that you would actually contact
1268
+
1269
+ 18:30.560 --> 18:33.560
1270
+ another civilization, you would have to do it forever.
1271
+
1272
+ 18:33.560 --> 18:35.560
1273
+ You have to keep doing it for millions of years.
1274
+
1275
+ 18:35.560 --> 18:37.560
1276
+ That sounds like a waste of resources.
1277
+
1278
+ 18:37.560 --> 18:42.560
1279
+ If you thought that there were other solar systems
1280
+
1281
+ 18:42.560 --> 18:45.560
1282
+ with planets around them where maybe intelligent life
1283
+
1284
+ 18:45.560 --> 18:48.560
1285
+ didn't yet exist, but might someday,
1286
+
1287
+ 18:48.560 --> 18:51.560
1288
+ you wouldn't try to talk to it with radio waves.
1289
+
1290
+ 18:51.560 --> 18:53.560
1291
+ You would send a spacecraft out there
1292
+
1293
+ 18:53.560 --> 18:55.560
1294
+ and you would park it around there.
1295
+
1296
+ 18:55.560 --> 18:57.560
1297
+ And it would be like, from our point of view,
1298
+
1299
+ 18:57.560 --> 19:00.560
1300
+ it would be like 2001 where there was a monolith.
1301
+
1302
+ 19:00.560 --> 19:02.560
1303
+ There could be an artifact.
1304
+
1305
+ 19:02.560 --> 19:04.560
1306
+ In fact, the other way works also, right?
1307
+
1308
+ 19:04.560 --> 19:07.560
1309
+ There could be artifacts in our solar system
1310
+
1311
+ 19:07.560 --> 19:11.560
1312
+ that have been put there by other technologically advanced
1313
+
1314
+ 19:11.560 --> 19:14.560
1315
+ civilizations, and that's how we will eventually contact them.
1316
+
1317
+ 19:14.560 --> 19:16.560
1318
+ We just haven't explored the solar system well enough yet
1319
+
1320
+ 19:16.560 --> 19:18.560
1321
+ to find them.
1322
+
1323
+ 19:18.560 --> 19:20.560
1324
+ The reason why we don't think about that is because
1325
+
1326
+ 19:20.560 --> 19:21.560
1327
+ we're young and impatient, right?
1328
+
1329
+ 19:21.560 --> 19:23.560
1330
+ It's like it would take more than my lifetime
1331
+
1332
+ 19:23.560 --> 19:25.560
1333
+ to actually send something to another star system
1334
+
1335
+ 19:25.560 --> 19:27.560
1336
+ and wait for it and then come back.
1337
+
1338
+ 19:27.560 --> 19:30.560
1339
+ But if we start thinking on hundreds of thousands of years
1340
+
1341
+ 19:30.560 --> 19:32.560
1342
+ or a million year time scales,
1343
+
1344
+ 19:32.560 --> 19:34.560
1345
+ that's clearly the right thing to do.
1346
+
1347
+ 19:34.560 --> 19:38.560
1348
+ Are you excited by the thing that Elon Musk is doing with SpaceX
1349
+
1350
+ 19:38.560 --> 19:41.560
1351
+ in general, but the idea of space exploration,
1352
+
1353
+ 19:41.560 --> 19:45.560
1354
+ even though your species is young and impatient?
1355
+
1356
+ 19:45.560 --> 19:50.560
1357
+ No, I do think that space travel is crucially important, long term.
1358
+
1359
+ 19:50.560 --> 19:52.560
1360
+ Even to other star systems.
1361
+
1362
+ 19:52.560 --> 19:57.560
1363
+ And I think that many people overestimate the difficulty
1364
+
1365
+ 19:57.560 --> 20:00.560
1366
+ because they say, look, if you travel 1% the speed of light
1367
+
1368
+ 20:00.560 --> 20:03.560
1369
+ to another star system, we'll be dead before we get there, right?
1370
+
1371
+ 20:03.560 --> 20:05.560
1372
+ And I think that it's much easier.
1373
+
1374
+ 20:05.560 --> 20:07.560
1375
+ And therefore, when they write their science fiction stories,
1376
+
1377
+ 20:07.560 --> 20:09.560
1378
+ they imagine we'd go faster than the speed of light
1379
+
1380
+ 20:09.560 --> 20:11.560
1381
+ because otherwise they're too impatient, right?
1382
+
1383
+ 20:11.560 --> 20:13.560
1384
+ We're not going to go faster than the speed of light,
1385
+
1386
+ 20:13.560 --> 20:15.560
1387
+ but we could easily imagine that the human lifespan
1388
+
1389
+ 20:15.560 --> 20:17.560
1390
+ gets extended to thousands of years.
1391
+
1392
+ 20:17.560 --> 20:19.560
1393
+ And once you do that, then the stars are much closer.
1394
+
1395
+ 20:19.560 --> 20:20.560
1396
+ Effectively, right?
1397
+
1398
+ 20:20.560 --> 20:22.560
1399
+ What's 100 year trip, right?
1400
+
1401
+ 20:22.560 --> 20:26.560
1402
+ So I think that that's going to be the future, the far future,
1403
+
1404
+ 20:26.560 --> 20:29.560
1405
+ not my lifetime once again, but baby steps.
1406
+
1407
+ 20:29.560 --> 20:31.560
1408
+ Unless your lifetime gets extended.
1409
+
1410
+ 20:31.560 --> 20:33.560
1411
+ Well, it's in a race against time, right?
1412
+
1413
+ 20:33.560 --> 20:37.560
1414
+ A friend of mine who actually thinks about these things said,
1415
+
1416
+ 20:37.560 --> 20:39.560
1417
+ you know, you and I are going to die,
1418
+
1419
+ 20:39.560 --> 20:42.560
1420
+ but I don't know about our grandchildren.
1421
+
1422
+ 20:42.560 --> 20:45.560
1423
+ I don't know, predicting the future is hard,
1424
+
1425
+ 20:45.560 --> 20:47.560
1426
+ but that's the least plausible scenario.
1427
+
1428
+ 20:47.560 --> 20:51.560
1429
+ And so, yeah, no, I think that as we discussed earlier,
1430
+
1431
+ 20:51.560 --> 20:56.560
1432
+ there are threats to the earth, known and unknown, right?
1433
+
1434
+ 20:56.560 --> 21:02.560
1435
+ Having spread humanity and biology elsewhere
1436
+
1437
+ 21:02.560 --> 21:04.560
1438
+ is a really important longterm goal.
1439
+
1440
+ 21:04.560 --> 21:08.560
1441
+ What kind of questions can science not currently answer,
1442
+
1443
+ 21:08.560 --> 21:11.560
1444
+ but might soon?
1445
+
1446
+ 21:11.560 --> 21:14.560
1447
+ When you think about the problems and the mysteries before us,
1448
+
1449
+ 21:14.560 --> 21:17.560
1450
+ that may be within reach of science.
1451
+
1452
+ 21:17.560 --> 21:19.560
1453
+ I think an obvious one is the origin of life.
1454
+
1455
+ 21:19.560 --> 21:21.560
1456
+ We don't know how that happened.
1457
+
1458
+ 21:21.560 --> 21:24.560
1459
+ There's a difficulty in knowing how it happened historically,
1460
+
1461
+ 21:24.560 --> 21:26.560
1462
+ actually, you know, literally on earth,
1463
+
1464
+ 21:26.560 --> 21:29.560
1465
+ but starting life from nonlife
1466
+
1467
+ 21:29.560 --> 21:32.560
1468
+ is something I kind of think we're close to, right?
1469
+
1470
+ 21:32.560 --> 21:33.560
1471
+ You really think so?
1472
+
1473
+ 21:33.560 --> 21:35.560
1474
+ Like, how difficult is it to start life?
1475
+
1476
+ 21:35.560 --> 21:36.560
1477
+ I do.
1478
+
1479
+ 21:36.560 --> 21:40.560
1480
+ Well, I've talked to people, including on the podcast, about this.
1481
+
1482
+ 21:40.560 --> 21:42.560
1483
+ You know, life requires three things.
1484
+
1485
+ 21:42.560 --> 21:44.560
1486
+ Life as we know it.
1487
+
1488
+ 21:44.560 --> 21:46.560
1489
+ There's a difference between life, who knows what it is,
1490
+
1491
+ 21:46.560 --> 21:47.560
1492
+ and life as we know it,
1493
+
1494
+ 21:47.560 --> 21:50.560
1495
+ which we can talk about with some intelligence.
1496
+
1497
+ 21:50.560 --> 21:53.560
1498
+ Life as we know it requires compartmentalization.
1499
+
1500
+ 21:53.560 --> 21:56.560
1501
+ You need a little membrane around your cell.
1502
+
1503
+ 21:56.560 --> 21:58.560
1504
+ Metabolism, you need to take in food and eat it
1505
+
1506
+ 21:58.560 --> 22:00.560
1507
+ and let that make you do things.
1508
+
1509
+ 22:00.560 --> 22:02.560
1510
+ And then replication.
1511
+
1512
+ 22:02.560 --> 22:04.560
1513
+ You need to have some information about who you are,
1514
+
1515
+ 22:04.560 --> 22:07.560
1516
+ that you pass down to future generations.
1517
+
1518
+ 22:07.560 --> 22:11.560
1519
+ In the lab, compartmentalization seems pretty easy,
1520
+
1521
+ 22:11.560 --> 22:13.560
1522
+ not hard to make lipid bilayers
1523
+
1524
+ 22:13.560 --> 22:16.560
1525
+ that come into little cellular walls pretty easily.
1526
+
1527
+ 22:16.560 --> 22:19.560
1528
+ Metabolism and replication are hard,
1529
+
1530
+ 22:19.560 --> 22:21.560
1531
+ but replication we're close to.
1532
+
1533
+ 22:21.560 --> 22:25.560
1534
+ People have made RNA like molecules in the lab that...
1535
+
1536
+ 22:25.560 --> 22:28.560
1537
+ I think the state of the art is
1538
+
1539
+ 22:28.560 --> 22:31.560
1540
+ they're not able to make one molecule that reproduces itself,
1541
+
1542
+ 22:31.560 --> 22:34.560
1543
+ but they're able to make two molecules that reproduce each other.
1544
+
1545
+ 22:34.560 --> 22:37.560
1546
+ So that's okay. That's pretty close.
1547
+
1548
+ 22:37.560 --> 22:40.560
1549
+ Metabolism is harder, believe it or not,
1550
+
1551
+ 22:40.560 --> 22:42.560
1552
+ even though it's sort of the most obvious thing,
1553
+
1554
+ 22:42.560 --> 22:44.560
1555
+ but you want some sort of controlled metabolism
1556
+
1557
+ 22:44.560 --> 22:48.560
1558
+ and the actual cellular machinery in our bodies is quite complicated.
1559
+
1560
+ 22:48.560 --> 22:51.560
1561
+ It's hard to see it just popping into existence all by itself.
1562
+
1563
+ 22:51.560 --> 22:53.560
1564
+ It probably took a while.
1565
+
1566
+ 22:53.560 --> 22:55.560
1567
+ But we're making progress.
1568
+
1569
+ 22:55.560 --> 22:58.560
1570
+ In fact, I don't think we're spending nearly enough money on it.
1571
+
1572
+ 22:58.560 --> 23:01.560
1573
+ If I were the NSF, I would flood this area with money
1574
+
1575
+ 23:01.560 --> 23:04.560
1576
+ because it would change our view of the world
1577
+
1578
+ 23:04.560 --> 23:06.560
1579
+ if we could actually make life in the lab
1580
+
1581
+ 23:06.560 --> 23:09.560
1582
+ and understand how it was made originally here on Earth.
1583
+
1584
+ 23:09.560 --> 23:11.560
1585
+ I'm sure it would have some ripple effects
1586
+
1587
+ 23:11.560 --> 23:13.560
1588
+ that help cure diseases and so on.
1589
+
1590
+ 23:13.560 --> 23:15.560
1591
+ That's right.
1592
+
1593
+ 23:15.560 --> 23:18.560
1594
+ Synthetic biology is a wonderful big frontier where we're making cells.
1595
+
1596
+ 23:18.560 --> 23:21.560
1597
+ Right now, the best way to do that
1598
+
1599
+ 23:21.560 --> 23:23.560
1600
+ is to borrow heavily from existing biology.
1601
+
1602
+ 23:23.560 --> 23:26.560
1603
+ Craig Venter several years ago created an artificial cell,
1604
+
1605
+ 23:26.560 --> 23:28.560
1606
+ but all he did was...
1607
+
1608
+ 23:28.560 --> 23:30.560
1609
+ not all he did, it was a tremendous accomplishment,
1610
+
1611
+ 23:30.560 --> 23:33.560
1612
+ but all he did was take out the DNA from a cell
1613
+
1614
+ 23:33.560 --> 23:36.560
1615
+ and put in entirely new DNA and let it boot up and go.
1616
+
1617
+ 23:36.560 --> 23:43.560
1618
+ What about the leap to creating intelligent life on Earth?
1619
+
1620
+ 23:43.560 --> 23:45.560
1621
+ However, again, we define intelligence, of course,
1622
+
1623
+ 23:45.560 --> 23:49.560
1624
+ but let's just even say homo sapiens,
1625
+
1626
+ 23:49.560 --> 23:54.560
1627
+ the modern intelligence in our human brain.
1628
+
1629
+ 23:54.560 --> 23:58.560
1630
+ Do you have a sense of what's involved in that leap
1631
+
1632
+ 23:58.560 --> 24:00.560
1633
+ and how big of a leap that is?
1634
+
1635
+ 24:00.560 --> 24:02.560
1636
+ So AI would count in this?
1637
+
1638
+ 24:02.560 --> 24:04.560
1639
+ Or do you really want life?
1640
+
1641
+ 24:04.560 --> 24:06.560
1642
+ AI would count in some sense.
1643
+
1644
+ 24:06.560 --> 24:08.560
1645
+ AI would count, I think.
1646
+
1647
+ 24:08.560 --> 24:10.560
1648
+ Of course, AI would count.
1649
+
1650
+ 24:10.560 --> 24:12.560
1651
+ Well, let's say artificial consciousness.
1652
+
1653
+ 24:12.560 --> 24:14.560
1654
+ I do not think we are on the threshold
1655
+
1656
+ 24:14.560 --> 24:16.560
1657
+ of creating artificial consciousness.
1658
+
1659
+ 24:16.560 --> 24:18.560
1660
+ I think it's possible.
1661
+
1662
+ 24:18.560 --> 24:20.560
1663
+ I'm not, again, very educated about how close we are,
1664
+
1665
+ 24:20.560 --> 24:22.560
1666
+ but my impression is not that we're really close
1667
+
1668
+ 24:22.560 --> 24:24.560
1669
+ because we understand how little we understand
1670
+
1671
+ 24:24.560 --> 24:26.560
1672
+ of consciousness and what it is.
1673
+
1674
+ 24:26.560 --> 24:28.560
1675
+ So if we don't have any idea what it is,
1676
+
1677
+ 24:28.560 --> 24:30.560
1678
+ it's hard to imagine we're on the threshold
1679
+
1680
+ 24:30.560 --> 24:32.560
1681
+ of making it ourselves.
1682
+
1683
+ 24:32.560 --> 24:34.560
1684
+ But it's doable, it's possible.
1685
+
1686
+ 24:34.560 --> 24:36.560
1687
+ I don't see any obstacles in principle,
1688
+
1689
+ 24:36.560 --> 24:38.560
1690
+ so yeah, I would hold out some interest
1691
+
1692
+ 24:38.560 --> 24:40.560
1693
+ in that happening eventually.
1694
+
1695
+ 24:40.560 --> 24:42.560
1696
+ I think in general, consciousness,
1697
+
1698
+ 24:42.560 --> 24:44.560
1699
+ I think it would be just surprised
1700
+
1701
+ 24:44.560 --> 24:46.560
1702
+ how easy consciousness is
1703
+
1704
+ 24:46.560 --> 24:48.560
1705
+ once we create intelligence.
1706
+
1707
+ 24:48.560 --> 24:50.560
1708
+ I think consciousness is a thing
1709
+
1710
+ 24:50.560 --> 24:54.560
1711
+ that's just something we all fake.
1712
+
1713
+ 24:54.560 --> 24:56.560
1714
+ Well, good.
1715
+
1716
+ 24:56.560 --> 24:58.560
1717
+ No, actually, I like this idea that, in fact,
1718
+
1719
+ 24:58.560 --> 25:00.560
1720
+ consciousness is way less mysterious than we think
1721
+
1722
+ 25:00.560 --> 25:02.560
1723
+ because we're all at every time,
1724
+
1725
+ 25:02.560 --> 25:04.560
1726
+ at every moment, less conscious than we think we are.
1727
+
1728
+ 25:04.560 --> 25:06.560
1729
+ We can fool things.
1730
+
1731
+ 25:06.560 --> 25:08.560
1732
+ And I think that plus the idea that you
1733
+
1734
+ 25:08.560 --> 25:10.560
1735
+ not only have artificial intelligence systems,
1736
+
1737
+ 25:10.560 --> 25:12.560
1738
+ but you put them in a body,
1739
+
1740
+ 25:12.560 --> 25:14.560
1741
+ give them a robot body,
1742
+
1743
+ 25:14.560 --> 25:18.560
1744
+ that will help the faking a lot.
1745
+
1746
+ 25:18.560 --> 25:20.560
1747
+ Yeah, I think creating consciousness
1748
+
1749
+ 25:20.560 --> 25:22.560
1750
+ in artificial consciousness
1751
+
1752
+ 25:22.560 --> 25:24.560
1753
+ is as simple
1754
+
1755
+ 25:24.560 --> 25:26.560
1756
+ as asking a Roomba
1757
+
1758
+ 25:26.560 --> 25:28.560
1759
+ to say, I'm conscious
1760
+
1761
+ 25:28.560 --> 25:32.560
1762
+ and refusing to be talked out of it.
1763
+
1764
+ 25:32.560 --> 25:34.560
1765
+ It could be.
1766
+
1767
+ 25:34.560 --> 25:36.560
1768
+ I mean, I'm almost being silly,
1769
+
1770
+ 25:36.560 --> 25:38.560
1771
+ but that's what we do.
1772
+
1773
+ 25:38.560 --> 25:40.560
1774
+ That's what we do with each other.
1775
+
1776
+ 25:40.560 --> 25:44.560
1777
+ The consciousness is also a social construct,
1778
+
1779
+ 25:44.560 --> 25:46.560
1780
+ and a lot of our ideas of intelligence
1781
+
1782
+ 25:46.560 --> 25:48.560
1783
+ is a social construct,
1784
+
1785
+ 25:48.560 --> 25:50.560
1786
+ and so reaching that bar involves
1787
+
1788
+ 25:50.560 --> 25:52.560
1789
+ something that's beyond,
1790
+
1791
+ 25:52.560 --> 25:54.560
1792
+ that doesn't necessarily involve
1793
+
1794
+ 25:54.560 --> 25:56.560
1795
+ the fundamental understanding
1796
+
1797
+ 25:56.560 --> 25:58.560
1798
+ of how you go from
1799
+
1800
+ 25:58.560 --> 26:00.560
1801
+ electrons to neurons
1802
+
1803
+ 26:00.560 --> 26:02.560
1804
+ to cognition.
1805
+
1806
+ 26:02.560 --> 26:04.560
1807
+ No, actually, I think that is an extremely good point,
1808
+
1809
+ 26:04.560 --> 26:06.560
1810
+ and in fact,
1811
+
1812
+ 26:06.560 --> 26:08.560
1813
+ what it suggests is,
1814
+
1815
+ 26:08.560 --> 26:10.560
1816
+ so yeah, you referred to Kate Darling,
1817
+
1818
+ 26:10.560 --> 26:12.560
1819
+ who I had on the podcast,
1820
+
1821
+ 26:12.560 --> 26:14.560
1822
+ and who does these experiments with
1823
+
1824
+ 26:14.560 --> 26:16.560
1825
+ very simple robots,
1826
+
1827
+ 26:16.560 --> 26:18.560
1828
+ but they look like animals,
1829
+
1830
+ 26:18.560 --> 26:20.560
1831
+ and they can look like they're experiencing pain,
1832
+
1833
+ 26:20.560 --> 26:22.560
1834
+ and we human beings react
1835
+
1836
+ 26:22.560 --> 26:24.560
1837
+ very negatively to these little robots
1838
+
1839
+ 26:24.560 --> 26:26.560
1840
+ looking like they're experiencing pain,
1841
+
1842
+ 26:26.560 --> 26:28.560
1843
+ and what you want to say is,
1844
+
1845
+ 26:28.560 --> 26:30.560
1846
+ yeah, but they're just robots.
1847
+
1848
+ 26:30.560 --> 26:32.560
1849
+ It's not really pain.
1850
+
1851
+ 26:32.560 --> 26:34.560
1852
+ It's just some electrons going around,
1853
+
1854
+ 26:34.560 --> 26:36.560
1855
+ but then you realize you and I
1856
+
1857
+ 26:36.560 --> 26:38.560
1858
+ are just electrons going around,
1859
+
1860
+ 26:38.560 --> 26:40.560
1861
+ and that's what pain is also.
1862
+
1863
+ 26:40.560 --> 26:42.560
1864
+ What I would have an easy time imagining
1865
+
1866
+ 26:42.560 --> 26:44.560
1867
+ is that there is a spectrum
1868
+
1869
+ 26:44.560 --> 26:46.560
1870
+ between these simple little robots
1871
+
1872
+ 26:46.560 --> 26:48.560
1873
+ that Kate works with
1874
+
1875
+ 26:48.560 --> 26:50.560
1876
+ and a human being,
1877
+
1878
+ 26:50.560 --> 26:52.560
1879
+ where there are things that,
1880
+
1881
+ 26:52.560 --> 26:54.560
1882
+ like a human Turing test level thing
1883
+
1884
+ 26:54.560 --> 26:56.560
1885
+ are not conscious,
1886
+
1887
+ 26:56.560 --> 26:58.560
1888
+ but nevertheless walk and talk
1889
+
1890
+ 26:58.560 --> 27:00.560
1891
+ like they're conscious,
1892
+
1893
+ 27:00.560 --> 27:02.560
1894
+ and it could be that the future is,
1895
+
1896
+ 27:02.560 --> 27:04.560
1897
+ I mean, Siri is close, right?
1898
+
1899
+ 27:04.560 --> 27:06.560
1900
+ And so it might be the future
1901
+
1902
+ 27:06.560 --> 27:08.560
1903
+ has a lot more agents like that,
1904
+
1905
+ 27:08.560 --> 27:10.560
1906
+ and in fact, rather than someday going,
1907
+
1908
+ 27:10.560 --> 27:12.560
1909
+ aha, we have consciousness,
1910
+
1911
+ 27:12.560 --> 27:14.560
1912
+ we'll just creep up on it
1913
+
1914
+ 27:14.560 --> 27:16.560
1915
+ with more and more accurate reflections
1916
+
1917
+ 27:16.560 --> 27:18.560
1918
+ of what we expect.
1919
+
1920
+ 27:18.560 --> 27:20.560
1921
+ And in the future, maybe the present,
1922
+
1923
+ 27:20.560 --> 27:22.560
1924
+ and you're basically assuming
1925
+
1926
+ 27:22.560 --> 27:24.560
1927
+ that I'm human.
1928
+
1929
+ 27:24.560 --> 27:26.560
1930
+ I get a high probability.
1931
+
1932
+ 27:26.560 --> 27:28.560
1933
+ At this time, because the,
1934
+
1935
+ 27:28.560 --> 27:30.560
1936
+ but in the future,
1937
+
1938
+ 27:30.560 --> 27:32.560
1939
+ there might be question marks around that, right?
1940
+
1941
+ 27:32.560 --> 27:34.560
1942
+ Yeah, no, absolutely.
1943
+
1944
+ 27:34.560 --> 27:36.560
1945
+ Certainly videos are almost to the point
1946
+
1947
+ 27:36.560 --> 27:38.560
1948
+ where you shouldn't trust them already.
1949
+
1950
+ 27:38.560 --> 27:40.560
1951
+ Photos you can't trust, right?
1952
+
1953
+ 27:40.560 --> 27:42.560
1954
+ Videos is easier to trust,
1955
+
1956
+ 27:42.560 --> 27:44.560
1957
+ but we're getting worse.
1958
+
1959
+ 27:44.560 --> 27:46.560
1960
+ We're getting better at faking them, right?
1961
+
1962
+ 27:46.560 --> 27:48.560
1963
+ Yeah, so physical, embodied people,
1964
+
1965
+ 27:48.560 --> 27:50.560
1966
+ what's so hard about faking that?
1967
+
1968
+ 27:50.560 --> 27:52.560
1969
+ This is very depressing,
1970
+
1971
+ 27:52.560 --> 27:54.560
1972
+ this conversation we're having right now.
1973
+
1974
+ 27:54.560 --> 27:56.560
1975
+ To me, it's exciting.
1976
+
1977
+ 27:56.560 --> 27:58.560
1978
+ You're doing it, so it's exciting to you,
1979
+
1980
+ 27:58.560 --> 28:00.560
1981
+ but it's a sobering thought.
1982
+
1983
+ 28:00.560 --> 28:02.560
1984
+ We're very bad at imagining
1985
+
1986
+ 28:02.560 --> 28:04.560
1987
+ what the next 50 years are going to be like
1988
+
1989
+ 28:04.560 --> 28:06.560
1990
+ when we're in the middle of a phase transition
1991
+
1992
+ 28:06.560 --> 28:08.560
1993
+ as we are right now.
1994
+
1995
+ 28:08.560 --> 28:10.560
1996
+ Yeah, and in general,
1997
+
1998
+ 28:10.560 --> 28:12.560
1999
+ I'm not blind to all the threats.
2000
+
2001
+ 28:12.560 --> 28:14.560
2002
+ I am excited by the power of technology
2003
+
2004
+ 28:14.560 --> 28:16.560
2005
+ to solve,
2006
+
2007
+ 28:16.560 --> 28:18.560
2008
+ as they evolve.
2009
+
2010
+ 28:18.560 --> 28:20.560
2011
+ I'm not as much as Steven Pinker
2012
+
2013
+ 28:20.560 --> 28:22.560
2014
+ optimistic about the world,
2015
+
2016
+ 28:22.560 --> 28:24.560
2017
+ but in everything I've seen,
2018
+
2019
+ 28:24.560 --> 28:26.560
2020
+ all the brilliant people in the world
2021
+
2022
+ 28:26.560 --> 28:28.560
2023
+ that I've met are good people.
2024
+
2025
+ 28:28.560 --> 28:30.560
2026
+ So the army of the good
2027
+
2028
+ 28:30.560 --> 28:32.560
2029
+ in terms of the development of technology is large.
2030
+
2031
+ 28:32.560 --> 28:34.560
2032
+ Okay, you're way more
2033
+
2034
+ 28:34.560 --> 28:36.560
2035
+ optimistic than I am.
2036
+
2037
+ 28:36.560 --> 28:38.560
2038
+ I think that goodness and badness
2039
+
2040
+ 28:38.560 --> 28:40.560
2041
+ are equally distributed among intelligent
2042
+
2043
+ 28:40.560 --> 28:42.560
2044
+ and unintelligent people.
2045
+
2046
+ 28:42.560 --> 28:44.560
2047
+ I don't see much of a correlation there.
2048
+
2049
+ 28:44.560 --> 28:46.560
2050
+ Interesting.
2051
+
2052
+ 28:46.560 --> 28:48.560
2053
+ Neither of us have proof.
2054
+
2055
+ 28:48.560 --> 28:50.560
2056
+ Yeah, exactly. Again, opinions are free, right?
2057
+
2058
+ 28:50.560 --> 28:52.560
2059
+ Nor definitions of good and evil.
2060
+
2061
+ 28:52.560 --> 28:54.560
2062
+ Without definitions
2063
+
2064
+ 28:54.560 --> 28:56.560
2065
+ or without data
2066
+
2067
+ 28:56.560 --> 28:58.560
2068
+ opinions.
2069
+
2070
+ 28:58.560 --> 29:00.560
2071
+ So what kind of questions can science not
2072
+
2073
+ 29:00.560 --> 29:02.560
2074
+ currently answer
2075
+
2076
+ 29:02.560 --> 29:04.560
2077
+ and may never be able to answer in your view?
2078
+
2079
+ 29:04.560 --> 29:06.560
2080
+ Well, the obvious one is what is good and bad.
2081
+
2082
+ 29:06.560 --> 29:08.560
2083
+ What is right and wrong?
2084
+
2085
+ 29:08.560 --> 29:10.560
2086
+ I think that there are questions that science tells us
2087
+
2088
+ 29:10.560 --> 29:12.560
2089
+ what happens, what the world is,
2090
+
2091
+ 29:12.560 --> 29:14.560
2092
+ doesn't say what the world should do
2093
+
2094
+ 29:14.560 --> 29:16.560
2095
+ or what we should do because we're part of the world.
2096
+
2097
+ 29:16.560 --> 29:18.560
2098
+ But we are part of the world
2099
+
2100
+ 29:18.560 --> 29:20.560
2101
+ and we have the ability to feel like
2102
+
2103
+ 29:20.560 --> 29:22.560
2104
+ something's right, something's wrong.
2105
+
2106
+ 29:22.560 --> 29:24.560
2107
+ And to make a very long story
2108
+
2109
+ 29:24.560 --> 29:26.560
2110
+ very short, I think that the idea
2111
+
2112
+ 29:26.560 --> 29:28.560
2113
+ of moral philosophy is
2114
+
2115
+ 29:28.560 --> 29:30.560
2116
+ systematizing our intuitions of what is right
2117
+
2118
+ 29:30.560 --> 29:32.560
2119
+ and what is wrong.
2120
+
2121
+ 29:32.560 --> 29:34.560
2122
+ And science might be able to predict ahead of time
2123
+
2124
+ 29:34.560 --> 29:36.560
2125
+ what we will do,
2126
+
2127
+ 29:36.560 --> 29:38.560
2128
+ but it won't ever be able to judge
2129
+
2130
+ 29:38.560 --> 29:40.560
2131
+ whether we should have done it or not.
2132
+
2133
+ 29:40.560 --> 29:42.560
2134
+ You know, you're kind of unique in terms of scientists.
2135
+
2136
+ 29:42.560 --> 29:44.560
2137
+ It doesn't
2138
+
2139
+ 29:44.560 --> 29:46.560
2140
+ have to do with podcasts, but
2141
+
2142
+ 29:46.560 --> 29:48.560
2143
+ even just reaching out, I think you refer to
2144
+
2145
+ 29:48.560 --> 29:50.560
2146
+ as sort of doing interdisciplinary science.
2147
+
2148
+ 29:50.560 --> 29:52.560
2149
+ So you reach out
2150
+
2151
+ 29:52.560 --> 29:54.560
2152
+ and talk to people
2153
+
2154
+ 29:54.560 --> 29:56.560
2155
+ that are outside of your discipline,
2156
+
2157
+ 29:56.560 --> 29:58.560
2158
+ which I always
2159
+
2160
+ 29:58.560 --> 30:00.560
2161
+ hope that's what science was for.
2162
+
2163
+ 30:00.560 --> 30:02.560
2164
+ In fact, I was a little disillusioned
2165
+
2166
+ 30:02.560 --> 30:04.560
2167
+ when I realized that academia
2168
+
2169
+ 30:04.560 --> 30:06.560
2170
+ is very siloed.
2171
+
2172
+ 30:06.560 --> 30:08.560
2173
+ Yeah.
2174
+
2175
+ 30:08.560 --> 30:10.560
2176
+ The question is
2177
+
2178
+ 30:10.560 --> 30:12.560
2179
+ how,
2180
+
2181
+ 30:12.560 --> 30:14.560
2182
+ at your own level, how do you prepare for these conversations?
2183
+
2184
+ 30:14.560 --> 30:16.560
2185
+ How do you think about these conversations?
2186
+
2187
+ 30:16.560 --> 30:18.560
2188
+ How do you open your mind enough
2189
+
2190
+ 30:18.560 --> 30:20.560
2191
+ to have these conversations?
2192
+
2193
+ 30:20.560 --> 30:22.560
2194
+ And it may be a little bit broader.
2195
+
2196
+ 30:22.560 --> 30:24.560
2197
+ How can you advise other scientists
2198
+
2199
+ 30:24.560 --> 30:26.560
2200
+ to have these kinds of conversations?
2201
+
2202
+ 30:26.560 --> 30:28.560
2203
+ Not at the podcast.
2204
+
2205
+ 30:28.560 --> 30:30.560
2206
+ The fact that you're doing a podcast is awesome.
2207
+
2208
+ 30:30.560 --> 30:32.560
2209
+ Other people get to hear them.
2210
+
2211
+ 30:32.560 --> 30:34.560
2212
+ But it's also good to have it without mics in general.
2213
+
2214
+ 30:34.560 --> 30:36.560
2215
+ It's a good question, but a tough one
2216
+
2217
+ 30:36.560 --> 30:38.560
2218
+ to answer. I think about
2219
+
2220
+ 30:38.560 --> 30:40.560
2221
+ a guy I know is a personal trainer
2222
+
2223
+ 30:40.560 --> 30:42.560
2224
+ and he was asked on a podcast
2225
+
2226
+ 30:42.560 --> 30:44.560
2227
+ how do we psych ourselves up
2228
+
2229
+ 30:44.560 --> 30:46.560
2230
+ to do a workout? How do we make
2231
+
2232
+ 30:46.560 --> 30:48.560
2233
+ that discipline to go and work out?
2234
+
2235
+ 30:48.560 --> 30:50.560
2236
+ And he's like, why are you asking me?
2237
+
2238
+ 30:50.560 --> 30:52.560
2239
+ I can't stop working out.
2240
+
2241
+ 30:52.560 --> 30:54.560
2242
+ I don't need to psych myself up.
2243
+
2244
+ 30:54.560 --> 30:56.560
2245
+ Likewise, you asked me
2246
+
2247
+ 30:56.560 --> 30:58.560
2248
+ how do you get to have
2249
+
2250
+ 30:58.560 --> 31:00.560
2251
+ interdisciplinary conversations and all sorts of different things
2252
+
2253
+ 31:00.560 --> 31:02.560
2254
+ with all sorts of different people?
2255
+
2256
+ 31:02.560 --> 31:04.560
2257
+ That's what makes me go.
2258
+
2259
+ 31:04.560 --> 31:06.560
2260
+ I couldn't stop
2261
+
2262
+ 31:06.560 --> 31:08.560
2263
+ doing that. I did that long before
2264
+
2265
+ 31:08.560 --> 31:10.560
2266
+ any of them were recorded. In fact,
2267
+
2268
+ 31:10.560 --> 31:12.560
2269
+ a lot of the motivation for starting recording it
2270
+
2271
+ 31:12.560 --> 31:14.560
2272
+ was making sure I would read all these books
2273
+
2274
+ 31:14.560 --> 31:16.560
2275
+ that I had purchased. All these books
2276
+
2277
+ 31:16.560 --> 31:18.560
2278
+ I wanted to read. Not enough time to read them.
2279
+
2280
+ 31:18.560 --> 31:20.560
2281
+ And now, if I have the motivation
2282
+
2283
+ 31:20.560 --> 31:22.560
2284
+ because I'm going to interview Pat
2285
+
2286
+ 31:22.560 --> 31:24.560
2287
+ Churchland, I'm going to finally read her
2288
+
2289
+ 31:24.560 --> 31:26.560
2290
+ book.
2291
+
2292
+ 31:26.560 --> 31:28.560
2293
+ And
2294
+
2295
+ 31:28.560 --> 31:30.560
2296
+ it's absolutely true that academia is
2297
+
2298
+ 31:30.560 --> 31:32.560
2299
+ extraordinarily siloed. We don't talk to people.
2300
+
2301
+ 31:32.560 --> 31:34.560
2302
+ We rarely do.
2303
+
2304
+ 31:34.560 --> 31:36.560
2305
+ And in fact, when we do, it's punished.
2306
+
2307
+ 31:36.560 --> 31:38.560
2308
+ The people who do it successfully
2309
+
2310
+ 31:38.560 --> 31:40.560
2311
+ generally first became
2312
+
2313
+ 31:40.560 --> 31:42.560
2314
+ very successful within their little siloed discipline.
2315
+
2316
+ 31:42.560 --> 31:44.560
2317
+ And only then
2318
+
2319
+ 31:44.560 --> 31:46.560
2320
+ did they start expanding out.
2321
+
2322
+ 31:46.560 --> 31:48.560
2323
+ If you're a young person, I have graduate students
2324
+
2325
+ 31:48.560 --> 31:50.560
2326
+ and I try to be very, very
2327
+
2328
+ 31:50.560 --> 31:52.560
2329
+ candid with them about this.
2330
+
2331
+ 31:52.560 --> 31:54.560
2332
+ That it's
2333
+
2334
+ 31:54.560 --> 31:56.560
2335
+ most graduate students do not become faculty members.
2336
+
2337
+ 31:56.560 --> 31:58.560
2338
+ It's a tough road.
2339
+
2340
+ 31:58.560 --> 32:00.560
2341
+ And so
2342
+
2343
+ 32:00.560 --> 32:02.560
2344
+ you live the life you want to live
2345
+
2346
+ 32:02.560 --> 32:04.560
2347
+ but do it with your eyes open
2348
+
2349
+ 32:04.560 --> 32:06.560
2350
+ about what it does to your job chances.
2351
+
2352
+ 32:06.560 --> 32:08.560
2353
+ And the more
2354
+
2355
+ 32:08.560 --> 32:10.560
2356
+ broad you are and the less
2357
+
2358
+ 32:10.560 --> 32:12.560
2359
+ time you spend hyper
2360
+
2361
+ 32:12.560 --> 32:14.560
2362
+ specializing in your field, the lower
2363
+
2364
+ 32:14.560 --> 32:16.560
2365
+ your job chances are. That's just an academic
2366
+
2367
+ 32:16.560 --> 32:18.560
2368
+ reality. It's terrible. I don't like it.
2369
+
2370
+ 32:18.560 --> 32:20.560
2371
+ But it's a reality.
2372
+
2373
+ 32:20.560 --> 32:22.560
2374
+ And for some people
2375
+
2376
+ 32:22.560 --> 32:24.560
2377
+ that's fine. Like there's plenty of people
2378
+
2379
+ 32:24.560 --> 32:26.560
2380
+ who are wonderful scientists who have zero
2381
+
2382
+ 32:26.560 --> 32:28.560
2383
+ interest in branching out and talking to
2384
+
2385
+ 32:28.560 --> 32:30.560
2386
+ things to anyone outside their field.
2387
+
2388
+ 32:30.560 --> 32:32.560
2389
+ But
2390
+
2391
+ 32:32.560 --> 32:34.560
2392
+ it is disillusioning to me
2393
+
2394
+ 32:34.560 --> 32:36.560
2395
+ some of the romantic notion
2396
+
2397
+ 32:36.560 --> 32:38.560
2398
+ I had of the intellectual academic life
2399
+
2400
+ 32:38.560 --> 32:40.560
2401
+ is belied by the reality
2402
+
2403
+ 32:40.560 --> 32:42.560
2404
+ of it. The idea that we should
2405
+
2406
+ 32:42.560 --> 32:44.560
2407
+ reach out beyond our discipline
2408
+
2409
+ 32:44.560 --> 32:46.560
2410
+ and that is a positive good
2411
+
2412
+ 32:46.560 --> 32:48.560
2413
+ is just so
2414
+
2415
+ 32:48.560 --> 32:50.560
2416
+ rare in
2417
+
2418
+ 32:50.560 --> 32:52.560
2419
+ universities that it may as well
2420
+
2421
+ 32:52.560 --> 32:54.560
2422
+ not exist at all. But
2423
+
2424
+ 32:54.560 --> 32:56.560
2425
+ that said, even though you're saying
2426
+
2427
+ 32:56.560 --> 32:58.560
2428
+ you're doing it like the personal trainer
2429
+
2430
+ 32:58.560 --> 33:00.560
2431
+ because you just can't help it, you're also
2432
+
2433
+ 33:00.560 --> 33:02.560
2434
+ an inspiration to others.
2435
+
2436
+ 33:02.560 --> 33:04.560
2437
+ Like I could speak for myself.
2438
+
2439
+ 33:04.560 --> 33:06.560
2440
+ You know,
2441
+
2442
+ 33:06.560 --> 33:08.560
2443
+ I also have a career I'm thinking about
2444
+
2445
+ 33:08.560 --> 33:10.560
2446
+ right. And without
2447
+
2448
+ 33:10.560 --> 33:12.560
2449
+ your podcast, I may have
2450
+
2451
+ 33:12.560 --> 33:14.560
2452
+ not have been doing this at all.
2453
+
2454
+ 33:14.560 --> 33:16.560
2455
+ Right. So it
2456
+
2457
+ 33:16.560 --> 33:18.560
2458
+ makes me realize that these kinds
2459
+
2460
+ 33:18.560 --> 33:20.560
2461
+ of conversations is kind of what science is about.
2462
+
2463
+ 33:20.560 --> 33:22.560
2464
+ In many
2465
+
2466
+ 33:22.560 --> 33:24.560
2467
+ ways. The reason we write papers
2468
+
2469
+ 33:24.560 --> 33:26.560
2470
+ this exchange of ideas
2471
+
2472
+ 33:26.560 --> 33:28.560
2473
+ is much harder to do
2474
+
2475
+ 33:28.560 --> 33:30.560
2476
+ in interdisciplinary papers, I would say.
2477
+
2478
+ 33:30.560 --> 33:32.560
2479
+ Yeah. Right.
2480
+
2481
+ 33:32.560 --> 33:34.560
2482
+ And conversations are easier.
2483
+
2484
+ 33:34.560 --> 33:36.560
2485
+ So conversations is the beginning
2486
+
2487
+ 33:36.560 --> 33:38.560
2488
+ and in the field of AI
2489
+
2490
+ 33:38.560 --> 33:40.560
2491
+ that it's
2492
+
2493
+ 33:40.560 --> 33:42.560
2494
+ obvious that we should think outside
2495
+
2496
+ 33:42.560 --> 33:44.560
2497
+ of pure
2498
+
2499
+ 33:44.560 --> 33:46.560
2500
+ computer vision competitions and in particular
2501
+
2502
+ 33:46.560 --> 33:48.560
2503
+ data sets. We should think about the broader
2504
+
2505
+ 33:48.560 --> 33:50.560
2506
+ impact of how this can be
2507
+
2508
+ 33:50.560 --> 33:52.560
2509
+ you know, reaching
2510
+
2511
+ 33:52.560 --> 33:54.560
2512
+ out to physics, to psychology
2513
+
2514
+ 33:54.560 --> 33:56.560
2515
+ to neuroscience
2516
+
2517
+ 33:56.560 --> 33:58.560
2518
+ and having these conversations.
2519
+
2520
+ 33:58.560 --> 34:00.560
2521
+ So you're an inspiration
2522
+
2523
+ 34:00.560 --> 34:02.560
2524
+ and so. Well, thank you very much.
2525
+
2526
+ 34:02.560 --> 34:04.560
2527
+ Never know how the world
2528
+
2529
+ 34:04.560 --> 34:06.560
2530
+ changes. I mean
2531
+
2532
+ 34:06.560 --> 34:08.560
2533
+ the fact that this stuff is out there
2534
+
2535
+ 34:08.560 --> 34:10.560
2536
+ and I've
2537
+
2538
+ 34:10.560 --> 34:12.560
2539
+ a huge number of people come up to me
2540
+
2541
+ 34:12.560 --> 34:14.560
2542
+ grad students really loving the
2543
+
2544
+ 34:14.560 --> 34:16.560
2545
+ podcast inspired by it and
2546
+
2547
+ 34:16.560 --> 34:18.560
2548
+ they will probably have that
2549
+
2550
+ 34:18.560 --> 34:20.560
2551
+ there'll be ripple effects when they become faculty
2552
+
2553
+ 34:20.560 --> 34:22.560
2554
+ and so on. So we can end
2555
+
2556
+ 34:22.560 --> 34:24.560
2557
+ on a balance between pessimism
2558
+
2559
+ 34:24.560 --> 34:26.560
2560
+ and optimism and Sean, thank you so much
2561
+
2562
+ 34:26.560 --> 34:28.560
2563
+ for talking. It was awesome. No, Lex, thank you very
2564
+
2565
+ 34:28.560 --> 34:52.560
2566
+ much for this conversation. It was great.
2567
+
vtt/episode_027_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_028_large.vtt ADDED
@@ -0,0 +1,2705 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.120
4
+ The following is a conversation with Chris Urmson.
5
+
6
+ 00:03.120 --> 00:06.000
7
+ He was a CTO of the Google self driving car team,
8
+
9
+ 00:06.000 --> 00:08.880
10
+ a key engineer and leader behind the Carnegie Mellon
11
+
12
+ 00:08.880 --> 00:12.000
13
+ University autonomous vehicle entries in the DARPA Grand
14
+
15
+ 00:12.000 --> 00:16.160
16
+ Challenges and the winner of the DARPA Urban Challenge.
17
+
18
+ 00:16.160 --> 00:20.100
19
+ Today, he's the CEO of Aurora Innovation, an autonomous
20
+
21
+ 00:20.100 --> 00:21.360
22
+ vehicle software company.
23
+
24
+ 00:21.360 --> 00:23.600
25
+ He started with Sterling Anderson,
26
+
27
+ 00:23.600 --> 00:25.960
28
+ who was the former director of Tesla Autopilot,
29
+
30
+ 00:25.960 --> 00:30.120
31
+ and Drew Bagnell, Uber's former autonomy and perception lead.
32
+
33
+ 00:30.120 --> 00:32.880
34
+ Chris is one of the top roboticists and autonomous
35
+
36
+ 00:32.880 --> 00:36.320
37
+ vehicle experts in the world, and a longtime voice
38
+
39
+ 00:36.320 --> 00:38.840
40
+ of reason in a space that is shrouded
41
+
42
+ 00:38.840 --> 00:41.320
43
+ in both mystery and hype.
44
+
45
+ 00:41.320 --> 00:43.600
46
+ He both acknowledges the incredible challenges
47
+
48
+ 00:43.600 --> 00:46.480
49
+ involved in solving the problem of autonomous driving
50
+
51
+ 00:46.480 --> 00:49.760
52
+ and is working hard to solve it.
53
+
54
+ 00:49.760 --> 00:52.400
55
+ This is the Artificial Intelligence podcast.
56
+
57
+ 00:52.400 --> 00:54.720
58
+ If you enjoy it, subscribe on YouTube,
59
+
60
+ 00:54.720 --> 00:57.920
61
+ give it five stars on iTunes, support it on Patreon,
62
+
63
+ 00:57.920 --> 00:59.720
64
+ or simply connect with me on Twitter
65
+
66
+ 00:59.720 --> 01:03.240
67
+ at Lex Fridman, spelled F R I D M A N.
68
+
69
+ 01:03.240 --> 01:09.120
70
+ And now, here's my conversation with Chris Urmson.
71
+
72
+ 01:09.120 --> 01:11.960
73
+ You were part of both the DARPA Grand Challenge
74
+
75
+ 01:11.960 --> 01:13.880
76
+ and the DARPA Urban Challenge teams
77
+
78
+ 01:13.880 --> 01:17.040
79
+ at CMU with Red Whittaker.
80
+
81
+ 01:17.040 --> 01:19.720
82
+ What technical or philosophical things
83
+
84
+ 01:19.720 --> 01:22.240
85
+ have you learned from these races?
86
+
87
+ 01:22.240 --> 01:26.600
88
+ I think the high order bit was that it could be done.
89
+
90
+ 01:26.600 --> 01:30.200
91
+ I think that was the thing that was
92
+
93
+ 01:30.200 --> 01:34.880
94
+ incredible about the first of the Grand Challenges,
95
+
96
+ 01:34.880 --> 01:38.160
97
+ that I remember I was a grad student at Carnegie Mellon,
98
+
99
+ 01:38.160 --> 01:45.360
100
+ and there was kind of this dichotomy of it
101
+
102
+ 01:45.360 --> 01:46.720
103
+ seemed really hard, so that would
104
+
105
+ 01:46.720 --> 01:48.800
106
+ be cool and interesting.
107
+
108
+ 01:48.800 --> 01:52.800
109
+ But at the time, we were the only robotics institute around,
110
+
111
+ 01:52.800 --> 01:55.560
112
+ and so if we went into it and fell on our faces,
113
+
114
+ 01:55.560 --> 01:58.360
115
+ that would be embarrassing.
116
+
117
+ 01:58.360 --> 02:01.120
118
+ So I think just having the will to go do it,
119
+
120
+ 02:01.120 --> 02:02.880
121
+ to try to do this thing that at the time
122
+
123
+ 02:02.880 --> 02:05.000
124
+ was marked as darn near impossible,
125
+
126
+ 02:05.000 --> 02:06.960
127
+ and then after a couple of tries,
128
+
129
+ 02:06.960 --> 02:08.420
130
+ be able to actually make it happen,
131
+
132
+ 02:08.420 --> 02:12.320
133
+ I think that was really exciting.
134
+
135
+ 02:12.320 --> 02:15.040
136
+ But at which point did you believe it was possible?
137
+
138
+ 02:15.040 --> 02:16.960
139
+ Did you from the very beginning?
140
+
141
+ 02:16.960 --> 02:18.000
142
+ Did you personally?
143
+
144
+ 02:18.000 --> 02:19.800
145
+ Because you're one of the lead engineers.
146
+
147
+ 02:19.800 --> 02:21.800
148
+ You actually had to do a lot of the work.
149
+
150
+ 02:21.800 --> 02:23.880
151
+ Yeah, I was the technical director there,
152
+
153
+ 02:23.880 --> 02:26.120
154
+ and did a lot of the work, along with a bunch
155
+
156
+ 02:26.120 --> 02:28.420
157
+ of other really good people.
158
+
159
+ 02:28.420 --> 02:29.760
160
+ Did I believe it could be done?
161
+
162
+ 02:29.760 --> 02:31.080
163
+ Yeah, of course.
164
+
165
+ 02:31.080 --> 02:32.760
166
+ Why would you go do something you thought
167
+
168
+ 02:32.760 --> 02:34.800
169
+ was completely impossible?
170
+
171
+ 02:34.800 --> 02:36.260
172
+ We thought it was going to be hard.
173
+
174
+ 02:36.260 --> 02:37.800
175
+ We didn't know how we were going to be able to do it.
176
+
177
+ 02:37.800 --> 02:42.880
178
+ We didn't know if we'd be able to do it the first time.
179
+
180
+ 02:42.880 --> 02:45.960
181
+ Turns out we couldn't.
182
+
183
+ 02:45.960 --> 02:48.400
184
+ That, yeah, I guess you have to.
185
+
186
+ 02:48.400 --> 02:52.960
187
+ I think there's a certain benefit to naivete, right?
188
+
189
+ 02:52.960 --> 02:55.440
190
+ That if you don't know how hard something really is,
191
+
192
+ 02:55.440 --> 02:59.600
193
+ you try different things, and it gives you an opportunity
194
+
195
+ 02:59.600 --> 03:04.120
196
+ that others who are wiser maybe don't have.
197
+
198
+ 03:04.120 --> 03:05.720
199
+ What were the biggest pain points?
200
+
201
+ 03:05.720 --> 03:08.880
202
+ Mechanical, sensors, hardware, software,
203
+
204
+ 03:08.880 --> 03:11.800
205
+ algorithms for mapping, localization,
206
+
207
+ 03:11.800 --> 03:13.680
208
+ just general perception, control?
209
+
210
+ 03:13.680 --> 03:15.320
211
+ Like hardware, software, first of all?
212
+
213
+ 03:15.320 --> 03:20.120
214
+ I think that's the joy of this field, is that it's all hard
215
+
216
+ 03:20.120 --> 03:25.360
217
+ and that you have to be good at each part of it.
218
+
219
+ 03:25.360 --> 03:32.360
220
+ So for the urban challenges, if I look back at it from today,
221
+
222
+ 03:32.360 --> 03:38.960
223
+ it should be easy today, that it was a static world.
224
+
225
+ 03:38.960 --> 03:40.800
226
+ There weren't other actors moving through it,
227
+
228
+ 03:40.800 --> 03:42.480
229
+ is what that means.
230
+
231
+ 03:42.480 --> 03:47.080
232
+ It was out in the desert, so you get really good GPS.
233
+
234
+ 03:47.080 --> 03:51.400
235
+ So that went, and we could map it roughly.
236
+
237
+ 03:51.400 --> 03:55.160
238
+ And so in retrospect now, it's within the realm of things
239
+
240
+ 03:55.160 --> 03:57.840
241
+ we could do back then.
242
+
243
+ 03:57.840 --> 03:59.720
244
+ Just actually getting the vehicle and the,
245
+
246
+ 03:59.720 --> 04:00.680
247
+ there's a bunch of engineering work
248
+
249
+ 04:00.680 --> 04:04.760
250
+ to get the vehicle so that we could control it and drive it.
251
+
252
+ 04:04.760 --> 04:09.600
253
+ That's still a pain today, but it was even more so back then.
254
+
255
+ 04:09.600 --> 04:14.280
256
+ And then the uncertainty of exactly what they wanted us to do
257
+
258
+ 04:14.280 --> 04:17.040
259
+ was part of the challenge as well.
260
+
261
+ 04:17.040 --> 04:19.440
262
+ Right, you didn't actually know the track heading in here.
263
+
264
+ 04:19.440 --> 04:21.480
265
+ You knew approximately, but you didn't actually
266
+
267
+ 04:21.480 --> 04:23.520
268
+ know the route that was going to be taken.
269
+
270
+ 04:23.520 --> 04:24.920
271
+ That's right, we didn't know the route.
272
+
273
+ 04:24.920 --> 04:28.600
274
+ We didn't even really, the way the rules had been described,
275
+
276
+ 04:28.600 --> 04:29.800
277
+ you had to kind of guess.
278
+
279
+ 04:29.800 --> 04:33.360
280
+ So if you think back to that challenge,
281
+
282
+ 04:33.360 --> 04:36.960
283
+ the idea was that the government would give us,
284
+
285
+ 04:36.960 --> 04:40.320
286
+ the DARPA would give us a set of waypoints
287
+
288
+ 04:40.320 --> 04:43.520
289
+ and kind of the width that you had to stay within
290
+
291
+ 04:43.520 --> 04:46.800
292
+ between the line that went between each of those waypoints.
293
+
294
+ 04:46.800 --> 04:49.280
295
+ And so the most devious thing they could have done
296
+
297
+ 04:49.280 --> 04:53.280
298
+ is set a kilometer wide corridor across a field
299
+
300
+ 04:53.280 --> 04:58.280
301
+ of scrub brush and rocks and said, go figure it out.
302
+
303
+ 04:58.520 --> 05:01.920
304
+ Fortunately, it really, it turned into basically driving
305
+
306
+ 05:01.920 --> 05:05.000
307
+ along a set of trails, which is much more relevant
308
+
309
+ 05:05.000 --> 05:07.920
310
+ to the application they were looking for.
311
+
312
+ 05:08.760 --> 05:12.080
313
+ But no, it was a hell of a thing back in the day.
314
+
315
+ 05:12.080 --> 05:16.640
316
+ So the legend, Red, was kind of leading that effort
317
+
318
+ 05:16.640 --> 05:19.120
319
+ in terms of just broadly speaking.
320
+
321
+ 05:19.120 --> 05:22.040
322
+ So you're a leader now.
323
+
324
+ 05:22.040 --> 05:25.000
325
+ What have you learned from Red about leadership?
326
+
327
+ 05:25.000 --> 05:26.200
328
+ I think there's a couple things.
329
+
330
+ 05:26.200 --> 05:31.080
331
+ One is go and try those really hard things.
332
+
333
+ 05:31.080 --> 05:34.480
334
+ That's where there is an incredible opportunity.
335
+
336
+ 05:34.480 --> 05:36.560
337
+ I think the other big one, though,
338
+
339
+ 05:36.560 --> 05:40.680
340
+ is to see people for who they can be, not who they are.
341
+
342
+ 05:41.720 --> 05:43.720
343
+ It's one of the things that I actually,
344
+
345
+ 05:43.720 --> 05:46.080
346
+ one of the deepest lessons I learned from Red
347
+
348
+ 05:46.080 --> 05:50.200
349
+ was that he would look at undergraduates
350
+
351
+ 05:50.200 --> 05:55.200
352
+ or graduate students and empower them to be leaders,
353
+
354
+ 05:56.120 --> 06:00.320
355
+ to have responsibility, to do great things
356
+
357
+ 06:00.320 --> 06:04.480
358
+ that I think another person might look at them
359
+
360
+ 06:04.480 --> 06:06.600
361
+ and think, oh, well, that's just an undergraduate student.
362
+
363
+ 06:06.600 --> 06:07.720
364
+ What could they know?
365
+
366
+ 06:08.680 --> 06:12.720
367
+ And so I think that kind of trust but verify,
368
+
369
+ 06:12.720 --> 06:14.480
370
+ have confidence in what people can become,
371
+
372
+ 06:14.480 --> 06:16.680
373
+ I think is a really powerful thing.
374
+
375
+ 06:16.680 --> 06:20.440
376
+ So through that, let's just fast forward through the history.
377
+
378
+ 06:20.440 --> 06:24.160
379
+ Can you maybe talk through the technical evolution
380
+
381
+ 06:24.160 --> 06:26.200
382
+ of autonomous vehicle systems
383
+
384
+ 06:26.200 --> 06:29.960
385
+ from the first two Grand Challenges to the Urban Challenge
386
+
387
+ 06:29.960 --> 06:33.560
388
+ to today, are there major shifts in your mind
389
+
390
+ 06:33.560 --> 06:37.240
391
+ or is it the same kind of technology just made more robust?
392
+
393
+ 06:37.240 --> 06:39.840
394
+ I think there's been some big, big steps.
395
+
396
+ 06:40.880 --> 06:43.720
397
+ So for the Grand Challenge,
398
+
399
+ 06:43.720 --> 06:48.720
400
+ the real technology that unlocked that was HD mapping.
401
+
402
+ 06:51.400 --> 06:54.200
403
+ Prior to that, a lot of the off road robotics work
404
+
405
+ 06:55.160 --> 06:58.480
406
+ had been done without any real prior model
407
+
408
+ 06:58.480 --> 07:01.400
409
+ of what the vehicle was going to encounter.
410
+
411
+ 07:01.400 --> 07:04.880
412
+ And so that innovation that the fact that we could get
413
+
414
+ 07:05.960 --> 07:10.960
415
+ decimeter resolution models was really a big deal.
416
+
417
+ 07:13.440 --> 07:18.200
418
+ And that allowed us to kind of bound the complexity
419
+
420
+ 07:18.200 --> 07:19.680
421
+ of the driving problem the vehicle had
422
+
423
+ 07:19.680 --> 07:21.040
424
+ and allowed it to operate at speed
425
+
426
+ 07:21.040 --> 07:23.800
427
+ because we could assume things about the environment
428
+
429
+ 07:23.800 --> 07:25.360
430
+ that it was going to encounter.
431
+
432
+ 07:25.360 --> 07:29.720
433
+ So that was the big step there.
434
+
435
+ 07:31.280 --> 07:35.280
436
+ For the Urban Challenge,
437
+
438
+ 07:37.240 --> 07:39.280
439
+ one of the big technological innovations there
440
+
441
+ 07:39.280 --> 07:41.040
442
+ was the multi beam LIDAR
443
+
444
+ 07:41.960 --> 07:45.760
445
+ and being able to generate high resolution,
446
+
447
+ 07:45.760 --> 07:48.680
448
+ mid to long range 3D models of the world
449
+
450
+ 07:48.680 --> 07:53.680
451
+ and use that for understanding the world around the vehicle.
452
+
453
+ 07:53.680 --> 07:56.600
454
+ And that was really kind of a game changing technology.
455
+
456
+ 07:58.600 --> 08:00.000
457
+ In parallel with that,
458
+
459
+ 08:00.000 --> 08:04.360
460
+ we saw a bunch of other technologies
461
+
462
+ 08:04.360 --> 08:06.120
463
+ that had been kind of converging
464
+
465
+ 08:06.120 --> 08:08.440
466
+ have their day in the sun.
467
+
468
+ 08:08.440 --> 08:12.560
469
+ So Bayesian estimation had been,
470
+
471
+ 08:12.560 --> 08:17.560
472
+ SLAM had been a big field in robotics.
473
+
474
+ 08:17.840 --> 08:20.760
475
+ You would go to a conference a couple of years before that
476
+
477
+ 08:20.760 --> 08:24.880
478
+ and every paper would effectively have SLAM somewhere in it.
479
+
480
+ 08:24.880 --> 08:29.320
481
+ And so seeing that the Bayesian estimation techniques
482
+
483
+ 08:30.720 --> 08:33.400
484
+ play out on a very visible stage,
485
+
486
+ 08:33.400 --> 08:36.520
487
+ I thought that was pretty exciting to see.
488
+
489
+ 08:38.080 --> 08:41.560
490
+ And mostly SLAM was done based on LIDAR at that time.
491
+
492
+ 08:41.560 --> 08:44.560
493
+ Yeah, and in fact, we weren't really doing SLAM per se
494
+
495
+ 08:45.600 --> 08:47.480
496
+ in real time because we had a model ahead of time,
497
+
498
+ 08:47.480 --> 08:51.040
499
+ we had a roadmap, but we were doing localization.
500
+
501
+ 08:51.040 --> 08:53.560
502
+ And we were using the LIDAR or the cameras
503
+
504
+ 08:53.560 --> 08:55.400
505
+ depending on who exactly was doing it
506
+
507
+ 08:55.400 --> 08:57.560
508
+ to localize to a model of the world.
509
+
510
+ 08:57.560 --> 09:00.160
511
+ And I thought that was a big step
512
+
513
+ 09:00.160 --> 09:05.160
514
+ from kind of naively trusting GPS, INS before that.
515
+
516
+ 09:06.640 --> 09:09.840
517
+ And again, lots of work had been going on in this field.
518
+
519
+ 09:09.840 --> 09:13.040
520
+ Certainly this was not doing anything
521
+
522
+ 09:13.040 --> 09:16.840
523
+ particularly innovative in SLAM or in localization,
524
+
525
+ 09:16.840 --> 09:20.200
526
+ but it was seeing that technology necessary
527
+
528
+ 09:20.200 --> 09:21.800
529
+ in a real application on a big stage,
530
+
531
+ 09:21.800 --> 09:23.080
532
+ I thought was very cool.
533
+
534
+ 09:23.080 --> 09:24.000
535
+ So for the urban challenge,
536
+
537
+ 09:24.000 --> 09:28.600
538
+ those are already maps constructed offline in general.
539
+
540
+ 09:28.600 --> 09:30.920
541
+ And did people do that individually,
542
+
543
+ 09:30.920 --> 09:33.600
544
+ did individual teams do it individually
545
+
546
+ 09:33.600 --> 09:36.440
547
+ so they had their own different approaches there
548
+
549
+ 09:36.440 --> 09:41.440
550
+ or did everybody kind of share that information
551
+
552
+ 09:41.720 --> 09:42.880
553
+ at least intuitively?
554
+
555
+ 09:42.880 --> 09:47.880
556
+ So DARPA gave all the teams a model of the world, a map.
557
+
558
+ 09:49.640 --> 09:53.240
559
+ And then one of the things that we had to figure out
560
+
561
+ 09:53.240 --> 09:56.080
562
+ back then was, and it's still one of these things
563
+
564
+ 09:56.080 --> 09:57.280
565
+ that trips people up today
566
+
567
+ 09:57.280 --> 10:00.280
568
+ is actually the coordinate system.
569
+
570
+ 10:00.280 --> 10:03.080
571
+ So you get a latitude longitude
572
+
573
+ 10:03.080 --> 10:05.040
574
+ and to so many decimal places,
575
+
576
+ 10:05.040 --> 10:07.360
577
+ you don't really care about kind of the ellipsoid
578
+
579
+ 10:07.360 --> 10:09.560
580
+ of the earth that's being used.
581
+
582
+ 10:09.560 --> 10:12.240
583
+ But when you want to get to 10 centimeter
584
+
585
+ 10:12.240 --> 10:14.400
586
+ or centimeter resolution,
587
+
588
+ 10:14.400 --> 10:18.520
589
+ you care whether the coordinate system is NAD 83
590
+
591
+ 10:18.520 --> 10:23.520
592
+ or WGS 84 or these are different ways to describe
593
+
594
+ 10:24.200 --> 10:26.760
595
+ both the kind of non sphericalness of the earth,
596
+
597
+ 10:26.760 --> 10:31.080
598
+ but also kind of the, I think,
599
+
600
+ 10:31.080 --> 10:32.080
601
+ I can't remember which one,
602
+
603
+ 10:32.080 --> 10:33.600
604
+ the tectonic shifts that are happening
605
+
606
+ 10:33.600 --> 10:37.000
607
+ and how to transform the global datum as a function of that.
608
+
609
+ 10:37.000 --> 10:41.020
610
+ So getting a map and then actually matching it to reality
611
+
612
+ 10:41.020 --> 10:42.880
613
+ to centimeter resolution, that was kind of interesting
614
+
615
+ 10:42.880 --> 10:44.040
616
+ and fun back then.
617
+
618
+ 10:44.040 --> 10:46.760
619
+ So how much work was the perception doing there?
620
+
621
+ 10:46.760 --> 10:51.760
622
+ So how much were you relying on localization based on maps
623
+
624
+ 10:52.480 --> 10:55.760
625
+ without using perception to register to the maps?
626
+
627
+ 10:55.760 --> 10:58.000
628
+ And I guess the question is how advanced
629
+
630
+ 10:58.000 --> 10:59.800
631
+ was perception at that point?
632
+
633
+ 10:59.800 --> 11:01.960
634
+ It's certainly behind where we are today, right?
635
+
636
+ 11:01.960 --> 11:05.840
637
+ We're more than a decade since the urban challenge.
638
+
639
+ 11:05.840 --> 11:08.640
640
+ But the core of it was there.
641
+
642
+ 11:08.640 --> 11:13.120
643
+ That we were tracking vehicles.
644
+
645
+ 11:13.120 --> 11:15.640
646
+ We had to do that at 100 plus meter range
647
+
648
+ 11:15.640 --> 11:18.320
649
+ because we had to merge with other traffic.
650
+
651
+ 11:18.320 --> 11:21.240
652
+ We were using, again, Bayesian estimates
653
+
654
+ 11:21.240 --> 11:23.860
655
+ for state of these vehicles.
656
+
657
+ 11:23.860 --> 11:25.580
658
+ We had to deal with a bunch of the problems
659
+
660
+ 11:25.580 --> 11:26.920
661
+ that you think of today,
662
+
663
+ 11:26.920 --> 11:29.820
664
+ of predicting where that vehicle's going to be
665
+
666
+ 11:29.820 --> 11:31.060
667
+ a few seconds into the future.
668
+
669
+ 11:31.060 --> 11:32.380
670
+ We had to deal with the fact
671
+
672
+ 11:32.380 --> 11:35.320
673
+ that there were multiple hypotheses for that
674
+
675
+ 11:35.320 --> 11:37.660
676
+ because a vehicle at an intersection might be going right
677
+
678
+ 11:37.660 --> 11:38.780
679
+ or it might be going straight
680
+
681
+ 11:38.780 --> 11:40.620
682
+ or it might be making a left turn.
683
+
684
+ 11:41.500 --> 11:44.120
685
+ And we had to deal with the challenge of the fact
686
+
687
+ 11:44.120 --> 11:47.600
688
+ that our behavior was going to impact the behavior
689
+
690
+ 11:47.600 --> 11:48.960
691
+ of that other operator.
692
+
693
+ 11:48.960 --> 11:53.480
694
+ And we did a lot of that in relatively naive ways,
695
+
696
+ 11:53.480 --> 11:54.820
697
+ but it kind of worked.
698
+
699
+ 11:54.820 --> 11:57.080
700
+ Still had to have some kind of solution.
701
+
702
+ 11:57.080 --> 11:59.960
703
+ And so where does that, 10 years later,
704
+
705
+ 11:59.960 --> 12:01.520
706
+ where does that take us today
707
+
708
+ 12:01.520 --> 12:04.260
709
+ from that artificial city construction
710
+
711
+ 12:04.260 --> 12:07.000
712
+ to real cities to the urban environment?
713
+
714
+ 12:07.000 --> 12:09.160
715
+ Yeah, I think the biggest thing
716
+
717
+ 12:09.160 --> 12:14.160
718
+ is that the actors are truly unpredictable.
719
+
720
+ 12:15.720 --> 12:18.800
721
+ That most of the time, the drivers on the road,
722
+
723
+ 12:18.800 --> 12:23.800
724
+ the other road users are out there behaving well,
725
+
726
+ 12:24.080 --> 12:25.880
727
+ but every once in a while they're not.
728
+
729
+ 12:27.080 --> 12:32.080
730
+ The variety of other vehicles is, you have all of them.
731
+
732
+ 12:32.080 --> 12:35.840
733
+ In terms of behavior, in terms of perception, or both?
734
+
735
+ 12:35.840 --> 12:36.680
736
+ Both.
737
+
738
+ 12:38.740 --> 12:40.520
739
+ Back then we didn't have to deal with cyclists,
740
+
741
+ 12:40.520 --> 12:42.800
742
+ we didn't have to deal with pedestrians,
743
+
744
+ 12:42.800 --> 12:44.800
745
+ didn't have to deal with traffic lights.
746
+
747
+ 12:46.260 --> 12:49.400
748
+ The scale over which that you have to operate is now
749
+
750
+ 12:49.400 --> 12:51.120
751
+ is much larger than the air base
752
+
753
+ 12:51.120 --> 12:52.720
754
+ that we were thinking about back then.
755
+
756
+ 12:52.720 --> 12:55.420
757
+ So what, easy question,
758
+
759
+ 12:56.280 --> 12:59.720
760
+ what do you think is the hardest part about driving?
761
+
762
+ 12:59.720 --> 13:00.560
763
+ Easy question.
764
+
765
+ 13:00.560 --> 13:02.560
766
+ Yeah, no, I'm joking.
767
+
768
+ 13:02.560 --> 13:07.440
769
+ I'm sure nothing really jumps out at you as one thing,
770
+
771
+ 13:07.440 --> 13:12.440
772
+ but in the jump from the urban challenge to the real world,
773
+
774
+ 13:12.920 --> 13:15.320
775
+ is there something that's a particular,
776
+
777
+ 13:15.320 --> 13:18.480
778
+ you foresee as very serious, difficult challenge?
779
+
780
+ 13:18.480 --> 13:21.080
781
+ I think the most fundamental difference
782
+
783
+ 13:21.080 --> 13:25.340
784
+ is that we're doing it for real.
785
+
786
+ 13:26.760 --> 13:28.960
787
+ That in that environment,
788
+
789
+ 13:28.960 --> 13:31.880
790
+ it was both a limited complexity environment
791
+
792
+ 13:31.880 --> 13:33.240
793
+ because certain actors weren't there,
794
+
795
+ 13:33.240 --> 13:35.380
796
+ because the roads were maintained,
797
+
798
+ 13:35.380 --> 13:37.360
799
+ there were barriers keeping people separate
800
+
801
+ 13:37.360 --> 13:39.400
802
+ from robots at the time,
803
+
804
+ 13:40.840 --> 13:43.300
805
+ and it only had to work for 60 miles.
806
+
807
+ 13:43.300 --> 13:46.160
808
+ Which, looking at it from 2006,
809
+
810
+ 13:46.160 --> 13:48.960
811
+ it had to work for 60 miles, right?
812
+
813
+ 13:48.960 --> 13:50.940
814
+ Looking at it from now,
815
+
816
+ 13:51.880 --> 13:53.720
817
+ we want things that will go and drive
818
+
819
+ 13:53.720 --> 13:57.160
820
+ for half a million miles,
821
+
822
+ 13:57.160 --> 14:00.020
823
+ and it's just a different game.
824
+
825
+ 14:00.940 --> 14:03.480
826
+ So how important,
827
+
828
+ 14:03.480 --> 14:06.080
829
+ you said LiDAR came into the game early on,
830
+
831
+ 14:06.080 --> 14:07.880
832
+ and it's really the primary driver
833
+
834
+ 14:07.880 --> 14:10.240
835
+ of autonomous vehicles today as a sensor.
836
+
837
+ 14:10.240 --> 14:11.920
838
+ So how important is the role of LiDAR
839
+
840
+ 14:11.920 --> 14:14.800
841
+ in the sensor suite in the near term?
842
+
843
+ 14:14.800 --> 14:16.740
844
+ So I think it's essential.
845
+
846
+ 14:17.920 --> 14:20.480
847
+ I believe, but I also believe that cameras are essential,
848
+
849
+ 14:20.480 --> 14:22.120
850
+ and I believe the radar is essential.
851
+
852
+ 14:22.120 --> 14:26.280
853
+ I think that you really need to use
854
+
855
+ 14:26.280 --> 14:28.720
856
+ the composition of data from these different sensors
857
+
858
+ 14:28.720 --> 14:32.640
859
+ if you want the thing to really be robust.
860
+
861
+ 14:32.640 --> 14:34.360
862
+ The question I wanna ask,
863
+
864
+ 14:34.360 --> 14:35.600
865
+ let's see if we can untangle it,
866
+
867
+ 14:35.600 --> 14:39.320
868
+ is what are your thoughts on the Elon Musk
869
+
870
+ 14:39.320 --> 14:42.340
871
+ provocative statement that LiDAR is a crutch,
872
+
873
+ 14:42.340 --> 14:47.340
874
+ that it's a kind of, I guess, growing pains,
875
+
876
+ 14:47.760 --> 14:49.920
877
+ and that much of the perception task
878
+
879
+ 14:49.920 --> 14:52.120
880
+ can be done with cameras?
881
+
882
+ 14:52.120 --> 14:55.440
883
+ So I think it is undeniable
884
+
885
+ 14:55.440 --> 14:59.360
886
+ that people walk around without lasers in their foreheads,
887
+
888
+ 14:59.360 --> 15:01.880
889
+ and they can get into vehicles and drive them,
890
+
891
+ 15:01.880 --> 15:05.600
892
+ and so there's an existence proof
893
+
894
+ 15:05.600 --> 15:09.600
895
+ that you can drive using passive vision.
896
+
897
+ 15:10.880 --> 15:12.720
898
+ No doubt, can't argue with that.
899
+
900
+ 15:12.720 --> 15:14.680
901
+ In terms of sensors, yeah, so there's proof.
902
+
903
+ 15:14.680 --> 15:16.000
904
+ Yeah, in terms of sensors, right?
905
+
906
+ 15:16.000 --> 15:20.200
907
+ So there's an example that we all go do it,
908
+
909
+ 15:20.200 --> 15:21.380
910
+ many of us every day.
911
+
912
+ 15:21.380 --> 15:26.380
913
+ In terms of LiDAR being a crutch, sure.
914
+
915
+ 15:28.180 --> 15:33.100
916
+ But in the same way that the combustion engine
917
+
918
+ 15:33.100 --> 15:35.260
919
+ was a crutch on the path to an electric vehicle,
920
+
921
+ 15:35.260 --> 15:39.300
922
+ in the same way that any technology ultimately gets
923
+
924
+ 15:40.840 --> 15:44.380
925
+ replaced by some superior technology in the future,
926
+
927
+ 15:44.380 --> 15:47.740
928
+ and really the way that I look at this
929
+
930
+ 15:47.740 --> 15:51.460
931
+ is that the way we get around on the ground,
932
+
933
+ 15:51.460 --> 15:53.920
934
+ the way that we use transportation is broken,
935
+
936
+ 15:55.280 --> 15:59.740
937
+ and that we have this, I think the number I saw this morning,
938
+
939
+ 15:59.740 --> 16:04.060
940
+ 37,000 Americans killed last year on our roads,
941
+
942
+ 16:04.060 --> 16:05.380
943
+ and that's just not acceptable.
944
+
945
+ 16:05.380 --> 16:09.460
946
+ And so any technology that we can bring to bear
947
+
948
+ 16:09.460 --> 16:12.860
949
+ that accelerates this self driving technology
950
+
951
+ 16:12.860 --> 16:14.640
952
+ coming to market and saving lives
953
+
954
+ 16:14.640 --> 16:17.320
955
+ is technology we should be using.
956
+
957
+ 16:18.280 --> 16:20.840
958
+ And it feels just arbitrary to say,
959
+
960
+ 16:20.840 --> 16:25.840
961
+ well, I'm not okay with using lasers
962
+
963
+ 16:26.240 --> 16:27.820
964
+ because that's whatever,
965
+
966
+ 16:27.820 --> 16:30.720
967
+ but I am okay with using an eight megapixel camera
968
+
969
+ 16:30.720 --> 16:32.880
970
+ or a 16 megapixel camera.
971
+
972
+ 16:32.880 --> 16:34.640
973
+ These are just bits of technology,
974
+
975
+ 16:34.640 --> 16:36.360
976
+ and we should be taking the best technology
977
+
978
+ 16:36.360 --> 16:41.360
979
+ from the tool bin that allows us to go and solve a problem.
980
+
981
+ 16:41.360 --> 16:45.160
982
+ The question I often talk to, well, obviously you do as well,
983
+
984
+ 16:45.160 --> 16:48.280
985
+ to sort of automotive companies,
986
+
987
+ 16:48.280 --> 16:51.360
988
+ and if there's one word that comes up more often
989
+
990
+ 16:51.360 --> 16:55.280
991
+ than anything, it's cost, and trying to drive costs down.
992
+
993
+ 16:55.280 --> 17:00.280
994
+ So while it's true that it's a tragic number, the 37,000,
995
+
996
+ 17:01.400 --> 17:04.880
997
+ the question is, and I'm not the one asking this question
998
+
999
+ 17:04.880 --> 17:05.820
1000
+ because I hate this question,
1001
+
1002
+ 17:05.820 --> 17:09.960
1003
+ but we want to find the cheapest sensor suite
1004
+
1005
+ 17:09.960 --> 17:13.280
1006
+ that creates a safe vehicle.
1007
+
1008
+ 17:13.280 --> 17:18.220
1009
+ So in that uncomfortable trade off,
1010
+
1011
+ 17:18.220 --> 17:23.220
1012
+ do you foresee LiDAR coming down in cost in the future,
1013
+
1014
+ 17:23.680 --> 17:26.680
1015
+ or do you see a day where level four autonomy
1016
+
1017
+ 17:26.680 --> 17:29.880
1018
+ is possible without LiDAR?
1019
+
1020
+ 17:29.880 --> 17:32.880
1021
+ I see both of those, but it's really a matter of time.
1022
+
1023
+ 17:32.880 --> 17:36.040
1024
+ And I think really, maybe I would talk to the question
1025
+
1026
+ 17:36.040 --> 17:37.840
1027
+ you asked about the cheapest sensor.
1028
+
1029
+ 17:37.840 --> 17:40.360
1030
+ I don't think that's actually what you want.
1031
+
1032
+ 17:40.360 --> 17:45.360
1033
+ What you want is a sensor suite that is economically viable.
1034
+
1035
+ 17:45.680 --> 17:49.440
1036
+ And then after that, everything is about margin
1037
+
1038
+ 17:49.440 --> 17:52.120
1039
+ and driving costs out of the system.
1040
+
1041
+ 17:52.120 --> 17:55.360
1042
+ What you also want is a sensor suite that works.
1043
+
1044
+ 17:55.360 --> 17:58.200
1045
+ And so it's great to tell a story about
1046
+
1047
+ 17:59.600 --> 18:03.260
1048
+ how it would be better to have a self driving system
1049
+
1050
+ 18:03.260 --> 18:08.040
1051
+ with a $50 sensor instead of a $500 sensor.
1052
+
1053
+ 18:08.040 --> 18:10.520
1054
+ But if the $500 sensor makes it work
1055
+
1056
+ 18:10.520 --> 18:14.760
1057
+ and the $50 sensor doesn't work, who cares?
1058
+
1059
+ 18:15.680 --> 18:20.020
1060
+ So long as you can actually have an economic opportunity,
1061
+
1062
+ 18:20.020 --> 18:21.520
1063
+ there's an economic opportunity there.
1064
+
1065
+ 18:21.520 --> 18:23.760
1066
+ And the economic opportunity is important
1067
+
1068
+ 18:23.760 --> 18:27.760
1069
+ because that's how you actually have a sustainable business
1070
+
1071
+ 18:27.760 --> 18:31.120
1072
+ and that's how you can actually see this come to scale
1073
+
1074
+ 18:31.120 --> 18:32.400
1075
+ and be out in the world.
1076
+
1077
+ 18:32.400 --> 18:34.780
1078
+ And so when I look at LiDAR,
1079
+
1080
+ 18:35.960 --> 18:38.880
1081
+ I see a technology that has no underlying
1082
+
1083
+ 18:38.880 --> 18:42.420
1084
+ fundamentally expense to it, fundamental expense to it.
1085
+
1086
+ 18:42.420 --> 18:46.080
1087
+ It's going to be more expensive than an imager
1088
+
1089
+ 18:46.080 --> 18:50.360
1090
+ because CMOS processes or fab processes
1091
+
1092
+ 18:51.360 --> 18:55.080
1093
+ are dramatically more scalable than mechanical processes.
1094
+
1095
+ 18:56.200 --> 18:58.320
1096
+ But we still should be able to drive costs down
1097
+
1098
+ 18:58.320 --> 19:00.120
1099
+ substantially on that side.
1100
+
1101
+ 19:00.120 --> 19:04.840
1102
+ And then I also do think that with the right business model
1103
+
1104
+ 19:05.880 --> 19:07.560
1105
+ you can absorb more,
1106
+
1107
+ 19:07.560 --> 19:09.480
1108
+ certainly more cost on the bill of materials.
1109
+
1110
+ 19:09.480 --> 19:12.600
1111
+ Yeah, if the sensor suite works, extra value is provided,
1112
+
1113
+ 19:12.600 --> 19:15.480
1114
+ thereby you don't need to drive costs down to zero.
1115
+
1116
+ 19:15.480 --> 19:17.100
1117
+ It's the basic economics.
1118
+
1119
+ 19:17.100 --> 19:18.820
1120
+ You've talked about your intuition
1121
+
1122
+ 19:18.820 --> 19:22.200
1123
+ that level two autonomy is problematic
1124
+
1125
+ 19:22.200 --> 19:25.920
1126
+ because of the human factor of vigilance,
1127
+
1128
+ 19:25.920 --> 19:28.040
1129
+ decrement, complacency, over trust and so on,
1130
+
1131
+ 19:28.040 --> 19:29.600
1132
+ just us being human.
1133
+
1134
+ 19:29.600 --> 19:31.120
1135
+ We over trust the system,
1136
+
1137
+ 19:31.120 --> 19:34.240
1138
+ we start doing even more so partaking
1139
+
1140
+ 19:34.240 --> 19:37.180
1141
+ in the secondary activities like smartphones and so on.
1142
+
1143
+ 19:38.680 --> 19:43.000
1144
+ Have your views evolved on this point in either direction?
1145
+
1146
+ 19:43.000 --> 19:44.800
1147
+ Can you speak to it?
1148
+
1149
+ 19:44.800 --> 19:47.480
1150
+ So, and I want to be really careful
1151
+
1152
+ 19:47.480 --> 19:50.380
1153
+ because sometimes this gets twisted in a way
1154
+
1155
+ 19:50.380 --> 19:53.040
1156
+ that I certainly didn't intend.
1157
+
1158
+ 19:53.040 --> 19:58.040
1159
+ So active safety systems are a really important technology
1160
+
1161
+ 19:58.040 --> 20:00.680
1162
+ that we should be pursuing and integrating into vehicles.
1163
+
1164
+ 20:02.080 --> 20:04.280
1165
+ And there's an opportunity in the near term
1166
+
1167
+ 20:04.280 --> 20:06.520
1168
+ to reduce accidents, reduce fatalities,
1169
+
1170
+ 20:06.520 --> 20:10.320
1171
+ and we should be pushing on that.
1172
+
1173
+ 20:11.960 --> 20:14.680
1174
+ Level two systems are systems
1175
+
1176
+ 20:14.680 --> 20:18.080
1177
+ where the vehicle is controlling two axes.
1178
+
1179
+ 20:18.080 --> 20:21.720
1180
+ So braking and throttle slash steering.
1181
+
1182
+ 20:23.480 --> 20:25.680
1183
+ And I think there are variants of level two systems
1184
+
1185
+ 20:25.680 --> 20:27.280
1186
+ that are supporting the driver.
1187
+
1188
+ 20:27.280 --> 20:31.080
1189
+ That absolutely we should encourage to be out there.
1190
+
1191
+ 20:31.080 --> 20:32.880
1192
+ Where I think there's a real challenge
1193
+
1194
+ 20:32.880 --> 20:37.640
1195
+ is in the human factors part around this
1196
+
1197
+ 20:37.640 --> 20:41.240
1198
+ and the misconception from the public
1199
+
1200
+ 20:41.240 --> 20:43.600
1201
+ around the capability set that that enables
1202
+
1203
+ 20:43.600 --> 20:45.640
1204
+ and the trust that they should have in it.
1205
+
1206
+ 20:46.640 --> 20:50.000
1207
+ And that is where I kind of,
1208
+
1209
+ 20:50.000 --> 20:52.920
1210
+ I'm actually incrementally more concerned
1211
+
1212
+ 20:52.920 --> 20:54.440
1213
+ around level three systems
1214
+
1215
+ 20:54.440 --> 20:58.440
1216
+ and how exactly a level two system is marketed and delivered
1217
+
1218
+ 20:58.440 --> 21:01.840
1219
+ and how much effort people have put into those human factors.
1220
+
1221
+ 21:01.840 --> 21:05.640
1222
+ So I still believe several things around this.
1223
+
1224
+ 21:05.640 --> 21:09.440
1225
+ One is people will overtrust the technology.
1226
+
1227
+ 21:09.440 --> 21:11.440
1228
+ We've seen over the last few weeks
1229
+
1230
+ 21:11.440 --> 21:14.040
1231
+ a spate of people sleeping in their Tesla.
1232
+
1233
+ 21:14.920 --> 21:19.920
1234
+ I watched an episode last night of Trevor Noah
1235
+
1236
+ 21:19.920 --> 21:23.920
1237
+ talking about this and him,
1238
+
1239
+ 21:23.920 --> 21:26.720
1240
+ this is a smart guy who has a lot of resources
1241
+
1242
+ 21:26.720 --> 21:30.720
1243
+ at his disposal describing a Tesla as a self driving car
1244
+
1245
+ 21:30.720 --> 21:33.480
1246
+ and that why shouldn't people be sleeping in their Tesla?
1247
+
1248
+ 21:33.480 --> 21:36.560
1249
+ And it's like, well, because it's not a self driving car
1250
+
1251
+ 21:36.560 --> 21:38.840
1252
+ and it is not intended to be
1253
+
1254
+ 21:38.840 --> 21:43.840
1255
+ and these people will almost certainly die at some point
1256
+
1257
+ 21:46.400 --> 21:48.040
1258
+ or hurt other people.
1259
+
1260
+ 21:48.040 --> 21:50.080
1261
+ And so we need to really be thoughtful
1262
+
1263
+ 21:50.080 --> 21:51.840
1264
+ about how that technology is described
1265
+
1266
+ 21:51.840 --> 21:53.280
1267
+ and brought to market.
1268
+
1269
+ 21:54.240 --> 21:59.240
1270
+ I also think that because of the economic challenges
1271
+
1272
+ 21:59.240 --> 22:01.240
1273
+ we were just talking about,
1274
+
1275
+ 22:01.240 --> 22:05.160
1276
+ that these level two driver assistance systems,
1277
+
1278
+ 22:05.160 --> 22:07.280
1279
+ that technology path will diverge
1280
+
1281
+ 22:07.280 --> 22:10.200
1282
+ from the technology path that we need to be on
1283
+
1284
+ 22:10.200 --> 22:14.080
1285
+ to actually deliver truly self driving vehicles,
1286
+
1287
+ 22:14.080 --> 22:16.920
1288
+ ones where you can get in it and drive it.
1289
+
1290
+ 22:16.920 --> 22:20.800
1291
+ Can get in it and sleep and have the equivalent
1292
+
1293
+ 22:20.800 --> 22:24.680
1294
+ or better safety than a human driver behind the wheel.
1295
+
1296
+ 22:24.680 --> 22:27.520
1297
+ Because again, the economics are very different
1298
+
1299
+ 22:28.480 --> 22:30.880
1300
+ in those two worlds and so that leads
1301
+
1302
+ 22:30.880 --> 22:32.800
1303
+ to divergent technology.
1304
+
1305
+ 22:32.800 --> 22:34.680
1306
+ So you just don't see the economics
1307
+
1308
+ 22:34.680 --> 22:38.560
1309
+ of gradually increasing from level two
1310
+
1311
+ 22:38.560 --> 22:41.600
1312
+ and doing so quickly enough
1313
+
1314
+ 22:41.600 --> 22:44.480
1315
+ to where it doesn't cause safety, critical safety concerns.
1316
+
1317
+ 22:44.480 --> 22:47.680
1318
+ You believe that it needs to diverge at this point
1319
+
1320
+ 22:48.680 --> 22:50.800
1321
+ into basically different routes.
1322
+
1323
+ 22:50.800 --> 22:55.560
1324
+ And really that comes back to what are those L2
1325
+
1326
+ 22:55.560 --> 22:57.080
1327
+ and L1 systems doing?
1328
+
1329
+ 22:57.080 --> 22:59.840
1330
+ And they are driver assistance functions
1331
+
1332
+ 22:59.840 --> 23:04.400
1333
+ where the people that are marketing that responsibly
1334
+
1335
+ 23:04.400 --> 23:08.000
1336
+ are being very clear and putting human factors in place
1337
+
1338
+ 23:08.000 --> 23:12.440
1339
+ such that the driver is actually responsible for the vehicle
1340
+
1341
+ 23:12.440 --> 23:15.160
1342
+ and that the technology is there to support the driver.
1343
+
1344
+ 23:15.160 --> 23:19.880
1345
+ And the safety cases that are built around those
1346
+
1347
+ 23:19.880 --> 23:24.040
1348
+ are dependent on that driver attention and attentiveness.
1349
+
1350
+ 23:24.040 --> 23:28.000
1351
+ And at that point, you can kind of give up
1352
+
1353
+ 23:29.160 --> 23:31.240
1354
+ to some degree for economic reasons,
1355
+
1356
+ 23:31.240 --> 23:33.480
1357
+ you can give up on say false negatives.
1358
+
1359
+ 23:34.800 --> 23:36.200
1360
+ And the way to think about this
1361
+
1362
+ 23:36.200 --> 23:39.320
1363
+ is for a forward collision mitigation braking system,
1364
+
1365
+ 23:39.320 --> 23:43.960
1366
+ if, half the times the driver missed a vehicle
1367
+
1368
+ 23:43.960 --> 23:46.080
1369
+ in front of it, it hit the brakes
1370
+
1371
+ 23:46.080 --> 23:47.680
1372
+ and brought the vehicle to a stop,
1373
+
1374
+ 23:47.680 --> 23:51.640
1375
+ that would be an incredible, incredible advance
1376
+
1377
+ 23:51.640 --> 23:53.040
1378
+ in safety on our roads, right?
1379
+
1380
+ 23:53.040 --> 23:55.000
1381
+ That would be equivalent to seat belts.
1382
+
1383
+ 23:55.000 --> 23:56.600
1384
+ But it would mean that if that vehicle
1385
+
1386
+ 23:56.600 --> 23:59.440
1387
+ wasn't being monitored, it would hit one out of two cars.
1388
+
1389
+ 24:00.600 --> 24:05.120
1390
+ And so economically, that's a perfectly good solution
1391
+
1392
+ 24:05.120 --> 24:06.280
1393
+ for a driver assistance system.
1394
+
1395
+ 24:06.280 --> 24:07.240
1396
+ What you should do at that point,
1397
+
1398
+ 24:07.240 --> 24:09.240
1399
+ if you can get it to work 50% of the time,
1400
+
1401
+ 24:09.240 --> 24:10.520
1402
+ is drive the cost out of that
1403
+
1404
+ 24:10.520 --> 24:13.320
1405
+ so you can get it on as many vehicles as possible.
1406
+
1407
+ 24:13.320 --> 24:14.760
1408
+ But driving the cost out of it
1409
+
1410
+ 24:14.760 --> 24:18.800
1411
+ doesn't drive up performance on the false negative case.
1412
+
1413
+ 24:18.800 --> 24:21.440
1414
+ And so you'll continue to not have a technology
1415
+
1416
+ 24:21.440 --> 24:25.680
1417
+ that could really be available for a self driven vehicle.
1418
+
1419
+ 24:25.680 --> 24:28.440
1420
+ So clearly the communication,
1421
+
1422
+ 24:28.440 --> 24:31.600
1423
+ and this probably applies to all four vehicles as well,
1424
+
1425
+ 24:31.600 --> 24:34.440
1426
+ the marketing and communication
1427
+
1428
+ 24:34.440 --> 24:37.040
1429
+ of what the technology is actually capable of,
1430
+
1431
+ 24:37.040 --> 24:38.400
1432
+ how hard it is, how easy it is,
1433
+
1434
+ 24:38.400 --> 24:41.000
1435
+ all that kind of stuff is highly problematic.
1436
+
1437
+ 24:41.000 --> 24:45.640
1438
+ So say everybody in the world was perfectly communicated
1439
+
1440
+ 24:45.640 --> 24:48.400
1441
+ and were made to be completely aware
1442
+
1443
+ 24:48.400 --> 24:50.000
1444
+ of every single technology out there,
1445
+
1446
+ 24:50.000 --> 24:52.840
1447
+ what it's able to do.
1448
+
1449
+ 24:52.840 --> 24:54.120
1450
+ What's your intuition?
1451
+
1452
+ 24:54.120 --> 24:56.880
1453
+ And now we're maybe getting into philosophical ground.
1454
+
1455
+ 24:56.880 --> 25:00.000
1456
+ Is it possible to have a level two vehicle
1457
+
1458
+ 25:00.000 --> 25:03.280
1459
+ where we don't over trust it?
1460
+
1461
+ 25:04.680 --> 25:05.800
1462
+ I don't think so.
1463
+
1464
+ 25:05.800 --> 25:10.800
1465
+ If people truly understood the risks and internalized it,
1466
+
1467
+ 25:11.160 --> 25:14.320
1468
+ then sure, you could do that safely.
1469
+
1470
+ 25:14.320 --> 25:16.160
1471
+ But that's a world that doesn't exist.
1472
+
1473
+ 25:16.160 --> 25:17.520
1474
+ The people are going to,
1475
+
1476
+ 25:18.720 --> 25:20.760
1477
+ if the facts are put in front of them,
1478
+
1479
+ 25:20.760 --> 25:24.440
1480
+ they're gonna then combine that with their experience.
1481
+
1482
+ 25:24.440 --> 25:28.360
1483
+ And let's say they're using an L2 system
1484
+
1485
+ 25:28.360 --> 25:30.800
1486
+ and they go up and down the 101 every day
1487
+
1488
+ 25:30.800 --> 25:32.720
1489
+ and they do that for a month.
1490
+
1491
+ 25:32.720 --> 25:36.200
1492
+ And it just worked every day for a month.
1493
+
1494
+ 25:36.200 --> 25:39.000
1495
+ Like that's pretty compelling at that point,
1496
+
1497
+ 25:39.000 --> 25:41.800
1498
+ just even if you know the statistics,
1499
+
1500
+ 25:41.800 --> 25:43.400
1501
+ you're like, well, I don't know,
1502
+
1503
+ 25:43.400 --> 25:44.760
1504
+ maybe there's something funny about those.
1505
+
1506
+ 25:44.760 --> 25:46.920
1507
+ Maybe they're driving in difficult places.
1508
+
1509
+ 25:46.920 --> 25:49.840
1510
+ Like I've seen it with my own eyes, it works.
1511
+
1512
+ 25:49.840 --> 25:52.400
1513
+ And the problem is that that sample size that they have,
1514
+
1515
+ 25:52.400 --> 25:53.880
1516
+ so it's 30 miles up and down,
1517
+
1518
+ 25:53.880 --> 25:56.360
1519
+ so 60 miles times 30 days,
1520
+
1521
+ 25:56.360 --> 25:58.720
1522
+ so 60, 180, 1,800 miles.
1523
+
1524
+ 25:58.720 --> 26:03.280
1525
+ Like that's a drop in the bucket
1526
+
1527
+ 26:03.280 --> 26:07.640
1528
+ compared to the, what, 85 million miles between fatalities.
1529
+
1530
+ 26:07.640 --> 26:11.400
1531
+ And so they don't really have a true estimate
1532
+
1533
+ 26:11.400 --> 26:14.440
1534
+ based on their personal experience of the real risks,
1535
+
1536
+ 26:14.440 --> 26:15.640
1537
+ but they're gonna trust it anyway,
1538
+
1539
+ 26:15.640 --> 26:16.480
1540
+ because it's hard not to.
1541
+
1542
+ 26:16.480 --> 26:18.640
1543
+ It worked for a month, what's gonna change?
1544
+
1545
+ 26:18.640 --> 26:21.640
1546
+ So even if you start with a perfect understanding of the system,
1547
+
1548
+ 26:21.640 --> 26:24.160
1549
+ your own experience will make it drift.
1550
+
1551
+ 26:24.160 --> 26:25.920
1552
+ I mean, that's a big concern.
1553
+
1554
+ 26:25.920 --> 26:28.160
1555
+ Over a year, over two years even,
1556
+
1557
+ 26:28.160 --> 26:29.440
1558
+ it doesn't have to be months.
1559
+
1560
+ 26:29.440 --> 26:32.920
1561
+ And I think that as this technology moves
1562
+
1563
+ 26:32.920 --> 26:37.760
1564
+ from what I would say is kind of the more technology savvy
1565
+
1566
+ 26:37.760 --> 26:40.880
1567
+ ownership group to the mass market,
1568
+
1569
+ 26:42.640 --> 26:44.600
1570
+ you may be able to have some of those folks
1571
+
1572
+ 26:44.600 --> 26:46.280
1573
+ who are really familiar with technology,
1574
+
1575
+ 26:46.280 --> 26:48.840
1576
+ they may be able to internalize it better.
1577
+
1578
+ 26:48.840 --> 26:50.800
1579
+ And your kind of immunization
1580
+
1581
+ 26:50.800 --> 26:53.360
1582
+ against this kind of false risk assessment
1583
+
1584
+ 26:53.360 --> 26:54.280
1585
+ might last longer,
1586
+
1587
+ 26:54.280 --> 26:58.680
1588
+ but as folks who aren't as savvy about that
1589
+
1590
+ 26:58.680 --> 27:00.880
1591
+ read the material and they compare that
1592
+
1593
+ 27:00.880 --> 27:02.160
1594
+ to their personal experience,
1595
+
1596
+ 27:02.160 --> 27:07.160
1597
+ I think there it's going to move more quickly.
1598
+
1599
+ 27:08.160 --> 27:11.280
1600
+ So your work, the program that you've created at Google
1601
+
1602
+ 27:11.280 --> 27:16.280
1603
+ and now at Aurora is focused more on the second path
1604
+
1605
+ 27:16.600 --> 27:18.480
1606
+ of creating full autonomy.
1607
+
1608
+ 27:18.480 --> 27:20.880
1609
+ So it's such a fascinating,
1610
+
1611
+ 27:20.880 --> 27:24.560
1612
+ I think it's one of the most interesting AI problems
1613
+
1614
+ 27:24.560 --> 27:25.600
1615
+ of the century, right?
1616
+
1617
+ 27:25.600 --> 27:28.280
1618
+ It's, I just talked to a lot of people,
1619
+
1620
+ 27:28.280 --> 27:29.440
1621
+ just regular people, I don't know,
1622
+
1623
+ 27:29.440 --> 27:31.720
1624
+ my mom, about autonomous vehicles,
1625
+
1626
+ 27:31.720 --> 27:34.520
1627
+ and you begin to grapple with ideas
1628
+
1629
+ 27:34.520 --> 27:38.080
1630
+ of giving your life control over to a machine.
1631
+
1632
+ 27:38.080 --> 27:40.040
1633
+ It's philosophically interesting,
1634
+
1635
+ 27:40.040 --> 27:41.760
1636
+ it's practically interesting.
1637
+
1638
+ 27:41.760 --> 27:43.720
1639
+ So let's talk about safety.
1640
+
1641
+ 27:43.720 --> 27:46.240
1642
+ How do you think we demonstrate,
1643
+
1644
+ 27:46.240 --> 27:47.880
1645
+ you've spoken about metrics in the past,
1646
+
1647
+ 27:47.880 --> 27:51.880
1648
+ how do you think we demonstrate to the world
1649
+
1650
+ 27:51.880 --> 27:56.160
1651
+ that an autonomous vehicle, an Aurora system is safe?
1652
+
1653
+ 27:56.160 --> 27:57.320
1654
+ This is one where it's difficult
1655
+
1656
+ 27:57.320 --> 27:59.280
1657
+ because there isn't a soundbite answer.
1658
+
1659
+ 27:59.280 --> 28:04.280
1660
+ That we have to show a combination of work
1661
+
1662
+ 28:05.960 --> 28:08.360
1663
+ that was done diligently and thoughtfully,
1664
+
1665
+ 28:08.360 --> 28:10.840
1666
+ and this is where something like a functional safety process
1667
+
1668
+ 28:10.840 --> 28:11.680
1669
+ is part of that.
1670
+
1671
+ 28:11.680 --> 28:14.360
1672
+ It's like here's the way we did the work,
1673
+
1674
+ 28:15.280 --> 28:17.160
1675
+ that means that we were very thorough.
1676
+
1677
+ 28:17.160 --> 28:20.040
1678
+ So if you believe that what we said
1679
+
1680
+ 28:20.040 --> 28:21.440
1681
+ about this is the way we did it,
1682
+
1683
+ 28:21.440 --> 28:22.720
1684
+ then you can have some confidence
1685
+
1686
+ 28:22.720 --> 28:25.200
1687
+ that we were thorough in the engineering work
1688
+
1689
+ 28:25.200 --> 28:26.920
1690
+ we put into the system.
1691
+
1692
+ 28:26.920 --> 28:28.920
1693
+ And then on top of that,
1694
+
1695
+ 28:28.920 --> 28:32.000
1696
+ to kind of demonstrate that we weren't just thorough,
1697
+
1698
+ 28:32.000 --> 28:33.800
1699
+ we were actually good at what we did,
1700
+
1701
+ 28:35.280 --> 28:38.200
1702
+ there'll be a kind of a collection of evidence
1703
+
1704
+ 28:38.200 --> 28:40.440
1705
+ in terms of demonstrating that the capabilities
1706
+
1707
+ 28:40.440 --> 28:42.920
1708
+ worked the way we thought they did,
1709
+
1710
+ 28:42.920 --> 28:45.320
1711
+ statistically and to whatever degree
1712
+
1713
+ 28:45.320 --> 28:47.280
1714
+ we can demonstrate that,
1715
+
1716
+ 28:48.160 --> 28:50.320
1717
+ both in some combination of simulations,
1718
+
1719
+ 28:50.320 --> 28:53.080
1720
+ some combination of unit testing
1721
+
1722
+ 28:53.080 --> 28:54.640
1723
+ and decomposition testing,
1724
+
1725
+ 28:54.640 --> 28:57.000
1726
+ and then some part of it will be on road data.
1727
+
1728
+ 28:58.160 --> 29:02.680
1729
+ And I think the way we'll ultimately
1730
+
1731
+ 29:02.680 --> 29:04.000
1732
+ convey this to the public
1733
+
1734
+ 29:04.000 --> 29:06.760
1735
+ is there'll be clearly some conversation
1736
+
1737
+ 29:06.760 --> 29:08.200
1738
+ with the public about it,
1739
+
1740
+ 29:08.200 --> 29:12.040
1741
+ but we'll kind of invoke the kind of the trusted nodes
1742
+
1743
+ 29:12.040 --> 29:13.880
1744
+ and that we'll spend more time
1745
+
1746
+ 29:13.880 --> 29:17.280
1747
+ being able to go into more depth with folks like NHTSA
1748
+
1749
+ 29:17.280 --> 29:19.720
1750
+ and other federal and state regulatory bodies
1751
+
1752
+ 29:19.720 --> 29:22.080
1753
+ and kind of given that they are
1754
+
1755
+ 29:22.080 --> 29:25.200
1756
+ operating in the public interest and they're trusted,
1757
+
1758
+ 29:26.240 --> 29:28.640
1759
+ that if we can show enough work to them
1760
+
1761
+ 29:28.640 --> 29:30.000
1762
+ that they're convinced,
1763
+
1764
+ 29:30.000 --> 29:33.800
1765
+ then I think we're in a pretty good place.
1766
+
1767
+ 29:33.800 --> 29:35.000
1768
+ That means you work with people
1769
+
1770
+ 29:35.000 --> 29:36.920
1771
+ that are essentially experts at safety
1772
+
1773
+ 29:36.920 --> 29:39.000
1774
+ to try to discuss and show.
1775
+
1776
+ 29:39.000 --> 29:41.720
1777
+ Do you think, the answer's probably no,
1778
+
1779
+ 29:41.720 --> 29:42.920
1780
+ but just in case,
1781
+
1782
+ 29:42.920 --> 29:44.360
1783
+ do you think there exists a metric?
1784
+
1785
+ 29:44.360 --> 29:46.320
1786
+ So currently people have been using
1787
+
1788
+ 29:46.320 --> 29:48.200
1789
+ number of disengagements.
1790
+
1791
+ 29:48.200 --> 29:50.120
1792
+ And it quickly turns into a marketing scheme
1793
+
1794
+ 29:50.120 --> 29:54.280
1795
+ to sort of you alter the experiments you run to adjust.
1796
+
1797
+ 29:54.280 --> 29:56.280
1798
+ I think you've spoken that you don't like.
1799
+
1800
+ 29:56.280 --> 29:57.120
1801
+ Don't love it.
1802
+
1803
+ 29:57.120 --> 29:59.680
1804
+ No, in fact, I was on the record telling DMV
1805
+
1806
+ 29:59.680 --> 30:01.960
1807
+ that I thought this was not a great metric.
1808
+
1809
+ 30:01.960 --> 30:05.280
1810
+ Do you think it's possible to create a metric,
1811
+
1812
+ 30:05.280 --> 30:09.440
1813
+ a number that could demonstrate safety
1814
+
1815
+ 30:09.440 --> 30:12.320
1816
+ outside of fatalities?
1817
+
1818
+ 30:12.320 --> 30:13.440
1819
+ So I do.
1820
+
1821
+ 30:13.440 --> 30:16.560
1822
+ And I think that it won't be just one number.
1823
+
1824
+ 30:17.600 --> 30:21.280
1825
+ So as we are internally grappling with this,
1826
+
1827
+ 30:21.280 --> 30:23.560
1828
+ and at some point we'll be able to talk
1829
+
1830
+ 30:23.560 --> 30:25.040
1831
+ more publicly about it,
1832
+
1833
+ 30:25.040 --> 30:28.520
1834
+ is how do we think about human performance
1835
+
1836
+ 30:28.520 --> 30:29.840
1837
+ in different tasks,
1838
+
1839
+ 30:29.840 --> 30:32.160
1840
+ say detecting traffic lights
1841
+
1842
+ 30:32.160 --> 30:36.200
1843
+ or safely making a left turn across traffic?
1844
+
1845
+ 30:37.680 --> 30:40.080
1846
+ And what do we think the failure rates are
1847
+
1848
+ 30:40.080 --> 30:42.520
1849
+ for those different capabilities for people?
1850
+
1851
+ 30:42.520 --> 30:44.760
1852
+ And then demonstrating to ourselves
1853
+
1854
+ 30:44.760 --> 30:48.480
1855
+ and then ultimately folks in the regulatory role
1856
+
1857
+ 30:48.480 --> 30:50.760
1858
+ and then ultimately the public
1859
+
1860
+ 30:50.760 --> 30:52.400
1861
+ that we have confidence that our system
1862
+
1863
+ 30:52.400 --> 30:54.760
1864
+ will work better than that.
1865
+
1866
+ 30:54.760 --> 30:57.040
1867
+ And so these individual metrics
1868
+
1869
+ 30:57.040 --> 31:00.720
1870
+ will kind of tell a compelling story ultimately.
1871
+
1872
+ 31:01.760 --> 31:03.920
1873
+ I do think at the end of the day
1874
+
1875
+ 31:03.920 --> 31:06.640
1876
+ what we care about in terms of safety
1877
+
1878
+ 31:06.640 --> 31:11.320
1879
+ is life saved and injuries reduced.
1880
+
1881
+ 31:12.160 --> 31:15.280
1882
+ And then ultimately kind of casualty dollars
1883
+
1884
+ 31:16.440 --> 31:19.360
1885
+ that people aren't having to pay to get their car fixed.
1886
+
1887
+ 31:19.360 --> 31:22.680
1888
+ And I do think that in aviation
1889
+
1890
+ 31:22.680 --> 31:25.880
1891
+ they look at a kind of an event pyramid
1892
+
1893
+ 31:25.880 --> 31:28.600
1894
+ where a crash is at the top of that
1895
+
1896
+ 31:28.600 --> 31:30.440
1897
+ and that's the worst event obviously
1898
+
1899
+ 31:30.440 --> 31:34.240
1900
+ and then there's injuries and near miss events and whatnot
1901
+
1902
+ 31:34.240 --> 31:37.320
1903
+ and violation of operating procedures
1904
+
1905
+ 31:37.320 --> 31:40.160
1906
+ and you kind of build a statistical model
1907
+
1908
+ 31:40.160 --> 31:44.440
1909
+ of the relevance of the low severity things
1910
+
1911
+ 31:44.440 --> 31:45.280
1912
+ or the high severity things.
1913
+
1914
+ 31:45.280 --> 31:46.120
1915
+ And I think that's something
1916
+
1917
+ 31:46.120 --> 31:48.200
1918
+ where we'll be able to look at as well
1919
+
1920
+ 31:48.200 --> 31:51.840
1921
+ because an event per 85 million miles
1922
+
1923
+ 31:51.840 --> 31:54.440
1924
+ is statistically a difficult thing
1925
+
1926
+ 31:54.440 --> 31:56.800
1927
+ even at the scale of the U.S.
1928
+
1929
+ 31:56.800 --> 31:59.360
1930
+ to kind of compare directly.
1931
+
1932
+ 31:59.360 --> 32:02.240
1933
+ And that event fatality that's connected
1934
+
1935
+ 32:02.240 --> 32:07.240
1936
+ to an autonomous vehicle is significantly
1937
+
1938
+ 32:07.440 --> 32:09.160
1939
+ at least currently magnified
1940
+
1941
+ 32:09.160 --> 32:12.320
1942
+ in the amount of attention it gets.
1943
+
1944
+ 32:12.320 --> 32:15.080
1945
+ So that speaks to public perception.
1946
+
1947
+ 32:15.080 --> 32:16.720
1948
+ I think the most popular topic
1949
+
1950
+ 32:16.720 --> 32:19.480
1951
+ about autonomous vehicles in the public
1952
+
1953
+ 32:19.480 --> 32:23.080
1954
+ is the trolley problem formulation, right?
1955
+
1956
+ 32:23.080 --> 32:27.000
1957
+ Which has, let's not get into that too much
1958
+
1959
+ 32:27.000 --> 32:29.600
1960
+ but is misguided in many ways.
1961
+
1962
+ 32:29.600 --> 32:32.320
1963
+ But it speaks to the fact that people are grappling
1964
+
1965
+ 32:32.320 --> 32:36.160
1966
+ with this idea of giving control over to a machine.
1967
+
1968
+ 32:36.160 --> 32:41.160
1969
+ So how do you win the hearts and minds of the people
1970
+
1971
+ 32:41.560 --> 32:44.600
1972
+ that autonomy is something that could be a part
1973
+
1974
+ 32:44.600 --> 32:45.520
1975
+ of their lives?
1976
+
1977
+ 32:45.520 --> 32:47.640
1978
+ I think you let them experience it, right?
1979
+
1980
+ 32:47.640 --> 32:50.440
1981
+ I think it's right.
1982
+
1983
+ 32:50.440 --> 32:52.800
1984
+ I think people should be skeptical.
1985
+
1986
+ 32:52.800 --> 32:55.680
1987
+ I think people should ask questions.
1988
+
1989
+ 32:55.680 --> 32:57.000
1990
+ I think they should doubt
1991
+
1992
+ 32:57.000 --> 33:00.120
1993
+ because this is something new and different.
1994
+
1995
+ 33:00.120 --> 33:01.880
1996
+ They haven't touched it yet.
1997
+
1998
+ 33:01.880 --> 33:03.640
1999
+ And I think that's perfectly reasonable.
2000
+
2001
+ 33:03.640 --> 33:07.320
2002
+ And, but at the same time,
2003
+
2004
+ 33:07.320 --> 33:09.320
2005
+ it's clear there's an opportunity to make the road safer.
2006
+
2007
+ 33:09.320 --> 33:12.440
2008
+ It's clear that we can improve access to mobility.
2009
+
2010
+ 33:12.440 --> 33:14.960
2011
+ It's clear that we can reduce the cost of mobility.
2012
+
2013
+ 33:16.640 --> 33:19.480
2014
+ And that once people try that
2015
+
2016
+ 33:19.480 --> 33:22.720
2017
+ and understand that it's safe
2018
+
2019
+ 33:22.720 --> 33:24.440
2020
+ and are able to use in their daily lives,
2021
+
2022
+ 33:24.440 --> 33:25.280
2023
+ I think it's one of these things
2024
+
2025
+ 33:25.280 --> 33:28.040
2026
+ that will just be obvious.
2027
+
2028
+ 33:28.040 --> 33:32.240
2029
+ And I've seen this practically in demonstrations
2030
+
2031
+ 33:32.240 --> 33:35.560
2032
+ that I've given where I've had people come in
2033
+
2034
+ 33:35.560 --> 33:38.840
2035
+ and they're very skeptical.
2036
+
2037
+ 33:38.840 --> 33:40.440
2038
+ Again, in a vehicle, my favorite one
2039
+
2040
+ 33:40.440 --> 33:42.560
2041
+ is taking somebody out on the freeway
2042
+
2043
+ 33:42.560 --> 33:46.000
2044
+ and we're on the 101 driving at 65 miles an hour.
2045
+
2046
+ 33:46.000 --> 33:48.400
2047
+ And after 10 minutes, they kind of turn and ask,
2048
+
2049
+ 33:48.400 --> 33:49.480
2050
+ is that all it does?
2051
+
2052
+ 33:49.480 --> 33:52.080
2053
+ And you're like, it's a self driving car.
2054
+
2055
+ 33:52.080 --> 33:54.840
2056
+ I'm not sure exactly what you thought it would do, right?
2057
+
2058
+ 33:54.840 --> 33:57.920
2059
+ But it becomes mundane,
2060
+
2061
+ 33:58.840 --> 34:01.480
2062
+ which is exactly what you want a technology
2063
+
2064
+ 34:01.480 --> 34:02.720
2065
+ like this to be, right?
2066
+
2067
+ 34:02.720 --> 34:07.280
2068
+ We don't really, when I turn the light switch on in here,
2069
+
2070
+ 34:07.280 --> 34:12.000
2071
+ I don't think about the complexity of those electrons
2072
+
2073
+ 34:12.000 --> 34:14.200
2074
+ being pushed down a wire from wherever it was
2075
+
2076
+ 34:14.200 --> 34:15.240
2077
+ and being generated.
2078
+
2079
+ 34:15.240 --> 34:19.080
2080
+ It's like, I just get annoyed if it doesn't work, right?
2081
+
2082
+ 34:19.080 --> 34:21.400
2083
+ And what I value is the fact
2084
+
2085
+ 34:21.400 --> 34:23.080
2086
+ that I can do other things in this space.
2087
+
2088
+ 34:23.080 --> 34:24.560
2089
+ I can see my colleagues.
2090
+
2091
+ 34:24.560 --> 34:26.160
2092
+ I can read stuff on a paper.
2093
+
2094
+ 34:26.160 --> 34:29.200
2095
+ I can not be afraid of the dark.
2096
+
2097
+ 34:30.360 --> 34:33.320
2098
+ And I think that's what we want this technology to be like
2099
+
2100
+ 34:33.320 --> 34:34.640
2101
+ is it's in the background
2102
+
2103
+ 34:34.640 --> 34:37.120
2104
+ and people get to have those life experiences
2105
+
2106
+ 34:37.120 --> 34:38.440
2107
+ and do so safely.
2108
+
2109
+ 34:38.440 --> 34:42.160
2110
+ So putting this technology in the hands of people
2111
+
2112
+ 34:42.160 --> 34:46.320
2113
+ speaks to scale of deployment, right?
2114
+
2115
+ 34:46.320 --> 34:50.880
2116
+ So what do you think the dreaded question about the future
2117
+
2118
+ 34:50.880 --> 34:53.560
2119
+ because nobody can predict the future,
2120
+
2121
+ 34:53.560 --> 34:57.240
2122
+ but just maybe speak poetically
2123
+
2124
+ 34:57.240 --> 35:00.880
2125
+ about when do you think we'll see a large scale deployment
2126
+
2127
+ 35:00.880 --> 35:05.880
2128
+ of autonomous vehicles, 10,000, those kinds of numbers?
2129
+
2130
+ 35:06.680 --> 35:08.240
2131
+ We'll see that within 10 years.
2132
+
2133
+ 35:09.240 --> 35:10.240
2134
+ I'm pretty confident.
2135
+
2136
+ 35:14.040 --> 35:16.040
2137
+ What's an impressive scale?
2138
+
2139
+ 35:16.040 --> 35:19.200
2140
+ What moment, so you've done the DARPA challenge
2141
+
2142
+ 35:19.200 --> 35:20.440
2143
+ where there's one vehicle.
2144
+
2145
+ 35:20.440 --> 35:23.960
2146
+ At which moment does it become, wow, this is serious scale?
2147
+
2148
+ 35:23.960 --> 35:26.520
2149
+ So I think the moment it gets serious
2150
+
2151
+ 35:26.520 --> 35:31.520
2152
+ is when we really do have a driverless vehicle
2153
+
2154
+ 35:32.240 --> 35:34.120
2155
+ operating on public roads
2156
+
2157
+ 35:35.000 --> 35:37.960
2158
+ and that we can do that kind of continuously.
2159
+
2160
+ 35:37.960 --> 35:38.880
2161
+ Without a safety driver.
2162
+
2163
+ 35:38.880 --> 35:40.440
2164
+ Without a safety driver in the vehicle.
2165
+
2166
+ 35:40.440 --> 35:41.560
2167
+ I think at that moment,
2168
+
2169
+ 35:41.560 --> 35:44.400
2170
+ we've kind of crossed the zero to one threshold.
2171
+
2172
+ 35:45.920 --> 35:50.200
2173
+ And then it is about how do we continue to scale that?
2174
+
2175
+ 35:50.200 --> 35:53.960
2176
+ How do we build the right business models?
2177
+
2178
+ 35:53.960 --> 35:56.320
2179
+ How do we build the right customer experience around it
2180
+
2181
+ 35:56.320 --> 35:59.960
2182
+ so that it is actually a useful product out in the world?
2183
+
2184
+ 36:00.960 --> 36:03.600
2185
+ And I think that is really,
2186
+
2187
+ 36:03.600 --> 36:05.920
2188
+ at that point it moves from
2189
+
2190
+ 36:05.920 --> 36:09.200
2191
+ what is this kind of mixed science engineering project
2192
+
2193
+ 36:09.200 --> 36:12.360
2194
+ into engineering and commercialization
2195
+
2196
+ 36:12.360 --> 36:15.840
2197
+ and really starting to deliver on the value
2198
+
2199
+ 36:15.840 --> 36:20.680
2200
+ that we all see here and actually making that real in the world.
2201
+
2202
+ 36:20.680 --> 36:22.240
2203
+ What do you think that deployment looks like?
2204
+
2205
+ 36:22.240 --> 36:26.440
2206
+ Where do we first see the inkling of no safety driver,
2207
+
2208
+ 36:26.440 --> 36:28.600
2209
+ one or two cars here and there?
2210
+
2211
+ 36:28.600 --> 36:29.800
2212
+ Is it on the highway?
2213
+
2214
+ 36:29.800 --> 36:33.160
2215
+ Is it in specific routes in the urban environment?
2216
+
2217
+ 36:33.160 --> 36:36.920
2218
+ I think it's gonna be urban, suburban type environments.
2219
+
2220
+ 36:37.880 --> 36:41.560
2221
+ Yeah, with Aurora, when we thought about how to tackle this,
2222
+
2223
+ 36:41.560 --> 36:45.040
2224
+ it was kind of in vogue to think about trucking
2225
+
2226
+ 36:46.040 --> 36:47.800
2227
+ as opposed to urban driving.
2228
+
2229
+ 36:47.800 --> 36:51.280
2230
+ And again, the human intuition around this
2231
+
2232
+ 36:51.280 --> 36:55.400
2233
+ is that freeways are easier to drive on
2234
+
2235
+ 36:57.080 --> 36:59.280
2236
+ because everybody's kind of going in the same direction
2237
+
2238
+ 36:59.280 --> 37:01.560
2239
+ and lanes are a little wider, et cetera.
2240
+
2241
+ 37:01.560 --> 37:03.320
2242
+ And I think that that intuition is pretty good,
2243
+
2244
+ 37:03.320 --> 37:06.040
2245
+ except we don't really care about most of the time.
2246
+
2247
+ 37:06.040 --> 37:08.400
2248
+ We care about all of the time.
2249
+
2250
+ 37:08.400 --> 37:10.880
2251
+ And when you're driving on a freeway with a truck,
2252
+
2253
+ 37:10.880 --> 37:13.440
2254
+ say 70 miles an hour,
2255
+
2256
+ 37:14.600 --> 37:16.240
2257
+ and you've got 70,000 pound load with you,
2258
+
2259
+ 37:16.240 --> 37:18.880
2260
+ that's just an incredible amount of kinetic energy.
2261
+
2262
+ 37:18.880 --> 37:21.440
2263
+ And so when that goes wrong, it goes really wrong.
2264
+
2265
+ 37:22.640 --> 37:27.640
2266
+ And those challenges that you see occur more rarely,
2267
+
2268
+ 37:27.800 --> 37:31.120
2269
+ so you don't get to learn as quickly.
2270
+
2271
+ 37:31.120 --> 37:34.720
2272
+ And they're incrementally more difficult than urban driving,
2273
+
2274
+ 37:34.720 --> 37:37.440
2275
+ but they're not easier than urban driving.
2276
+
2277
+ 37:37.440 --> 37:41.640
2278
+ And so I think this happens in moderate speed
2279
+
2280
+ 37:41.640 --> 37:45.280
2281
+ urban environments because if two vehicles crash
2282
+
2283
+ 37:45.280 --> 37:48.120
2284
+ at 25 miles per hour, it's not good,
2285
+
2286
+ 37:48.120 --> 37:50.120
2287
+ but probably everybody walks away.
2288
+
2289
+ 37:51.080 --> 37:53.720
2290
+ And those events where there's the possibility
2291
+
2292
+ 37:53.720 --> 37:55.800
2293
+ for that occurring happen frequently.
2294
+
2295
+ 37:55.800 --> 37:58.000
2296
+ So we get to learn more rapidly.
2297
+
2298
+ 37:58.000 --> 38:01.360
2299
+ We get to do that with lower risk for everyone.
2300
+
2301
+ 38:02.520 --> 38:04.360
2302
+ And then we can deliver value to people
2303
+
2304
+ 38:04.360 --> 38:05.880
2305
+ that need to get from one place to another.
2306
+
2307
+ 38:05.880 --> 38:08.160
2308
+ And once we've got that solved,
2309
+
2310
+ 38:08.160 --> 38:11.320
2311
+ then the freeway driving part of this just falls out.
2312
+
2313
+ 38:11.320 --> 38:13.080
2314
+ But we're able to learn more safely,
2315
+
2316
+ 38:13.080 --> 38:15.200
2317
+ more quickly in the urban environment.
2318
+
2319
+ 38:15.200 --> 38:18.760
2320
+ So 10 years and then scale 20, 30 year,
2321
+
2322
+ 38:18.760 --> 38:22.040
2323
+ who knows if a sufficiently compelling experience
2324
+
2325
+ 38:22.040 --> 38:24.400
2326
+ is created, it could be faster and slower.
2327
+
2328
+ 38:24.400 --> 38:27.160
2329
+ Do you think there could be breakthroughs
2330
+
2331
+ 38:27.160 --> 38:29.920
2332
+ and what kind of breakthroughs might there be
2333
+
2334
+ 38:29.920 --> 38:32.400
2335
+ that completely change that timeline?
2336
+
2337
+ 38:32.400 --> 38:35.360
2338
+ Again, not only am I asking you to predict the future,
2339
+
2340
+ 38:35.360 --> 38:37.360
2341
+ I'm asking you to predict breakthroughs
2342
+
2343
+ 38:37.360 --> 38:38.360
2344
+ that haven't happened yet.
2345
+
2346
+ 38:38.360 --> 38:41.440
2347
+ So what's the, I think another way to ask that
2348
+
2349
+ 38:41.440 --> 38:44.320
2350
+ would be if I could wave a magic wand,
2351
+
2352
+ 38:44.320 --> 38:46.720
2353
+ what part of the system would I make work today
2354
+
2355
+ 38:46.720 --> 38:49.480
2356
+ to accelerate it as quickly as possible?
2357
+
2358
+ 38:52.120 --> 38:54.200
2359
+ Don't say infrastructure, please don't say infrastructure.
2360
+
2361
+ 38:54.200 --> 38:56.320
2362
+ No, it's definitely not infrastructure.
2363
+
2364
+ 38:56.320 --> 39:00.600
2365
+ It's really that perception forecasting capability.
2366
+
2367
+ 39:00.600 --> 39:04.840
2368
+ So if tomorrow you could give me a perfect model
2369
+
2370
+ 39:04.840 --> 39:06.960
2371
+ of what's happened, what is happening
2372
+
2373
+ 39:06.960 --> 39:09.200
2374
+ and what will happen for the next five seconds
2375
+
2376
+ 39:10.360 --> 39:13.040
2377
+ around a vehicle on the roadway,
2378
+
2379
+ 39:13.040 --> 39:15.360
2380
+ that would accelerate things pretty dramatically.
2381
+
2382
+ 39:15.360 --> 39:17.600
2383
+ Are you, in terms of staying up at night,
2384
+
2385
+ 39:17.600 --> 39:21.760
2386
+ are you mostly bothered by cars, pedestrians or cyclists?
2387
+
2388
+ 39:21.760 --> 39:25.960
2389
+ So I worry most about the vulnerable road users
2390
+
2391
+ 39:25.960 --> 39:28.480
2392
+ about the combination of cyclists and cars, right?
2393
+
2394
+ 39:28.480 --> 39:31.960
2395
+ Or cyclists and pedestrians because they're not in armor.
2396
+
2397
+ 39:31.960 --> 39:36.480
2398
+ The cars, they're bigger, they've got protection
2399
+
2400
+ 39:36.480 --> 39:39.440
2401
+ for the people and so the ultimate risk is lower there.
2402
+
2403
+ 39:41.080 --> 39:43.240
2404
+ Whereas a pedestrian or a cyclist,
2405
+
2406
+ 39:43.240 --> 39:46.480
2407
+ they're out on the road and they don't have any protection
2408
+
2409
+ 39:46.480 --> 39:49.720
2410
+ and so we need to pay extra attention to that.
2411
+
2412
+ 39:49.720 --> 39:54.120
2413
+ Do you think about a very difficult technical challenge
2414
+
2415
+ 39:55.720 --> 39:58.520
2416
+ of the fact that pedestrians,
2417
+
2418
+ 39:58.520 --> 40:00.240
2419
+ if you try to protect pedestrians
2420
+
2421
+ 40:00.240 --> 40:04.560
2422
+ by being careful and slow, they'll take advantage of that.
2423
+
2424
+ 40:04.560 --> 40:09.040
2425
+ So the game theoretic dance, does that worry you
2426
+
2427
+ 40:09.040 --> 40:12.480
2428
+ of how, from a technical perspective, how we solve that?
2429
+
2430
+ 40:12.480 --> 40:14.560
2431
+ Because as humans, the way we solve that
2432
+
2433
+ 40:14.560 --> 40:17.240
2434
+ is kind of nudge our way through the pedestrians
2435
+
2436
+ 40:17.240 --> 40:20.000
2437
+ which doesn't feel, from a technical perspective,
2438
+
2439
+ 40:20.000 --> 40:22.300
2440
+ as an appropriate algorithm.
2441
+
2442
+ 40:23.200 --> 40:25.920
2443
+ But do you think about how we solve that problem?
2444
+
2445
+ 40:25.920 --> 40:30.920
2446
+ Yeah, I think there's two different concepts there.
2447
+
2448
+ 40:31.360 --> 40:35.820
2449
+ So one is, am I worried that because these vehicles
2450
+
2451
+ 40:35.820 --> 40:37.600
2452
+ are self driving, people will kind of step in the road
2453
+
2454
+ 40:37.600 --> 40:38.640
2455
+ and take advantage of them?
2456
+
2457
+ 40:38.640 --> 40:43.640
2458
+ And I've heard this and I don't really believe it
2459
+
2460
+ 40:43.760 --> 40:45.960
2461
+ because if I'm driving down the road
2462
+
2463
+ 40:45.960 --> 40:48.400
2464
+ and somebody steps in front of me, I'm going to stop.
2465
+
2466
+ 40:50.600 --> 40:53.660
2467
+ Even if I'm annoyed, I'm not gonna just drive
2468
+
2469
+ 40:53.660 --> 40:56.400
2470
+ through a person stood in the road.
2471
+
2472
+ 40:56.400 --> 41:00.400
2473
+ And so I think today people can take advantage of this
2474
+
2475
+ 41:00.400 --> 41:02.560
2476
+ and you do see some people do it.
2477
+
2478
+ 41:02.560 --> 41:04.180
2479
+ I guess there's an incremental risk
2480
+
2481
+ 41:04.180 --> 41:05.880
2482
+ because maybe they have lower confidence
2483
+
2484
+ 41:05.880 --> 41:07.720
2485
+ that I'm gonna see them than they might have
2486
+
2487
+ 41:07.720 --> 41:10.400
2488
+ for an automated vehicle and so maybe that shifts
2489
+
2490
+ 41:10.400 --> 41:12.040
2491
+ it a little bit.
2492
+
2493
+ 41:12.040 --> 41:14.360
2494
+ But I think people don't wanna get hit by cars.
2495
+
2496
+ 41:14.360 --> 41:17.080
2497
+ And so I think that I'm not that worried
2498
+
2499
+ 41:17.080 --> 41:18.760
2500
+ about people walking out of the 101
2501
+
2502
+ 41:18.760 --> 41:23.760
2503
+ and creating chaos more than they would today.
2504
+
2505
+ 41:24.400 --> 41:27.040
2506
+ Regarding kind of the nudging through a big stream
2507
+
2508
+ 41:27.040 --> 41:30.040
2509
+ of pedestrians leaving a concert or something,
2510
+
2511
+ 41:30.040 --> 41:33.520
2512
+ I think that is further down the technology pipeline.
2513
+
2514
+ 41:33.520 --> 41:36.960
2515
+ I think that you're right, that's tricky.
2516
+
2517
+ 41:36.960 --> 41:38.620
2518
+ I don't think it's necessarily,
2519
+
2520
+ 41:40.360 --> 41:43.600
2521
+ I think the algorithm people use for this is pretty simple.
2522
+
2523
+ 41:43.600 --> 41:44.800
2524
+ It's kind of just move forward slowly
2525
+
2526
+ 41:44.800 --> 41:46.800
2527
+ and if somebody's really close then stop.
2528
+
2529
+ 41:46.800 --> 41:50.880
2530
+ And I think that that probably can be replicated
2531
+
2532
+ 41:50.880 --> 41:54.040
2533
+ pretty easily and particularly given that
2534
+
2535
+ 41:54.040 --> 41:55.720
2536
+ you don't do this at 30 miles an hour,
2537
+
2538
+ 41:55.720 --> 41:59.080
2539
+ you do it at one, that even in those situations
2540
+
2541
+ 41:59.080 --> 42:01.200
2542
+ the risk is relatively minimal.
2543
+
2544
+ 42:01.200 --> 42:03.640
2545
+ But it's not something we're thinking about
2546
+
2547
+ 42:03.640 --> 42:04.560
2548
+ in any serious way.
2549
+
2550
+ 42:04.560 --> 42:07.920
2551
+ And probably that's less an algorithm problem
2552
+
2553
+ 42:07.920 --> 42:10.160
2554
+ and more creating a human experience.
2555
+
2556
+ 42:10.160 --> 42:14.300
2557
+ So the HCI people that create a visual display
2558
+
2559
+ 42:14.300 --> 42:16.260
2560
+ so that you, as a pedestrian, are pleasantly
2561
+
2562
+ 42:16.260 --> 42:20.760
2563
+ nudged out of the way, that's an experience problem,
2564
+
2565
+ 42:20.760 --> 42:22.000
2566
+ not an algorithm problem.
2567
+
2568
+ 42:22.880 --> 42:25.480
2569
+ Who's the main competitor to Aurora today?
2570
+
2571
+ 42:25.480 --> 42:28.640
2572
+ And how do you outcompete them in the long run?
2573
+
2574
+ 42:28.640 --> 42:31.200
2575
+ So we really focus a lot on what we're doing here.
2576
+
2577
+ 42:31.200 --> 42:34.480
2578
+ I think that, I've said this a few times,
2579
+
2580
+ 42:34.480 --> 42:37.960
2581
+ that this is a huge difficult problem
2582
+
2583
+ 42:37.960 --> 42:40.320
2584
+ and it's great that a bunch of companies are tackling it
2585
+
2586
+ 42:40.320 --> 42:42.320
2587
+ because I think it's so important for society
2588
+
2589
+ 42:42.320 --> 42:43.800
2590
+ that somebody gets there.
2591
+
2592
+ 42:43.800 --> 42:48.800
2593
+ So we don't spend a whole lot of time
2594
+
2595
+ 42:49.120 --> 42:51.600
2596
+ thinking tactically about who's out there
2597
+
2598
+ 42:51.600 --> 42:55.240
2599
+ and how do we beat that person individually.
2600
+
2601
+ 42:55.240 --> 42:58.720
2602
+ What are we trying to do to go faster ultimately?
2603
+
2604
+ 42:59.760 --> 43:02.640
2605
+ Well part of it is the leadership team we have
2606
+
2607
+ 43:02.640 --> 43:04.200
2608
+ has got pretty tremendous experience.
2609
+
2610
+ 43:04.200 --> 43:06.440
2611
+ And so we kind of understand the landscape
2612
+
2613
+ 43:06.440 --> 43:09.160
2614
+ and understand where the cul de sacs are to some degree
2615
+
2616
+ 43:09.160 --> 43:10.980
2617
+ and we try and avoid those.
2618
+
2619
+ 43:10.980 --> 43:14.260
2620
+ I think there's a part of it,
2621
+
2622
+ 43:14.260 --> 43:16.260
2623
+ just this great team we've built.
2624
+
2625
+ 43:16.260 --> 43:19.080
2626
+ People, this is a technology and a company
2627
+
2628
+ 43:19.080 --> 43:22.320
2629
+ that people believe in the mission of
2630
+
2631
+ 43:22.320 --> 43:23.740
2632
+ and so it allows us to attract
2633
+
2634
+ 43:23.740 --> 43:25.740
2635
+ just awesome people to go work.
2636
+
2637
+ 43:26.800 --> 43:29.320
2638
+ We've got a culture I think that people appreciate
2639
+
2640
+ 43:29.320 --> 43:30.460
2641
+ that allows them to focus,
2642
+
2643
+ 43:30.460 --> 43:33.120
2644
+ allows them to really spend time solving problems.
2645
+
2646
+ 43:33.120 --> 43:35.900
2647
+ And I think that keeps them energized.
2648
+
2649
+ 43:35.900 --> 43:38.940
2650
+ And then we've invested hard,
2651
+
2652
+ 43:38.940 --> 43:43.500
2653
+ invested heavily in the infrastructure
2654
+
2655
+ 43:43.500 --> 43:46.540
2656
+ and architectures that we think will ultimately accelerate us.
2657
+
2658
+ 43:46.540 --> 43:50.660
2659
+ So because of the folks we're able to bring in early on,
2660
+
2661
+ 43:50.660 --> 43:53.540
2662
+ because of the great investors we have,
2663
+
2664
+ 43:53.540 --> 43:56.780
2665
+ we don't spend all of our time doing demos
2666
+
2667
+ 43:56.780 --> 43:58.660
2668
+ and kind of leaping from one demo to the next.
2669
+
2670
+ 43:58.660 --> 44:02.820
2671
+ We've been given the freedom to invest in
2672
+
2673
+ 44:03.940 --> 44:05.500
2674
+ infrastructure to do machine learning,
2675
+
2676
+ 44:05.500 --> 44:08.600
2677
+ infrastructure to pull data from our on road testing,
2678
+
2679
+ 44:08.600 --> 44:11.500
2680
+ infrastructure to use that to accelerate engineering.
2681
+
2682
+ 44:11.500 --> 44:14.480
2683
+ And I think that early investment
2684
+
2685
+ 44:14.480 --> 44:17.340
2686
+ and continuing investment in those kind of tools
2687
+
2688
+ 44:17.340 --> 44:19.780
2689
+ will ultimately allow us to accelerate
2690
+
2691
+ 44:19.780 --> 44:21.940
2692
+ and do something pretty incredible.
2693
+
2694
+ 44:21.940 --> 44:23.420
2695
+ Chris, beautifully put.
2696
+
2697
+ 44:23.420 --> 44:24.660
2698
+ It's a good place to end.
2699
+
2700
+ 44:24.660 --> 44:26.500
2701
+ Thank you so much for talking today.
2702
+
2703
+ 44:26.500 --> 44:47.940
2704
+ Thank you very much. Really enjoyed it.
2705
+
vtt/episode_028_small.vtt ADDED
@@ -0,0 +1,2630 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.120
4
+ The following is a conversation with Chris Urmson.
5
+
6
+ 00:03.120 --> 00:06.040
7
+ He was the CTO of the Google self driving car team,
8
+
9
+ 00:06.040 --> 00:08.880
10
+ a key engineer and leader behind the Carnegie Mellon
11
+
12
+ 00:08.880 --> 00:11.240
13
+ University, autonomous vehicle entries
14
+
15
+ 00:11.240 --> 00:14.120
16
+ in the DARPA Grand Challenges and the winner
17
+
18
+ 00:14.120 --> 00:16.160
19
+ of the DARPA Urban Challenge.
20
+
21
+ 00:16.160 --> 00:19.480
22
+ Today, he's the CEO of Aurora Innovation,
23
+
24
+ 00:19.480 --> 00:21.360
25
+ an autonomous vehicle software company.
26
+
27
+ 00:21.360 --> 00:23.600
28
+ He started with Sterling Anderson,
29
+
30
+ 00:23.600 --> 00:26.000
31
+ who was the former director of Tesla Autopilot
32
+
33
+ 00:26.000 --> 00:30.160
34
+ and Drew Bagnell, Uber's former autonomy and perception lead.
35
+
36
+ 00:30.160 --> 00:33.160
37
+ Chris is one of the top roboticists and autonomous vehicle
38
+
39
+ 00:33.160 --> 00:37.440
40
+ experts in the world and a long time voice of reason
41
+
42
+ 00:37.440 --> 00:41.320
43
+ in a space that is shrouded in both mystery and hype.
44
+
45
+ 00:41.320 --> 00:43.600
46
+ He both acknowledges the incredible challenges
47
+
48
+ 00:43.600 --> 00:46.560
49
+ involved in solving the problem of autonomous driving
50
+
51
+ 00:46.560 --> 00:49.680
52
+ and is working hard to solve it.
53
+
54
+ 00:49.680 --> 00:52.440
55
+ This is the Artificial Intelligence Podcast.
56
+
57
+ 00:52.440 --> 00:54.720
58
+ If you enjoy it, subscribe on YouTube,
59
+
60
+ 00:54.720 --> 00:57.920
61
+ give it five stars on iTunes, support it on Patreon,
62
+
63
+ 00:57.920 --> 00:59.760
64
+ or simply connect with me on Twitter
65
+
66
+ 00:59.760 --> 01:03.280
67
+ at Lex Fridman spelled F R I D M A N.
68
+
69
+ 01:03.280 --> 01:09.160
70
+ And now, here's my conversation with Chris Urmson.
71
+
72
+ 01:09.160 --> 01:11.960
73
+ You were part of both the DARPA Grand Challenge
74
+
75
+ 01:11.960 --> 01:17.040
76
+ and the DARPA Urban Challenge teams at CMU with Red Whitaker.
77
+
78
+ 01:17.040 --> 01:19.720
79
+ What technical or philosophical things
80
+
81
+ 01:19.720 --> 01:22.280
82
+ have you learned from these races?
83
+
84
+ 01:22.280 --> 01:26.640
85
+ I think the high order bit was that it could be done.
86
+
87
+ 01:26.640 --> 01:32.880
88
+ I think that was the thing that was incredible about the first
89
+
90
+ 01:32.880 --> 01:36.440
91
+ of the Grand Challenges, that I remember I was a grad
92
+
93
+ 01:36.440 --> 01:41.440
94
+ student at Carnegie Mellon, and there
95
+
96
+ 01:41.440 --> 01:46.320
97
+ was kind of this dichotomy: it seemed really hard,
98
+
99
+ 01:46.320 --> 01:48.800
100
+ so that would be cool and interesting.
101
+
102
+ 01:48.800 --> 01:51.720
103
+ But at the time, we were the only robotics
104
+
105
+ 01:51.720 --> 01:54.960
106
+ institute around, and so if we went into it and fell
107
+
108
+ 01:54.960 --> 01:58.320
109
+ on our faces, that would be embarrassing.
110
+
111
+ 01:58.320 --> 02:01.160
112
+ So I think just having the will to go do it,
113
+
114
+ 02:01.160 --> 02:03.360
115
+ to try to do this thing that at the time was marked
116
+
117
+ 02:03.360 --> 02:07.120
118
+ as darn near impossible, and then after a couple of tries,
119
+
120
+ 02:07.120 --> 02:11.360
121
+ be able to actually make it happen, I think that was really
122
+
123
+ 02:11.360 --> 02:12.360
124
+ exciting.
125
+
126
+ 02:12.360 --> 02:15.120
127
+ But at which point did you believe it was possible?
128
+
129
+ 02:15.120 --> 02:17.000
130
+ Did you, from the very beginning,
131
+
132
+ 02:17.000 --> 02:18.360
133
+ did you personally, because you're
134
+
135
+ 02:18.360 --> 02:20.320
136
+ one of the lead engineers, you actually
137
+
138
+ 02:20.320 --> 02:21.800
139
+ had to do a lot of the work?
140
+
141
+ 02:21.800 --> 02:23.840
142
+ Yeah, I was the technical director there,
143
+
144
+ 02:23.840 --> 02:26.120
145
+ and did a lot of the work, along with a bunch
146
+
147
+ 02:26.120 --> 02:28.440
148
+ of other really good people.
149
+
150
+ 02:28.440 --> 02:29.760
151
+ Did I believe it could be done?
152
+
153
+ 02:29.760 --> 02:31.120
154
+ Yeah, of course.
155
+
156
+ 02:31.120 --> 02:33.400
157
+ Why would you go do something you thought was impossible,
158
+
159
+ 02:33.400 --> 02:34.880
160
+ completely impossible?
161
+
162
+ 02:34.880 --> 02:36.280
163
+ We thought it was going to be hard.
164
+
165
+ 02:36.280 --> 02:38.080
166
+ We didn't know how we're going to be able to do it.
167
+
168
+ 02:38.080 --> 02:42.880
169
+ We didn't know if we'd be able to do it the first time.
170
+
171
+ 02:42.880 --> 02:46.000
172
+ Turns out we couldn't.
173
+
174
+ 02:46.000 --> 02:48.400
175
+ That, yeah, I guess you have to.
176
+
177
+ 02:48.400 --> 02:52.920
178
+ I think there's a certain benefit to naivete,
179
+
180
+ 02:52.920 --> 02:55.400
181
+ that if you don't know how hard something really is,
182
+
183
+ 02:55.400 --> 02:59.560
184
+ you try different things, and it gives you an opportunity
185
+
186
+ 02:59.560 --> 03:04.080
187
+ that others who are wiser maybe don't have.
188
+
189
+ 03:04.080 --> 03:05.680
190
+ What were the biggest pain points?
191
+
192
+ 03:05.680 --> 03:09.360
193
+ Mechanical, sensors, hardware, software, algorithms
194
+
195
+ 03:09.360 --> 03:12.760
196
+ for mapping, localization, just general perception,
197
+
198
+ 03:12.760 --> 03:15.440
199
+ control, like hardware, software, first of all.
200
+
201
+ 03:15.440 --> 03:17.840
202
+ I think that's the joy of this field,
203
+
204
+ 03:17.840 --> 03:20.040
205
+ is that it's all hard.
206
+
207
+ 03:20.040 --> 03:25.160
208
+ And that you have to be good at each part of it.
209
+
210
+ 03:25.160 --> 03:32.280
211
+ So for the urban challenges, if I look back at it from today,
212
+
213
+ 03:32.280 --> 03:36.200
214
+ it should be easy today.
215
+
216
+ 03:36.200 --> 03:38.880
217
+ That it was a static world.
218
+
219
+ 03:38.880 --> 03:40.720
220
+ There weren't other actors moving through it.
221
+
222
+ 03:40.720 --> 03:42.440
223
+ That is what that means.
224
+
225
+ 03:42.440 --> 03:47.080
226
+ It was out in the desert, so you get really good GPS.
227
+
228
+ 03:47.080 --> 03:51.320
229
+ So that went, and we could map it roughly.
230
+
231
+ 03:51.320 --> 03:55.160
232
+ And so in retrospect now, it's within the realm of things
233
+
234
+ 03:55.160 --> 03:57.800
235
+ we could do back then.
236
+
237
+ 03:57.800 --> 03:59.200
238
+ Just actually getting the vehicle,
239
+
240
+ 03:59.200 --> 04:00.680
241
+ and there's a bunch of engineering work
242
+
243
+ 04:00.680 --> 04:04.200
244
+ to get the vehicle so that we could control and drive it.
245
+
246
+ 04:04.200 --> 04:09.520
247
+ That's still a pain today, but it was even more so back then.
248
+
249
+ 04:09.520 --> 04:12.920
250
+ And then the uncertainty of exactly what they wanted us
251
+
252
+ 04:12.920 --> 04:17.080
253
+ to do was part of the challenge as well.
254
+
255
+ 04:17.080 --> 04:19.360
256
+ Right, you didn't actually know the track ahead of time.
257
+
258
+ 04:19.360 --> 04:21.520
259
+ You knew approximately, but you didn't actually
260
+
261
+ 04:21.520 --> 04:23.560
262
+ know the route that's going to be taken.
263
+
264
+ 04:23.560 --> 04:26.560
265
+ That's right, we didn't even really,
266
+
267
+ 04:26.560 --> 04:28.640
268
+ the way the rules had been described,
269
+
270
+ 04:28.640 --> 04:29.840
271
+ you had to kind of guess.
272
+
273
+ 04:29.840 --> 04:33.440
274
+ So if you think back to that challenge,
275
+
276
+ 04:33.440 --> 04:37.000
277
+ the idea was that the government would give us,
278
+
279
+ 04:37.000 --> 04:40.360
280
+ the DARPA would give us a set of waypoints
281
+
282
+ 04:40.360 --> 04:44.240
283
+ and kind of the width that you had to stay within between the line
284
+
285
+ 04:44.240 --> 04:46.840
286
+ that went between each of those waypoints.
287
+
288
+ 04:46.840 --> 04:49.280
289
+ And so the most devious thing they could have done
290
+
291
+ 04:49.280 --> 04:53.720
292
+ is set a kilometer wide corridor across a field of scrub
293
+
294
+ 04:53.720 --> 04:58.520
295
+ brush and rocks and said, go figure it out.
296
+
297
+ 04:58.520 --> 05:02.200
298
+ Fortunately, it turned into basically driving along
299
+
300
+ 05:02.200 --> 05:06.800
301
+ a set of trails, which is much more relevant to the application
302
+
303
+ 05:06.800 --> 05:08.760
304
+ they were looking for.
305
+
306
+ 05:08.760 --> 05:12.080
307
+ But no, it was a hell of a thing back in the day.
308
+
309
+ 05:12.080 --> 05:16.640
310
+ So the legend, Red, was kind of leading that effort
311
+
312
+ 05:16.640 --> 05:19.120
313
+ in terms of just broadly speaking.
314
+
315
+ 05:19.120 --> 05:22.040
316
+ So you're a leader now.
317
+
318
+ 05:22.040 --> 05:25.000
319
+ What have you learned from Red about leadership?
320
+
321
+ 05:25.000 --> 05:26.360
322
+ I think there's a couple of things.
323
+
324
+ 05:26.360 --> 05:30.880
325
+ One is go and try those really hard things.
326
+
327
+ 05:30.880 --> 05:34.760
328
+ That's where there is an incredible opportunity.
329
+
330
+ 05:34.760 --> 05:36.560
331
+ I think the other big one, though,
332
+
333
+ 05:36.560 --> 05:41.720
334
+ is to see people for who they can be, not who they are.
335
+
336
+ 05:41.720 --> 05:46.080
337
+ It's one of the deepest lessons I learned from Red,
338
+
339
+ 05:46.080 --> 05:51.000
340
+ was that he would look at undergraduates or graduate
341
+
342
+ 05:51.000 --> 05:56.120
343
+ students and empower them to be leaders,
344
+
345
+ 05:56.120 --> 06:01.400
346
+ to have responsibility, to do great things,
347
+
348
+ 06:01.400 --> 06:04.760
349
+ that I think another person might look at them and think,
350
+
351
+ 06:04.760 --> 06:06.600
352
+ oh, well, that's just an undergraduate student.
353
+
354
+ 06:06.600 --> 06:08.720
355
+ What could they know?
356
+
357
+ 06:08.720 --> 06:13.520
358
+ And so I think that trust, but verify, have confidence
359
+
360
+ 06:13.520 --> 06:14.880
361
+ in what people can become, I think,
362
+
363
+ 06:14.880 --> 06:16.680
364
+ is a really powerful thing.
365
+
366
+ 06:16.680 --> 06:20.480
367
+ So through that, let's just fast forward through the history.
368
+
369
+ 06:20.480 --> 06:24.200
370
+ Can you maybe talk through the technical evolution
371
+
372
+ 06:24.200 --> 06:27.480
373
+ of autonomous vehicle systems from the first two
374
+
375
+ 06:27.480 --> 06:30.920
376
+ Grand Challenges to the Urban Challenge to today?
377
+
378
+ 06:30.920 --> 06:33.600
379
+ Are there major shifts in your mind,
380
+
381
+ 06:33.600 --> 06:37.240
382
+ or is it the same kind of technology just made more robust?
383
+
384
+ 06:37.240 --> 06:40.880
385
+ I think there's been some big, big steps.
386
+
387
+ 06:40.880 --> 06:46.600
388
+ So for the Grand Challenge, the real technology
389
+
390
+ 06:46.600 --> 06:51.400
391
+ that unlocked that was HD mapping.
392
+
393
+ 06:51.400 --> 06:55.200
394
+ Prior to that, a lot of the off road robotics work
395
+
396
+ 06:55.200 --> 06:58.920
397
+ had been done without any real prior model of what
398
+
399
+ 06:58.920 --> 07:01.400
400
+ the vehicle was going to encounter.
401
+
402
+ 07:01.400 --> 07:03.960
403
+ And so that innovation, that the fact
404
+
405
+ 07:03.960 --> 07:11.320
406
+ that we could get decimeter resolution models,
407
+
408
+ 07:11.320 --> 07:13.560
409
+ was really a big deal.
410
+
411
+ 07:13.560 --> 07:17.480
412
+ And that allowed us to kind of bound
413
+
414
+ 07:17.480 --> 07:19.680
415
+ the complexity of the driving problem the vehicle had
416
+
417
+ 07:19.680 --> 07:21.040
418
+ and allowed it to operate at speed,
419
+
420
+ 07:21.040 --> 07:23.800
421
+ because we could assume things about the environment
422
+
423
+ 07:23.800 --> 07:26.400
424
+ that it was going to encounter.
425
+
426
+ 07:26.400 --> 07:31.320
427
+ So that was one of the big steps there.
428
+
429
+ 07:31.320 --> 07:38.520
430
+ For the Urban Challenge, one of the big technological
431
+
432
+ 07:38.520 --> 07:41.960
433
+ innovations there was the multi beam LiDAR.
434
+
435
+ 07:41.960 --> 07:45.720
436
+ And be able to generate high resolution,
437
+
438
+ 07:45.720 --> 07:48.680
439
+ mid to long range 3D models of the world,
440
+
441
+ 07:48.680 --> 07:54.120
442
+ and use that for understanding the world around the vehicle.
443
+
444
+ 07:54.120 --> 07:59.120
445
+ And that was really kind of a game changing technology.
446
+
447
+ 07:59.120 --> 08:02.880
448
+ And parallel with that, we saw a bunch
449
+
450
+ 08:02.880 --> 08:06.640
451
+ of other technologies that had been kind of converging
452
+
453
+ 08:06.640 --> 08:08.960
454
+ have their day in the sun.
455
+
456
+ 08:08.960 --> 08:16.800
457
+ So Bayesian estimation had been, SLAM had been a big field
458
+
459
+ 08:16.800 --> 08:18.600
460
+ in robotics.
461
+
462
+ 08:18.600 --> 08:20.800
463
+ You would go to a conference a couple of years
464
+
465
+ 08:20.800 --> 08:23.800
466
+ before that, and every paper would effectively
467
+
468
+ 08:23.800 --> 08:25.640
469
+ have SLAM somewhere in it.
470
+
471
+ 08:25.640 --> 08:31.560
472
+ And so seeing that Bayesian estimation techniques
473
+
474
+ 08:31.560 --> 08:34.040
475
+ play out on a very visible stage,
476
+
477
+ 08:34.040 --> 08:38.680
478
+ I thought that was pretty exciting to see.
479
+
480
+ 08:38.680 --> 08:41.760
481
+ And mostly SLAM was done based on LiDAR at that time?
482
+
483
+ 08:41.760 --> 08:42.400
484
+ Well, yeah.
485
+
486
+ 08:42.400 --> 08:46.720
487
+ And in fact, we weren't really doing SLAM per se in real time,
488
+
489
+ 08:46.720 --> 08:48.120
490
+ because we had a model ahead of time.
491
+
492
+ 08:48.120 --> 08:51.560
493
+ We had a roadmap, but we were doing localization.
494
+
495
+ 08:51.560 --> 08:54.080
496
+ And we were using the LiDAR or the cameras,
497
+
498
+ 08:54.080 --> 08:55.920
499
+ depending on who exactly was doing it,
500
+
501
+ 08:55.920 --> 08:58.080
502
+ to localize to a model of the world.
503
+
504
+ 08:58.080 --> 09:00.720
505
+ And I thought that was a big step
506
+
507
+ 09:00.720 --> 09:07.160
508
+ from kind of naively trusting GPS INS before that.
509
+
510
+ 09:07.160 --> 09:10.400
511
+ And again, lots of work had been going on in this field.
512
+
513
+ 09:10.400 --> 09:14.080
514
+ Certainly, this was not doing anything particularly
515
+
516
+ 09:14.080 --> 09:17.400
517
+ innovative in SLAM or in localization,
518
+
519
+ 09:17.400 --> 09:20.160
520
+ but it was seeing that technology necessary
521
+
522
+ 09:20.160 --> 09:21.800
523
+ in a real application on a big stage.
524
+
525
+ 09:21.800 --> 09:23.080
526
+ I thought it was very cool.
527
+
528
+ 09:23.080 --> 09:25.600
529
+ So for the Urban Challenge, were those maps already
530
+
531
+ 09:25.600 --> 09:28.120
532
+ constructed offline in general?
533
+
534
+ 09:28.120 --> 09:28.600
535
+ OK.
536
+
537
+ 09:28.600 --> 09:30.920
538
+ And did people do that individually?
539
+
540
+ 09:30.920 --> 09:33.600
541
+ Did individual teams do it individually?
542
+
543
+ 09:33.600 --> 09:36.440
544
+ So they had their own different approaches there?
545
+
546
+ 09:36.440 --> 09:41.720
547
+ Or did everybody kind of share that information,
548
+
549
+ 09:41.720 --> 09:42.880
550
+ at least intuitively?
551
+
552
+ 09:42.880 --> 09:49.560
553
+ So DARPA gave all the teams a model of the world, a map.
554
+
555
+ 09:49.560 --> 09:53.720
556
+ And then one of the things that we had to figure out back then
557
+
558
+ 09:53.720 --> 09:56.720
559
+ was, and it's still one of these things that trips people up
560
+
561
+ 09:56.720 --> 10:00.240
562
+ today, is actually the coordinate system.
563
+
564
+ 10:00.240 --> 10:03.000
565
+ So you get a latitude, longitude.
566
+
567
+ 10:03.000 --> 10:05.120
568
+ And to so many decimal places, you
569
+
570
+ 10:05.120 --> 10:07.800
571
+ don't really care about kind of the ellipsoid of the Earth
572
+
573
+ 10:07.800 --> 10:09.520
574
+ that's being used.
575
+
576
+ 10:09.520 --> 10:12.720
577
+ But when you want to get to 10 centimeter or centimeter
578
+
579
+ 10:12.720 --> 10:18.480
580
+ resolution, you care whether the coordinate system is NAD 83
581
+
582
+ 10:18.480 --> 10:22.720
583
+ or WGS 84, or these are different ways
584
+
585
+ 10:22.720 --> 10:26.720
586
+ to describe both the kind of nonsphericalness of the Earth,
587
+
588
+ 10:26.720 --> 10:31.560
589
+ but also, actually, and I can't remember
590
+
591
+ 10:31.560 --> 10:33.560
592
+ which one, the tectonic shifts that are happening
593
+
594
+ 10:33.560 --> 10:36.920
595
+ and how to transform the global datum as a function of that.
596
+
597
+ 10:36.920 --> 10:40.400
598
+ So getting a map and then actually matching it
599
+
600
+ 10:40.400 --> 10:41.880
601
+ to reality to centimeter resolution,
602
+
603
+ 10:41.880 --> 10:44.000
604
+ that was kind of interesting and fun back then.
605
+
606
+ 10:44.000 --> 10:46.800
607
+ So how much work was the perception doing there?
608
+
609
+ 10:46.800 --> 10:52.440
610
+ So how much were you relying on localization based on maps
611
+
612
+ 10:52.440 --> 10:55.720
613
+ without using perception to register to the maps?
614
+
615
+ 10:55.720 --> 10:57.960
616
+ And I guess the question is how advanced
617
+
618
+ 10:57.960 --> 10:59.720
619
+ was perception at that point?
620
+
621
+ 10:59.720 --> 11:01.920
622
+ It's certainly behind where we are today.
623
+
624
+ 11:01.920 --> 11:05.800
625
+ We're more than a decade since the urban challenge.
626
+
627
+ 11:05.800 --> 11:13.080
628
+ But the core of it was there, that we were tracking vehicles.
629
+
630
+ 11:13.080 --> 11:15.600
631
+ We had to do that at 100 plus meter range
632
+
633
+ 11:15.600 --> 11:18.280
634
+ because we had to merge with other traffic.
635
+
636
+ 11:18.280 --> 11:21.200
637
+ We were using, again, Bayesian estimates
638
+
639
+ 11:21.200 --> 11:23.800
640
+ for state of these vehicles.
641
+
642
+ 11:23.800 --> 11:25.560
643
+ We had to deal with a bunch of the problems
644
+
645
+ 11:25.560 --> 11:28.240
646
+ that you think of today of predicting
647
+
648
+ 11:28.240 --> 11:31.040
649
+ where that vehicle is going to be a few seconds into the future.
650
+
651
+ 11:31.040 --> 11:33.680
652
+ We had to deal with the fact that there
653
+
654
+ 11:33.680 --> 11:36.000
655
+ were multiple hypotheses for that because a vehicle
656
+
657
+ 11:36.000 --> 11:37.640
658
+ at an intersection might be going right
659
+
660
+ 11:37.640 --> 11:41.440
661
+ or it might be going straight or it might be making a left turn.
662
+
663
+ 11:41.440 --> 11:44.080
664
+ And we had to deal with the challenge of the fact
665
+
666
+ 11:44.080 --> 11:47.520
667
+ that our behavior was going to impact the behavior
668
+
669
+ 11:47.520 --> 11:48.880
670
+ of that other operator.
671
+
672
+ 11:48.880 --> 11:53.400
673
+ And we did a lot of that in relatively naive ways.
674
+
675
+ 11:53.400 --> 11:54.720
676
+ But it kind of worked.
677
+
678
+ 11:54.720 --> 11:57.000
679
+ Still had to have some kind of assumption.
680
+
681
+ 11:57.000 --> 12:00.640
682
+ And so where does that 10 years later, where does that take us
683
+
684
+ 12:00.640 --> 12:04.200
685
+ today from that artificial city construction
686
+
687
+ 12:04.200 --> 12:06.920
688
+ to real cities to the urban environment?
689
+
690
+ 12:06.920 --> 12:13.600
691
+ Yeah, I think the biggest thing is that the actors are truly
692
+
693
+ 12:13.600 --> 12:18.680
694
+ unpredictable, that most of the time, the drivers on the road,
695
+
696
+ 12:18.680 --> 12:24.000
697
+ the other road users are out there behaving well.
698
+
699
+ 12:24.000 --> 12:27.040
700
+ But every once in a while, they're not.
701
+
702
+ 12:27.040 --> 12:33.320
703
+ The variety of other vehicles is, you have all of them.
704
+
705
+ 12:33.320 --> 12:35.760
706
+ In terms of behavior, or terms of perception, or both?
707
+
708
+ 12:35.760 --> 12:38.320
709
+ Both.
710
+
711
+ 12:38.320 --> 12:40.480
712
+ Back then, we didn't have to deal with cyclists.
713
+
714
+ 12:40.480 --> 12:42.800
715
+ We didn't have to deal with pedestrians.
716
+
717
+ 12:42.800 --> 12:46.240
718
+ Didn't have to deal with traffic lights.
719
+
720
+ 12:46.240 --> 12:49.360
721
+ The scale over which you have to operate is now
722
+
723
+ 12:49.360 --> 12:52.240
724
+ so much larger than the airbase that we were thinking about back
725
+
726
+ 12:52.240 --> 12:52.720
727
+ then.
728
+
729
+ 12:52.720 --> 12:56.280
730
+ So, an easy question:
731
+
732
+ 12:56.280 --> 12:59.720
733
+ What do you think is the hardest part about driving?
734
+
735
+ 12:59.720 --> 13:00.480
736
+ Easy question.
737
+
738
+ 13:00.480 --> 13:01.320
739
+ Yeah.
740
+
741
+ 13:01.320 --> 13:02.600
742
+ No, I'm joking.
743
+
744
+ 13:02.600 --> 13:07.440
745
+ I'm sure nothing really jumps out at you as one thing.
746
+
747
+ 13:07.440 --> 13:12.920
748
+ But in the jump from the urban challenge to the real world,
749
+
750
+ 13:12.920 --> 13:16.200
751
+ is there something in particular that you foresee
752
+
753
+ 13:16.200 --> 13:18.480
754
+ as a very serious, difficult challenge?
755
+
756
+ 13:18.480 --> 13:21.120
757
+ I think the most fundamental difference
758
+
759
+ 13:21.120 --> 13:28.960
760
+ is that we're doing it for real, that in that environment,
761
+
762
+ 13:28.960 --> 13:31.840
763
+ it was both a limited complexity environment,
764
+
765
+ 13:31.840 --> 13:33.240
766
+ because certain actors weren't there,
767
+
768
+ 13:33.240 --> 13:35.360
769
+ because the roads were maintained.
770
+
771
+ 13:35.360 --> 13:38.720
772
+ There were barriers keeping people separate from robots
773
+
774
+ 13:38.720 --> 13:40.880
775
+ at the time.
776
+
777
+ 13:40.880 --> 13:44.480
778
+ And it only had to work for 60 miles, which looking at it
779
+
780
+ 13:44.480 --> 13:48.960
781
+ from 2006, it had to work for 60 miles.
782
+
783
+ 13:48.960 --> 13:52.720
784
+ Looking at it from now, we want things
785
+
786
+ 13:52.720 --> 13:57.200
787
+ that will go and drive for half a million miles.
788
+
789
+ 13:57.200 --> 14:00.960
790
+ And it's just a different game.
791
+
792
+ 14:00.960 --> 14:06.080
793
+ So how important, you said LiDAR came into the game early on,
794
+
795
+ 14:06.080 --> 14:08.880
796
+ and it's really the primary driver of autonomous vehicles
797
+
798
+ 14:08.880 --> 14:10.240
799
+ today as a sensor.
800
+
801
+ 14:10.240 --> 14:12.880
802
+ So how important is the role of LiDAR in the sensor suite
803
+
804
+ 14:12.880 --> 14:14.760
805
+ in the near term?
806
+
807
+ 14:14.760 --> 14:18.680
808
+ So I think it's essential.
809
+
810
+ 14:18.680 --> 14:20.520
811
+ But I also believe that cameras are essential,
812
+
813
+ 14:20.520 --> 14:22.160
814
+ and I believe the radar is essential.
815
+
816
+ 14:22.160 --> 14:27.400
817
+ I think that you really need to use the composition of data
818
+
819
+ 14:27.400 --> 14:28.920
820
+ from these different sensors if you
821
+
822
+ 14:28.920 --> 14:32.600
823
+ want the thing to really be robust.
824
+
825
+ 14:32.600 --> 14:35.440
826
+ The question I want to ask, let's see if we can untangle it,
827
+
828
+ 14:35.440 --> 14:40.240
829
+ is what are your thoughts on the Elon Musk provocative statement
830
+
831
+ 14:40.240 --> 14:45.840
832
+ that LiDAR is a crutch, that it is a kind of, I guess,
833
+
834
+ 14:45.840 --> 14:49.600
835
+ growing pains, and that much of the perception
836
+
837
+ 14:49.600 --> 14:52.160
838
+ task can be done with cameras?
839
+
840
+ 14:52.160 --> 14:56.920
841
+ So I think it is undeniable that people walk around
842
+
843
+ 14:56.920 --> 14:59.680
844
+ without lasers in their foreheads,
845
+
846
+ 14:59.680 --> 15:01.840
847
+ and they can get into vehicles and drive them.
848
+
849
+ 15:01.840 --> 15:05.560
850
+ And so there's an existence proof
851
+
852
+ 15:05.560 --> 15:10.840
853
+ that you can drive using passive vision.
854
+
855
+ 15:10.840 --> 15:12.680
856
+ No doubt, can't argue with that.
857
+
858
+ 15:12.680 --> 15:14.320
859
+ In terms of sensors, yeah.
860
+
861
+ 15:14.320 --> 15:14.800
862
+ So there's proof.
863
+
864
+ 15:14.800 --> 15:15.960
865
+ Yes, in terms of sensors, right?
866
+
867
+ 15:15.960 --> 15:18.720
868
+ So there's an example that we all
869
+
870
+ 15:18.720 --> 15:23.280
871
+ go do, or at least many of us do, every day.
872
+
873
+ 15:23.280 --> 15:28.200
874
+ In terms of LiDAR being a crutch, sure.
875
+
876
+ 15:28.200 --> 15:33.080
877
+ But in the same way that the combustion engine
878
+
879
+ 15:33.080 --> 15:35.240
880
+ was a crutch on the path to an electric vehicle,
881
+
882
+ 15:35.240 --> 15:40.840
883
+ in the same way that any technology ultimately gets
884
+
885
+ 15:40.840 --> 15:44.640
886
+ replaced by some superior technology in the future.
887
+
888
+ 15:44.640 --> 15:47.720
889
+ And really, the way that I look at this
890
+
891
+ 15:47.720 --> 15:51.720
892
+ is that the way we get around on the ground, the way
893
+
894
+ 15:51.720 --> 15:55.280
895
+ that we use transportation is broken.
896
+
897
+ 15:55.280 --> 15:59.720
898
+ And that we have this, I think the number I saw this morning,
899
+
900
+ 15:59.720 --> 16:04.040
901
+ 37,000 Americans killed last year on our roads.
902
+
903
+ 16:04.040 --> 16:05.360
904
+ And that's just not acceptable.
905
+
906
+ 16:05.360 --> 16:09.440
907
+ And so any technology that we can bring to bear
908
+
909
+ 16:09.440 --> 16:12.840
910
+ that accelerates this technology, self driving technology,
911
+
912
+ 16:12.840 --> 16:15.720
913
+ coming to market and saving lives,
914
+
915
+ 16:15.720 --> 16:18.280
916
+ is technology we should be using.
917
+
918
+ 16:18.280 --> 16:24.040
919
+ And it feels just arbitrary to say, well, I'm not
920
+
921
+ 16:24.040 --> 16:27.800
922
+ OK with using lasers, because that's whatever.
923
+
924
+ 16:27.800 --> 16:30.760
925
+ But I am OK with using an 8 megapixel camera
926
+
927
+ 16:30.760 --> 16:32.880
928
+ or a 16 megapixel camera.
929
+
930
+ 16:32.880 --> 16:34.640
931
+ These are just bits of technology,
932
+
933
+ 16:34.640 --> 16:36.880
934
+ and we should be taking the best technology from the tool
935
+
936
+ 16:36.880 --> 16:41.600
937
+ bin that allows us to go and solve a problem.
938
+
939
+ 16:41.600 --> 16:45.160
940
+ The question I often talk to, well, obviously you do as well,
941
+
942
+ 16:45.160 --> 16:48.320
943
+ to automotive companies.
944
+
945
+ 16:48.320 --> 16:51.880
946
+ And if there's one word that comes up more often than anything,
947
+
948
+ 16:51.880 --> 16:55.320
949
+ it's cost and drive costs down.
950
+
951
+ 16:55.320 --> 17:01.440
952
+ So while it's true that it's a tragic number, the 37,000,
953
+
954
+ 17:01.440 --> 17:04.880
955
+ the question is, and I'm not the one asking this question,
956
+
957
+ 17:04.880 --> 17:07.160
958
+ because I hate this question, but we
959
+
960
+ 17:07.160 --> 17:11.680
961
+ want to find the cheapest sensor suite that
962
+
963
+ 17:11.680 --> 17:13.400
964
+ creates a safe vehicle.
965
+
966
+ 17:13.400 --> 17:18.240
967
+ So in that uncomfortable trade off,
968
+
969
+ 17:18.240 --> 17:23.680
970
+ do you foresee lidar coming down in cost in the future?
971
+
972
+ 17:23.680 --> 17:28.000
973
+ Or do you see a day where level 4 autonomy is possible
974
+
975
+ 17:28.000 --> 17:29.880
976
+ without lidar?
977
+
978
+ 17:29.880 --> 17:32.880
979
+ I see both of those, but it's really a matter of time.
980
+
981
+ 17:32.880 --> 17:35.080
982
+ And I think, really, maybe I would
983
+
984
+ 17:35.080 --> 17:38.760
985
+ talk to the question you asked about the cheapest sensor.
986
+
987
+ 17:38.760 --> 17:40.440
988
+ I don't think that's actually what you want.
989
+
990
+ 17:40.440 --> 17:45.720
991
+ What you want is a sensor suite that is economically viable.
992
+
993
+ 17:45.720 --> 17:49.480
994
+ And then after that, everything is about margin
995
+
996
+ 17:49.480 --> 17:52.320
997
+ and driving cost out of the system.
998
+
999
+ 17:52.320 --> 17:55.400
1000
+ What you also want is a sensor suite that works.
1001
+
1002
+ 17:55.400 --> 18:01.280
1003
+ And so it's great to tell a story about how it would be better
1004
+
1005
+ 18:01.280 --> 18:04.560
1006
+ to have a self driving system with a $50 sensor instead
1007
+
1008
+ 18:04.560 --> 18:08.720
1009
+ of a $500 sensor.
1010
+
1011
+ 18:08.720 --> 18:11.560
1012
+ But if the $500 sensor makes it work and the $50 sensor
1013
+
1014
+ 18:11.560 --> 18:15.680
1015
+ doesn't work, who cares?
1016
+
1017
+ 18:15.680 --> 18:21.680
1018
+ So long as you can actually have an economic opportunity there.
1019
+
1020
+ 18:21.680 --> 18:23.760
1021
+ And the economic opportunity is important,
1022
+
1023
+ 18:23.760 --> 18:27.800
1024
+ because that's how you actually have a sustainable business.
1025
+
1026
+ 18:27.800 --> 18:30.440
1027
+ And that's how you can actually see this come to scale
1028
+
1029
+ 18:30.440 --> 18:32.520
1030
+ and be out in the world.
1031
+
1032
+ 18:32.520 --> 18:36.400
1033
+ And so when I look at lidar, I see
1034
+
1035
+ 18:36.400 --> 18:41.200
1036
+ a technology that has no underlying fundamentally expense
1037
+
1038
+ 18:41.200 --> 18:43.240
1039
+ to it, fundamental expense to it.
1040
+
1041
+ 18:43.240 --> 18:46.120
1042
+ It's going to be more expensive than an imager,
1043
+
1044
+ 18:46.120 --> 18:51.400
1045
+ because CMOS processes or fab processes
1046
+
1047
+ 18:51.400 --> 18:56.200
1048
+ are dramatically more scalable than mechanical processes.
1049
+
1050
+ 18:56.200 --> 18:58.160
1051
+ But we still should be able to drive cost
1052
+
1053
+ 18:58.160 --> 19:00.440
1054
+ out substantially on that side.
1055
+
1056
+ 19:00.440 --> 19:05.880
1057
+ And then I also do think that with the right business model,
1058
+
1059
+ 19:05.880 --> 19:08.440
1060
+ you can absorb more, certainly more cost
1061
+
1062
+ 19:08.440 --> 19:09.480
1063
+ on the bill of materials.
1064
+
1065
+ 19:09.480 --> 19:12.600
1066
+ Yeah, if the sensor suite works, extra value is provided.
1067
+
1068
+ 19:12.600 --> 19:15.480
1069
+ Thereby, you don't need to drive cost down to zero.
1070
+
1071
+ 19:15.480 --> 19:17.120
1072
+ It's basic economics.
1073
+
1074
+ 19:17.120 --> 19:18.840
1075
+ You've talked about your intuition
1076
+
1077
+ 19:18.840 --> 19:22.720
1078
+ that level two autonomy is problematic because
1079
+
1080
+ 19:22.720 --> 19:27.280
1081
+ of the human factors of vigilance decrement, complacency,
1082
+
1083
+ 19:27.280 --> 19:29.600
1084
+ overtrust, and so on, just us being human.
1085
+
1086
+ 19:29.600 --> 19:33.000
1087
+ When we overtrust the system, we start doing even more
1088
+
1089
+ 19:33.000 --> 19:36.480
1090
+ so, partaking in secondary activities like smartphone use
1091
+
1092
+ 19:36.480 --> 19:38.720
1093
+ and so on.
1094
+
1095
+ 19:38.720 --> 19:42.960
1096
+ Have your views evolved on this point in either direction?
1097
+
1098
+ 19:42.960 --> 19:44.760
1099
+ Can you speak to it?
1100
+
1101
+ 19:44.760 --> 19:48.240
1102
+ So I want to be really careful, because sometimes this
1103
+
1104
+ 19:48.240 --> 19:53.000
1105
+ gets twisted in a way that I certainly didn't intend.
1106
+
1107
+ 19:53.000 --> 19:59.360
1108
+ So active safety systems are a really important technology
1109
+
1110
+ 19:59.360 --> 20:03.400
1111
+ that we should be pursuing and integrating into vehicles.
1112
+
1113
+ 20:03.400 --> 20:05.680
1114
+ And there's an opportunity in the near term
1115
+
1116
+ 20:05.680 --> 20:09.400
1117
+ to reduce accidents, reduce fatalities, and that's
1118
+
1119
+ 20:09.400 --> 20:13.400
1120
+ and we should be pushing on that.
1121
+
1122
+ 20:13.400 --> 20:17.280
1123
+ Level two systems are systems where
1124
+
1125
+ 20:17.280 --> 20:19.480
1126
+ the vehicle is controlling two axes,
1127
+
1128
+ 20:19.480 --> 20:24.800
1129
+ so braking and throttle slash steering.
1130
+
1131
+ 20:24.800 --> 20:27.200
1132
+ And I think there are variants of level two systems that
1133
+
1134
+ 20:27.200 --> 20:30.200
1135
+ are supporting the driver that absolutely we
1136
+
1137
+ 20:30.200 --> 20:32.560
1138
+ should encourage to be out there.
1139
+
1140
+ 20:32.560 --> 20:37.920
1141
+ Where I think there's a real challenge is in the human factors
1142
+
1143
+ 20:37.920 --> 20:40.800
1144
+ part around this and the misconception
1145
+
1146
+ 20:40.800 --> 20:44.920
1147
+ from the public around the capability set that that enables
1148
+
1149
+ 20:44.920 --> 20:48.000
1150
+ and the trust that they should have in it.
1151
+
1152
+ 20:48.000 --> 20:53.880
1153
+ And that is where I'm actually incrementally more
1154
+
1155
+ 20:53.880 --> 20:55.800
1156
+ concerned around level three systems
1157
+
1158
+ 20:55.800 --> 20:59.960
1159
+ and how exactly a level two system is marketed and delivered
1160
+
1161
+ 20:59.960 --> 21:03.240
1162
+ and how much effort people have put into those human factors.
1163
+
1164
+ 21:03.240 --> 21:07.000
1165
+ So I still believe several things around this.
1166
+
1167
+ 21:07.000 --> 21:10.760
1168
+ One is people will over trust the technology.
1169
+
1170
+ 21:10.760 --> 21:12.720
1171
+ We've seen over the last few weeks
1172
+
1173
+ 21:12.720 --> 21:16.280
1174
+ a spate of people sleeping in their Tesla.
1175
+
1176
+ 21:16.280 --> 21:23.240
1177
+ I watched an episode last night of Trevor Noah talking
1178
+
1179
+ 21:23.240 --> 21:27.160
1180
+ about this, and this is a smart guy
1181
+
1182
+ 21:27.160 --> 21:31.040
1183
+ who has a lot of resources at his disposal describing
1184
+
1185
+ 21:31.040 --> 21:32.880
1186
+ a Tesla as a self driving car.
1187
+
1188
+ 21:32.880 --> 21:35.640
1189
+ And that why shouldn't people be sleeping in their Tesla?
1190
+
1191
+ 21:35.640 --> 21:38.800
1192
+ It's like, well, because it's not a self driving car
1193
+
1194
+ 21:38.800 --> 21:41.120
1195
+ and it is not intended to be.
1196
+
1197
+ 21:41.120 --> 21:48.400
1198
+ And these people will almost certainly die at some point
1199
+
1200
+ 21:48.400 --> 21:50.400
1201
+ or hurt other people.
1202
+
1203
+ 21:50.400 --> 21:52.640
1204
+ And so we need to really be thoughtful about how
1205
+
1206
+ 21:52.640 --> 21:56.280
1207
+ that technology is described and brought to market.
1208
+
1209
+ 21:56.280 --> 22:00.760
1210
+ I also think that because of the economic issue,
1211
+
1212
+ 22:00.760 --> 22:03.320
1213
+ economic challenges we were just talking about,
1214
+
1215
+ 22:03.320 --> 22:06.960
1216
+ that technology path will, these level two driver assistance
1217
+
1218
+ 22:06.960 --> 22:08.400
1219
+ systems, that technology path will
1220
+
1221
+ 22:08.400 --> 22:11.560
1222
+ diverge from the technology path that we
1223
+
1224
+ 22:11.560 --> 22:15.800
1225
+ need to be on to actually deliver truly self driving
1226
+
1227
+ 22:15.800 --> 22:19.120
1228
+ vehicles, ones where you can get in it and sleep
1229
+
1230
+ 22:19.120 --> 22:21.480
1231
+ and have the equivalent or better safety
1232
+
1233
+ 22:21.480 --> 22:24.600
1234
+ than a human driver behind the wheel.
1235
+
1236
+ 22:24.600 --> 22:28.440
1237
+ Because, again, the economics are very different
1238
+
1239
+ 22:28.440 --> 22:29.800
1240
+ in those two worlds.
1241
+
1242
+ 22:29.800 --> 22:32.720
1243
+ And so that leads to divergent technology.
1244
+
1245
+ 22:32.720 --> 22:36.920
1246
+ So you just don't see the economics of gradually
1247
+
1248
+ 22:36.920 --> 22:41.520
1249
+ increasing from level two and doing so quickly enough
1250
+
1251
+ 22:41.520 --> 22:44.400
1252
+ to where it doesn't cost safety, critical safety concerns.
1253
+
1254
+ 22:44.400 --> 22:48.600
1255
+ You believe that it needs to diverge at this point
1256
+
1257
+ 22:48.600 --> 22:50.600
1258
+ into different, basically different routes.
1259
+
1260
+ 22:50.600 --> 22:53.760
1261
+ And really that comes back to what
1262
+
1263
+ 22:53.760 --> 22:56.840
1264
+ are those L2 and L1 systems doing?
1265
+
1266
+ 22:56.840 --> 22:59.800
1267
+ And they are driver assistance functions
1268
+
1269
+ 22:59.800 --> 23:04.360
1270
+ where the people that are marketing that responsibly
1271
+
1272
+ 23:04.360 --> 23:07.960
1273
+ are being very clear and putting human factors in place
1274
+
1275
+ 23:07.960 --> 23:12.400
1276
+ such that the driver is actually responsible for the vehicle
1277
+
1278
+ 23:12.400 --> 23:15.200
1279
+ and that the technology is there to support the driver.
1280
+
1281
+ 23:15.200 --> 23:19.880
1282
+ And the safety cases that are built around those
1283
+
1284
+ 23:19.880 --> 23:24.320
1285
+ are dependent on that driver attention and attentiveness.
1286
+
1287
+ 23:24.320 --> 23:30.360
1288
+ And at that point, you can kind of give up, to some degree,
1289
+
1290
+ 23:30.360 --> 23:34.280
1291
+ for economic reasons, you can give up on, say, false negatives.
1292
+
1293
+ 23:34.280 --> 23:36.200
1294
+ And so the way to think about this
1295
+
1296
+ 23:36.200 --> 23:40.760
1297
+ is for a forward collision mitigation braking system,
1298
+
1299
+ 23:40.760 --> 23:45.080
1300
+ if half the times the driver missed a vehicle in front of it,
1301
+
1302
+ 23:45.080 --> 23:47.640
1303
+ it hit the brakes and brought the vehicle to a stop,
1304
+
1305
+ 23:47.640 --> 23:51.200
1306
+ that would be an incredible, incredible advance
1307
+
1308
+ 23:51.200 --> 23:52.960
1309
+ in safety on our roads, right?
1310
+
1311
+ 23:52.960 --> 23:55.080
1312
+ That would be equivalent to seatbelts.
1313
+
1314
+ 23:55.080 --> 23:57.560
1315
+ But it would mean that if that vehicle wasn't being monitored,
1316
+
1317
+ 23:57.560 --> 24:00.560
1318
+ it would hit one out of two cars.
1319
+
1320
+ 24:00.560 --> 24:05.080
1321
+ And so economically, that's a perfectly good solution
1322
+
1323
+ 24:05.080 --> 24:06.200
1324
+ for a driver assistance system.
1325
+
1326
+ 24:06.200 --> 24:07.360
1327
+ What you should do at that point,
1328
+
1329
+ 24:07.360 --> 24:09.200
1330
+ if you can get it to work 50% of the time,
1331
+
1332
+ 24:09.200 --> 24:11.040
1333
+ is drive the cost out of that so you can get it
1334
+
1335
+ 24:11.040 --> 24:13.320
1336
+ on as many vehicles as possible.
1337
+
1338
+ 24:13.320 --> 24:16.920
1339
+ But driving the cost out of it doesn't drive up performance
1340
+
1341
+ 24:16.920 --> 24:18.840
1342
+ on the false negative case.
1343
+
1344
+ 24:18.840 --> 24:21.480
1345
+ And so you'll continue to not have a technology
1346
+
1347
+ 24:21.480 --> 24:25.720
1348
+ that could really be available for a self driven vehicle.
1349
+
1350
+ 24:25.720 --> 24:28.480
1351
+ So clearly the communication,
1352
+
1353
+ 24:28.480 --> 24:31.640
1354
+ and this probably applies to level four vehicles as well,
1355
+
1356
+ 24:31.640 --> 24:34.440
1357
+ the marketing and the communication
1358
+
1359
+ 24:34.440 --> 24:37.080
1360
+ of what the technology is actually capable of,
1361
+
1362
+ 24:37.080 --> 24:38.440
1363
+ how hard it is, how easy it is,
1364
+
1365
+ 24:38.440 --> 24:41.040
1366
+ all that kind of stuff is highly problematic.
1367
+
1368
+ 24:41.040 --> 24:45.680
1369
+ So say everybody in the world was perfectly communicated to
1370
+
1371
+ 24:45.680 --> 24:48.400
1372
+ and were made to be completely aware
1373
+
1374
+ 24:48.400 --> 24:50.040
1375
+ of every single technology out there,
1376
+
1377
+ 24:50.040 --> 24:52.880
1378
+ what it's able to do.
1379
+
1380
+ 24:52.880 --> 24:54.160
1381
+ What's your intuition?
1382
+
1383
+ 24:54.160 --> 24:56.920
1384
+ And now we're maybe getting into philosophical ground.
1385
+
1386
+ 24:56.920 --> 25:00.040
1387
+ Is it possible to have a level two vehicle
1388
+
1389
+ 25:00.040 --> 25:03.280
1390
+ where we don't overtrust it?
1391
+
1392
+ 25:04.720 --> 25:05.840
1393
+ I don't think so.
1394
+
1395
+ 25:05.840 --> 25:10.840
1396
+ If people truly understood the risks and internalized it,
1397
+
1398
+ 25:11.200 --> 25:14.320
1399
+ then sure you could do that safely,
1400
+
1401
+ 25:14.320 --> 25:16.200
1402
+ but that's a world that doesn't exist.
1403
+
1404
+ 25:16.200 --> 25:17.560
1405
+ The people are going to,
1406
+
1407
+ 25:19.440 --> 25:20.800
1408
+ if the facts are put in front of them,
1409
+
1410
+ 25:20.800 --> 25:24.480
1411
+ they're gonna then combine that with their experience.
1412
+
1413
+ 25:24.480 --> 25:28.400
1414
+ And let's say they're using an L2 system
1415
+
1416
+ 25:28.400 --> 25:31.040
1417
+ and they go up and down the 101 every day
1418
+
1419
+ 25:31.040 --> 25:32.800
1420
+ and they do that for a month
1421
+
1422
+ 25:32.800 --> 25:35.200
1423
+ and it just worked every day for a month.
1424
+
1425
+ 25:36.320 --> 25:37.400
1426
+ Like that's pretty compelling.
1427
+
1428
+ 25:37.400 --> 25:41.880
1429
+ At that point, just even if you know the statistics,
1430
+
1431
+ 25:41.880 --> 25:43.520
1432
+ you're like, well, I don't know,
1433
+
1434
+ 25:43.520 --> 25:44.840
1435
+ maybe there's something a little funny about those.
1436
+
1437
+ 25:44.840 --> 25:47.000
1438
+ Maybe they're driving in difficult places.
1439
+
1440
+ 25:47.000 --> 25:49.960
1441
+ Like I've seen it with my own eyes, it works.
1442
+
1443
+ 25:49.960 --> 25:52.480
1444
+ And the problem is that that sample size that they have,
1445
+
1446
+ 25:52.480 --> 25:54.000
1447
+ so it's 30 miles up and down,
1448
+
1449
+ 25:54.000 --> 25:58.800
1450
+ so 60 miles times 30 days, so 60, 180, 1,800 miles.
1451
+
1452
+ 26:01.720 --> 26:05.240
1453
+ That's a drop in the bucket compared to the one,
1454
+
1455
+ 26:05.240 --> 26:07.640
1456
+ what 85 million miles between fatalities.
1457
+
1458
+ 26:07.640 --> 26:11.400
1459
+ And so they don't really have a true estimate
1460
+
1461
+ 26:11.400 --> 26:14.440
1462
+ based on their personal experience of the real risks,
1463
+
1464
+ 26:14.440 --> 26:15.640
1465
+ but they're gonna trust it anyway,
1466
+
1467
+ 26:15.640 --> 26:17.720
1468
+ because it's hard not to, it worked for a month.
1469
+
1470
+ 26:17.720 --> 26:18.640
1471
+ What's gonna change?
1472
+
1473
+ 26:18.640 --> 26:21.600
1474
+ So even if you start with a perfect understanding of the system,
1475
+
1476
+ 26:21.600 --> 26:24.160
1477
+ your own experience will make it drift.
1478
+
1479
+ 26:24.160 --> 26:25.920
1480
+ I mean, that's a big concern.
1481
+
1482
+ 26:25.920 --> 26:29.480
1483
+ Over a year, over two years even, it doesn't have to be months.
1484
+
1485
+ 26:29.480 --> 26:33.720
1486
+ And I think that as this technology moves from,
1487
+
1488
+ 26:35.440 --> 26:37.800
1489
+ what I would say is kind of the more technology savvy
1490
+
1491
+ 26:37.800 --> 26:41.480
1492
+ ownership group to the mass market,
1493
+
1494
+ 26:41.480 --> 26:44.640
1495
+ you may be able to have some of those folks
1496
+
1497
+ 26:44.640 --> 26:46.320
1498
+ who are really familiar with technology,
1499
+
1500
+ 26:46.320 --> 26:48.880
1501
+ they may be able to internalize it better.
1502
+
1503
+ 26:48.880 --> 26:50.840
1504
+ And your kind of immunization
1505
+
1506
+ 26:50.840 --> 26:53.400
1507
+ against this kind of false risk assessment
1508
+
1509
+ 26:53.400 --> 26:56.960
1510
+ might last longer, but as folks who aren't as savvy
1511
+
1512
+ 26:56.960 --> 27:00.200
1513
+ about that read the material
1514
+
1515
+ 27:00.200 --> 27:02.200
1516
+ and they compare that to their personal experience,
1517
+
1518
+ 27:02.200 --> 27:08.200
1519
+ I think there that it's gonna move more quickly.
1520
+
1521
+ 27:08.200 --> 27:11.320
1522
+ So your work, the program that you've created at Google
1523
+
1524
+ 27:11.320 --> 27:16.320
1525
+ and now at Aurora is focused more on the second path
1526
+
1527
+ 27:16.640 --> 27:18.520
1528
+ of creating full autonomy.
1529
+
1530
+ 27:18.520 --> 27:20.920
1531
+ So it's such a fascinating,
1532
+
1533
+ 27:21.800 --> 27:24.600
1534
+ I think it's one of the most interesting AI problems
1535
+
1536
+ 27:24.600 --> 27:25.640
1537
+ of the century, right?
1538
+
1539
+ 27:25.640 --> 27:28.320
1540
+ It's a, I just talked to a lot of people,
1541
+
1542
+ 27:28.320 --> 27:30.400
1543
+ just regular people, I don't know, my mom
1544
+
1545
+ 27:30.400 --> 27:33.840
1546
+ about autonomous vehicles and you begin to grapple
1547
+
1548
+ 27:33.840 --> 27:38.080
1549
+ with ideas of giving your life control over to a machine.
1550
+
1551
+ 27:38.080 --> 27:40.040
1552
+ It's philosophically interesting,
1553
+
1554
+ 27:40.040 --> 27:41.760
1555
+ it's practically interesting.
1556
+
1557
+ 27:41.760 --> 27:43.720
1558
+ So let's talk about safety.
1559
+
1560
+ 27:43.720 --> 27:46.240
1561
+ How do you think, we demonstrate,
1562
+
1563
+ 27:46.240 --> 27:47.880
1564
+ you've spoken about metrics in the past,
1565
+
1566
+ 27:47.880 --> 27:51.880
1567
+ how do you think we demonstrate to the world
1568
+
1569
+ 27:51.880 --> 27:56.160
1570
+ that an autonomous vehicle, an Aurora system is safe?
1571
+
1572
+ 27:56.160 --> 27:57.320
1573
+ This is one where it's difficult
1574
+
1575
+ 27:57.320 --> 27:59.280
1576
+ because there isn't a sound bite answer.
1577
+
1578
+ 27:59.280 --> 28:04.280
1579
+ That we have to show a combination of work
1580
+
1581
+ 28:05.960 --> 28:08.360
1582
+ that was done diligently and thoughtfully.
1583
+
1584
+ 28:08.360 --> 28:10.840
1585
+ And this is where something like a functional safety process
1586
+
1587
+ 28:10.840 --> 28:14.360
1588
+ as part of that is like, here's the way we did the work.
1589
+
1590
+ 28:15.320 --> 28:17.200
1591
+ That means that we were very thorough.
1592
+
1593
+ 28:17.200 --> 28:20.560
1594
+ So, if you believe that we, what we said about,
1595
+
1596
+ 28:20.560 --> 28:21.480
1597
+ this is the way we did it,
1598
+
1599
+ 28:21.480 --> 28:23.440
1600
+ then you can have some confidence that we were thorough
1601
+
1602
+ 28:23.440 --> 28:27.000
1603
+ in the engineering work we put into the system.
1604
+
1605
+ 28:27.000 --> 28:30.160
1606
+ And then on top of that, to kind of demonstrate
1607
+
1608
+ 28:30.160 --> 28:32.000
1609
+ that we weren't just thorough,
1610
+
1611
+ 28:32.000 --> 28:34.000
1612
+ we were actually good at what we did.
1613
+
1614
+ 28:35.320 --> 28:38.240
1615
+ There'll be a kind of a collection of evidence
1616
+
1617
+ 28:38.240 --> 28:40.480
1618
+ in terms of demonstrating that the capabilities
1619
+
1620
+ 28:40.480 --> 28:43.960
1621
+ work the way we thought they did, statistically
1622
+
1623
+ 28:43.960 --> 28:47.200
1624
+ and to whatever degree we can demonstrate that
1625
+
1626
+ 28:48.200 --> 28:50.320
1627
+ both in some combination of simulation,
1628
+
1629
+ 28:50.320 --> 28:54.720
1630
+ some combination of unit testing and decomposition testing,
1631
+
1632
+ 28:54.720 --> 28:57.000
1633
+ and then some part of it will be on road data.
1634
+
1635
+ 28:58.200 --> 29:03.200
1636
+ And I think the way we'll ultimately convey this
1637
+
1638
+ 29:03.320 --> 29:06.800
1639
+ to the public is there'll be clearly some conversation
1640
+
1641
+ 29:06.800 --> 29:08.240
1642
+ with the public about it,
1643
+
1644
+ 29:08.240 --> 29:12.080
1645
+ but we'll kind of invoke the kind of the trusted nodes
1646
+
1647
+ 29:12.080 --> 29:14.360
1648
+ and that we'll spend more time being able to go
1649
+
1650
+ 29:14.360 --> 29:17.280
1651
+ into more depth with folks like NHTSA
1652
+
1653
+ 29:17.280 --> 29:19.760
1654
+ and other federal and state regulatory bodies
1655
+
1656
+ 29:19.760 --> 29:22.600
1657
+ and kind of given that they are operating
1658
+
1659
+ 29:22.600 --> 29:25.120
1660
+ in the public interest and they're trusted
1661
+
1662
+ 29:26.240 --> 29:28.680
1663
+ that if we can show enough work to them
1664
+
1665
+ 29:28.680 --> 29:30.040
1666
+ that they're convinced,
1667
+
1668
+ 29:30.040 --> 29:33.840
1669
+ then I think we're in a pretty good place.
1670
+
1671
+ 29:33.840 --> 29:35.040
1672
+ That means that you work with people
1673
+
1674
+ 29:35.040 --> 29:36.960
1675
+ that are essentially experts at safety
1676
+
1677
+ 29:36.960 --> 29:39.040
1678
+ to try to discuss and show,
1679
+
1680
+ 29:39.040 --> 29:41.800
1681
+ do you think the answer is probably no,
1682
+
1683
+ 29:41.800 --> 29:44.360
1684
+ but just in case, do you think there exists a metric?
1685
+
1686
+ 29:44.360 --> 29:46.360
1687
+ So currently people have been using
1688
+
1689
+ 29:46.360 --> 29:48.200
1690
+ a number of disengagement.
1691
+
1692
+ 29:48.200 --> 29:50.160
1693
+ And it quickly turns into a marketing scheme
1694
+
1695
+ 29:50.160 --> 29:54.320
1696
+ to sort of you alter the experiments you run to.
1697
+
1698
+ 29:54.320 --> 29:56.320
1699
+ I think you've spoken that you don't like.
1700
+
1701
+ 29:56.320 --> 29:57.160
1702
+ Don't love it.
1703
+
1704
+ 29:57.160 --> 29:59.720
1705
+ No, in fact, I was on the record telling DMV
1706
+
1707
+ 29:59.720 --> 30:02.000
1708
+ that I thought this was not a great metric.
1709
+
1710
+ 30:02.000 --> 30:05.360
1711
+ Do you think it's possible to create a metric,
1712
+
1713
+ 30:05.360 --> 30:09.480
1714
+ a number that could demonstrate safety
1715
+
1716
+ 30:09.480 --> 30:12.400
1717
+ outside of fatalities?
1718
+
1719
+ 30:12.400 --> 30:16.640
1720
+ So I do and I think that it won't be just one number.
1721
+
1722
+ 30:16.640 --> 30:21.320
1723
+ So as we are internally grappling with this
1724
+
1725
+ 30:21.320 --> 30:23.600
1726
+ and at some point we'll be able to talk
1727
+
1728
+ 30:23.600 --> 30:25.080
1729
+ more publicly about it,
1730
+
1731
+ 30:25.080 --> 30:28.560
1732
+ is how do we think about human performance
1733
+
1734
+ 30:28.560 --> 30:32.200
1735
+ in different tasks, say detecting traffic lights
1736
+
1737
+ 30:32.200 --> 30:36.240
1738
+ or safely making a left turn across traffic?
1739
+
1740
+ 30:37.720 --> 30:40.040
1741
+ And what do we think the failure rates
1742
+
1743
+ 30:40.040 --> 30:42.520
1744
+ are for those different capabilities for people?
1745
+
1746
+ 30:42.520 --> 30:44.760
1747
+ And then demonstrating to ourselves
1748
+
1749
+ 30:44.760 --> 30:48.480
1750
+ and then ultimately folks in regulatory role
1751
+
1752
+ 30:48.480 --> 30:50.760
1753
+ and then ultimately the public,
1754
+
1755
+ 30:50.760 --> 30:52.400
1756
+ that we have confidence that our system
1757
+
1758
+ 30:52.400 --> 30:54.800
1759
+ will work better than that.
1760
+
1761
+ 30:54.800 --> 30:57.040
1762
+ And so these individual metrics
1763
+
1764
+ 30:57.040 --> 31:00.720
1765
+ will kind of tell a compelling story ultimately.
1766
+
1767
+ 31:01.760 --> 31:03.920
1768
+ I do think at the end of the day,
1769
+
1770
+ 31:03.920 --> 31:06.640
1771
+ what we care about in terms of safety
1772
+
1773
+ 31:06.640 --> 31:11.640
1774
+ is life saved and injuries reduced.
1775
+
1776
+ 31:11.640 --> 31:15.320
1777
+ And then ultimately kind of casualty dollars
1778
+
1779
+ 31:16.440 --> 31:19.360
1780
+ that people aren't having to pay to get their car fixed.
1781
+
1782
+ 31:19.360 --> 31:22.680
1783
+ And I do think that in aviation,
1784
+
1785
+ 31:22.680 --> 31:25.880
1786
+ they look at a kind of an event pyramid
1787
+
1788
+ 31:25.880 --> 31:28.600
1789
+ where a crash is at the top of that
1790
+
1791
+ 31:28.600 --> 31:30.440
1792
+ and that's the worst event obviously.
1793
+
1794
+ 31:30.440 --> 31:34.240
1795
+ And then there's injuries and near miss events and whatnot
1796
+
1797
+ 31:34.240 --> 31:37.320
1798
+ and violation of operating procedures.
1799
+
1800
+ 31:37.320 --> 31:40.160
1801
+ And you kind of build a statistical model
1802
+
1803
+ 31:40.160 --> 31:44.440
1804
+ of the relevance of the low severity things
1805
+
1806
+ 31:44.440 --> 31:45.280
1807
+ and the high severity things.
1808
+
1809
+ 31:45.280 --> 31:46.120
1810
+ And I think that's something
1811
+
1812
+ 31:46.120 --> 31:48.240
1813
+ where we'll be able to look at as well
1814
+
1815
+ 31:48.240 --> 31:51.920
1816
+ because an event per 85 million miles
1817
+
1818
+ 31:51.920 --> 31:54.480
1819
+ is statistically a difficult thing
1820
+
1821
+ 31:54.480 --> 31:59.440
1822
+ even at the scale of the US to kind of compare directly.
1823
+
1824
+ 31:59.440 --> 32:02.280
1825
+ And that event, the fatality that's connected
1826
+
1827
+ 32:02.280 --> 32:07.280
1828
+ to an autonomous vehicle is significantly,
1829
+
1830
+ 32:07.480 --> 32:09.160
1831
+ at least currently magnified
1832
+
1833
+ 32:09.160 --> 32:12.320
1834
+ in the amount of attention you get.
1835
+
1836
+ 32:12.320 --> 32:15.080
1837
+ So that speaks to public perception.
1838
+
1839
+ 32:15.080 --> 32:16.720
1840
+ I think the most popular topic
1841
+
1842
+ 32:16.720 --> 32:19.520
1843
+ about autonomous vehicles in the public
1844
+
1845
+ 32:19.520 --> 32:23.080
1846
+ is the trolley problem formulation, right?
1847
+
1848
+ 32:23.080 --> 32:27.040
1849
+ Which has, let's not get into that too much
1850
+
1851
+ 32:27.040 --> 32:29.600
1852
+ but is misguided in many ways.
1853
+
1854
+ 32:29.600 --> 32:32.320
1855
+ But it speaks to the fact that people are grappling
1856
+
1857
+ 32:32.320 --> 32:36.160
1858
+ with this idea of giving control over to a machine.
1859
+
1860
+ 32:36.160 --> 32:41.160
1861
+ So how do you win the hearts and minds of the people
1862
+
1863
+ 32:41.560 --> 32:43.600
1864
+ that autonomy is something
1865
+
1866
+ 32:43.600 --> 32:45.480
1867
+ that could be a part of their lives?
1868
+
1869
+ 32:45.480 --> 32:47.640
1870
+ I think you let them experience it, right?
1871
+
1872
+ 32:47.640 --> 32:50.440
1873
+ I think it's right.
1874
+
1875
+ 32:50.440 --> 32:52.720
1876
+ I think people should be skeptical.
1877
+
1878
+ 32:52.720 --> 32:55.680
1879
+ I think people should ask questions.
1880
+
1881
+ 32:55.680 --> 32:57.000
1882
+ I think they should doubt
1883
+
1884
+ 32:58.040 --> 33:00.960
1885
+ because this is something new and different.
1886
+
1887
+ 33:00.960 --> 33:01.960
1888
+ They haven't touched it yet.
1889
+
1890
+ 33:01.960 --> 33:03.680
1891
+ And I think it's perfectly reasonable.
1892
+
1893
+ 33:03.680 --> 33:07.360
1894
+ And but at the same time,
1895
+
1896
+ 33:07.360 --> 33:09.360
1897
+ it's clear there's an opportunity to make the road safer.
1898
+
1899
+ 33:09.360 --> 33:12.480
1900
+ It's clear that we can improve access to mobility.
1901
+
1902
+ 33:12.480 --> 33:15.160
1903
+ It's clear that we can reduce the cost of mobility.
1904
+
1905
+ 33:16.680 --> 33:19.520
1906
+ And that once people try that
1907
+
1908
+ 33:19.520 --> 33:22.800
1909
+ and understand that it's safe
1910
+
1911
+ 33:22.800 --> 33:24.480
1912
+ and are able to use in their daily lives,
1913
+
1914
+ 33:24.480 --> 33:28.080
1915
+ I think it's one of these things that will just be obvious.
1916
+
1917
+ 33:28.080 --> 33:32.280
1918
+ And I've seen this practically in demonstrations
1919
+
1920
+ 33:32.280 --> 33:35.640
1921
+ that I've given where I've had people come in
1922
+
1923
+ 33:35.640 --> 33:38.600
1924
+ and they're very skeptical.
1925
+
1926
+ 33:38.600 --> 33:39.960
1927
+ And they get in the vehicle.
1928
+
1929
+ 33:39.960 --> 33:42.640
1930
+ My favorite one is taking somebody out on the freeway
1931
+
1932
+ 33:42.640 --> 33:46.080
1933
+ and we're on the 101 driving at 65 miles an hour.
1934
+
1935
+ 33:46.080 --> 33:48.560
1936
+ And after 10 minutes, they kind of turn and ask,
1937
+
1938
+ 33:48.560 --> 33:49.560
1939
+ is that all it does?
1940
+
1941
+ 33:49.560 --> 33:52.160
1942
+ And you're like, it's self driving car.
1943
+
1944
+ 33:52.160 --> 33:54.920
1945
+ I'm not sure exactly what you thought it would do, right?
1946
+
1947
+ 33:54.920 --> 33:57.960
1948
+ But it becomes mundane,
1949
+
1950
+ 33:58.920 --> 34:01.560
1951
+ which is exactly what you want a technology
1952
+
1953
+ 34:01.560 --> 34:02.760
1954
+ like this to be, right?
1955
+
1956
+ 34:02.760 --> 34:04.680
1957
+ We don't really...
1958
+
1959
+ 34:04.680 --> 34:07.320
1960
+ When I turn the light switch on in here,
1961
+
1962
+ 34:07.320 --> 34:12.040
1963
+ I don't think about the complexity of those electrons
1964
+
1965
+ 34:12.040 --> 34:14.240
1966
+ being pushed down a wire from wherever it was
1967
+
1968
+ 34:14.240 --> 34:15.880
1969
+ and being generated.
1970
+
1971
+ 34:15.880 --> 34:19.120
1972
+ It's like, I just get annoyed if it doesn't work, right?
1973
+
1974
+ 34:19.120 --> 34:21.440
1975
+ And what I value is the fact
1976
+
1977
+ 34:21.440 --> 34:23.120
1978
+ that I can do other things in this space.
1979
+
1980
+ 34:23.120 --> 34:24.600
1981
+ I can see my colleagues.
1982
+
1983
+ 34:24.600 --> 34:26.200
1984
+ I can read stuff on a paper.
1985
+
1986
+ 34:26.200 --> 34:29.240
1987
+ I can not be afraid of the dark.
1988
+
1989
+ 34:29.240 --> 34:32.840
1990
+ And I think that's what we want this technology to be like
1991
+
1992
+ 34:32.840 --> 34:34.160
1993
+ is it's in the background
1994
+
1995
+ 34:34.160 --> 34:36.520
1996
+ and people get to have those life experiences
1997
+
1998
+ 34:36.520 --> 34:37.880
1999
+ and do so safely.
2000
+
2001
+ 34:37.880 --> 34:41.600
2002
+ So putting this technology in the hands of people
2003
+
2004
+ 34:41.600 --> 34:45.800
2005
+ speaks to scale of deployment, right?
2006
+
2007
+ 34:45.800 --> 34:50.360
2008
+ So what do you think the dreaded question about the future
2009
+
2010
+ 34:50.360 --> 34:52.840
2011
+ because nobody can predict the future?
2012
+
2013
+ 34:52.840 --> 34:57.080
2014
+ But just maybe speak poetically about
2015
+
2016
+ 34:57.080 --> 35:00.600
2017
+ when do you think we'll see a large scale deployment
2018
+
2019
+ 35:00.600 --> 35:05.600
2020
+ of autonomous vehicles, 10,000, those kinds of numbers.
2021
+
2022
+ 35:06.360 --> 35:08.280
2023
+ We'll see that within 10 years.
2024
+
2025
+ 35:09.280 --> 35:10.600
2026
+ I'm pretty confident.
2027
+
2028
+ 35:10.600 --> 35:11.920
2029
+ We...
2030
+
2031
+ 35:13.920 --> 35:15.920
2032
+ What's an impressive scale?
2033
+
2034
+ 35:15.920 --> 35:19.000
2035
+ What moment, so you've done the DARPA Challenge
2036
+
2037
+ 35:19.000 --> 35:20.240
2038
+ where there's one vehicle,
2039
+
2040
+ 35:20.240 --> 35:22.000
2041
+ at which moment does it become,
2042
+
2043
+ 35:22.000 --> 35:23.720
2044
+ wow, this is serious scale?
2045
+
2046
+ 35:23.720 --> 35:27.960
2047
+ So I think the moment it gets serious is when
2048
+
2049
+ 35:27.960 --> 35:32.040
2050
+ we really do have a driverless vehicle
2051
+
2052
+ 35:32.040 --> 35:33.880
2053
+ operating on public roads
2054
+
2055
+ 35:34.760 --> 35:37.760
2056
+ and that we can do that kind of continuously.
2057
+
2058
+ 35:37.760 --> 35:38.640
2059
+ Without a safety driver?
2060
+
2061
+ 35:38.640 --> 35:40.240
2062
+ Without a safety driver in the vehicle.
2063
+
2064
+ 35:40.240 --> 35:41.320
2065
+ I think at that moment,
2066
+
2067
+ 35:41.320 --> 35:44.160
2068
+ we've kind of crossed the zero to one threshold.
2069
+
2070
+ 35:45.720 --> 35:50.000
2071
+ And then it is about how do we continue to scale that?
2072
+
2073
+ 35:50.000 --> 35:53.720
2074
+ How do we build the right business models?
2075
+
2076
+ 35:53.720 --> 35:56.040
2077
+ How do we build the right customer experience around it
2078
+
2079
+ 35:56.040 --> 35:59.680
2080
+ so that it is actually a useful product out in the world?
2081
+
2082
+ 36:00.720 --> 36:03.360
2083
+ And I think that is really,
2084
+
2085
+ 36:03.360 --> 36:05.720
2086
+ at that point, it moves from a,
2087
+
2088
+ 36:05.720 --> 36:08.960
2089
+ what is this kind of mixed science engineering project
2090
+
2091
+ 36:08.960 --> 36:12.120
2092
+ into engineering and commercialization
2093
+
2094
+ 36:12.120 --> 36:15.600
2095
+ and really starting to deliver on the value
2096
+
2097
+ 36:15.600 --> 36:18.000
2098
+ that we all see here.
2099
+
2100
+ 36:18.000 --> 36:20.680
2101
+ And actually making that real in the world.
2102
+
2103
+ 36:20.680 --> 36:22.240
2104
+ What do you think that deployment looks like?
2105
+
2106
+ 36:22.240 --> 36:24.920
2107
+ Where do we first see the inkling of
2108
+
2109
+ 36:24.920 --> 36:28.600
2110
+ no safety driver, one or two cars here and there?
2111
+
2112
+ 36:28.600 --> 36:29.760
2113
+ Is it on the highway?
2114
+
2115
+ 36:29.760 --> 36:33.200
2116
+ Is it in specific routes in the urban environment?
2117
+
2118
+ 36:33.200 --> 36:36.960
2119
+ I think it's gonna be urban, suburban type environments.
2120
+
2121
+ 36:37.920 --> 36:38.920
2122
+ You know, with Aurora,
2123
+
2124
+ 36:38.920 --> 36:41.560
2125
+ when we thought about how to tackle this,
2126
+
2127
+ 36:42.400 --> 36:45.040
2128
+ it was kind of invoke to think about trucking
2129
+
2130
+ 36:46.000 --> 36:47.760
2131
+ as opposed to urban driving.
2132
+
2133
+ 36:47.760 --> 36:51.240
2134
+ And again, the human intuition around this
2135
+
2136
+ 36:51.240 --> 36:55.360
2137
+ is that freeways are easier to drive on
2138
+
2139
+ 36:57.040 --> 36:59.240
2140
+ because everybody's kind of going in the same direction
2141
+
2142
+ 36:59.240 --> 37:01.560
2143
+ and lanes are a little wider, et cetera.
2144
+
2145
+ 37:01.560 --> 37:03.280
2146
+ And I think that that intuition is pretty good,
2147
+
2148
+ 37:03.280 --> 37:06.000
2149
+ except we don't really care about most of the time.
2150
+
2151
+ 37:06.000 --> 37:08.360
2152
+ We care about all of the time.
2153
+
2154
+ 37:08.360 --> 37:10.840
2155
+ And when you're driving on a freeway with a truck,
2156
+
2157
+ 37:10.840 --> 37:15.840
2158
+ say 70 miles an hour and you've got 70,000 pound load
2159
+
2160
+ 37:15.840 --> 37:17.840
2161
+ to do with you, that's just an incredible amount
2162
+
2163
+ 37:17.840 --> 37:18.840
2164
+ of kinetic energy.
2165
+
2166
+ 37:18.840 --> 37:21.440
2167
+ And so when that goes wrong, it goes really wrong.
2168
+
2169
+ 37:22.600 --> 37:27.600
2170
+ And that those challenges that you see occur more rarely
2171
+
2172
+ 37:27.760 --> 37:31.040
2173
+ so you don't get to learn as quickly.
2174
+
2175
+ 37:31.040 --> 37:33.640
2176
+ And they're incrementally more difficult
2177
+
2178
+ 37:33.640 --> 37:35.920
2179
+ than urban driving, but they're not easier
2180
+
2181
+ 37:35.920 --> 37:37.400
2182
+ than urban driving.
2183
+
2184
+ 37:37.400 --> 37:41.600
2185
+ And so I think this happens in moderate speed,
2186
+
2187
+ 37:41.600 --> 37:43.840
2188
+ urban environments, because there,
2189
+
2190
+ 37:43.840 --> 37:46.560
2191
+ if two vehicles crash at 25 miles per hour,
2192
+
2193
+ 37:46.560 --> 37:50.040
2194
+ it's not good, but probably everybody walks away.
2195
+
2196
+ 37:51.000 --> 37:53.680
2197
+ And those events where there's the possibility
2198
+
2199
+ 37:53.680 --> 37:55.720
2200
+ for that occurring happen frequently.
2201
+
2202
+ 37:55.720 --> 37:57.920
2203
+ So we get to learn more rapidly.
2204
+
2205
+ 37:57.920 --> 38:01.320
2206
+ We get to do that with lower risk for everyone.
2207
+
2208
+ 38:02.440 --> 38:04.280
2209
+ And then we can deliver value to people
2210
+
2211
+ 38:04.280 --> 38:05.800
2212
+ that need to get from one place to another.
2213
+
2214
+ 38:05.800 --> 38:08.160
2215
+ And then once we've got that solved,
2216
+
2217
+ 38:08.160 --> 38:10.000
2218
+ then the kind of the freeway driving part of this
2219
+
2220
+ 38:10.000 --> 38:12.440
2221
+ just falls out, but we're able to learn
2222
+
2223
+ 38:12.440 --> 38:15.160
2224
+ more safely, more quickly in the urban environment.
2225
+
2226
+ 38:15.160 --> 38:18.480
2227
+ So 10 years and then scale 20, 30 years.
2228
+
2229
+ 38:18.480 --> 38:21.440
2230
+ I mean, who knows if it's sufficiently compelling
2231
+
2232
+ 38:21.440 --> 38:24.320
2233
+ experience is created, it can be faster and slower.
2234
+
2235
+ 38:24.320 --> 38:27.120
2236
+ Do you think there could be breakthroughs
2237
+
2238
+ 38:27.120 --> 38:29.880
2239
+ and what kind of breakthroughs might there be
2240
+
2241
+ 38:29.880 --> 38:32.360
2242
+ that completely change that timeline?
2243
+
2244
+ 38:32.360 --> 38:35.320
2245
+ Again, not only am I asking to predict the future,
2246
+
2247
+ 38:35.320 --> 38:37.280
2248
+ I'm asking you to predict breakthroughs
2249
+
2250
+ 38:37.280 --> 38:38.280
2251
+ that haven't happened yet.
2252
+
2253
+ 38:38.280 --> 38:41.800
2254
+ So what's the, I think another way to ask that would be
2255
+
2256
+ 38:41.800 --> 38:44.240
2257
+ if I could wave a magic wand,
2258
+
2259
+ 38:44.240 --> 38:46.640
2260
+ what part of the system would I make work today
2261
+
2262
+ 38:46.640 --> 38:48.600
2263
+ to accelerate it as quickly as possible?
2264
+
2265
+ 38:48.600 --> 38:49.440
2266
+ Right.
2267
+
2268
+ 38:52.080 --> 38:54.080
2269
+ Don't say infrastructure, please don't say infrastructure.
2270
+
2271
+ 38:54.080 --> 38:56.280
2272
+ No, it's definitely not infrastructure.
2273
+
2274
+ 38:56.280 --> 39:00.520
2275
+ It's really that perception forecasting capability.
2276
+
2277
+ 39:00.520 --> 39:04.760
2278
+ So if tomorrow you could give me a perfect model
2279
+
2280
+ 39:04.760 --> 39:07.520
2281
+ of what's happening and what will happen
2282
+
2283
+ 39:07.520 --> 39:11.400
2284
+ for the next five seconds around a vehicle
2285
+
2286
+ 39:11.400 --> 39:14.480
2287
+ on the roadway, that would accelerate things
2288
+
2289
+ 39:14.480 --> 39:15.320
2290
+ pretty dramatically.
2291
+
2292
+ 39:15.320 --> 39:17.560
2293
+ In terms of staying up at night,
2294
+
2295
+ 39:17.560 --> 39:21.680
2296
+ Are you mostly bothered by cars, pedestrians, or cyclists?
2297
+
2298
+ 39:21.680 --> 39:25.920
2299
+ So I worry most about the vulnerable road users
2300
+
2301
+ 39:25.920 --> 39:28.000
2302
+ about the combination of cyclists and cars, right?
2303
+
2304
+ 39:28.000 --> 39:29.480
2305
+ Just cyclists and pedestrians
2306
+
2307
+ 39:29.480 --> 39:31.880
2308
+ because they're not in armor.
2309
+
2310
+ 39:33.240 --> 39:36.480
2311
+ The cars, they're bigger, they've got protection
2312
+
2313
+ 39:36.480 --> 39:39.440
2314
+ for the people and so the ultimate risk is lower there.
2315
+
2316
+ 39:39.440 --> 39:44.080
2317
+ Whereas a pedestrian or cyclist, they're out on the road
2318
+
2319
+ 39:44.080 --> 39:46.520
2320
+ and they don't have any protection.
2321
+
2322
+ 39:46.520 --> 39:49.760
2323
+ And so we need to pay extra attention to that.
2324
+
2325
+ 39:49.760 --> 39:54.120
2326
+ Do you think about a very difficult technical challenge
2327
+
2328
+ 39:55.760 --> 39:58.560
2329
+ of the fact that pedestrians,
2330
+
2331
+ 39:58.560 --> 40:01.400
2332
+ if you try to protect pedestrians by being careful
2333
+
2334
+ 40:01.400 --> 40:04.600
2335
+ and slow, they'll take advantage of that.
2336
+
2337
+ 40:04.600 --> 40:07.560
2338
+ So the game theoretic dance.
2339
+
2340
+ 40:07.560 --> 40:10.880
2341
+ Does that worry you from a technical perspective
2342
+
2343
+ 40:10.880 --> 40:12.520
2344
+ how we solve that?
2345
+
2346
+ 40:12.520 --> 40:14.600
2347
+ Because as humans, the way we solve that
2348
+
2349
+ 40:14.600 --> 40:17.280
2350
+ is kind of nudge our way through the pedestrians,
2351
+
2352
+ 40:17.280 --> 40:20.040
2353
+ which doesn't feel from a technical perspective
2354
+
2355
+ 40:20.040 --> 40:22.320
2356
+ as an appropriate algorithm.
2357
+
2358
+ 40:23.240 --> 40:25.960
2359
+ But do you think about how we solve that problem?
2360
+
2361
+ 40:25.960 --> 40:30.960
2362
+ Yeah, I think there's two different concepts there.
2363
+
2364
+ 40:31.400 --> 40:35.880
2365
+ So one is, am I worried that because these vehicles
2366
+
2367
+ 40:35.880 --> 40:37.640
2368
+ are self driving, people will kind of step on the road
2369
+
2370
+ 40:37.640 --> 40:38.680
2371
+ and take advantage of them.
2372
+
2373
+ 40:38.680 --> 40:43.680
2374
+ And I've heard this and I don't really believe it
2375
+
2376
+ 40:43.800 --> 40:46.000
2377
+ because if I'm driving down the road
2378
+
2379
+ 40:46.000 --> 40:48.920
2380
+ and somebody steps in front of me, I'm going to stop.
2381
+
2382
+ 40:48.920 --> 40:49.760
2383
+ Right?
2384
+
2385
+ 40:49.760 --> 40:53.720
2386
+ Like even if I'm annoyed, I'm not gonna just drive
2387
+
2388
+ 40:53.720 --> 40:55.200
2389
+ through a person stood on the road.
2390
+
2391
+ 40:55.200 --> 40:56.440
2392
+ Right.
2393
+
2394
+ 40:56.440 --> 41:00.440
2395
+ And so I think today people can take advantage of this
2396
+
2397
+ 41:00.440 --> 41:02.600
2398
+ and you do see some people do it.
2399
+
2400
+ 41:02.600 --> 41:04.200
2401
+ I guess there's an incremental risk
2402
+
2403
+ 41:04.200 --> 41:05.920
2404
+ because maybe they have lower confidence
2405
+
2406
+ 41:05.920 --> 41:06.760
2407
+ that I'm going to see them
2408
+
2409
+ 41:06.760 --> 41:09.360
2410
+ than they might have for an automated vehicle.
2411
+
2412
+ 41:09.360 --> 41:12.080
2413
+ And so maybe that shifts it a little bit.
2414
+
2415
+ 41:12.080 --> 41:14.400
2416
+ But I think people don't want to get hit by cars.
2417
+
2418
+ 41:14.400 --> 41:17.120
2419
+ And so I think that I'm not that worried
2420
+
2421
+ 41:17.120 --> 41:18.800
2422
+ about people walking out onto the 101
2423
+
2424
+ 41:18.800 --> 41:21.840
2425
+ and creating chaos more than they would today.
2426
+
2427
+ 41:24.400 --> 41:27.040
2428
+ Regarding kind of the nudging through a big stream
2429
+
2430
+ 41:27.040 --> 41:30.040
2431
+ of pedestrians leaving a concert or something.
2432
+
2433
+ 41:30.040 --> 41:33.480
2434
+ I think that is further down the technology pipeline.
2435
+
2436
+ 41:33.480 --> 41:36.920
2437
+ I think that you're right, that's tricky.
2438
+
2439
+ 41:36.920 --> 41:38.560
2440
+ I don't think it's necessarily,
2441
+
2442
+ 41:40.320 --> 41:43.360
2443
+ I think the algorithm people use for this is pretty simple.
2444
+
2445
+ 41:43.360 --> 41:44.200
2446
+ Right?
2447
+
2448
+ 41:44.200 --> 41:45.040
2449
+ It's kind of just move forward slowly
2450
+
2451
+ 41:45.040 --> 41:47.600
2452
+ and if somebody's really close, stop.
2453
+
2454
+ 41:47.600 --> 41:50.840
2455
+ And I think that that probably can be replicated
2456
+
2457
+ 41:50.840 --> 41:54.040
2458
+ pretty easily and particularly given that it's,
2459
+
2460
+ 41:54.040 --> 41:57.240
2461
+ you don't do this at 30 miles an hour, you do it at one,
2462
+
2463
+ 41:57.240 --> 41:59.080
2464
+ that even in those situations,
2465
+
2466
+ 41:59.080 --> 42:01.200
2467
+ the risk is relatively minimal.
2468
+
2469
+ 42:01.200 --> 42:03.440
2470
+ But it's not something we're thinking
2471
+
2472
+ 42:03.440 --> 42:04.560
2473
+ about in any serious way.
2474
+
2475
+ 42:04.560 --> 42:08.000
2476
+ And probably that's less an algorithm problem
2477
+
2478
+ 42:08.000 --> 42:10.160
2479
+ more creating a human experience.
2480
+
2481
+ 42:10.160 --> 42:14.320
2482
+ So the HCI people that create a visual display
2483
+
2484
+ 42:14.320 --> 42:16.280
2485
+ that you're pleasantly as a pedestrian,
2486
+
2487
+ 42:16.280 --> 42:17.680
2488
+ nudged out of the way.
2489
+
2490
+ 42:17.680 --> 42:21.960
2491
+ That's an experience problem, not an algorithm problem.
2492
+
2493
+ 42:22.880 --> 42:25.480
2494
+ Who's the main competitor to Aurora today?
2495
+
2496
+ 42:25.480 --> 42:28.600
2497
+ And how do you out compete them in the long run?
2498
+
2499
+ 42:28.600 --> 42:31.200
2500
+ So we really focus a lot on what we're doing here.
2501
+
2502
+ 42:31.200 --> 42:34.440
2503
+ I think that, I've said this a few times
2504
+
2505
+ 42:34.440 --> 42:37.960
2506
+ that this is a huge difficult problem
2507
+
2508
+ 42:37.960 --> 42:40.280
2509
+ and it's great that a bunch of companies are tackling it
2510
+
2511
+ 42:40.280 --> 42:42.320
2512
+ because I think it's so important for society
2513
+
2514
+ 42:42.320 --> 42:43.760
2515
+ that somebody gets there.
2516
+
2517
+ 42:45.200 --> 42:49.040
2518
+ So we don't spend a whole lot of time
2519
+
2520
+ 42:49.040 --> 42:51.560
2521
+ like thinking tactically about who's out there
2522
+
2523
+ 42:51.560 --> 42:55.480
2524
+ and how do we beat that person individually?
2525
+
2526
+ 42:55.480 --> 42:58.680
2527
+ What are we trying to do to go faster ultimately?
2528
+
2529
+ 42:58.680 --> 43:02.600
2530
+ Well, part of it is the leadership team we have
2531
+
2532
+ 43:02.600 --> 43:04.160
2533
+ has got pretty tremendous experience.
2534
+
2535
+ 43:04.160 --> 43:06.400
2536
+ And so we kind of understand the landscape
2537
+
2538
+ 43:06.400 --> 43:09.120
2539
+ and understand where the cul de sacs are to some degree.
2540
+
2541
+ 43:09.120 --> 43:10.920
2542
+ And we try and avoid those.
2543
+
2544
+ 43:12.600 --> 43:14.240
2545
+ I think there's a part of it
2546
+
2547
+ 43:14.240 --> 43:16.240
2548
+ just this great team we've built.
2549
+
2550
+ 43:16.240 --> 43:19.040
2551
+ People, this is a technology and a company
2552
+
2553
+ 43:19.040 --> 43:22.280
2554
+ that people believe in the mission of.
2555
+
2556
+ 43:22.280 --> 43:24.760
2557
+ And so it allows us to attract just awesome people
2558
+
2559
+ 43:24.760 --> 43:25.680
2560
+ to go work.
2561
+
2562
+ 43:26.760 --> 43:28.000
2563
+ We've got a culture, I think,
2564
+
2565
+ 43:28.000 --> 43:30.440
2566
+ that people appreciate, that allows them to focus,
2567
+
2568
+ 43:30.440 --> 43:33.080
2569
+ allows them to really spend time solving problems.
2570
+
2571
+ 43:33.080 --> 43:35.880
2572
+ And I think that keeps them energized.
2573
+
2574
+ 43:35.880 --> 43:40.880
2575
+ And then we've invested heavily in the infrastructure
2576
+
2577
+ 43:43.520 --> 43:46.520
2578
+ and architectures that we think will ultimately accelerate us.
2579
+
2580
+ 43:46.520 --> 43:50.640
2581
+ So because of the folks we're able to bring in early on,
2582
+
2583
+ 43:50.640 --> 43:53.520
2584
+ because of the great investors we have,
2585
+
2586
+ 43:53.520 --> 43:56.760
2587
+ we don't spend all of our time doing demos
2588
+
2589
+ 43:56.760 --> 43:58.680
2590
+ and kind of leaping from one demo to the next.
2591
+
2592
+ 43:58.680 --> 44:02.800
2593
+ We've been given the freedom to invest in
2594
+
2595
+ 44:03.960 --> 44:05.480
2596
+ infrastructure to do machine learning,
2597
+
2598
+ 44:05.480 --> 44:08.600
2599
+ infrastructure to pull data from our on road testing,
2600
+
2601
+ 44:08.600 --> 44:11.480
2602
+ infrastructure to use that to accelerate engineering.
2603
+
2604
+ 44:11.480 --> 44:14.480
2605
+ And I think that early investment
2606
+
2607
+ 44:14.480 --> 44:17.320
2608
+ and continuing investment in those kind of tools
2609
+
2610
+ 44:17.320 --> 44:19.400
2611
+ will ultimately allow us to accelerate
2612
+
2613
+ 44:19.400 --> 44:21.920
2614
+ and do something pretty incredible.
2615
+
2616
+ 44:21.920 --> 44:23.400
2617
+ Chris, beautifully put.
2618
+
2619
+ 44:23.400 --> 44:24.640
2620
+ It's a good place to end.
2621
+
2622
+ 44:24.640 --> 44:26.520
2623
+ Thank you so much for talking today.
2624
+
2625
+ 44:26.520 --> 44:27.360
2626
+ Thank you very much.
2627
+
2628
+ 44:27.360 --> 44:57.200
2629
+ I hope you enjoyed it.
2630
+
vtt/episode_029_large.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_029_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_030_small.vtt ADDED
@@ -0,0 +1,3584 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.440
4
+ The following is a conversation with Kevin Scott,
5
+
6
+ 00:03.440 --> 00:06.080
7
+ the CTO of Microsoft.
8
+
9
+ 00:06.080 --> 00:08.560
10
+ Before that, he was the senior vice president
11
+
12
+ 00:08.560 --> 00:11.080
13
+ of engineering and operations at LinkedIn,
14
+
15
+ 00:11.080 --> 00:13.520
16
+ and before that, he oversaw mobile ads
17
+
18
+ 00:13.520 --> 00:14.960
19
+ engineering at Google.
20
+
21
+ 00:15.960 --> 00:19.000
22
+ He also has a podcast called Behind the Tech
23
+
24
+ 00:19.000 --> 00:21.880
25
+ with Kevin Scott, which I'm a fan of.
26
+
27
+ 00:21.880 --> 00:24.280
28
+ This was a fun and wide ranging conversation
29
+
30
+ 00:24.280 --> 00:26.680
31
+ that covered many aspects of computing.
32
+
33
+ 00:26.680 --> 00:28.840
34
+ It happened over a month ago,
35
+
36
+ 00:28.840 --> 00:31.000
37
+ before the announcement of Microsoft's investment
38
+
39
+ 00:31.000 --> 00:34.440
40
+ OpenAI that a few people have asked me about.
41
+
42
+ 00:34.440 --> 00:38.120
43
+ I'm sure there'll be one or two people in the future
44
+
45
+ 00:38.120 --> 00:41.400
46
+ that'll talk with me about the impact of that investment.
47
+
48
+ 00:42.280 --> 00:45.440
49
+ This is the Artificial Intelligence podcast.
50
+
51
+ 00:45.440 --> 00:47.680
52
+ If you enjoy it, subscribe on YouTube,
53
+
54
+ 00:47.680 --> 00:49.440
55
+ give it five stars on iTunes,
56
+
57
+ 00:49.440 --> 00:50.960
58
+ support it on a Patreon,
59
+
60
+ 00:50.960 --> 00:53.000
61
+ or simply connect with me on Twitter,
62
+
63
+ 00:53.000 --> 00:57.680
64
+ at Lex Freedman, spelled FRIDMAM.
65
+
66
+ 00:57.680 --> 00:59.240
67
+ And I'd like to give a special thank you
68
+
69
+ 00:59.240 --> 01:01.960
70
+ to Tom and Elanti Bighausen
71
+
72
+ 01:01.960 --> 01:04.600
73
+ for their support of the podcast on Patreon.
74
+
75
+ 01:04.600 --> 01:06.080
76
+ Thanks Tom and Elanti.
77
+
78
+ 01:06.080 --> 01:08.400
79
+ Hope I didn't mess up your last name too bad.
80
+
81
+ 01:08.400 --> 01:10.520
82
+ Your support means a lot,
83
+
84
+ 01:10.520 --> 01:13.480
85
+ and inspires me to keep this series going.
86
+
87
+ 01:13.480 --> 01:18.160
88
+ And now, here's my conversation with Kevin Scott.
89
+
90
+ 01:18.160 --> 01:20.760
91
+ You've described yourself as a kid in a candy store
92
+
93
+ 01:20.760 --> 01:23.000
94
+ at Microsoft because of all the interesting projects
95
+
96
+ 01:23.000 --> 01:24.200
97
+ that are going on.
98
+
99
+ 01:24.200 --> 01:28.000
100
+ Can you try to do the impossible task
101
+
102
+ 01:28.000 --> 01:31.760
103
+ and give a brief whirlwind view
104
+
105
+ 01:31.760 --> 01:34.520
106
+ of all the spaces that Microsoft is working in?
107
+
108
+ 01:35.520 --> 01:37.440
109
+ Both research and product.
110
+
111
+ 01:37.440 --> 01:42.440
112
+ If you include research, it becomes even more difficult.
113
+
114
+ 01:46.480 --> 01:48.880
115
+ So, I think broadly speaking,
116
+
117
+ 01:48.880 --> 01:53.720
118
+ Microsoft's product portfolio includes everything
119
+
120
+ 01:53.720 --> 01:56.920
121
+ from big cloud business,
122
+
123
+ 01:56.920 --> 01:59.360
124
+ like a big set of SaaS services.
125
+
126
+ 01:59.360 --> 02:01.720
127
+ We have sort of the original,
128
+
129
+ 02:01.720 --> 02:05.560
130
+ or like some of what are among the original
131
+
132
+ 02:05.560 --> 02:09.640
133
+ productivity software products that everybody uses.
134
+
135
+ 02:09.640 --> 02:11.200
136
+ We have an operating system business.
137
+
138
+ 02:11.200 --> 02:13.560
139
+ We have a hardware business
140
+
141
+ 02:13.560 --> 02:17.240
142
+ where we make everything from computer mice
143
+
144
+ 02:17.240 --> 02:20.760
145
+ and headphones to high end,
146
+
147
+ 02:20.760 --> 02:23.520
148
+ high end personal computers and laptops.
149
+
150
+ 02:23.520 --> 02:27.680
151
+ We have a fairly broad ranging research group
152
+
153
+ 02:27.680 --> 02:29.680
154
+ where we have people doing everything
155
+
156
+ 02:29.680 --> 02:31.880
157
+ from economics research.
158
+
159
+ 02:31.880 --> 02:35.920
160
+ So, there's this really smart young economist,
161
+
162
+ 02:35.920 --> 02:39.760
163
+ Glenn Weil, who like my group works with a lot,
164
+
165
+ 02:39.760 --> 02:42.880
166
+ who's doing this research on these things
167
+
168
+ 02:42.880 --> 02:45.120
169
+ called radical markets.
170
+
171
+ 02:45.120 --> 02:48.120
172
+ Like he's written an entire technical book
173
+
174
+ 02:48.120 --> 02:51.120
175
+ about this whole notion of radical markets.
176
+
177
+ 02:51.120 --> 02:53.520
178
+ So, like the research group sort of spans from that
179
+
180
+ 02:53.520 --> 02:56.840
181
+ to human computer interaction, to artificial intelligence.
182
+
183
+ 02:56.840 --> 03:01.040
184
+ And we have GitHub, we have LinkedIn.
185
+
186
+ 03:01.040 --> 03:05.800
187
+ We have a search advertising and news business
188
+
189
+ 03:05.800 --> 03:07.360
190
+ and like probably a bunch of stuff
191
+
192
+ 03:07.360 --> 03:11.240
193
+ that I'm embarrassingly not recounting in this list.
194
+
195
+ 03:11.240 --> 03:12.920
196
+ On gaming to Xbox and so on, right?
197
+
198
+ 03:12.920 --> 03:14.120
199
+ Yeah, gaming for sure.
200
+
201
+ 03:14.120 --> 03:17.320
202
+ Like I was having a super fun conversation
203
+
204
+ 03:17.320 --> 03:19.520
205
+ this morning with Phil Spencer.
206
+
207
+ 03:19.520 --> 03:21.280
208
+ So, when I was in college,
209
+
210
+ 03:21.280 --> 03:25.560
211
+ there was this game that Lucas Arts made
212
+
213
+ 03:25.560 --> 03:27.600
214
+ called Day of the Tentacle,
215
+
216
+ 03:27.600 --> 03:30.160
217
+ that my friends and I played forever.
218
+
219
+ 03:30.160 --> 03:33.920
220
+ And like we're doing some interesting collaboration now
221
+
222
+ 03:33.920 --> 03:37.920
223
+ with the folks who made Day of the Tentacle.
224
+
225
+ 03:37.920 --> 03:40.840
226
+ And I was like completely nerding out with Tim Schaeffer,
227
+
228
+ 03:40.840 --> 03:43.880
229
+ like the guy who wrote Day of the Tentacle this morning,
230
+
231
+ 03:43.880 --> 03:45.840
232
+ just a complete fanboy,
233
+
234
+ 03:45.840 --> 03:49.880
235
+ which you know, sort of it like happens a lot.
236
+
237
+ 03:49.880 --> 03:53.320
238
+ Like, you know, Microsoft has been doing so much stuff
239
+
240
+ 03:53.320 --> 03:56.000
241
+ at such breadth for such a long period of time
242
+
243
+ 03:56.000 --> 03:59.680
244
+ that, you know, like being CTO,
245
+
246
+ 03:59.680 --> 04:02.200
247
+ like most of the time my job is very, very serious
248
+
249
+ 04:02.200 --> 04:05.640
250
+ and sometimes that like I get caught up
251
+
252
+ 04:05.640 --> 04:09.200
253
+ in like how amazing it is
254
+
255
+ 04:09.200 --> 04:11.520
256
+ to be able to have the conversations
257
+
258
+ 04:11.520 --> 04:14.640
259
+ that I have with the people I get to have them with.
260
+
261
+ 04:14.640 --> 04:17.040
262
+ You had to reach back into the sentimental
263
+
264
+ 04:17.040 --> 04:21.640
265
+ and what's the radical markets and the economics?
266
+
267
+ 04:21.640 --> 04:24.760
268
+ So the idea with radical markets is like,
269
+
270
+ 04:24.760 --> 04:29.760
271
+ can you come up with new market based mechanisms to,
272
+
273
+ 04:32.320 --> 04:33.800
274
+ you know, I think we have this,
275
+
276
+ 04:33.800 --> 04:35.240
277
+ we're having this debate right now,
278
+
279
+ 04:35.240 --> 04:40.040
280
+ like does capitalism work, like free markets work?
281
+
282
+ 04:40.040 --> 04:43.000
283
+ Can the incentive structures
284
+
285
+ 04:43.000 --> 04:46.360
286
+ that are built into these systems produce outcomes
287
+
288
+ 04:46.360 --> 04:51.360
289
+ that are creating sort of equitably distributed benefits
290
+
291
+ 04:51.560 --> 04:53.520
292
+ for every member of society?
293
+
294
+ 04:55.400 --> 04:58.720
295
+ You know, and I think it's a reasonable set of questions
296
+
297
+ 04:58.720 --> 04:59.560
298
+ to be asking.
299
+
300
+ 04:59.560 --> 05:02.160
301
+ And so what Glenn, and so like, you know,
302
+
303
+ 05:02.160 --> 05:04.400
304
+ one mode of thought there, like if you have doubts
305
+
306
+ 05:04.400 --> 05:06.720
307
+ that the markets are actually working,
308
+
309
+ 05:06.720 --> 05:08.560
310
+ you can sort of like tip towards like,
311
+
312
+ 05:08.560 --> 05:10.800
313
+ okay, let's become more socialist
314
+
315
+ 05:10.800 --> 05:14.240
316
+ and like have central planning and governments
317
+
318
+ 05:14.240 --> 05:15.800
319
+ or some other central organization
320
+
321
+ 05:15.800 --> 05:18.280
322
+ is like making a bunch of decisions
323
+
324
+ 05:18.280 --> 05:22.040
325
+ about how sort of work gets done
326
+
327
+ 05:22.040 --> 05:25.400
328
+ and like where the investments
329
+
330
+ 05:25.400 --> 05:28.880
331
+ and where the outputs of those investments get distributed.
332
+
333
+ 05:28.880 --> 05:32.160
334
+ Glenn's notion is like lean more
335
+
336
+ 05:32.160 --> 05:35.800
337
+ into like the market based mechanism.
338
+
339
+ 05:35.800 --> 05:37.920
340
+ So like for instance,
341
+
342
+ 05:37.920 --> 05:39.600
343
+ this is one of the more radical ideas,
344
+
345
+ 05:39.600 --> 05:44.600
346
+ like suppose that you had a radical pricing mechanism
347
+
348
+ 05:45.160 --> 05:47.120
349
+ for assets like real estate
350
+
351
+ 05:47.120 --> 05:52.120
352
+ where you could be bid out of your position
353
+
354
+ 05:53.600 --> 05:58.600
355
+ in your home, you know, for instance.
356
+
357
+ 05:58.720 --> 06:01.120
358
+ So like if somebody came along and said,
359
+
360
+ 06:01.120 --> 06:04.400
361
+ you know, like I can find higher economic utility
362
+
363
+ 06:04.400 --> 06:05.760
364
+ for this piece of real estate
365
+
366
+ 06:05.760 --> 06:08.720
367
+ that you're running your business in,
368
+
369
+ 06:08.720 --> 06:13.040
370
+ like then like you either have to, you know,
371
+
372
+ 06:13.040 --> 06:16.440
373
+ sort of bid to sort of stay
374
+
375
+ 06:16.440 --> 06:19.960
376
+ or like the thing that's got the higher economic utility,
377
+
378
+ 06:19.960 --> 06:21.440
379
+ you know, sort of takes over the asset
380
+
381
+ 06:21.440 --> 06:23.720
382
+ and which would make it very difficult
383
+
384
+ 06:23.720 --> 06:27.600
385
+ to have the same sort of rent seeking behaviors
386
+
387
+ 06:27.600 --> 06:29.000
388
+ that you've got right now
389
+
390
+ 06:29.000 --> 06:34.000
391
+ because like if you did speculative bidding,
392
+
393
+ 06:34.000 --> 06:39.000
394
+ like you would very quickly like lose a whole lot of money.
395
+
396
+ 06:40.440 --> 06:43.520
397
+ And so like the prices of the assets would be sort of
398
+
399
+ 06:43.520 --> 06:47.600
400
+ like very closely indexed to like the value
401
+
402
+ 06:47.600 --> 06:49.720
403
+ that they can produce.
404
+
405
+ 06:49.720 --> 06:52.680
406
+ And like because like you'd have this sort of real time
407
+
408
+ 06:52.680 --> 06:55.320
409
+ mechanism that would force you to sort of mark the value
410
+
411
+ 06:55.320 --> 06:56.800
412
+ of the asset to the market,
413
+
414
+ 06:56.800 --> 06:58.560
415
+ then it could be taxed appropriately.
416
+
417
+ 06:58.560 --> 07:00.400
418
+ Like you couldn't sort of sit on this thing and say,
419
+
420
+ 07:00.400 --> 07:03.040
421
+ oh, like this house is only worth 10,000 bucks
422
+
423
+ 07:03.040 --> 07:06.600
424
+ when like everything around it is worth 10 million.
425
+
426
+ 07:06.600 --> 07:07.440
427
+ That's really interesting.
428
+
429
+ 07:07.440 --> 07:08.720
430
+ So it's an incentive structure
431
+
432
+ 07:08.720 --> 07:13.200
433
+ that where the prices match the value much better.
434
+
435
+ 07:13.200 --> 07:14.040
436
+ Yeah.
437
+
438
+ 07:14.040 --> 07:16.320
439
+ And Glenn does a much, much better job than I do
440
+
441
+ 07:16.320 --> 07:18.920
442
+ at selling and I probably picked the world's worst example,
443
+
444
+ 07:18.920 --> 07:20.360
445
+ you know, and, and, and, but like,
446
+
447
+ 07:20.360 --> 07:24.520
448
+ and it's intentionally provocative, you know,
449
+
450
+ 07:24.520 --> 07:26.480
451
+ so like this whole notion, like I, you know,
452
+
453
+ 07:26.480 --> 07:28.920
454
+ like I'm not sure whether I like this notion
455
+
456
+ 07:28.920 --> 07:31.120
457
+ that like we can have a set of market mechanisms
458
+
459
+ 07:31.120 --> 07:35.360
460
+ where I could get bid out of, out of my property, you know,
461
+
462
+ 07:35.360 --> 07:37.680
463
+ but, but, you know, like if you're thinking about something
464
+
465
+ 07:37.680 --> 07:42.480
466
+ like Elizabeth Warren's wealth tax, for instance,
467
+
468
+ 07:42.480 --> 07:45.600
469
+ like you would have, I mean, it'd be really interesting
470
+
471
+ 07:45.600 --> 07:50.080
472
+ in like how you would actually set the price on the assets.
473
+
474
+ 07:50.080 --> 07:52.040
475
+ And like you might have to have a mechanism like that
476
+
477
+ 07:52.040 --> 07:54.160
478
+ if you put a tax like that in place.
479
+
480
+ 07:54.160 --> 07:56.440
481
+ It's really interesting that that kind of research,
482
+
483
+ 07:56.440 --> 07:59.800
484
+ at least tangentially touching Microsoft research.
485
+
486
+ 07:59.800 --> 08:00.640
487
+ Yeah.
488
+
489
+ 08:00.640 --> 08:02.560
490
+ So if you're really thinking broadly,
491
+
492
+ 08:02.560 --> 08:07.560
493
+ maybe you can speak to this connects to AI.
494
+
495
+ 08:08.400 --> 08:10.680
496
+ So we have a candidate, Andrew Yang,
497
+
498
+ 08:10.680 --> 08:13.480
499
+ who kind of talks about artificial intelligence
500
+
501
+ 08:13.480 --> 08:16.640
502
+ and the concern that people have about, you know,
503
+
504
+ 08:16.640 --> 08:19.000
505
+ automations impact on society.
506
+
507
+ 08:19.000 --> 08:22.680
508
+ And arguably Microsoft is at the cutting edge
509
+
510
+ 08:22.680 --> 08:25.040
511
+ of innovation in all these kinds of ways.
512
+
513
+ 08:25.040 --> 08:27.080
514
+ And so it's pushing AI forward.
515
+
516
+ 08:27.080 --> 08:30.040
517
+ How do you think about combining all our conversations
518
+
519
+ 08:30.040 --> 08:32.840
520
+ together here with radical markets and socialism
521
+
522
+ 08:32.840 --> 08:37.520
523
+ and innovation in AI that Microsoft is doing?
524
+
525
+ 08:37.520 --> 08:42.520
526
+ And then Andrew Yang's worry that that will,
527
+
528
+ 08:43.520 --> 08:46.840
529
+ that will result in job loss for the lower and so on.
530
+
531
+ 08:46.840 --> 08:47.680
532
+ How do you think about that?
533
+
534
+ 08:47.680 --> 08:51.160
535
+ I think it's sort of one of the most important questions
536
+
537
+ 08:51.160 --> 08:55.320
538
+ in technology, like maybe even in society right now
539
+
540
+ 08:55.320 --> 09:00.320
541
+ about how is AI going to develop over the course
542
+
543
+ 09:00.720 --> 09:02.000
544
+ of the next several decades
545
+
546
+ 09:02.000 --> 09:03.600
547
+ and like what's it gonna be used for
548
+
549
+ 09:03.600 --> 09:06.560
550
+ and like what benefits will it produce
551
+
552
+ 09:06.560 --> 09:08.520
553
+ and what negative impacts will it produce
554
+
555
+ 09:08.520 --> 09:13.520
556
+ and you know, who gets to steer this whole thing?
557
+
558
+ 09:13.720 --> 09:16.320
559
+ You know, I'll say at the highest level,
560
+
561
+ 09:17.240 --> 09:22.240
562
+ one of the real joys of getting to do what I do at Microsoft
563
+
564
+ 09:22.240 --> 09:27.240
565
+ is Microsoft has this heritage as a platform company.
566
+
567
+ 09:27.560 --> 09:31.040
568
+ And so, you know, like Bill has this thing
569
+
570
+ 09:31.040 --> 09:32.880
571
+ that he said a bunch of years ago
572
+
573
+ 09:32.880 --> 09:36.440
574
+ where the measure of a successful platform
575
+
576
+ 09:36.440 --> 09:39.800
577
+ is that it produces far more economic value
578
+
579
+ 09:39.800 --> 09:41.840
580
+ for the people who build on top of the platform
581
+
582
+ 09:41.840 --> 09:46.840
583
+ than is created for the platform owner or builder.
584
+
585
+ 09:47.320 --> 09:50.920
586
+ And I think we have to think about AI that way.
587
+
588
+ 09:50.920 --> 09:55.920
589
+ Like it has to be a platform that other people can use
590
+
591
+ 09:56.280 --> 10:01.280
592
+ to build businesses, to fulfill their creative objectives,
593
+
594
+ 10:01.280 --> 10:04.640
595
+ to be entrepreneurs, to solve problems that they have
596
+
597
+ 10:04.640 --> 10:07.680
598
+ in their work and in their lives.
599
+
600
+ 10:07.680 --> 10:11.960
601
+ It can't be a thing where there are a handful of companies
602
+
603
+ 10:11.960 --> 10:16.440
604
+ sitting in a very small handful of cities geographically
605
+
606
+ 10:16.440 --> 10:19.120
607
+ who are making all the decisions
608
+
609
+ 10:19.120 --> 10:24.120
610
+ about what goes into the AI and like,
611
+
612
+ 10:24.240 --> 10:26.920
613
+ and then on top of like all this infrastructure,
614
+
615
+ 10:26.920 --> 10:31.000
616
+ then build all of the commercially valuable uses for it.
617
+
618
+ 10:31.000 --> 10:34.400
619
+ So like, I think like that's bad from a, you know,
620
+
621
+ 10:34.400 --> 10:36.520
622
+ sort of, you know, economics
623
+
624
+ 10:36.520 --> 10:39.720
625
+ and sort of equitable distribution of value perspective,
626
+
627
+ 10:39.720 --> 10:42.080
628
+ like, you know, sort of back to this whole notion of,
629
+
630
+ 10:42.080 --> 10:44.560
631
+ you know, like, do the markets work?
632
+
633
+ 10:44.560 --> 10:47.600
634
+ But I think it's also bad from an innovation perspective
635
+
636
+ 10:47.600 --> 10:51.360
637
+ because like I have infinite amounts of faith
638
+
639
+ 10:51.360 --> 10:53.880
640
+ in human beings that if you, you know,
641
+
642
+ 10:53.880 --> 10:58.280
643
+ give folks powerful tools, they will go do interesting things.
644
+
645
+ 10:58.280 --> 11:02.320
646
+ And it's more than just a few tens of thousands of people
647
+
648
+ 11:02.320 --> 11:03.360
649
+ with the interesting tools,
650
+
651
+ 11:03.360 --> 11:05.400
652
+ it should be millions of people with the tools.
653
+
654
+ 11:05.400 --> 11:07.200
655
+ So it's sort of like, you know,
656
+
657
+ 11:07.200 --> 11:10.200
658
+ you think about the steam engine
659
+
660
+ 11:10.200 --> 11:13.800
661
+ in the late 18th century, like it was, you know,
662
+
663
+ 11:13.800 --> 11:16.800
664
+ maybe the first large scale substitute for human labor
665
+
666
+ 11:16.800 --> 11:19.120
667
+ that we've built like a machine.
668
+
669
+ 11:19.120 --> 11:21.680
670
+ And, you know, in the beginning,
671
+
672
+ 11:21.680 --> 11:23.520
673
+ when these things are getting deployed,
674
+
675
+ 11:23.520 --> 11:28.320
676
+ the folks who got most of the value from the steam engines
677
+
678
+ 11:28.320 --> 11:30.160
679
+ were the folks who had capital
680
+
681
+ 11:30.160 --> 11:31.600
682
+ so they could afford to build them.
683
+
684
+ 11:31.600 --> 11:34.720
685
+ And like they built factories around them in businesses
686
+
687
+ 11:34.720 --> 11:38.680
688
+ and the experts who knew how to build and maintain them.
689
+
690
+ 11:38.680 --> 11:42.880
691
+ But access to that technology democratized over time.
692
+
693
+ 11:42.880 --> 11:47.040
694
+ Like now like an engine is not a,
695
+
696
+ 11:47.040 --> 11:48.800
697
+ it's not like a differentiated thing.
698
+
699
+ 11:48.800 --> 11:50.280
700
+ Like there isn't one engine company
701
+
702
+ 11:50.280 --> 11:51.560
703
+ that builds all the engines
704
+
705
+ 11:51.560 --> 11:53.120
706
+ and all of the things that use engines
707
+
708
+ 11:53.120 --> 11:54.240
709
+ are made by this company.
710
+
711
+ 11:54.240 --> 11:57.440
712
+ And like they get all the economics from all of that.
713
+
714
+ 11:57.440 --> 11:59.320
715
+ Like, no, like fully democratized.
716
+
717
+ 11:59.320 --> 12:00.600
718
+ Like they're probably, you know,
719
+
720
+ 12:00.600 --> 12:02.360
721
+ we're sitting here in this room
722
+
723
+ 12:02.360 --> 12:03.680
724
+ and like even though they don't,
725
+
726
+ 12:03.680 --> 12:05.280
727
+ they're probably things, you know,
728
+
729
+ 12:05.280 --> 12:09.120
730
+ like the MEMS gyroscope that are in both of our,
731
+
732
+ 12:09.120 --> 12:11.480
733
+ like there's like little engines, you know,
734
+
735
+ 12:11.480 --> 12:14.520
736
+ sort of everywhere, they're just a component
737
+
738
+ 12:14.520 --> 12:16.240
739
+ in how we build the modern world.
740
+
741
+ 12:16.240 --> 12:17.680
742
+ Like AI needs to get there.
743
+
744
+ 12:17.680 --> 12:20.200
745
+ Yeah, so that's a really powerful way to think.
746
+
747
+ 12:20.200 --> 12:25.120
748
+ If we think of AI as a platform versus a tool
749
+
750
+ 12:25.120 --> 12:27.600
751
+ that Microsoft owns as a platform
752
+
753
+ 12:27.600 --> 12:30.120
754
+ that enables creation on top of it,
755
+
756
+ 12:30.120 --> 12:31.520
757
+ that's the way to democratize it.
758
+
759
+ 12:31.520 --> 12:34.200
760
+ That's really interesting actually.
761
+
762
+ 12:34.200 --> 12:36.040
763
+ And Microsoft throughout its history
764
+
765
+ 12:36.040 --> 12:38.240
766
+ has been positioned well to do that.
767
+
768
+ 12:38.240 --> 12:41.640
769
+ And the, you know, the tieback to this radical markets thing,
770
+
771
+ 12:41.640 --> 12:46.640
772
+ like the, so my team has been working with Glenn
773
+
774
+ 12:47.800 --> 12:51.120
775
+ on this and Jaron Lanier actually.
776
+
777
+ 12:51.120 --> 12:56.120
778
+ So Jaron is the like the sort of father of virtual reality.
779
+
780
+ 12:56.440 --> 12:59.480
781
+ Like he's one of the most interesting human beings
782
+
783
+ 12:59.480 --> 13:01.760
784
+ on the planet, like a sweet, sweet guy.
785
+
786
+ 13:02.840 --> 13:07.120
787
+ And so Jaron and Glenn and folks in my team
788
+
789
+ 13:07.120 --> 13:10.360
790
+ have been working on this notion of data as labor
791
+
792
+ 13:10.360 --> 13:13.160
793
+ or like they call it data dignity as well.
794
+
795
+ 13:13.160 --> 13:16.880
796
+ And so the idea is that if you, you know,
797
+
798
+ 13:16.880 --> 13:18.600
799
+ again, going back to this, you know,
800
+
801
+ 13:18.600 --> 13:20.800
802
+ sort of industrial analogy,
803
+
804
+ 13:20.800 --> 13:23.560
805
+ if you think about data as the raw material
806
+
807
+ 13:23.560 --> 13:27.640
808
+ that is consumed by the machine of AI
809
+
810
+ 13:27.640 --> 13:30.560
811
+ in order to do useful things,
812
+
813
+ 13:30.560 --> 13:34.400
814
+ then like we're not doing a really great job right now
815
+
816
+ 13:34.400 --> 13:37.760
817
+ in having transparent marketplaces for valuing
818
+
819
+ 13:37.760 --> 13:39.800
820
+ those data contributions.
821
+
822
+ 13:39.800 --> 13:42.680
823
+ So like, and we all make them like explicitly,
824
+
825
+ 13:42.680 --> 13:43.600
826
+ like you go to LinkedIn,
827
+
828
+ 13:43.600 --> 13:46.160
829
+ you sort of set up your profile on LinkedIn,
830
+
831
+ 13:46.160 --> 13:47.800
832
+ like that's an explicit contribution.
833
+
834
+ 13:47.800 --> 13:49.480
835
+ Like, you know exactly the information
836
+
837
+ 13:49.480 --> 13:50.720
838
+ that you're putting into the system.
839
+
840
+ 13:50.720 --> 13:53.000
841
+ And like you put it there because you have
842
+
843
+ 13:53.000 --> 13:55.520
844
+ some nominal notion of like what value
845
+
846
+ 13:55.520 --> 13:56.640
847
+ you're going to get in return,
848
+
849
+ 13:56.640 --> 13:57.720
850
+ but it's like only nominal.
851
+
852
+ 13:57.720 --> 13:59.680
853
+ Like you don't know exactly what value
854
+
855
+ 13:59.680 --> 14:02.040
856
+ you're getting in return, like services free, you know,
857
+
858
+ 14:02.040 --> 14:04.600
859
+ like it's low amount of like perceived.
860
+
861
+ 14:04.600 --> 14:06.680
862
+ And then you've got all this indirect contribution
863
+
864
+ 14:06.680 --> 14:08.960
865
+ that you're making just by virtue of interacting
866
+
867
+ 14:08.960 --> 14:13.160
868
+ with all of the technology that's in your daily life.
869
+
870
+ 14:13.160 --> 14:16.120
871
+ And so like what Glenn and Jaron
872
+
873
+ 14:16.120 --> 14:19.440
874
+ and this data dignity team are trying to do is like,
875
+
876
+ 14:19.440 --> 14:22.240
877
+ can we figure out a set of mechanisms
878
+
879
+ 14:22.240 --> 14:26.000
880
+ that let us value those data contributions
881
+
882
+ 14:26.000 --> 14:28.200
883
+ so that you could create an economy
884
+
885
+ 14:28.200 --> 14:31.480
886
+ and like a set of controls and incentives
887
+
888
+ 14:31.480 --> 14:36.480
889
+ that would allow people to like maybe even in the limit
890
+
891
+ 14:36.840 --> 14:38.880
892
+ like earn part of their living
893
+
894
+ 14:38.880 --> 14:41.000
895
+ through the data that they're creating.
896
+
897
+ 14:41.000 --> 14:42.680
898
+ And like you can sort of see it in explicit ways.
899
+
900
+ 14:42.680 --> 14:46.000
901
+ There are these companies like Scale AI
902
+
903
+ 14:46.000 --> 14:49.960
904
+ and like there are a whole bunch of them in China right now
905
+
906
+ 14:49.960 --> 14:52.400
907
+ that are basically data labeling companies.
908
+
909
+ 14:52.400 --> 14:54.560
910
+ So like you're doing supervised machine learning,
911
+
912
+ 14:54.560 --> 14:57.400
913
+ you need lots and lots of labeled training data.
914
+
915
+ 14:58.600 --> 15:01.440
916
+ And like those people are getting like who work
917
+
918
+ 15:01.440 --> 15:03.600
919
+ for those companies are getting compensated
920
+
921
+ 15:03.600 --> 15:06.360
922
+ for their data contributions into the system.
923
+
924
+ 15:06.360 --> 15:07.720
925
+ And so...
926
+
927
+ 15:07.720 --> 15:10.280
928
+ That's easier to put a number on their contribution
929
+
930
+ 15:10.280 --> 15:11.960
931
+ because they're explicitly labeling data.
932
+
933
+ 15:11.960 --> 15:12.800
934
+ Correct.
935
+
936
+ 15:12.800 --> 15:14.360
937
+ But you're saying that we're all contributing data
938
+
939
+ 15:14.360 --> 15:15.720
940
+ in different kinds of ways.
941
+
942
+ 15:15.720 --> 15:19.640
943
+ And it's fascinating to start to explicitly try
944
+
945
+ 15:19.640 --> 15:20.880
946
+ to put a number on it.
947
+
948
+ 15:20.880 --> 15:22.600
949
+ Do you think that's possible?
950
+
951
+ 15:22.600 --> 15:23.640
952
+ I don't know, it's hard.
953
+
954
+ 15:23.640 --> 15:25.480
955
+ It really is.
956
+
957
+ 15:25.480 --> 15:30.480
958
+ Because, you know, we don't have as much transparency
959
+
960
+ 15:30.480 --> 15:35.480
961
+ as I think we need in like how the data is getting used.
962
+
963
+ 15:37.240 --> 15:38.720
964
+ And it's, you know, super complicated.
965
+
966
+ 15:38.720 --> 15:41.000
967
+ Like, you know, we, you know,
968
+
969
+ 15:41.000 --> 15:42.880
970
+ I think as technologists sort of appreciate
971
+
972
+ 15:42.880 --> 15:44.160
973
+ like some of the subtlety there.
974
+
975
+ 15:44.160 --> 15:47.880
976
+ It's like, you know, the data, the data gets created
977
+
978
+ 15:47.880 --> 15:51.400
979
+ and then it gets, you know, it's not valuable.
980
+
981
+ 15:51.400 --> 15:56.000
982
+ Like the data exhaust that you give off
983
+
984
+ 15:56.000 --> 15:58.480
985
+ or the, you know, the explicit data
986
+
987
+ 15:58.480 --> 16:03.240
988
+ that I am putting into the system isn't valuable.
989
+
990
+ 16:03.240 --> 16:05.160
991
+ It's not super valuable atomically.
992
+
993
+ 16:05.160 --> 16:08.360
994
+ Like it's only valuable when you sort of aggregate it together
995
+
996
+ 16:08.360 --> 16:10.440
997
+ into, you know, sort of large numbers.
998
+
999
+ 16:10.440 --> 16:11.960
1000
+ It's true even for these like folks
1001
+
1002
+ 16:11.960 --> 16:14.880
1003
+ who are getting compensated for like labeling things.
1004
+
1005
+ 16:14.880 --> 16:16.480
1006
+ Like for supervised machine learning now,
1007
+
1008
+ 16:16.480 --> 16:20.080
1009
+ like you need lots of labels to train, you know,
1010
+
1011
+ 16:20.080 --> 16:22.080
1012
+ a model that performs well.
1013
+
1014
+ 16:22.080 --> 16:24.440
1015
+ And so, you know, I think that's one of the challenges.
1016
+
1017
+ 16:24.440 --> 16:26.120
1018
+ It's like, how do you, you know,
1019
+
1020
+ 16:26.120 --> 16:28.000
1021
+ how do you sort of figure out like
1022
+
1023
+ 16:28.000 --> 16:31.480
1024
+ because this data is getting combined in so many ways,
1025
+
1026
+ 16:31.480 --> 16:33.880
1027
+ like through these combinations,
1028
+
1029
+ 16:33.880 --> 16:35.880
1030
+ like how the value is flowing.
1031
+
1032
+ 16:35.880 --> 16:38.520
1033
+ Yeah, that's, that's fascinating.
1034
+
1035
+ 16:38.520 --> 16:39.360
1036
+ Yeah.
1037
+
1038
+ 16:39.360 --> 16:41.880
1039
+ And it's fascinating that you're thinking about this.
1040
+
1041
+ 16:41.880 --> 16:44.160
1042
+ And I wasn't even going into this competition
1043
+
1044
+ 16:44.160 --> 16:48.200
1045
+ expecting the breadth of research really
1046
+
1047
+ 16:48.200 --> 16:50.600
1048
+ that Microsoft broadly is thinking about.
1049
+
1050
+ 16:50.600 --> 16:52.360
1051
+ You are thinking about in Microsoft.
1052
+
1053
+ 16:52.360 --> 16:57.360
1054
+ So if we go back to '89 when Microsoft released Office
1055
+
1056
+ 16:57.360 --> 17:00.920
1057
+ or 1990 when they released Windows 3.0,
1058
+
1059
+ 17:00.920 --> 17:04.960
1060
+ how's the, in your view,
1061
+
1062
+ 17:04.960 --> 17:07.280
1063
+ I know you weren't there the entire, you know,
1064
+
1065
+ 17:07.280 --> 17:09.760
1066
+ through its history, but how has the company changed
1067
+
1068
+ 17:09.760 --> 17:12.840
1069
+ in the 30 years since as you look at it now?
1070
+
1071
+ 17:12.840 --> 17:17.080
1072
+ The good thing is it's started off as a platform company.
1073
+
1074
+ 17:17.080 --> 17:19.960
1075
+ Like it's still a platform company,
1076
+
1077
+ 17:19.960 --> 17:22.640
1078
+ like the parts of the business that are like thriving
1079
+
1080
+ 17:22.640 --> 17:26.560
1081
+ and most successful or those that are building platforms,
1082
+
1083
+ 17:26.560 --> 17:29.000
1084
+ like the mission of the company now is,
1085
+
1086
+ 17:29.000 --> 17:30.120
1087
+ the mission's changed.
1088
+
1089
+ 17:30.120 --> 17:32.480
1090
+ It's like changing a very interesting way.
1091
+
1092
+ 17:32.480 --> 17:36.280
1093
+ So, you know, back in '89, '90,
1094
+
1095
+ 17:36.280 --> 17:39.040
1096
+ like they were still on the original mission,
1097
+
1098
+ 17:39.040 --> 17:43.840
1099
+ which was like put a PC on every desk and in every home.
1100
+
1101
+ 17:43.840 --> 17:47.480
1102
+ Like, and it was basically about democratizing access
1103
+
1104
+ 17:47.480 --> 17:50.000
1105
+ to this new personal computing technology,
1106
+
1107
+ 17:50.000 --> 17:52.680
1108
+ which when Bill started the company,
1109
+
1110
+ 17:52.680 --> 17:57.680
1111
+ integrated circuit microprocessors were a brand new thing
1112
+
1113
+ 17:57.680 --> 18:00.120
1114
+ and like people were building, you know,
1115
+
1116
+ 18:00.120 --> 18:03.840
1117
+ homebrew computers, you know, from kits,
1118
+
1119
+ 18:03.840 --> 18:07.520
1120
+ like the way people build ham radios right now.
1121
+
1122
+ 18:08.520 --> 18:10.680
1123
+ And I think this is sort of the interesting thing
1124
+
1125
+ 18:10.680 --> 18:12.840
1126
+ for folks who build platforms in general.
1127
+
1128
+ 18:12.840 --> 18:16.840
1129
+ Bill saw the opportunity there
1130
+
1131
+ 18:16.840 --> 18:18.720
1132
+ and what personal computers could do.
1133
+
1134
+ 18:18.720 --> 18:20.440
1135
+ And it was like, it was sort of a reach.
1136
+
1137
+ 18:20.440 --> 18:21.680
1138
+ Like you just sort of imagined
1139
+
1140
+ 18:21.680 --> 18:23.880
1141
+ like where things were, you know,
1142
+
1143
+ 18:23.880 --> 18:24.880
1144
+ when they started the company
1145
+
1146
+ 18:24.880 --> 18:26.120
1147
+ versus where things are now.
1148
+
1149
+ 18:26.120 --> 18:29.400
1150
+ Like in success, when you democratize a platform,
1151
+
1152
+ 18:29.400 --> 18:31.000
1153
+ it just sort of vanishes into the platform.
1154
+
1155
+ 18:31.000 --> 18:32.480
1156
+ You don't pay attention to it anymore.
1157
+
1158
+ 18:32.480 --> 18:35.600
1159
+ Like operating systems aren't a thing anymore.
1160
+
1161
+ 18:35.600 --> 18:38.040
1162
+ Like they're super important, like completely critical.
1163
+
1164
+ 18:38.040 --> 18:41.760
1165
+ And like, you know, when you see one, you know, fail,
1166
+
1167
+ 18:41.760 --> 18:43.520
1168
+ like you just, you sort of understand,
1169
+
1170
+ 18:43.520 --> 18:45.320
1171
+ but like, you know, it's not a thing where you're,
1172
+
1173
+ 18:45.320 --> 18:47.920
1174
+ you're not like waiting for, you know,
1175
+
1176
+ 18:47.920 --> 18:50.480
1177
+ the next operating system thing
1178
+
1179
+ 18:50.480 --> 18:52.960
1180
+ in the same way that you were in 1995, right?
1181
+
1182
+ 18:52.960 --> 18:54.280
1183
+ Like in 1995, like, you know,
1184
+
1185
+ 18:54.280 --> 18:56.000
1186
+ we had Rolling Stones on the stage
1187
+
1188
+ 18:56.000 --> 18:57.600
1189
+ with the Windows 95 roll out.
1190
+
1191
+ 18:57.600 --> 18:59.320
1192
+ Like it was like the biggest thing in the world.
1193
+
1194
+ 18:59.320 --> 19:01.080
1195
+ Everybody like lined up for it
1196
+
1197
+ 19:01.080 --> 19:03.400
1198
+ the way that people used to line up for iPhone.
1199
+
1200
+ 19:03.400 --> 19:05.120
1201
+ But like, you know, eventually,
1202
+
1203
+ 19:05.120 --> 19:07.160
1204
+ and like this isn't necessarily a bad thing.
1205
+
1206
+ 19:07.160 --> 19:09.000
1207
+ Like it just sort of, you know,
1208
+
1209
+ 19:09.000 --> 19:12.880
1210
+ the success is that it's sort of, it becomes ubiquitous.
1211
+
1212
+ 19:12.880 --> 19:14.800
1213
+ It's like everywhere and like human beings
1214
+
1215
+ 19:14.800 --> 19:16.640
1216
+ when their technology becomes ubiquitous,
1217
+
1218
+ 19:16.640 --> 19:18.240
1219
+ they just sort of start taking it for granted.
1220
+
1221
+ 19:18.240 --> 19:23.240
1222
+ So the mission now that Satya rearticulated
1223
+
1224
+ 19:23.640 --> 19:25.280
1225
+ five plus years ago now
1226
+
1227
+ 19:25.280 --> 19:27.360
1228
+ when he took over as CEO of the company,
1229
+
1230
+ 19:29.320 --> 19:33.480
1231
+ our mission is to empower every individual
1232
+
1233
+ 19:33.480 --> 19:37.760
1234
+ and every organization in the world to be more successful.
1235
+
1236
+ 19:39.200 --> 19:43.160
1237
+ And so, you know, again, like that's a platform mission.
1238
+
1239
+ 19:43.160 --> 19:46.320
1240
+ And like the way that we do it now is different.
1241
+
1242
+ 19:46.320 --> 19:48.680
1243
+ It's like we have a hyperscale cloud
1244
+
1245
+ 19:48.680 --> 19:51.680
1246
+ that people are building their applications on top of.
1247
+
1248
+ 19:51.680 --> 19:53.680
1249
+ Like we have a bunch of AI infrastructure
1250
+
1251
+ 19:53.680 --> 19:56.280
1252
+ that people are building their AI applications on top of.
1253
+
1254
+ 19:56.280 --> 20:01.280
1255
+ We have, you know, we have a productivity suite of software
1256
+
1257
+ 20:02.280 --> 20:05.800
1258
+ like Microsoft Dynamics, which, you know,
1259
+
1260
+ 20:05.800 --> 20:07.440
1261
+ some people might not think is the sexiest thing
1262
+
1263
+ 20:07.440 --> 20:10.040
1264
+ in the world, but it's like helping people figure out
1265
+
1266
+ 20:10.040 --> 20:12.720
1267
+ how to automate all of their business processes
1268
+
1269
+ 20:12.720 --> 20:16.800
1270
+ and workflows and to, you know, like help those businesses
1271
+
1272
+ 20:16.800 --> 20:19.120
1273
+ using it to like grow and be more successful.
1274
+
1275
+ 20:19.120 --> 20:24.120
1276
+ So it's a much broader vision in a way now
1277
+
1278
+ 20:24.240 --> 20:25.480
1279
+ than it was back then.
1280
+
1281
+ 20:25.480 --> 20:27.400
1282
+ Like it was sort of very particular thing.
1283
+
1284
+ 20:27.400 --> 20:29.280
1285
+ And like now, like we live in this world
1286
+
1287
+ 20:29.280 --> 20:31.320
1288
+ where technology is so powerful
1289
+
1290
+ 20:31.320 --> 20:36.320
1291
+ and it's like such a basic fact of life
1292
+
1293
+ 20:36.320 --> 20:39.760
1294
+ that it, you know, that it both exists
1295
+
1296
+ 20:39.760 --> 20:42.760
1297
+ and is going to get better and better over time
1298
+
1299
+ 20:42.760 --> 20:46.000
1300
+ or at least more and more powerful over time.
1301
+
1302
+ 20:46.000 --> 20:48.200
1303
+ So like, you know, what you have to do as a platform player
1304
+
1305
+ 20:48.200 --> 20:49.920
1306
+ is just much bigger.
1307
+
1308
+ 20:49.920 --> 20:50.760
1309
+ Right.
1310
+
1311
+ 20:50.760 --> 20:52.600
1312
+ There's so many directions in which you can transform.
1313
+
1314
+ 20:52.600 --> 20:55.160
1315
+ You didn't mention mixed reality too.
1316
+
1317
+ 20:55.160 --> 20:59.200
1318
+ You know, that's probably early days
1319
+
1320
+ 20:59.200 --> 21:00.680
1321
+ or depends how you think of it.
1322
+
1323
+ 21:00.680 --> 21:02.240
1324
+ But if we think in a scale of centuries,
1325
+
1326
+ 21:02.240 --> 21:04.120
1327
+ it's the early days of mixed reality.
1328
+
1329
+ 21:04.120 --> 21:04.960
1330
+ Oh, for sure.
1331
+
1332
+ 21:04.960 --> 21:08.280
1333
+ And so yeah, with HoloLens,
1334
+
1335
+ 21:08.280 --> 21:10.600
1336
+ Microsoft is doing some really interesting work there.
1337
+
1338
+ 21:10.600 --> 21:13.560
1339
+ Do you touch that part of the effort?
1340
+
1341
+ 21:13.560 --> 21:14.840
1342
+ What's the thinking?
1343
+
1344
+ 21:14.840 --> 21:17.640
1345
+ Do you think of mixed reality as a platform too?
1346
+
1347
+ 21:17.640 --> 21:18.480
1348
+ Oh, sure.
1349
+
1350
+ 21:18.480 --> 21:21.320
1351
+ When we look at what the platforms of the future could be.
1352
+
1353
+ 21:21.320 --> 21:23.880
1354
+ So like fairly obvious that like AI is one,
1355
+
1356
+ 21:23.880 --> 21:26.600
1357
+ like you don't have to, I mean, like that's,
1358
+
1359
+ 21:26.600 --> 21:29.160
1360
+ you know, you sort of say it to like someone
1361
+
1362
+ 21:29.160 --> 21:31.920
1363
+ and you know, like they get it.
1364
+
1365
+ 21:31.920 --> 21:36.280
1366
+ But like we also think of the like mixed reality
1367
+
1368
+ 21:36.280 --> 21:39.560
1369
+ and quantum is like these two interesting,
1370
+
1371
+ 21:39.560 --> 21:40.920
1372
+ you know, potentially.
1373
+
1374
+ 21:40.920 --> 21:41.800
1375
+ Quantum computing.
1376
+
1377
+ 21:41.800 --> 21:42.640
1378
+ Yeah.
1379
+
1380
+ 21:42.640 --> 21:44.520
1381
+ Okay, so let's get crazy then.
1382
+
1383
+ 21:44.520 --> 21:48.920
1384
+ So you're talking about some futuristic things here.
1385
+
1386
+ 21:48.920 --> 21:50.920
1387
+ Well, the mixed reality Microsoft is really,
1388
+
1389
+ 21:50.920 --> 21:52.600
1390
+ it's not even futuristic, it's here.
1391
+
1392
+ 21:52.600 --> 21:53.440
1393
+ It is.
1394
+
1395
+ 21:53.440 --> 21:54.280
1396
+ Incredible stuff.
1397
+
1398
+ 21:54.280 --> 21:56.680
1399
+ And look, and it's having an impact right now.
1400
+
1401
+ 21:56.680 --> 21:58.720
1402
+ Like one of the more interesting things
1403
+
1404
+ 21:58.720 --> 22:01.280
1405
+ that's happened with mixed reality over the past
1406
+
1407
+ 22:01.280 --> 22:04.120
1408
+ couple of years that I didn't clearly see
1409
+
1410
+ 22:04.120 --> 22:08.400
1411
+ is that it's become the computing device
1412
+
1413
+ 22:08.400 --> 22:13.160
1414
+ for folks who, for doing their work
1415
+
1416
+ 22:13.160 --> 22:16.040
1417
+ who haven't used any computing device at all
1418
+
1419
+ 22:16.040 --> 22:16.960
1420
+ to do their work before.
1421
+
1422
+ 22:16.960 --> 22:19.800
1423
+ So technicians and service folks
1424
+
1425
+ 22:19.800 --> 22:24.200
1426
+ and people who are doing like machine maintenance
1427
+
1428
+ 22:24.200 --> 22:25.280
1429
+ on factory floors.
1430
+
1431
+ 22:25.280 --> 22:28.760
1432
+ So like they, you know, because they're mobile
1433
+
1434
+ 22:28.760 --> 22:30.280
1435
+ and like they're out in the world
1436
+
1437
+ 22:30.280 --> 22:32.320
1438
+ and they're working with their hands
1439
+
1440
+ 22:32.320 --> 22:34.080
1441
+ and, you know, sort of servicing these
1442
+
1443
+ 22:34.080 --> 22:36.520
1444
+ like very complicated things.
1445
+
1446
+ 22:36.520 --> 22:39.440
1447
+ They're, they don't use their mobile phone
1448
+
1449
+ 22:39.440 --> 22:41.440
1450
+ and like they don't carry a laptop with them.
1451
+
1452
+ 22:41.440 --> 22:43.480
1453
+ And, you know, they're not tethered to a desk.
1454
+
1455
+ 22:43.480 --> 22:46.920
1456
+ And so mixed reality, like where it's getting
1457
+
1458
+ 22:46.920 --> 22:48.840
1459
+ traction right now, where HoloLens is selling
1460
+
1461
+ 22:48.840 --> 22:53.840
1462
+ a lot of units is for these sorts of applications
1463
+
1464
+ 22:53.880 --> 22:55.440
1465
+ for these workers and it's become like,
1466
+
1467
+ 22:55.440 --> 22:58.040
1468
+ I mean, like the people love it.
1469
+
1470
+ 22:58.040 --> 23:00.600
1471
+ They're like, oh my God, like this is like,
1472
+
1473
+ 23:00.600 --> 23:02.840
1474
+ for them like the same sort of productivity boosts
1475
+
1476
+ 23:02.840 --> 23:05.520
1477
+ that, you know, like an office worker had
1478
+
1479
+ 23:05.520 --> 23:08.200
1480
+ when they got their first personal computer.
1481
+
1482
+ 23:08.200 --> 23:09.800
1483
+ Yeah, but you did mention,
1484
+
1485
+ 23:09.800 --> 23:13.400
1486
+ it's certainly obvious AI as a platform,
1487
+
1488
+ 23:13.400 --> 23:15.560
1489
+ but can we dig into it a little bit?
1490
+
1491
+ 23:15.560 --> 23:18.320
1492
+ How does AI begin to infuse some of the products
1493
+
1494
+ 23:18.320 --> 23:19.480
1495
+ in Microsoft?
1496
+
1497
+ 23:19.480 --> 23:24.480
1498
+ So currently providing training of, for example,
1499
+
1500
+ 23:25.040 --> 23:26.760
1501
+ neural networks in the cloud
1502
+
1503
+ 23:26.760 --> 23:30.960
1504
+ or providing pre-trained models
1505
+
1506
+ 23:30.960 --> 23:35.360
1507
+ or just even providing computing resources
1508
+
1509
+ 23:35.360 --> 23:37.520
1510
+ and whatever different inference
1511
+
1512
+ 23:37.520 --> 23:39.320
1513
+ that you want to do using neural networks.
1514
+
1515
+ 23:39.320 --> 23:40.160
1516
+ Yep.
1517
+
1518
+ 23:40.160 --> 23:43.560
1519
+ Well, how do you think of AI infusing the,
1520
+
1521
+ 23:43.560 --> 23:45.880
1522
+ as a platform that Microsoft can provide?
1523
+
1524
+ 23:45.880 --> 23:48.320
1525
+ Yeah, I mean, I think it's, it's super interesting.
1526
+
1527
+ 23:48.320 --> 23:49.560
1528
+ It's like everywhere.
1529
+
1530
+ 23:49.560 --> 23:54.560
1531
+ And like we run these, we run these review meetings now
1532
+
1533
+ 23:54.560 --> 23:59.560
1534
+ where it's me and Satya and like members of Satya's
1535
+
1536
+ 24:01.480 --> 24:04.600
1537
+ leadership team and like a cross functional group
1538
+
1539
+ 24:04.600 --> 24:06.200
1540
+ of folks across the entire company
1541
+
1542
+ 24:06.200 --> 24:11.200
1543
+ who are working on like either AI infrastructure
1544
+
1545
+ 24:11.840 --> 24:15.520
1546
+ or like have some substantial part of their,
1547
+
1548
+ 24:16.480 --> 24:21.480
1549
+ of their product work using AI in some significant way.
1550
+
1551
+ 24:21.480 --> 24:23.440
1552
+ Now, the important thing to understand is like,
1553
+
1554
+ 24:23.440 --> 24:27.040
1555
+ when you think about like how the AI is going to manifest
1556
+
1557
+ 24:27.040 --> 24:29.600
1558
+ in like an experience for something
1559
+
1560
+ 24:29.600 --> 24:30.760
1561
+ that's going to make it better,
1562
+
1563
+ 24:30.760 --> 24:35.760
1564
+ like I think you don't want the AI in this
1565
+
1566
+ 24:35.760 --> 24:37.760
1567
+ to be the first order thing.
1568
+
1569
+ 24:37.760 --> 24:40.600
1570
+ It's like whatever the product is and like the thing
1571
+
1572
+ 24:40.600 --> 24:42.440
1573
+ that is trying to help you do,
1574
+
1575
+ 24:42.440 --> 24:44.560
1576
+ like the AI just sort of makes it better.
1577
+
1578
+ 24:44.560 --> 24:46.840
1579
+ And you know, this is a gross exaggeration,
1580
+
1581
+ 24:46.840 --> 24:50.680
1582
+ but like I, yeah, people get super excited about it.
1583
+
1584
+ 24:50.680 --> 24:53.280
1585
+ They're super excited about like where the AI is showing up
1586
+
1587
+ 24:53.280 --> 24:55.440
1588
+ in products and I'm like, do you get that excited
1589
+
1590
+ 24:55.440 --> 24:59.880
1591
+ about like where you're using a hash table like in your code?
1592
+
1593
+ 24:59.880 --> 25:03.200
1594
+ Like it's just another, it's a very interesting
1595
+
1596
+ 25:03.200 --> 25:05.800
1597
+ programming tool, but it's sort of like it's an engineering
1598
+
1599
+ 25:05.800 --> 25:09.560
1600
+ tool and so like it shows up everywhere.
1601
+
1602
+ 25:09.560 --> 25:12.920
1603
+ So like we've got dozens and dozens of features now
1604
+
1605
+ 25:12.920 --> 25:17.400
1606
+ in Office that are powered by like fairly sophisticated
1607
+
1608
+ 25:17.400 --> 25:22.200
1609
+ machine learning, our search engine wouldn't work at all
1610
+
1611
+ 25:22.200 --> 25:24.840
1612
+ if you took the machine learning out of it.
1613
+
1614
+ 25:24.840 --> 25:28.560
1615
+ The like increasingly, you know,
1616
+
1617
+ 25:28.560 --> 25:33.560
1618
+ things like content moderation on our Xbox and xCloud
1619
+
1620
+ 25:34.800 --> 25:35.960
1621
+ platform.
1622
+
1623
+ 25:37.000 --> 25:39.160
1624
+ When you mean moderation to me, like the recommender
1625
+
1626
+ 25:39.160 --> 25:41.760
1627
+ is like showing what you want to look at next.
1628
+
1629
+ 25:41.760 --> 25:44.000
1630
+ No, no, no, it's like anti bullying stuff.
1631
+
1632
+ 25:44.000 --> 25:47.040
1633
+ So the usual social network stuff that you have to deal with.
1634
+
1635
+ 25:47.040 --> 25:47.880
1636
+ Yeah, correct.
1637
+
1638
+ 25:47.880 --> 25:50.080
1639
+ But it's like really it's targeted,
1640
+
1641
+ 25:50.080 --> 25:52.280
1642
+ it's targeted towards a gaming audience.
1643
+
1644
+ 25:52.280 --> 25:55.320
1645
+ So it's like a very particular type of thing where,
1646
+
1647
+ 25:55.320 --> 25:59.480
1648
+ you know, the line between playful banter
1649
+
1650
+ 25:59.480 --> 26:02.280
1651
+ and like legitimate bullying is like a subtle one.
1652
+
1653
+ 26:02.280 --> 26:06.080
1654
+ And like you have to, it's sort of tough.
1655
+
1656
+ 26:06.080 --> 26:09.080
1657
+ Like I have, I'd love to, if we could dig into it
1658
+
1659
+ 26:09.080 --> 26:11.720
1660
+ because you're also, you led the engineering efforts
1661
+
1662
+ 26:11.720 --> 26:14.920
1663
+ of LinkedIn and if we look at,
1664
+
1665
+ 26:14.920 --> 26:17.640
1666
+ if we look at LinkedIn as a social network
1667
+
1668
+ 26:17.640 --> 26:21.760
1669
+ and if we look at the Xbox gaming as the social components,
1670
+
1671
+ 26:21.760 --> 26:24.840
1672
+ the very different kinds of, I imagine communication
1673
+
1674
+ 26:24.840 --> 26:26.880
1675
+ going on on the two platforms, right?
1676
+
1677
+ 26:26.880 --> 26:29.520
1678
+ And the line in terms of bullying and so on
1679
+
1680
+ 26:29.520 --> 26:31.480
1681
+ is different on the two platforms.
1682
+
1683
+ 26:31.480 --> 26:33.480
1684
+ So how do you, I mean,
1685
+
1686
+ 26:33.480 --> 26:36.240
1687
+ such a fascinating philosophical discussion
1688
+
1689
+ 26:36.240 --> 26:37.240
1690
+ of where that line is.
1691
+
1692
+ 26:37.240 --> 26:39.840
1693
+ I don't think anyone knows the right answer.
1694
+
1695
+ 26:39.840 --> 26:42.040
1696
+ Twitter folks are under fire now,
1697
+
1698
+ 26:42.040 --> 26:45.120
1699
+ Jack at Twitter for trying to find that line.
1700
+
1701
+ 26:45.120 --> 26:46.920
1702
+ Nobody knows what that line is,
1703
+
1704
+ 26:46.920 --> 26:51.720
1705
+ but how do you try to find the line for,
1706
+
1707
+ 26:52.480 --> 26:57.480
1708
+ you know, trying to prevent abusive behavior
1709
+
1710
+ 26:58.040 --> 27:00.200
1711
+ and at the same time let people be playful
1712
+
1713
+ 27:00.200 --> 27:02.880
1714
+ and joke around and that kind of thing.
1715
+
1716
+ 27:02.880 --> 27:04.640
1717
+ I think in a certain way, like, you know,
1718
+
1719
+ 27:04.640 --> 27:09.640
1720
+ if you have what I would call vertical social networks,
1721
+
1722
+ 27:09.640 --> 27:12.200
1723
+ it gets to be a little bit easier.
1724
+
1725
+ 27:12.200 --> 27:14.440
1726
+ So like if you have a clear notion
1727
+
1728
+ 27:14.440 --> 27:17.960
1729
+ of like what your social network should be used for
1730
+
1731
+ 27:17.960 --> 27:22.280
1732
+ or like what you are designing a community around,
1733
+
1734
+ 27:22.280 --> 27:25.800
1735
+ then you don't have as many dimensions
1736
+
1737
+ 27:25.800 --> 27:28.960
1738
+ to your sort of content safety problem
1739
+
1740
+ 27:28.960 --> 27:33.720
1741
+ as, you know, as you do in a general purpose platform.
1742
+
1743
+ 27:33.720 --> 27:37.520
1744
+ I mean, so like on LinkedIn,
1745
+
1746
+ 27:37.520 --> 27:39.920
1747
+ like the whole social network is about
1748
+
1749
+ 27:39.920 --> 27:41.560
1750
+ connecting people with opportunity,
1751
+
1752
+ 27:41.560 --> 27:43.160
1753
+ whether it's helping them find a job
1754
+
1755
+ 27:43.160 --> 27:46.280
1756
+ or to, you know, sort of find mentors
1757
+
1758
+ 27:46.280 --> 27:49.320
1759
+ or to, you know, sort of help them
1760
+
1761
+ 27:49.320 --> 27:52.120
1762
+ like find their next sales lead
1763
+
1764
+ 27:52.120 --> 27:56.160
1765
+ or to just sort of allow them to broadcast
1766
+
1767
+ 27:56.160 --> 27:59.440
1768
+ their, you know, sort of professional identity
1769
+
1770
+ 27:59.440 --> 28:04.440
1771
+ to their network of peers and collaborators
1772
+
1773
+ 28:04.440 --> 28:05.880
1774
+ and, you know, sort of professional community.
1775
+
1776
+ 28:05.880 --> 28:07.400
1777
+ Like that is, I mean, like in some ways,
1778
+
1779
+ 28:07.400 --> 28:08.960
1780
+ like that's very, very broad,
1781
+
1782
+ 28:08.960 --> 28:12.480
1783
+ but in other ways, it's sort of, you know, it's narrow.
1784
+
1785
+ 28:12.480 --> 28:17.480
1786
+ And so like you can build AIs like machine learning systems
1787
+
1788
+ 28:18.360 --> 28:23.360
1789
+ that are, you know, capable with those boundaries
1790
+
1791
+ 28:23.360 --> 28:26.200
1792
+ of making better automated decisions about like,
1793
+
1794
+ 28:26.200 --> 28:28.240
1795
+ what is, you know, sort of inappropriate
1796
+
1797
+ 28:28.240 --> 28:30.440
1798
+ and offensive comment or dangerous comment
1799
+
1800
+ 28:30.440 --> 28:31.920
1801
+ or illegal content.
1802
+
1803
+ 28:31.920 --> 28:34.800
1804
+ When you have some constraints,
1805
+
1806
+ 28:34.800 --> 28:37.400
1807
+ you know, same thing with, you know,
1808
+
1809
+ 28:37.400 --> 28:40.880
1810
+ same thing with like the gaming social network.
1811
+
1812
+ 28:40.880 --> 28:42.680
1813
+ So for instance, like it's about playing games,
1814
+
1815
+ 28:42.680 --> 28:44.880
1816
+ about having fun and like the thing
1817
+
1818
+ 28:44.880 --> 28:47.240
1819
+ that you don't want to have happen on the platform.
1820
+
1821
+ 28:47.240 --> 28:49.160
1822
+ It's why bullying is such an important thing.
1823
+
1824
+ 28:49.160 --> 28:50.600
1825
+ Like bullying is not fun.
1826
+
1827
+ 28:50.600 --> 28:53.400
1828
+ So you want to do everything in your power
1829
+
1830
+ 28:53.400 --> 28:56.240
1831
+ to encourage that not to happen.
1832
+
1833
+ 28:56.240 --> 29:00.320
1834
+ And yeah, but I think that's a really important thing
1835
+
1836
+ 29:00.320 --> 29:03.920
1837
+ but I think it's sort of a tough problem in general.
1838
+
1839
+ 29:03.920 --> 29:05.280
1840
+ It's one where I think, you know,
1841
+
1842
+ 29:05.280 --> 29:07.120
1843
+ eventually we're gonna have to have
1844
+
1845
+ 29:09.120 --> 29:13.800
1846
+ some sort of clarification from our policy makers
1847
+
1848
+ 29:13.800 --> 29:17.400
1849
+ about what it is that we should be doing,
1850
+
1851
+ 29:17.400 --> 29:20.880
1852
+ like where the lines are, because it's tough.
1853
+
1854
+ 29:20.880 --> 29:23.760
1855
+ Like you don't, like in democracy, right?
1856
+
1857
+ 29:23.760 --> 29:26.680
1858
+ Like you don't want, you want some sort
1859
+
1860
+ 29:26.680 --> 29:28.880
1861
+ of democratic involvement.
1862
+
1863
+ 29:28.880 --> 29:30.440
1864
+ Like people should have a say
1865
+
1866
+ 29:30.440 --> 29:34.680
1867
+ in like where the lines are drawn.
1868
+
1869
+ 29:34.680 --> 29:36.920
1870
+ Like you don't want a bunch of people
1871
+
1872
+ 29:36.920 --> 29:39.480
1873
+ making like unilateral decisions.
1874
+
1875
+ 29:39.480 --> 29:43.120
1876
+ And like we are in a state right now
1877
+
1878
+ 29:43.120 --> 29:44.760
1879
+ for some of these platforms where you actually
1880
+
1881
+ 29:44.760 --> 29:46.280
1882
+ do have to make unilateral decisions
1883
+
1884
+ 29:46.280 --> 29:48.640
1885
+ where the policy making isn't gonna happen fast enough
1886
+
1887
+ 29:48.640 --> 29:52.520
1888
+ in order to like prevent very bad things from happening.
1889
+
1890
+ 29:52.520 --> 29:55.200
1891
+ But like we need the policy making side of that
1892
+
1893
+ 29:55.200 --> 29:58.480
1894
+ to catch up I think as quickly as possible
1895
+
1896
+ 29:58.480 --> 30:00.680
1897
+ because you want that whole process
1898
+
1899
+ 30:00.680 --> 30:02.000
1900
+ to be a democratic thing,
1901
+
1902
+ 30:02.000 --> 30:05.760
1903
+ not a, you know, not some sort of weird thing
1904
+
1905
+ 30:05.760 --> 30:08.040
1906
+ where you've got a non representative group
1907
+
1908
+ 30:08.040 --> 30:10.440
1909
+ of people making decisions that have, you know,
1910
+
1911
+ 30:10.440 --> 30:12.520
1912
+ like national and global impact.
1913
+
1914
+ 30:12.520 --> 30:14.720
1915
+ And it's fascinating because the digital space
1916
+
1917
+ 30:14.720 --> 30:17.520
1918
+ is different than the physical space
1919
+
1920
+ 30:17.520 --> 30:19.800
1921
+ in which nations and governments were established.
1922
+
1923
+ 30:19.800 --> 30:23.960
1924
+ And so what policy looks like globally,
1925
+
1926
+ 30:23.960 --> 30:25.760
1927
+ what bullying looks like globally,
1928
+
1929
+ 30:25.760 --> 30:28.360
1930
+ what healthy communication looks like globally
1931
+
1932
+ 30:28.360 --> 30:31.920
1933
+ is an open question and we're all figuring it out together.
1934
+
1935
+ 30:31.920 --> 30:32.760
1936
+ Which is fascinating.
1937
+
1938
+ 30:32.760 --> 30:37.160
1939
+ Yeah, I mean with, you know, sort of fake news for instance
1940
+
1941
+ 30:37.160 --> 30:42.160
1942
+ and deep fakes and fake news generated by humans.
1943
+
1944
+ 30:42.320 --> 30:44.600
1945
+ Yeah, so we can talk about deep fakes.
1946
+
1947
+ 30:44.600 --> 30:46.120
1948
+ Like I think that is another like, you know,
1949
+
1950
+ 30:46.120 --> 30:48.280
1951
+ sort of very interesting level of complexity.
1952
+
1953
+ 30:48.280 --> 30:51.480
1954
+ But like if you think about just the written word, right?
1955
+
1956
+ 30:51.480 --> 30:54.400
1957
+ Like we have, you know, we invented papyrus
1958
+
1959
+ 30:54.400 --> 30:56.760
1960
+ what 3000 years ago where we, you know,
1961
+
1962
+ 30:56.760 --> 31:01.160
1963
+ you could sort of put word on paper.
1964
+
1965
+ 31:01.160 --> 31:06.160
1966
+ And then 500 years ago, like we get the printing press
1967
+
1968
+ 31:07.240 --> 31:11.480
1969
+ like where the word gets a little bit more ubiquitous.
1970
+
1971
+ 31:11.480 --> 31:14.600
1972
+ And then like you really, really didn't get ubiquitous
1973
+
1974
+ 31:14.600 --> 31:18.400
1975
+ printed word until the end of the 19th century
1976
+
1977
+ 31:18.400 --> 31:20.720
1978
+ when the offset press was invented.
1979
+
1980
+ 31:20.720 --> 31:22.360
1981
+ And then, you know, just sort of explodes
1982
+
1983
+ 31:22.360 --> 31:25.360
1984
+ and like, you know, the cross product of that
1985
+
1986
+ 31:25.360 --> 31:28.960
1987
+ and the industrial revolution's need
1988
+
1989
+ 31:28.960 --> 31:32.880
1990
+ for educated citizens resulted in like
1991
+
1992
+ 31:32.880 --> 31:34.720
1993
+ this rapid expansion of literacy
1994
+
1995
+ 31:34.720 --> 31:36.000
1996
+ and the rapid expansion of the word.
1997
+
1998
+ 31:36.000 --> 31:39.680
1999
+ But like we had 3000 years up to that point
2000
+
2001
+ 31:39.680 --> 31:44.040
2002
+ to figure out like how to, you know, like what's,
2003
+
2004
+ 31:44.040 --> 31:46.880
2005
+ what's journalism, what's editorial integrity?
2006
+
2007
+ 31:46.880 --> 31:50.120
2008
+ Like what's, you know, what's scientific peer review?
2009
+
2010
+ 31:50.120 --> 31:52.840
2011
+ And so like you built all of this mechanism
2012
+
2013
+ 31:52.840 --> 31:57.080
2014
+ to like try to filter through all of the noise
2015
+
2016
+ 31:57.080 --> 32:00.600
2017
+ that the technology made possible to like, you know,
2018
+
2019
+ 32:00.600 --> 32:04.000
2020
+ sort of getting to something that society could cope with.
2021
+
2022
+ 32:04.000 --> 32:06.600
2023
+ And like, if you think about just the PC,
2024
+
2025
+ 32:06.600 --> 32:09.800
2026
+ the PC didn't exist 50 years ago.
2027
+
2028
+ 32:09.800 --> 32:11.800
2029
+ And so in like this span of, you know,
2030
+
2031
+ 32:11.800 --> 32:16.160
2032
+ like half a century, like we've gone from no digital,
2033
+
2034
+ 32:16.160 --> 32:18.320
2035
+ you know, no ubiquitous digital technology
2036
+
2037
+ 32:18.320 --> 32:21.080
2038
+ to like having a device that sits in your pocket
2039
+
2040
+ 32:21.080 --> 32:23.760
2041
+ where you can sort of say whatever is on your mind
2042
+
2043
+ 32:23.760 --> 32:26.800
2044
+ to like what would Mary have
2045
+
2046
+ 32:26.800 --> 32:31.800
2047
+ and Mary Meeker just released her new like slide deck last week.
2048
+
2049
+ 32:32.440 --> 32:37.360
2050
+ You know, we've got 50% penetration of the internet
2051
+
2052
+ 32:37.360 --> 32:38.520
2053
+ to the global population.
2054
+
2055
+ 32:38.520 --> 32:40.280
2056
+ Like there are like three and a half billion people
2057
+
2058
+ 32:40.280 --> 32:41.720
2059
+ who are connected now.
2060
+
2061
+ 32:41.720 --> 32:43.720
2062
+ So it's like, it's crazy, crazy.
2063
+
2064
+ 32:43.720 --> 32:45.000
2065
+ They're like inconceivable,
2066
+
2067
+ 32:45.000 --> 32:46.480
2068
+ like how fast all of this happened.
2069
+
2070
+ 32:46.480 --> 32:48.720
2071
+ So, you know, it's not surprising
2072
+
2073
+ 32:48.720 --> 32:51.000
2074
+ that we haven't figured out what to do yet,
2075
+
2076
+ 32:51.000 --> 32:55.640
2077
+ but like we gotta really like lean into this set of problems
2078
+
2079
+ 32:55.640 --> 33:00.200
2080
+ because like we basically have three millennia worth of work
2081
+
2082
+ 33:00.200 --> 33:02.520
2083
+ to do about how to deal with all of this
2084
+
2085
+ 33:02.520 --> 33:05.800
2086
+ and like probably what amounts to the next decade
2087
+
2088
+ 33:05.800 --> 33:07.040
2089
+ worth of time.
2090
+
2091
+ 33:07.040 --> 33:09.960
2092
+ So since we're on the topic of tough, you know,
2093
+
2094
+ 33:09.960 --> 33:11.600
2095
+ tough challenging problems,
2096
+
2097
+ 33:11.600 --> 33:15.200
2098
+ let's look at more on the tooling side in AI
2099
+
2100
+ 33:15.200 --> 33:18.440
2101
+ that Microsoft is looking at as face recognition software.
2102
+
2103
+ 33:18.440 --> 33:21.840
2104
+ So there's a lot of powerful positive use cases
2105
+
2106
+ 33:21.840 --> 33:24.240
2107
+ for face recognition, but there's some negative ones
2108
+
2109
+ 33:24.240 --> 33:27.200
2110
+ and we're seeing those in different governments
2111
+
2112
+ 33:27.200 --> 33:28.160
2113
+ in the world.
2114
+
2115
+ 33:28.160 --> 33:30.240
2116
+ So how do you, how does Microsoft think
2117
+
2118
+ 33:30.240 --> 33:33.880
2119
+ about the use of face recognition software
2120
+
2121
+ 33:33.880 --> 33:38.880
2122
+ as a platform in governments and companies?
2123
+
2124
+ 33:39.400 --> 33:42.280
2125
+ Yeah, how do we strike an ethical balance here?
2126
+
2127
+ 33:42.280 --> 33:47.280
2128
+ Yeah, I think we've articulated a clear point of view.
2129
+
2130
+ 33:47.280 --> 33:51.840
2131
+ So Brad Smith wrote a blog post last fall,
2132
+
2133
+ 33:51.840 --> 33:54.120
2134
+ I believe, that sort of like outlined,
2135
+
2136
+ 33:54.120 --> 33:57.000
2137
+ like very specifically what, you know,
2138
+
2139
+ 33:57.000 --> 33:59.280
2140
+ what our point of view is there.
2141
+
2142
+ 33:59.280 --> 34:02.240
2143
+ And, you know, I think we believe that there are certain uses
2144
+
2145
+ 34:02.240 --> 34:04.680
2146
+ to which face recognition should not be put
2147
+
2148
+ 34:04.680 --> 34:09.160
2149
+ and we believe again that there's a need for regulation there.
2150
+
2151
+ 34:09.160 --> 34:12.440
2152
+ Like the government should like really come in and say
2153
+
2154
+ 34:12.440 --> 34:15.720
2155
+ that, you know, this is where the lines are.
2156
+
2157
+ 34:15.720 --> 34:18.600
2158
+ And like we very much wanted to like figuring out
2159
+
2160
+ 34:18.600 --> 34:20.680
2161
+ where the lines are should be a democratic process.
2162
+
2163
+ 34:20.680 --> 34:23.240
2164
+ But in the short term, like we've drawn some lines
2165
+
2166
+ 34:23.240 --> 34:26.640
2167
+ where, you know, we push back against uses
2168
+
2169
+ 34:26.640 --> 34:29.440
2170
+ of face recognition technology.
2171
+
2172
+ 34:29.440 --> 34:32.480
2173
+ You know, like this city of San Francisco, for instance,
2174
+
2175
+ 34:32.480 --> 34:36.480
2176
+ I think has completely outlawed any government agency
2177
+
2178
+ 34:36.480 --> 34:39.560
2179
+ from using face recognition tech.
2180
+
2181
+ 34:39.560 --> 34:44.560
2182
+ And like that may prove to be a little bit overly broad.
2183
+
2184
+ 34:44.560 --> 34:48.840
2185
+ But for like certain law enforcement things,
2186
+
2187
+ 34:48.840 --> 34:53.840
2188
+ like you really, I would personally rather be overly
2189
+
2190
+ 34:54.040 --> 34:57.400
2191
+ sort of cautious in terms of restricting use of it
2192
+
2193
+ 34:57.400 --> 34:58.920
2194
+ until like we have, you know,
2195
+
2196
+ 34:58.920 --> 35:02.160
2197
+ sort of defined a reasonable, you know,
2198
+
2199
+ 35:02.160 --> 35:04.880
2200
+ democratically determined regulatory framework
2201
+
2202
+ 35:04.880 --> 35:08.840
2203
+ for like where we could and should use it.
2204
+
2205
+ 35:08.840 --> 35:10.880
2206
+ And, you know, the other thing there is
2207
+
2208
+ 35:11.960 --> 35:14.000
2209
+ like we've got a bunch of research that we're doing
2210
+
2211
+ 35:14.000 --> 35:18.400
2212
+ and a bunch of progress that we've made on bias there.
2213
+
2214
+ 35:18.400 --> 35:20.880
2215
+ And like there are all sorts of like weird biases
2216
+
2217
+ 35:20.880 --> 35:23.640
2218
+ that these models can have like all the way
2219
+
2220
+ 35:23.640 --> 35:26.920
2221
+ from like the most noteworthy one where, you know,
2222
+
2223
+ 35:26.920 --> 35:31.680
2224
+ you may have underrepresented minorities
2225
+
2226
+ 35:31.680 --> 35:34.680
2227
+ who are like underrepresented in the training data.
2228
+
2229
+ 35:34.680 --> 35:39.240
2230
+ And then you start learning like strange things.
2231
+
2232
+ 35:39.240 --> 35:42.160
2233
+ But like there are even, you know, other weird things
2234
+
2235
+ 35:42.160 --> 35:46.480
2236
+ like we've, I think we've seen in the public research
2237
+
2238
+ 35:46.480 --> 35:49.520
2239
+ like models can learn strange things
2240
+
2241
+ 35:49.520 --> 35:54.520
2242
+ like all doctors are men for instance.
2243
+
2244
+ 35:54.520 --> 35:59.520
2245
+ Yeah, I mean, and so like it really is a thing where
2246
+
2247
+ 36:00.760 --> 36:03.600
2248
+ it's very important for everybody
2249
+
2250
+ 36:03.600 --> 36:08.440
2251
+ who is working on these things before they push publish,
2252
+
2253
+ 36:08.440 --> 36:12.800
2254
+ they launch the experiment, they, you know, push the code
2255
+
2256
+ 36:12.800 --> 36:17.120
2257
+ to, you know, online or they even publish the paper
2258
+
2259
+ 36:17.120 --> 36:20.040
2260
+ that they are at least starting to think
2261
+
2262
+ 36:20.040 --> 36:25.040
2263
+ about what some of the potential negative consequences
2264
+
2265
+ 36:25.040 --> 36:25.880
2266
+ are of some of this stuff.
2267
+
2268
+ 36:25.880 --> 36:29.040
2269
+ I mean, this is where, you know, like the deep fake stuff
2270
+
2271
+ 36:29.040 --> 36:32.360
2272
+ I find very worrisome just because
2273
+
2274
+ 36:32.360 --> 36:37.360
2275
+ there are going to be some very good beneficial uses
2276
+
2277
+ 36:39.800 --> 36:44.800
2278
+ of like GAN generated imagery.
2279
+
2280
+ 36:46.080 --> 36:48.440
2281
+ And like, and funny enough, like one of the places
2282
+
2283
+ 36:48.440 --> 36:52.920
2284
+ where it's actually useful is we're using the technology
2285
+
2286
+ 36:52.920 --> 36:57.920
2287
+ right now to generate synthetic, synthetic visual data
2288
+
2289
+ 36:58.640 --> 37:01.160
2290
+ for training some of the face recognition models
2291
+
2292
+ 37:01.160 --> 37:03.440
2293
+ to get rid of the bias.
2294
+
2295
+ 37:03.440 --> 37:05.800
2296
+ So like that's one like super good use of the tech,
2297
+
2298
+ 37:05.800 --> 37:09.640
2299
+ but like, you know, it's getting good enough now
2300
+
2301
+ 37:09.640 --> 37:12.320
2302
+ where, you know, it's going to sort of challenge
2303
+
2304
+ 37:12.320 --> 37:15.400
2305
+ a normal human being's ability to like now you just sort
2306
+
2307
+ 37:15.400 --> 37:19.320
2308
+ of say like it's very expensive for someone
2309
+
2310
+ 37:19.320 --> 37:23.280
2311
+ to fabricate a photorealistic fake video.
2312
+
2313
+ 37:24.200 --> 37:26.920
2314
+ And like GANs are going to make it fantastically cheap
2315
+
2316
+ 37:26.920 --> 37:30.440
2317
+ to fabricate a photorealistic fake video.
2318
+
2319
+ 37:30.440 --> 37:33.920
2320
+ And so like what you assume you can sort of trust
2321
+
2322
+ 37:33.920 --> 37:38.400
2323
+ is true versus like be skeptical about is about to change.
2324
+
2325
+ 37:38.400 --> 37:40.560
2326
+ And like we're not ready for it, I don't think.
2327
+
2328
+ 37:40.560 --> 37:42.000
2329
+ The nature of truth, right?
2330
+
2331
+ 37:42.000 --> 37:46.360
2332
+ That's, it's also exciting because I think both you
2333
+
2334
+ 37:46.360 --> 37:49.600
2335
+ and I probably would agree that the way to solve,
2336
+
2337
+ 37:49.600 --> 37:52.080
2338
+ to take on that challenge is with technology.
2339
+
2340
+ 37:52.080 --> 37:52.920
2341
+ Yeah. Right.
2342
+
2343
+ 37:52.920 --> 37:56.800
2344
+ There's probably going to be ideas of ways to verify
2345
+
2346
+ 37:56.800 --> 38:00.800
2347
+ which kind of video is legitimate, which kind is not.
2348
+
2349
+ 38:00.800 --> 38:03.880
2350
+ So to me, that's an exciting possibility.
2351
+
2352
+ 38:03.880 --> 38:07.160
2353
+ Most likely for just the comedic genius
2354
+
2355
+ 38:07.160 --> 38:10.960
2356
+ that the internet usually creates with these kinds of videos.
2357
+
2358
+ 38:10.960 --> 38:13.960
2359
+ And hopefully will not result in any serious harm.
2360
+
2361
+ 38:13.960 --> 38:17.680
2362
+ Yeah. And it could be, you know, like I think
2363
+
2364
+ 38:17.680 --> 38:22.680
2365
+ we will have technology that may be able to detect
2366
+
2367
+ 38:23.040 --> 38:24.440
2368
+ whether or not something's fake or real.
2369
+
2370
+ 38:24.440 --> 38:29.440
2371
+ Although the fakes are pretty convincing
2372
+
2373
+ 38:30.160 --> 38:34.360
2374
+ even like when you subject them to machine scrutiny.
2375
+
2376
+ 38:34.360 --> 38:37.800
2377
+ But, you know, we also have these increasingly
2378
+
2379
+ 38:37.800 --> 38:40.520
2380
+ interesting social networks, you know,
2381
+
2382
+ 38:40.520 --> 38:45.520
2383
+ that are under fire right now for some of the bad things
2384
+
2385
+ 38:45.800 --> 38:46.640
2386
+ that they do.
2387
+
2388
+ 38:46.640 --> 38:47.720
2389
+ Like one of the things you could choose to do
2390
+
2391
+ 38:47.720 --> 38:51.760
2392
+ with a social network is like you could,
2393
+
2394
+ 38:51.760 --> 38:55.560
2395
+ you could use crypto and the networks
2396
+
2397
+ 38:55.560 --> 38:59.960
2398
+ to like have content signed where you could have a like
2399
+
2400
+ 38:59.960 --> 39:02.160
2401
+ full chain of custody that accompanied
2402
+
2403
+ 39:02.160 --> 39:03.920
2404
+ every piece of content.
2405
+
2406
+ 39:03.920 --> 39:06.800
2407
+ So like when you're viewing something
2408
+
2409
+ 39:06.800 --> 39:09.640
2410
+ and like you want to ask yourself like how, you know,
2411
+
2412
+ 39:09.640 --> 39:11.040
2413
+ how much can I trust this?
2414
+
2415
+ 39:11.040 --> 39:12.400
2416
+ Like you can click something
2417
+
2418
+ 39:12.400 --> 39:15.640
2419
+ and like have a verified chain of custody that shows like,
2420
+
2421
+ 39:15.640 --> 39:19.040
2422
+ oh, this is coming from, you know, from this source.
2423
+
2424
+ 39:19.040 --> 39:24.040
2425
+ And it's like signed by like someone whose identity I trust.
2426
+
2427
+ 39:24.080 --> 39:25.400
2428
+ Yeah, I think having that, you know,
2429
+
2430
+ 39:25.400 --> 39:28.040
2431
+ having that chain of custody like being able to like say,
2432
+
2433
+ 39:28.040 --> 39:31.200
2434
+ oh, here's this video, like it may or may not
2435
+
2436
+ 39:31.200 --> 39:33.760
2437
+ have been produced using some of this deep fake technology.
2438
+
2439
+ 39:33.760 --> 39:35.640
2440
+ But if you've got a verified chain of custody
2441
+
2442
+ 39:35.640 --> 39:37.800
2443
+ where you can sort of trace it all the way back
2444
+
2445
+ 39:37.800 --> 39:39.960
2446
+ to an identity and you can decide whether or not
2447
+
2448
+ 39:39.960 --> 39:41.520
2449
+ like I trust this identity.
2450
+
2451
+ 39:41.520 --> 39:43.360
2452
+ Like, oh no, this is really from the White House
2453
+
2454
+ 39:43.360 --> 39:45.480
2455
+ or like this is really from the, you know,
2456
+
2457
+ 39:45.480 --> 39:48.840
2458
+ the office of this particular presidential candidate
2459
+
2460
+ 39:48.840 --> 39:50.960
2461
+ or it's really from, you know,
2462
+
2463
+ 39:50.960 --> 39:55.520
2464
+ Jeff Weiner, CEO of LinkedIn, or Satya Nadella, CEO of Microsoft.
2465
+
2466
+ 39:55.520 --> 39:58.400
2467
+ Like that might be like one way
2468
+
2469
+ 39:58.400 --> 39:59.960
2470
+ that you can solve some of the problems.
2471
+
2472
+ 39:59.960 --> 40:01.800
2473
+ So like that's not the super high tech.
2474
+
2475
+ 40:01.800 --> 40:04.480
2476
+ Like we've had all of this technology forever.
2477
+
2478
+ 40:04.480 --> 40:06.720
2479
+ And but I think you're right.
2480
+
2481
+ 40:06.720 --> 40:11.120
2482
+ Like it has to be some sort of technological thing
2483
+
2484
+ 40:11.120 --> 40:15.840
2485
+ because the underlying tech that is used to create this
2486
+
2487
+ 40:15.840 --> 40:18.800
2488
+ is not going to do anything but get better over time
2489
+
2490
+ 40:18.800 --> 40:21.160
2491
+ and the genie is sort of out of the bottle.
2492
+
2493
+ 40:21.160 --> 40:22.800
2494
+ There's no stuffing it back in.
2495
+
2496
+ 40:22.800 --> 40:24.520
2497
+ And there's a social component
2498
+
2499
+ 40:24.520 --> 40:26.600
2500
+ which I think is really healthy for democracy
2501
+
2502
+ 40:26.600 --> 40:30.200
2503
+ where people will be skeptical about the thing they watch.
2504
+
2505
+ 40:30.200 --> 40:31.040
2506
+ Yeah.
2507
+
2508
+ 40:31.040 --> 40:34.160
2509
+ In general, so, you know, which is good.
2510
+
2511
+ 40:34.160 --> 40:37.280
2512
+ Skepticism in general is good for your personal content.
2513
+
2514
+ 40:37.280 --> 40:40.400
2515
+ So deep fakes in that sense are creating
2516
+
2517
+ 40:40.400 --> 40:44.800
2518
+ global skepticism about can they trust what they read?
2519
+
2520
+ 40:44.800 --> 40:46.880
2521
+ It encourages further research.
2522
+
2523
+ 40:46.880 --> 40:48.840
2524
+ I come from the Soviet Union
2525
+
2526
+ 40:49.800 --> 40:53.320
2527
+ where basically nobody trusted the media
2528
+
2529
+ 40:53.320 --> 40:55.120
2530
+ because you knew it was propaganda.
2531
+
2532
+ 40:55.120 --> 40:59.160
2533
+ And that kind of skepticism encouraged further research
2534
+
2535
+ 40:59.160 --> 41:02.360
2536
+ about ideas as opposed to just trusting anyone's source.
2537
+
2538
+ 41:02.360 --> 41:05.440
2539
+ Well, like I think it's one of the reasons why the,
2540
+
2541
+ 41:05.440 --> 41:09.440
2542
+ you know, the scientific method and our apparatus
2543
+
2544
+ 41:09.440 --> 41:11.480
2545
+ of modern science is so good.
2546
+
2547
+ 41:11.480 --> 41:15.360
2548
+ Like because you don't have to trust anything.
2549
+
2550
+ 41:15.360 --> 41:18.520
2551
+ Like you, like the whole notion of, you know,
2552
+
2553
+ 41:18.520 --> 41:21.320
2554
+ like modern science beyond the fact that, you know,
2555
+
2556
+ 41:21.320 --> 41:23.440
2557
+ this is a hypothesis and this is an experiment
2558
+
2559
+ 41:23.440 --> 41:24.840
2560
+ to test the hypothesis.
2561
+
2562
+ 41:24.840 --> 41:27.360
2563
+ And, you know, like this is a peer review process
2564
+
2565
+ 41:27.360 --> 41:30.080
2566
+ for scrutinizing published results.
2567
+
2568
+ 41:30.080 --> 41:33.280
2569
+ But like stuff's also supposed to be reproducible.
2570
+
2571
+ 41:33.280 --> 41:35.240
2572
+ So like, you know, it's been vetted by this process,
2573
+
2574
+ 41:35.240 --> 41:38.000
2575
+ but like you also are expected to publish enough detail
2576
+
2577
+ 41:38.000 --> 41:41.480
2578
+ where, you know, if you are sufficiently skeptical
2579
+
2580
+ 41:41.480 --> 41:44.720
2581
+ of the thing, you can go try to like reproduce it yourself.
2582
+
2583
+ 41:44.720 --> 41:47.560
2584
+ And like, I don't know what it is.
2585
+
2586
+ 41:47.560 --> 41:49.920
2587
+ Like, I think a lot of engineers are like this
2588
+
2589
+ 41:49.920 --> 41:52.600
2590
+ where like, you know, sort of this, like your brain
2591
+
2592
+ 41:52.600 --> 41:55.520
2593
+ is sort of wired for skepticism.
2594
+
2595
+ 41:55.520 --> 41:58.000
2596
+ Like you don't just first order trust everything
2597
+
2598
+ 41:58.000 --> 42:00.040
2599
+ that you see and encounter.
2600
+
2601
+ 42:00.040 --> 42:02.560
2602
+ And like you're sort of curious to understand,
2603
+
2604
+ 42:02.560 --> 42:04.480
2605
+ you know, the next thing.
2606
+
2607
+ 42:04.480 --> 42:09.080
2608
+ But like, I think it's an entirely healthy thing.
2609
+
2610
+ 42:09.080 --> 42:12.280
2611
+ And like we need a little bit more of that right now.
2612
+
2613
+ 42:12.280 --> 42:16.200
2614
+ So I'm not a large business owner.
2615
+
2616
+ 42:16.200 --> 42:23.200
2617
+ So I'm just, I'm just a huge fan of many of Microsoft products.
2618
+
2619
+ 42:23.200 --> 42:25.360
2620
+ I mean, I still, actually in terms of,
2621
+
2622
+ 42:25.360 --> 42:27.000
2623
+ I generate a lot of graphics and images
2624
+
2625
+ 42:27.000 --> 42:28.640
2626
+ and I still use PowerPoint to do that.
2627
+
2628
+ 42:28.640 --> 42:30.440
2629
+ It beats Illustrator for me.
2630
+
2631
+ 42:30.440 --> 42:34.480
2632
+ Even professional sort of, it's fascinating.
2633
+
2634
+ 42:34.480 --> 42:39.560
2635
+ So I wonder what is the future of, let's say,
2636
+
2637
+ 42:39.560 --> 42:41.920
2638
+ Windows and Office look like?
2639
+
2640
+ 42:41.920 --> 42:43.840
2641
+ Is do you see it?
2642
+
2643
+ 42:43.840 --> 42:45.880
2644
+ I mean, I remember looking forward to XP.
2645
+
2646
+ 42:45.880 --> 42:48.200
2647
+ Was it exciting when XP was released?
2648
+
2649
+ 42:48.200 --> 42:51.080
2650
+ Just like you said, I don't remember when 95 was released.
2651
+
2652
+ 42:51.080 --> 42:53.800
2653
+ But XP for me was a big celebration.
2654
+
2655
+ 42:53.800 --> 42:56.000
2656
+ And when 10 came out, I was like,
2657
+
2658
+ 42:56.000 --> 42:58.040
2659
+ okay, well, it's nice, it's a nice improvement.
2660
+
2661
+ 42:58.040 --> 43:02.600
2662
+ But so what do you see the future of these products?
2663
+
2664
+ 43:02.600 --> 43:04.640
2665
+ You know, I think there's a bunch of excitement.
2666
+
2667
+ 43:04.640 --> 43:07.160
2668
+ I mean, on the office front,
2669
+
2670
+ 43:07.160 --> 43:13.440
2671
+ there's going to be this like increasing productivity
2672
+
2673
+ 43:13.440 --> 43:17.080
2674
+ wins that are coming out of some of these AI powered features
2675
+
2676
+ 43:17.080 --> 43:19.000
2677
+ that are coming, like the products will sort of get
2678
+
2679
+ 43:19.000 --> 43:21.120
2680
+ smarter and smarter in like a very subtle way.
2681
+
2682
+ 43:21.120 --> 43:24.120
2683
+ Like there's not going to be this big bang moment
2684
+
2685
+ 43:24.120 --> 43:27.080
2686
+ where, you know, like Clippy is going to reemerge
2687
+
2688
+ 43:27.080 --> 43:27.960
2689
+ and it's going to be...
2690
+
2691
+ 43:27.960 --> 43:28.680
2692
+ Wait a minute.
2693
+
2694
+ 43:28.680 --> 43:30.520
2695
+ Okay, well, I have to wait, wait, wait.
2696
+
2697
+ 43:30.520 --> 43:31.960
2698
+ Is Clippy coming back?
2699
+
2700
+ 43:31.960 --> 43:34.560
2701
+ Well, quite seriously.
2702
+
2703
+ 43:34.560 --> 43:37.920
2704
+ So injection of AI, there's not much,
2705
+
2706
+ 43:37.920 --> 43:39.040
2707
+ or at least I'm not familiar,
2708
+
2709
+ 43:39.040 --> 43:41.200
2710
+ sort of assistive type of stuff going on
2711
+
2712
+ 43:41.200 --> 43:43.600
2713
+ inside the office products,
2714
+
2715
+ 43:43.600 --> 43:47.600
2716
+ like a Clippy style assistant, personal assistant.
2717
+
2718
+ 43:47.600 --> 43:50.560
2719
+ Do you think that there's a possibility
2720
+
2721
+ 43:50.560 --> 43:52.000
2722
+ of that in the future?
2723
+
2724
+ 43:52.000 --> 43:54.680
2725
+ So I think there are a bunch of like very small ways
2726
+
2727
+ 43:54.680 --> 43:57.320
2728
+ in which like machine learning power
2729
+
2730
+ 43:57.320 --> 44:00.080
2731
+ and assistive things are in the product right now.
2732
+
2733
+ 44:00.080 --> 44:04.800
2734
+ So there are a bunch of interesting things,
2735
+
2736
+ 44:04.800 --> 44:09.280
2737
+ like the auto response stuff's getting better and better
2738
+
2739
+ 44:09.280 --> 44:12.160
2740
+ and it's like getting to the point where, you know,
2741
+
2742
+ 44:12.160 --> 44:14.960
2743
+ it can auto respond with like, okay,
2744
+
2745
+ 44:14.960 --> 44:19.080
2746
+ like this person is clearly trying to schedule a meeting
2747
+
2748
+ 44:19.080 --> 44:21.520
2749
+ so it looks at your calendar and it automatically
2750
+
2751
+ 44:21.520 --> 44:24.080
2752
+ like tries to find like a time and a space
2753
+
2754
+ 44:24.080 --> 44:26.240
2755
+ that's mutually interesting.
2756
+
2757
+ 44:26.240 --> 44:31.240
2758
+ Like we have this notion of Microsoft search
2759
+
2760
+ 44:33.520 --> 44:34.960
2761
+ where it's like not just web search,
2762
+
2763
+ 44:34.960 --> 44:38.200
2764
+ but it's like search across like all of your information
2765
+
2766
+ 44:38.200 --> 44:43.200
2767
+ that's sitting inside of like your Office 365 tenant
2768
+
2769
+ 44:43.320 --> 44:46.880
2770
+ and like, you know, potentially in other products.
2771
+
2772
+ 44:46.880 --> 44:49.680
2773
+ And like we have this thing called the Microsoft Graph
2774
+
2775
+ 44:49.680 --> 44:53.400
2776
+ that is basically a API federator that, you know,
2777
+
2778
+ 44:53.400 --> 44:57.960
2779
+ sort of like gets you hooked up across the entire breadth
2780
+
2781
+ 44:57.960 --> 44:59.760
2782
+ of like all of the, you know,
2783
+
2784
+ 44:59.760 --> 45:01.640
2785
+ like what were information silos
2786
+
2787
+ 45:01.640 --> 45:04.720
2788
+ before they got woven together with the graph.
2789
+
2790
+ 45:05.680 --> 45:07.880
2791
+ Like that is like getting increasing
2792
+
2793
+ 45:07.880 --> 45:09.160
2794
+ with increasing effectiveness,
2795
+
2796
+ 45:09.160 --> 45:11.280
2797
+ sort of plumbed into the,
2798
+
2799
+ 45:11.280 --> 45:13.120
2800
+ into some of these auto response things
2801
+
2802
+ 45:13.120 --> 45:15.840
2803
+ where you're going to be able to see the system
2804
+
2805
+ 45:15.840 --> 45:18.200
2806
+ like automatically retrieve information for you.
2807
+
2808
+ 45:18.200 --> 45:21.160
2809
+ Like if, you know, like I frequently send out,
2810
+
2811
+ 45:21.160 --> 45:24.080
2812
+ you know, emails to folks where like I can't find a paper
2813
+
2814
+ 45:24.080 --> 45:25.400
2815
+ or a document or whatnot.
2816
+
2817
+ 45:25.400 --> 45:26.840
2818
+ There's no reason why the system won't be able
2819
+
2820
+ 45:26.840 --> 45:27.680
2821
+ to do that for you.
2822
+
2823
+ 45:27.680 --> 45:29.560
2824
+ And like, I think the,
2825
+
2826
+ 45:29.560 --> 45:33.640
2827
+ it's building towards like having things that look more
2828
+
2829
+ 45:33.640 --> 45:37.880
2830
+ like like a fully integrated, you know, assistant,
2831
+
2832
+ 45:37.880 --> 45:40.720
2833
+ but like you'll have a bunch of steps
2834
+
2835
+ 45:40.720 --> 45:42.800
2836
+ that you will see before you,
2837
+
2838
+ 45:42.800 --> 45:45.120
2839
+ like it will not be this like big bang thing
2840
+
2841
+ 45:45.120 --> 45:47.400
2842
+ where like Clippy comes back and you've got this like,
2843
+
2844
+ 45:47.400 --> 45:49.360
2845
+ you know, manifestation of, you know,
2846
+
2847
+ 45:49.360 --> 45:52.000
2848
+ like a fully, fully powered assistant.
2849
+
2850
+ 45:53.320 --> 45:56.920
2851
+ So I think that's, that's definitely coming out.
2852
+
2853
+ 45:56.920 --> 45:58.680
2854
+ Like all of the, you know, collaboration,
2855
+
2856
+ 45:58.680 --> 46:00.720
2857
+ co authoring stuff's getting better.
2858
+
2859
+ 46:00.720 --> 46:02.200
2860
+ You know, it's like really interesting.
2861
+
2862
+ 46:02.200 --> 46:07.200
2863
+ Like if you look at how we use the office product portfolio
2864
+
2865
+ 46:08.320 --> 46:10.840
2866
+ at Microsoft, like more and more of it is happening
2867
+
2868
+ 46:10.840 --> 46:14.480
2869
+ inside of like teams as a canvas.
2870
+
2871
+ 46:14.480 --> 46:17.160
2872
+ And like it's this thing where, you know,
2873
+
2874
+ 46:17.160 --> 46:19.840
2875
+ that you've got collaboration is like
2876
+
2877
+ 46:19.840 --> 46:21.560
2878
+ at the center of the product.
2879
+
2880
+ 46:21.560 --> 46:26.560
2881
+ And like we, we, we built some like really cool stuff
2882
+
2883
+ 46:26.720 --> 46:29.440
2884
+ that's some of, which is about to be open source
2885
+
2886
+ 46:29.440 --> 46:33.120
2887
+ that are sort of framework level things for doing,
2888
+
2889
+ 46:33.120 --> 46:35.600
2890
+ for doing co authoring.
2891
+
2892
+ 46:35.600 --> 46:36.440
2893
+ That's awesome.
2894
+
2895
+ 46:36.440 --> 46:38.920
2896
+ So in, is there a cloud component to that?
2897
+
2898
+ 46:38.920 --> 46:41.880
2899
+ So on the web or is it,
2900
+
2901
+ 46:41.880 --> 46:43.640
2902
+ forgive me if I don't already know this,
2903
+
2904
+ 46:43.640 --> 46:45.600
2905
+ but with office 365,
2906
+
2907
+ 46:45.600 --> 46:48.480
2908
+ we still, the collaboration we do, if we're doing Word,
2909
+
2910
+ 46:48.480 --> 46:50.640
2911
+ we're still sending the file around.
2912
+
2913
+ 46:50.640 --> 46:51.480
2914
+ No, no, no, no.
2915
+
2916
+ 46:51.480 --> 46:53.400
2917
+ So this is,
2918
+
2919
+ 46:53.400 --> 46:55.240
2920
+ we're already a little bit better than that.
2921
+
2922
+ 46:55.240 --> 46:57.360
2923
+ And like, you know, so like the fact that you're unaware
2924
+
2925
+ 46:57.360 --> 46:59.120
2926
+ of it means we've got a better job to do,
2927
+
2928
+ 46:59.120 --> 47:01.960
2929
+ like helping you discover, discover this stuff.
2930
+
2931
+ 47:02.880 --> 47:06.360
2932
+ But yeah, I mean, it's already like got a huge,
2933
+
2934
+ 47:06.360 --> 47:07.200
2935
+ huge cloud component.
2936
+
2937
+ 47:07.200 --> 47:09.680
2938
+ And like part of, you know, part of this framework stuff,
2939
+
2940
+ 47:09.680 --> 47:12.640
2941
+ I think we're calling it, like I,
2942
+
2943
+ 47:12.640 --> 47:14.520
2944
+ like we've been working on it for a couple of years.
2945
+
2946
+ 47:14.520 --> 47:17.200
2947
+ So like, I know the, the internal OLA code name for it,
2948
+
2949
+ 47:17.200 --> 47:18.640
2950
+ but I think when we launched it at Build,
2951
+
2952
+ 47:18.640 --> 47:20.720
2953
+ it's called the Fluid Framework.
2954
+
2955
+ 47:21.920 --> 47:25.080
2956
+ And, but like what fluid lets you do is like,
2957
+
2958
+ 47:25.080 --> 47:27.920
2959
+ you can go into a conversation that you're having in teams
2960
+
2961
+ 47:27.920 --> 47:30.280
2962
+ and like reference, like part of a spreadsheet
2963
+
2964
+ 47:30.280 --> 47:32.600
2965
+ that you're working on,
2966
+
2967
+ 47:32.600 --> 47:35.600
2968
+ where somebody's like sitting in the Excel canvas,
2969
+
2970
+ 47:35.600 --> 47:37.760
2971
+ like working on the spreadsheet with a, you know,
2972
+
2973
+ 47:37.760 --> 47:39.120
2974
+ chart or whatnot.
2975
+
2976
+ 47:39.120 --> 47:42.000
2977
+ And like, you can sort of embed like part of the spreadsheet
2978
+
2979
+ 47:42.000 --> 47:43.240
2980
+ in the team's conversation,
2981
+
2982
+ 47:43.240 --> 47:46.520
2983
+ where like you can dynamically update in like all
2984
+
2985
+ 47:46.520 --> 47:49.400
2986
+ of the changes that you're making to the,
2987
+
2988
+ 47:49.400 --> 47:51.280
2989
+ to this object or like, you know,
2990
+
2991
+ 47:51.280 --> 47:54.680
2992
+ coordinate and everything is sort of updating in real time.
2993
+
2994
+ 47:54.680 --> 47:58.000
2995
+ So like you can be in whatever canvas is most convenient
2996
+
2997
+ 47:58.000 --> 48:00.400
2998
+ for you to get your work done.
2999
+
3000
+ 48:00.400 --> 48:03.400
3001
+ So out of my own sort of curiosity as an engineer,
3002
+
3003
+ 48:03.400 --> 48:06.280
3004
+ I know what it's like to sort of lead a team
3005
+
3006
+ 48:06.280 --> 48:08.280
3007
+ of 10, 15 engineers.
3008
+
3009
+ 48:08.280 --> 48:11.680
3010
+ Microsoft has, I don't know what the numbers are,
3011
+
3012
+ 48:11.680 --> 48:14.920
3013
+ maybe 15, maybe 60,000 engineers, maybe 40.
3014
+
3015
+ 48:14.920 --> 48:16.160
3016
+ I don't know exactly what the number is.
3017
+
3018
+ 48:16.160 --> 48:17.000
3019
+ It's a lot.
3020
+
3021
+ 48:17.000 --> 48:18.520
3022
+ It's tens of thousands.
3023
+
3024
+ 48:18.520 --> 48:20.640
3025
+ Right. This is more than 10 or 15.
3026
+
3027
+ 48:23.640 --> 48:28.640
3028
+ I mean, you've led different sizes,
3029
+
3030
+ 48:28.720 --> 48:30.560
3031
+ mostly large sizes of engineers.
3032
+
3033
+ 48:30.560 --> 48:33.840
3034
+ What does it take to lead such a large group
3035
+
3036
+ 48:33.840 --> 48:37.480
3037
+ into a continue innovation,
3038
+
3039
+ 48:37.480 --> 48:40.240
3040
+ continue being highly productive
3041
+
3042
+ 48:40.240 --> 48:43.200
3043
+ and yet develop all kinds of new ideas
3044
+
3045
+ 48:43.200 --> 48:45.120
3046
+ and yet maintain like, what does it take
3047
+
3048
+ 48:45.120 --> 48:49.000
3049
+ to lead such a large group of brilliant people?
3050
+
3051
+ 48:49.000 --> 48:52.080
3052
+ I think the thing that you learn
3053
+
3054
+ 48:52.080 --> 48:55.120
3055
+ as you manage larger and larger scale
3056
+
3057
+ 48:55.120 --> 48:57.920
3058
+ is that there are three things
3059
+
3060
+ 48:57.920 --> 49:00.480
3061
+ that are like very, very important
3062
+
3063
+ 49:00.480 --> 49:02.360
3064
+ for big engineering teams.
3065
+
3066
+ 49:02.360 --> 49:06.320
3067
+ Like one is like having some sort of forethought
3068
+
3069
+ 49:06.320 --> 49:09.840
3070
+ about what it is that you're going to be building
3071
+
3072
+ 49:09.840 --> 49:11.040
3073
+ over large periods of time.
3074
+
3075
+ 49:11.040 --> 49:11.880
3076
+ Like not exactly.
3077
+
3078
+ 49:11.880 --> 49:13.760
3079
+ Like you don't need to know that like,
3080
+
3081
+ 49:13.760 --> 49:16.440
3082
+ I'm putting all my chips on this one product
3083
+
3084
+ 49:16.440 --> 49:17.760
3085
+ and like this is going to be the thing.
3086
+
3087
+ 49:17.760 --> 49:21.440
3088
+ But it's useful to know what sort of capabilities
3089
+
3090
+ 49:21.440 --> 49:23.080
3091
+ you think you're going to need to have
3092
+
3093
+ 49:23.080 --> 49:24.720
3094
+ to build the products of the future
3095
+
3096
+ 49:24.720 --> 49:28.000
3097
+ and then like invest in that infrastructure.
3098
+
3099
+ 49:28.000 --> 49:31.520
3100
+ Like whether, and I'm not just talking about storage systems
3101
+
3102
+ 49:31.520 --> 49:33.480
3103
+ or cloud APIs, it's also like,
3104
+
3105
+ 49:33.480 --> 49:35.360
3106
+ what is your development process look like?
3107
+
3108
+ 49:35.360 --> 49:36.720
3109
+ What tools do you want?
3110
+
3111
+ 49:36.720 --> 49:39.560
3112
+ Like what culture do you want to build
3113
+
3114
+ 49:39.560 --> 49:42.760
3115
+ around like how you're sort of collaborating together
3116
+
3117
+ 49:42.760 --> 49:45.720
3118
+ to like make complicated technical things?
3119
+
3120
+ 49:45.720 --> 49:48.080
3121
+ And so like having an opinion and investing in that
3122
+
3123
+ 49:48.080 --> 49:50.480
3124
+ is like, it just gets more and more important.
3125
+
3126
+ 49:50.480 --> 49:54.520
3127
+ And like the sooner you can get a concrete set of opinions,
3128
+
3129
+ 49:54.520 --> 49:57.680
3130
+ like the better you're going to be.
3131
+
3132
+ 49:57.680 --> 50:01.600
3133
+ Like you can wing it for a while at small scales.
3134
+
3135
+ 50:01.600 --> 50:03.160
3136
+ Like, you know, when you start a company,
3137
+
3138
+ 50:03.160 --> 50:06.320
3139
+ like you don't have to be like super specific about it.
3140
+
3141
+ 50:06.320 --> 50:10.000
3142
+ But like the biggest miseries that I've ever seen
3143
+
3144
+ 50:10.000 --> 50:12.640
3145
+ as an engineering leader are in places
3146
+
3147
+ 50:12.640 --> 50:14.440
3148
+ where you didn't have a clear enough opinion
3149
+
3150
+ 50:14.440 --> 50:16.800
3151
+ about those things soon enough.
3152
+
3153
+ 50:16.800 --> 50:20.240
3154
+ And then you just sort of go create a bunch of technical debt
3155
+
3156
+ 50:20.240 --> 50:24.000
3157
+ and like culture debt that is excruciatingly painful
3158
+
3159
+ 50:24.000 --> 50:25.760
3160
+ to clean up.
3161
+
3162
+ 50:25.760 --> 50:28.640
3163
+ So like that's one bundle of things.
3164
+
3165
+ 50:28.640 --> 50:33.640
3166
+ Like the other, you know, another bundle of things is
3167
+
3168
+ 50:33.640 --> 50:37.440
3169
+ like it's just really, really important to
3170
+
3171
+ 50:38.960 --> 50:43.960
3172
+ like have a clear mission that's not just some cute crap
3173
+
3174
+ 50:45.520 --> 50:48.880
3175
+ you say because like you think you should have a mission,
3176
+
3177
+ 50:48.880 --> 50:52.880
3178
+ but like something that clarifies for people
3179
+
3180
+ 50:52.880 --> 50:55.680
3181
+ like where it is that you're headed together.
3182
+
3183
+ 50:57.160 --> 50:58.520
3184
+ Like I know it's like probably
3185
+
3186
+ 50:58.520 --> 51:00.320
3187
+ like a little bit too popular right now,
3188
+
3189
+ 51:00.320 --> 51:05.320
3190
+ but Yuval Harari's book, Sapiens,
3191
+
3192
+ 51:07.240 --> 51:12.240
3193
+ one of the central ideas in his book is that
3194
+
3195
+ 51:12.440 --> 51:16.840
3196
+ like storytelling is like the quintessential thing
3197
+
3198
+ 51:16.840 --> 51:20.480
3199
+ for coordinating the activities of large groups of people.
3200
+
3201
+ 51:20.480 --> 51:22.320
3202
+ Like once you get past Dunbar's number
3203
+
3204
+ 51:23.360 --> 51:25.800
3205
+ and like I've really, really seen that
3206
+
3207
+ 51:25.800 --> 51:27.320
3208
+ just managing engineering teams.
3209
+
3210
+ 51:27.320 --> 51:32.080
3211
+ Like you can just brute force things
3212
+
3213
+ 51:32.080 --> 51:35.160
3214
+ when you're less than 120, 150 folks
3215
+
3216
+ 51:35.160 --> 51:37.520
3217
+ where you can sort of know and trust
3218
+
3219
+ 51:37.520 --> 51:40.920
3220
+ and understand what the dynamics are between all the people.
3221
+
3222
+ 51:40.920 --> 51:41.840
3223
+ But like past that,
3224
+
3225
+ 51:41.840 --> 51:45.440
3226
+ like things just sort of start to catastrophically fail
3227
+
3228
+ 51:45.440 --> 51:48.760
3229
+ if you don't have some sort of set of shared goals
3230
+
3231
+ 51:48.760 --> 51:50.480
3232
+ that you're marching towards.
3233
+
3234
+ 51:50.480 --> 51:52.960
3235
+ And so like even though it sounds touchy feely
3236
+
3237
+ 51:52.960 --> 51:55.640
3238
+ and you know, like a bunch of technical people
3239
+
3240
+ 51:55.640 --> 51:58.200
3241
+ will sort of balk at the idea that like you need
3242
+
3243
+ 51:58.200 --> 52:01.680
3244
+ to like have a clear, like the missions
3245
+
3246
+ 52:01.680 --> 52:03.560
3247
+ like very, very, very important.
3248
+
3249
+ 52:03.560 --> 52:04.640
3250
+ Yuval's right, right?
3251
+
3252
+ 52:04.640 --> 52:07.520
3253
+ Stories, that's how our society,
3254
+
3255
+ 52:07.520 --> 52:09.360
3256
+ that's the fabric that connects us all of us
3257
+
3258
+ 52:09.360 --> 52:11.120
3259
+ is these powerful stories.
3260
+
3261
+ 52:11.120 --> 52:13.440
3262
+ And that works for companies too, right?
3263
+
3264
+ 52:13.440 --> 52:14.520
3265
+ It works for everything.
3266
+
3267
+ 52:14.520 --> 52:16.520
3268
+ Like I mean, even down to like, you know,
3269
+
3270
+ 52:16.520 --> 52:18.280
3271
+ you sort of really think about like our currency
3272
+
3273
+ 52:18.280 --> 52:19.960
3274
+ for instance is a story.
3275
+
3276
+ 52:19.960 --> 52:23.360
3277
+ Our constitution is a story, our laws are story.
3278
+
3279
+ 52:23.360 --> 52:27.840
3280
+ I mean, like we believe very, very, very strongly in them
3281
+
3282
+ 52:27.840 --> 52:29.960
3283
+ and thank God we do.
3284
+
3285
+ 52:29.960 --> 52:33.040
3286
+ But like they are, they're just abstract things.
3287
+
3288
+ 52:33.040 --> 52:34.000
3289
+ Like they're just words.
3290
+
3291
+ 52:34.000 --> 52:36.520
3292
+ Like if we don't believe in them, they're nothing.
3293
+
3294
+ 52:36.520 --> 52:39.440
3295
+ And in some sense, those stories are platforms
3296
+
3297
+ 52:39.440 --> 52:43.040
3298
+ and the kinds some of which Microsoft is creating, right?
3299
+
3300
+ 52:43.040 --> 52:46.360
3301
+ Yeah, platforms in which we define the future.
3302
+
3303
+ 52:46.360 --> 52:48.600
3304
+ So last question, what do you,
3305
+
3306
+ 52:48.600 --> 52:50.080
3307
+ let's get philosophical maybe,
3308
+
3309
+ 52:50.080 --> 52:51.480
3310
+ bigger than even Microsoft.
3311
+
3312
+ 52:51.480 --> 52:56.280
3313
+ What do you think the next 20, 30 plus years
3314
+
3315
+ 52:56.280 --> 53:00.120
3316
+ looks like for computing, for technology, for devices?
3317
+
3318
+ 53:00.120 --> 53:03.760
3319
+ Do you have crazy ideas about the future of the world?
3320
+
3321
+ 53:04.600 --> 53:06.400
3322
+ Yeah, look, I think we, you know,
3323
+
3324
+ 53:06.400 --> 53:09.480
3325
+ we're entering this time where we've got,
3326
+
3327
+ 53:10.640 --> 53:13.360
3328
+ we have technology that is progressing
3329
+
3330
+ 53:13.360 --> 53:15.800
3331
+ at the fastest rate that it ever has.
3332
+
3333
+ 53:15.800 --> 53:20.800
3334
+ And you've got, you get some really big social problems
3335
+
3336
+ 53:20.800 --> 53:25.800
3337
+ like society scale problems that we have to tackle.
3338
+
3339
+ 53:26.320 --> 53:28.720
3340
+ And so, you know, I think we're gonna rise to the challenge
3341
+
3342
+ 53:28.720 --> 53:30.560
3343
+ and like figure out how to intersect
3344
+
3345
+ 53:30.560 --> 53:32.400
3346
+ like all of the power of this technology
3347
+
3348
+ 53:32.400 --> 53:35.320
3349
+ with all of the big challenges that are facing us,
3350
+
3351
+ 53:35.320 --> 53:37.840
3352
+ whether it's, you know, global warming,
3353
+
3354
+ 53:37.840 --> 53:41.000
3355
+ whether it's like the biggest remainder of the population
3356
+
3357
+ 53:41.000 --> 53:46.000
3358
+ boom is in Africa for the next 50 years or so.
3359
+
3360
+ 53:46.800 --> 53:49.360
3361
+ And like global warming is gonna make it increasingly
3362
+
3363
+ 53:49.360 --> 53:52.600
3364
+ difficult to feed the global population in particular,
3365
+
3366
+ 53:52.600 --> 53:54.200
3367
+ like in this place where you're gonna have
3368
+
3369
+ 53:54.200 --> 53:56.600
3370
+ like the biggest population boom.
3371
+
3372
+ 53:57.720 --> 54:01.520
3373
+ I think we, you know, like AI is gonna,
3374
+
3375
+ 54:01.520 --> 54:03.560
3376
+ like if we push it in the right direction,
3377
+
3378
+ 54:03.560 --> 54:05.680
3379
+ like it can do like incredible things
3380
+
3381
+ 54:05.680 --> 54:10.160
3382
+ to empower all of us to achieve our full potential
3383
+
3384
+ 54:10.160 --> 54:15.160
3385
+ and to, you know, like live better lives.
3386
+
3387
+ 54:15.160 --> 54:20.160
3388
+ But like that also means focus on like
3389
+
3390
+ 54:20.520 --> 54:22.040
3391
+ some super important things,
3392
+
3393
+ 54:22.040 --> 54:23.960
3394
+ like how can you apply it to healthcare
3395
+
3396
+ 54:23.960 --> 54:28.960
3397
+ to make sure that, you know, like our quality and cost of,
3398
+
3399
+ 54:29.640 --> 54:32.080
3400
+ and sort of ubiquity of health coverage
3401
+
3402
+ 54:32.080 --> 54:35.080
3403
+ is better and better over time.
3404
+
3405
+ 54:35.080 --> 54:37.960
3406
+ Like that's more and more important every day
3407
+
3408
+ 54:37.960 --> 54:40.880
3409
+ is like in the United States
3410
+
3411
+ 54:40.880 --> 54:43.280
3412
+ and like the rest of the industrialized world.
3413
+
3414
+ 54:43.280 --> 54:45.720
3415
+ So Western Europe, China, Japan, Korea,
3416
+
3417
+ 54:45.720 --> 54:48.880
3418
+ like you've got this population bubble
3419
+
3420
+ 54:48.880 --> 54:52.880
3421
+ of like aging working, you know, working age folks
3422
+
3423
+ 54:52.880 --> 54:56.200
3424
+ who are, you know, at some point over the next 20, 30 years
3425
+
3426
+ 54:56.200 --> 54:58.000
3427
+ they're gonna be largely retired
3428
+
3429
+ 54:58.000 --> 55:00.160
3430
+ and like you're gonna have more retired people
3431
+
3432
+ 55:00.160 --> 55:01.200
3433
+ than working age people.
3434
+
3435
+ 55:01.200 --> 55:02.520
3436
+ And then like you've got, you know,
3437
+
3438
+ 55:02.520 --> 55:04.800
3439
+ sort of natural questions about who's gonna take care
3440
+
3441
+ 55:04.800 --> 55:07.120
3442
+ of all the old folks and who's gonna do all the work.
3443
+
3444
+ 55:07.120 --> 55:11.040
3445
+ And the answers to like all of these sorts of questions
3446
+
3447
+ 55:11.040 --> 55:13.200
3448
+ like where you're sort of running into, you know,
3449
+
3450
+ 55:13.200 --> 55:16.080
3451
+ like constraints of the, you know,
3452
+
3453
+ 55:16.080 --> 55:20.080
3454
+ the world and of society has always been like
3455
+
3456
+ 55:20.080 --> 55:23.000
3457
+ what tech is gonna like help us get around this.
3458
+
3459
+ 55:23.000 --> 55:26.360
3460
+ You know, like when I was a kid in the 70s and 80s,
3461
+
3462
+ 55:26.360 --> 55:29.800
3463
+ like we talked all the time about like population boom,
3464
+
3465
+ 55:29.800 --> 55:32.200
3466
+ population boom, like we're gonna,
3467
+
3468
+ 55:32.200 --> 55:34.360
3469
+ like we're not gonna be able to like feed the planet.
3470
+
3471
+ 55:34.360 --> 55:36.800
3472
+ And like we were like right in the middle
3473
+
3474
+ 55:36.800 --> 55:38.200
3475
+ of the green revolution
3476
+
3477
+ 55:38.200 --> 55:43.200
3478
+ where like this massive technology driven increase
3479
+
3480
+ 55:44.560 --> 55:47.520
3481
+ and crop productivity like worldwide.
3482
+
3483
+ 55:47.520 --> 55:49.320
3484
+ And like some of that was like taking some of the things
3485
+
3486
+ 55:49.320 --> 55:52.560
3487
+ that we knew in the West and like getting them distributed
3488
+
3489
+ 55:52.560 --> 55:55.760
3490
+ to the, you know, to the developing world.
3491
+
3492
+ 55:55.760 --> 55:59.360
3493
+ And like part of it were things like, you know,
3494
+
3495
+ 55:59.360 --> 56:03.280
3496
+ just smarter biology like helping us increase.
3497
+
3498
+ 56:03.280 --> 56:06.760
3499
+ And like we don't talk about like, yeah,
3500
+
3501
+ 56:06.760 --> 56:10.320
3502
+ overpopulation anymore because like we can more or less,
3503
+
3504
+ 56:10.320 --> 56:12.000
3505
+ we sort of figured out how to feed the world.
3506
+
3507
+ 56:12.000 --> 56:14.760
3508
+ Like that's a technology story.
3509
+
3510
+ 56:14.760 --> 56:19.480
3511
+ And so like I'm super, super hopeful about the future
3512
+
3513
+ 56:19.480 --> 56:24.080
3514
+ and in the ways where we will be able to apply technology
3515
+
3516
+ 56:24.080 --> 56:28.040
3517
+ to solve some of these super challenging problems.
3518
+
3519
+ 56:28.040 --> 56:31.360
3520
+ Like I've, like one of the things
3521
+
3522
+ 56:31.360 --> 56:34.680
3523
+ that I'm trying to spend my time doing right now
3524
+
3525
+ 56:34.680 --> 56:36.600
3526
+ is trying to get everybody else to be hopeful
3527
+
3528
+ 56:36.600 --> 56:38.720
3529
+ as well because, you know, back to Harari,
3530
+
3531
+ 56:38.720 --> 56:41.160
3532
+ like we are the stories that we tell.
3533
+
3534
+ 56:41.160 --> 56:44.320
3535
+ Like if we, you know, if we get overly pessimistic right now
3536
+
3537
+ 56:44.320 --> 56:49.320
3538
+ about like the potential future of technology, like we,
3539
+
3540
+ 56:49.320 --> 56:53.680
3541
+ you know, like we may fail to fail to get all the things
3542
+
3543
+ 56:53.680 --> 56:56.880
3544
+ in place that we need to like have our best possible future.
3545
+
3546
+ 56:56.880 --> 56:59.440
3547
+ And that kind of hopeful optimism.
3548
+
3549
+ 56:59.440 --> 57:03.160
3550
+ I'm glad that you have it because you're leading large groups
3551
+
3552
+ 57:03.160 --> 57:05.600
3553
+ of engineers that are actually defining
3554
+
3555
+ 57:05.600 --> 57:06.720
3556
+ that are writing that story,
3557
+
3558
+ 57:06.720 --> 57:08.320
3559
+ that are helping build that future,
3560
+
3561
+ 57:08.320 --> 57:10.000
3562
+ which is super exciting.
3563
+
3564
+ 57:10.000 --> 57:12.320
3565
+ And I agree with everything you said,
3566
+
3567
+ 57:12.320 --> 57:14.840
3568
+ except I do hope Clippy comes back.
3569
+
3570
+ 57:16.400 --> 57:17.760
3571
+ We miss him.
3572
+
3573
+ 57:17.760 --> 57:19.360
3574
+ I speak for the people.
3575
+
3576
+ 57:19.360 --> 57:21.800
3577
+ So, Kevin, thank you so much for talking to me.
3578
+
3579
+ 57:21.800 --> 57:22.640
3580
+ Thank you so much for having me.
3581
+
3582
+ 57:22.640 --> 57:43.640
3583
+ It was a pleasure.
3584
+
vtt/episode_031_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_032_small.vtt ADDED
@@ -0,0 +1,3392 @@
 
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.360
4
+ The following is a conversation with Paola Arlotta.
5
+
6
+ 00:03.360 --> 00:06.400
7
+ She is a professor of stem cell and regenerative biology
8
+
9
+ 00:06.400 --> 00:09.760
10
+ at Harvard University and is interested in understanding
11
+
12
+ 00:09.760 --> 00:13.160
13
+ the molecular laws that govern the birth, differentiation,
14
+
15
+ 00:13.160 --> 00:16.600
16
+ and assembly of the human brain's cerebral cortex.
17
+
18
+ 00:16.600 --> 00:18.400
19
+ She explores the complexity of the brain
20
+
21
+ 00:18.400 --> 00:21.000
22
+ by studying and engineering elements
23
+
24
+ 00:21.000 --> 00:22.880
25
+ of how the brain develops.
26
+
27
+ 00:22.880 --> 00:25.640
28
+ This was a fascinating conversation to me.
29
+
30
+ 00:25.640 --> 00:28.560
31
+ It's part of the Artificial Intelligence podcast.
32
+
33
+ 00:28.560 --> 00:30.600
34
+ If you enjoy it, subscribe on YouTube,
35
+
36
+ 00:30.600 --> 00:32.360
37
+ give it five stars on iTunes.
38
+
39
+ 00:32.360 --> 00:35.760
40
+ Support on Patreon or simply connect with me on Twitter
41
+
42
+ 00:35.760 --> 00:39.960
43
+ at Lex Fridman, spelled F R I D M A N.
44
+
45
+ 00:39.960 --> 00:43.120
46
+ And I'd like to give a special thank you to Amy Jeffers
47
+
48
+ 00:43.120 --> 00:45.560
49
+ for her support of the podcast on Patreon.
50
+
51
+ 00:45.560 --> 00:47.680
52
+ She's an artist and you should definitely check out
53
+
54
+ 00:47.680 --> 00:52.680
55
+ her Instagram at LoveTruthGood, three beautiful words.
56
+
57
+ 00:52.720 --> 00:55.640
58
+ Your support means a lot and inspires me
59
+
60
+ 00:55.640 --> 00:57.720
61
+ to keep the series going.
62
+
63
+ 00:57.720 --> 01:01.720
64
+ And now here's my conversation with Paola Arlotta.
65
+
66
+ 01:03.120 --> 01:05.320
67
+ You studied the development of the human brain
68
+
69
+ 01:05.320 --> 01:06.680
70
+ for many years.
71
+
72
+ 01:06.680 --> 01:10.520
73
+ So let me ask you an out of the box question first.
74
+
75
+ 01:11.480 --> 01:14.600
76
+ How likely is it that there's intelligent life out there
77
+
78
+ 01:14.600 --> 01:17.160
79
+ in the universe outside of earth
80
+
81
+ 01:17.160 --> 01:19.320
82
+ with something like the human brain?
83
+
84
+ 01:19.320 --> 01:20.760
85
+ So I can put it another way.
86
+
87
+ 01:20.760 --> 01:24.320
88
+ How unlikely is the human brain?
89
+
90
+ 01:24.320 --> 01:28.360
91
+ How difficult is it to build a thing
92
+
93
+ 01:28.360 --> 01:30.400
94
+ through the evolutionary process?
95
+
96
+ 01:30.400 --> 01:32.520
97
+ Well, it has happened here, right?
98
+
99
+ 01:32.520 --> 01:33.360
100
+ On this planet.
101
+
102
+ 01:33.360 --> 01:34.200
103
+ Once, yes.
104
+
105
+ 01:34.200 --> 01:35.040
106
+ Once.
107
+
108
+ 01:35.040 --> 01:39.960
109
+ So that simply tells you that it could, of course,
110
+
111
+ 01:39.960 --> 01:44.120
112
+ happen again, other places is only a matter of probability.
113
+
114
+ 01:44.120 --> 01:46.600
115
+ What the probability that you would get a brain
116
+
117
+ 01:46.600 --> 01:51.200
118
+ like the ones that we have, like the human brain.
119
+
120
+ 01:51.200 --> 01:54.000
121
+ So how difficult is it to make the human brain?
122
+
123
+ 01:54.000 --> 01:55.400
124
+ It's pretty difficult.
125
+
126
+ 01:56.480 --> 02:00.960
127
+ But most importantly, I guess we know very little
128
+
129
+ 02:00.960 --> 02:04.440
130
+ about how this process really happens.
131
+
132
+ 02:04.440 --> 02:06.320
133
+ And there is a reason for that,
134
+
135
+ 02:06.320 --> 02:09.160
136
+ actually multiple reasons for that.
137
+
138
+ 02:09.160 --> 02:13.400
139
+ Most of what we know about how the mammalian brains
140
+
141
+ 02:13.400 --> 02:15.680
142
+ or the brain of mammals develop,
143
+
144
+ 02:15.680 --> 02:18.960
145
+ comes from studying in labs other brains,
146
+
147
+ 02:18.960 --> 02:22.560
148
+ not our own brain, the brain of mice, for example.
149
+
150
+ 02:22.560 --> 02:25.400
151
+ But if I showed you a picture of a mouse brain
152
+
153
+ 02:25.400 --> 02:28.400
154
+ and then you put it next to a picture of a human brain,
155
+
156
+ 02:28.400 --> 02:31.160
157
+ they don't look at all like each other.
158
+
159
+ 02:31.160 --> 02:33.040
160
+ So they're very different.
161
+
162
+ 02:33.040 --> 02:36.240
163
+ And therefore, there is a limit to what you can learn
164
+
165
+ 02:36.240 --> 02:40.560
166
+ about how the human brain is made by studying the mouse brain.
167
+
168
+ 02:40.560 --> 02:43.320
169
+ There is a huge value in studying the mouse brain.
170
+
171
+ 02:43.320 --> 02:45.080
172
+ There are many things that we have learned,
173
+
174
+ 02:45.080 --> 02:46.600
175
+ but it's not the same thing.
176
+
177
+ 02:46.600 --> 02:49.200
178
+ So in having studied the human brain
179
+
180
+ 02:49.200 --> 02:51.440
181
+ or through the mouse and through other methodologies
182
+
183
+ 02:51.440 --> 02:54.760
184
+ that we'll talk about, do you have a sense?
185
+
186
+ 02:54.760 --> 02:57.560
187
+ I mean, you're one of the experts in the world.
188
+
189
+ 02:57.560 --> 03:01.160
190
+ How much do you feel you know about the brain?
191
+
192
+ 03:01.160 --> 03:05.920
193
+ And how often do you find yourself in awe
194
+
195
+ 03:05.920 --> 03:07.840
196
+ of this mysterious thing?
197
+
198
+ 03:07.840 --> 03:12.200
199
+ Yeah, you pretty much find yourself in awe all the time.
200
+
201
+ 03:12.200 --> 03:15.360
202
+ It's an amazing process.
203
+
204
+ 03:15.360 --> 03:17.960
205
+ It's a process by which,
206
+
207
+ 03:17.960 --> 03:20.720
208
+ by means that we don't fully understand
209
+
210
+ 03:20.720 --> 03:23.640
211
+ at the very beginning of embryogenesis,
212
+
213
+ 03:23.640 --> 03:28.800
214
+ the structure called the neural tube literally self assembles.
215
+
216
+ 03:28.800 --> 03:30.440
217
+ And it happens in an embryo
218
+
219
+ 03:30.440 --> 03:33.720
220
+ and it can happen also from stem cells in a dish.
221
+
222
+ 03:33.720 --> 03:34.960
223
+ Okay.
224
+
225
+ 03:34.960 --> 03:38.720
226
+ And then from there, these stem cells that are present
227
+
228
+ 03:38.720 --> 03:41.640
229
+ within the neural tube give rise to all of the thousands
230
+
231
+ 03:41.640 --> 03:43.440
232
+ and thousands of different cell types
233
+
234
+ 03:43.440 --> 03:46.560
235
+ that are present in the brain through time, right?
236
+
237
+ 03:46.560 --> 03:51.760
238
+ With the interesting, very intriguing, interesting observation
239
+
240
+ 03:51.760 --> 03:56.280
241
+ is that the time that it takes for the human brain to be made,
242
+
243
+ 03:56.280 --> 04:01.480
244
+ it's human time, meaning that for me and you,
245
+
246
+ 04:01.480 --> 04:04.560
247
+ it took almost nine months of gestation to build the brain
248
+
249
+ 04:04.560 --> 04:08.080
250
+ and then another 20 years of learning postnatally
251
+
252
+ 04:08.080 --> 04:09.360
253
+ to get the brain that we have today
254
+
255
+ 04:09.360 --> 04:11.360
256
+ that allows us to this conversation.
257
+
258
+ 04:11.360 --> 04:14.960
259
+ A mouse takes 20 days or so
260
+
261
+ 04:14.960 --> 04:22.200
262
+ for an embryo to be born and so the brain is built
263
+
264
+ 04:22.200 --> 04:24.840
265
+ in a much shorter period of time and the beauty of it
266
+
267
+ 04:24.840 --> 04:27.160
268
+ is that if you take mouse stem cells
269
+
270
+ 04:27.160 --> 04:29.880
271
+ and you put them in a cultured dish,
272
+
273
+ 04:29.880 --> 04:34.840
274
+ the brain organoid that you get from a mouse is formed faster
275
+
276
+ 04:34.840 --> 04:39.120
277
+ that if you took human stem cells and put them in the dish
278
+
279
+ 04:39.120 --> 04:42.120
280
+ and let them make a human brain organoid.
281
+
282
+ 04:42.120 --> 04:45.560
283
+ So the very developmental process is...
284
+
285
+ 04:45.560 --> 04:49.000
286
+ Controlled by the speed of the species.
287
+
288
+ 04:49.000 --> 04:54.080
289
+ Which means it's by its own purpose, it's not accidental
290
+
291
+ 04:54.080 --> 04:59.800
292
+ or there is something in that temporal dynamic to that development.
293
+
294
+ 04:59.800 --> 05:04.000
295
+ Exactly, that is very important for us to get the brain we have
296
+
297
+ 05:04.000 --> 05:08.120
298
+ and we can speculate for why that is.
299
+
300
+ 05:08.120 --> 05:13.000
301
+ It takes us a long time as human beings after we're born
302
+
303
+ 05:13.000 --> 05:16.040
304
+ to learn all the things that we have to learn
305
+
306
+ 05:16.040 --> 05:17.920
307
+ to have the adult brain.
308
+
309
+ 05:17.920 --> 05:20.160
310
+ It's actually 20 years, think about it.
311
+
312
+ 05:20.160 --> 05:23.440
313
+ From when a baby is born to when a teenager
314
+
315
+ 05:23.440 --> 05:27.160
316
+ goes through puberty to adults, it's a long time.
317
+
318
+ 05:27.160 --> 05:32.320
319
+ Do you think you can maybe talk through the first few months
320
+
321
+ 05:32.320 --> 05:35.080
322
+ and then on to the first 20 years
323
+
324
+ 05:35.080 --> 05:37.640
325
+ and then for the rest of their lives?
326
+
327
+ 05:37.640 --> 05:41.080
328
+ What does the development of the human brain look like?
329
+
330
+ 05:41.080 --> 05:43.360
331
+ What are the different stages?
332
+
333
+ 05:43.360 --> 05:46.720
334
+ At the beginning you have to build a brain, right?
335
+
336
+ 05:46.720 --> 05:48.840
337
+ And the brain is made of cells.
338
+
339
+ 05:48.840 --> 05:51.960
340
+ What's the very beginning? Which beginning are we talking about?
341
+
342
+ 05:51.960 --> 05:53.280
343
+ In the embryo.
344
+
345
+ 05:53.280 --> 05:56.000
346
+ As the embryo is developing in the womb,
347
+
348
+ 05:56.000 --> 05:59.680
349
+ in addition to making all of the other tissues of the embryo,
350
+
351
+ 05:59.680 --> 06:02.040
352
+ the muscle, the heart, the blood,
353
+
354
+ 06:02.040 --> 06:04.960
355
+ the embryo is also building the brain.
356
+
357
+ 06:04.960 --> 06:08.480
358
+ And it builds from a very simple structure
359
+
360
+ 06:08.480 --> 06:10.000
361
+ called the neural tube,
362
+
363
+ 06:10.000 --> 06:13.160
364
+ which is basically nothing but a tube of cells
365
+
366
+ 06:13.160 --> 06:15.800
367
+ that spans sort of the length of the embryo
368
+
369
+ 06:15.800 --> 06:20.840
370
+ from the head all the way to the tail, let's say, of the embryo.
371
+
372
+ 06:20.840 --> 06:23.360
373
+ And then over in human beings,
374
+
375
+ 06:23.360 --> 06:25.520
376
+ over many months of gestation,
377
+
378
+ 06:25.520 --> 06:28.240
379
+ from that neural tube,
380
+
381
+ 06:28.240 --> 06:32.480
382
+ which contains a stem cell like cells of the brain,
383
+
384
+ 06:32.480 --> 06:37.000
385
+ you will make many, many other building blocks of the brain.
386
+
387
+ 06:37.000 --> 06:39.600
388
+ So all of the other cell types,
389
+
390
+ 06:39.600 --> 06:43.080
391
+ because there are many, many different types of cells in the brain,
392
+
393
+ 06:43.080 --> 06:46.640
394
+ that will form specific structures of the brain.
395
+
396
+ 06:46.640 --> 06:49.800
397
+ So you can think about embryonic development of the brain
398
+
399
+ 06:49.800 --> 06:54.440
400
+ as just the time in which you are making the building blocks, the cells.
401
+
402
+ 06:54.440 --> 06:56.840
403
+ Are the stem cells relatively homogeneous,
404
+
405
+ 06:56.840 --> 06:59.240
406
+ like uniform, or are they all different types?
407
+
408
+ 06:59.240 --> 07:01.280
409
+ It's a very good question. It's exactly how it works.
410
+
411
+ 07:01.280 --> 07:05.200
412
+ You start with a more homogeneous,
413
+
414
+ 07:05.200 --> 07:09.120
415
+ perhaps more multipotent type of stem cell.
416
+
417
+ 07:09.120 --> 07:13.520
418
+ That multipotent means that it has the potential
419
+
420
+ 07:13.520 --> 07:17.360
421
+ to make many, many different types of other cells.
422
+
423
+ 07:17.360 --> 07:21.440
424
+ And then with time, these progenitors become more heterogeneous,
425
+
426
+ 07:21.440 --> 07:22.840
427
+ which means more diverse.
428
+
429
+ 07:22.840 --> 07:26.200
430
+ There are going to be many different types of these stem cells.
431
+
432
+ 07:26.200 --> 07:28.760
433
+ And also they will give rise to progeny,
434
+
435
+ 07:28.760 --> 07:31.760
436
+ to other cells that are not stem cells,
437
+
438
+ 07:31.760 --> 07:33.320
439
+ that are specific cells of the brain,
440
+
441
+ 07:33.320 --> 07:36.080
442
+ that are very different from the mother stem cell.
443
+
444
+ 07:36.080 --> 07:40.360
445
+ And now you think about this process of making cells from the stem cells
446
+
447
+ 07:40.360 --> 07:43.760
448
+ over many, many months of development for humans.
449
+
450
+ 07:43.760 --> 07:48.200
451
+ And what you're doing here, building the cells that physically make the brain,
452
+
453
+ 07:48.200 --> 07:52.400
454
+ and then you arrange them in specific structures
455
+
456
+ 07:52.400 --> 07:55.440
457
+ that are present in the final brain.
458
+
459
+ 07:55.440 --> 07:59.280
460
+ So you can think about the embryonic development of the brain
461
+
462
+ 07:59.280 --> 08:02.120
463
+ as the time where you're building the bricks.
464
+
465
+ 08:02.120 --> 08:05.520
466
+ You're putting the bricks together to form buildings,
467
+
468
+ 08:05.520 --> 08:08.120
469
+ structures, regions of the brain,
470
+
471
+ 08:08.120 --> 08:13.040
472
+ and where you make the connections between these many different types of cells,
473
+
474
+ 08:13.040 --> 08:15.200
475
+ especially nerve cells, neurons, right,
476
+
477
+ 08:15.200 --> 08:19.240
478
+ that transmit action potentials and electricity.
479
+
480
+ 08:19.240 --> 08:22.280
481
+ I've heard you also say somewhere, I think, correct me if I'm wrong,
482
+
483
+ 08:22.280 --> 08:25.080
484
+ that the order of the way this builds matters.
485
+
486
+ 08:25.080 --> 08:26.040
487
+ Oh, yes.
488
+
489
+ 08:26.040 --> 08:29.880
490
+ If you are an engineer and you think about development,
491
+
492
+ 08:29.880 --> 08:34.960
493
+ you can think of it as, well, I could also take all the cells
494
+
495
+ 08:34.960 --> 08:37.960
496
+ and bring them all together into a brain in the end.
497
+
498
+ 08:37.960 --> 08:40.320
499
+ But development is much more than that.
500
+
501
+ 08:40.320 --> 08:43.880
502
+ So the cells are made in a very specific order
503
+
504
+ 08:43.880 --> 08:47.400
505
+ that subserve the final product that you need to get.
506
+
507
+ 08:47.400 --> 08:52.200
508
+ And so, for example, all of the nerve cells, the neurons, are made first.
509
+
510
+ 08:52.200 --> 08:56.760
511
+ And all of the supportive cells of the neurons, like the glia, is made later.
512
+
513
+ 08:56.760 --> 09:02.080
514
+ And there is a reason for that because they have to assemble together in specific ways.
515
+
516
+ 09:02.080 --> 09:05.720
517
+ But you also may say, well, why don't we just put them all together in the end?
518
+
519
+ 09:05.720 --> 09:08.920
520
+ It's because as they develop next to each other,
521
+
522
+ 09:08.920 --> 09:11.280
523
+ they influence their own development.
524
+
525
+ 09:11.280 --> 09:15.400
526
+ So it's a different thing for a glia to be made alone in a dish
527
+
528
+ 09:15.400 --> 09:19.360
529
+ than a glia cell be made in a developing embryo
530
+
531
+ 09:19.360 --> 09:23.680
532
+ with all these other cells around it that produce all these other signals.
533
+
534
+ 09:23.680 --> 09:27.840
535
+ First of all, that's mind blowing, that this development process.
536
+
537
+ 09:27.840 --> 09:29.760
538
+ From my perspective in artificial intelligence,
539
+
540
+ 09:29.760 --> 09:33.480
541
+ you often think of how incredible the final product is,
542
+
543
+ 09:33.480 --> 09:35.200
544
+ the final product, the brain.
545
+
546
+ 09:35.200 --> 09:40.400
547
+ But you just, you're making me realize that the final product is just,
548
+
549
+ 09:40.400 --> 09:44.480
550
+ is the beautiful thing is the actual development process.
551
+
552
+ 09:44.480 --> 09:51.640
553
+ Do we know the code that drives that development?
554
+
555
+ 09:51.640 --> 09:53.640
556
+ Do we have any sense?
557
+
558
+ 09:53.640 --> 09:59.320
559
+ First of all, thank you for saying that it's really the formation of the brain.
560
+
561
+ 09:59.320 --> 10:05.120
562
+ It's really its development, this incredibly choreographed dance
563
+
564
+ 10:05.120 --> 10:10.560
565
+ that happens the same way every time each one of us builds the brain, right?
566
+
567
+ 10:10.560 --> 10:14.880
568
+ And that builds an organ that allows us to do what we're doing today, right?
569
+
570
+ 10:14.880 --> 10:16.360
571
+ That is mind blowing.
572
+
573
+ 10:16.360 --> 10:21.880
574
+ And this is why developmental neurobiologists never get tired of studying that.
575
+
576
+ 10:21.880 --> 10:23.800
577
+ Now, you're asking about the code.
578
+
579
+ 10:23.800 --> 10:26.520
580
+ What drives this? How is this done?
581
+
582
+ 10:26.520 --> 10:29.720
583
+ Well, it's millions of years of evolution
584
+
585
+ 10:29.720 --> 10:33.120
586
+ of really fine tuning gene expression programs
587
+
588
+ 10:33.120 --> 10:37.480
589
+ that allow certain cells to be made at a certain time
590
+
591
+ 10:37.480 --> 10:41.680
592
+ and to become a certain cell type,
593
+
594
+ 10:41.680 --> 10:47.360
595
+ but also mechanical forces of pressure bending.
596
+
597
+ 10:47.360 --> 10:50.280
598
+ This embryo is not just, it will not stay a tube,
599
+
600
+ 10:50.280 --> 10:52.000
601
+ this brain for very long.
602
+
603
+ 10:52.000 --> 10:55.720
604
+ At some point, this tube in the front of the embryo will expand
605
+
606
+ 10:55.720 --> 10:58.160
607
+ to make the primordium of the brain, right?
608
+
609
+ 10:58.160 --> 11:02.880
610
+ Now, the forces that control the cells feel,
611
+
612
+ 11:02.880 --> 11:04.840
613
+ and this is another beautiful thing,
614
+
615
+ 11:04.840 --> 11:09.240
616
+ the very force that they feel, which is different from a week before,
617
+
618
+ 11:09.240 --> 11:11.080
619
+ a week ago, will tell the cell,
620
+
621
+ 11:11.080 --> 11:13.640
622
+ oh, you're being squished in a certain way,
623
+
624
+ 11:13.640 --> 11:16.560
625
+ begin to produce these new genes,
626
+
627
+ 11:16.560 --> 11:18.400
628
+ because now you are at the corner,
629
+
630
+ 11:18.400 --> 11:23.160
631
+ or you are in a stretch of cells or whatever it is.
632
+
633
+ 11:23.160 --> 11:26.000
634
+ And so that mechanical physical force
635
+
636
+ 11:26.000 --> 11:29.440
637
+ shapes the fate of the cell as well.
638
+
639
+ 11:29.440 --> 11:31.880
640
+ So it's not only chemical, it's also mechanical.
641
+
642
+ 11:31.880 --> 11:38.360
643
+ So from my perspective, biology is this incredibly complex mess,
644
+
645
+ 11:38.360 --> 11:40.160
646
+ gooey mess.
647
+
648
+ 11:40.160 --> 11:43.440
649
+ So you're seeing mechanical forces.
650
+
651
+ 11:43.440 --> 11:50.000
652
+ How different is a computer or any kind of mechanical machine
653
+
654
+ 11:50.000 --> 11:53.840
655
+ that humans build and the biological systems?
656
+
657
+ 11:53.840 --> 11:57.000
658
+ Have you been, because you've worked a lot with biological systems,
659
+
660
+ 11:57.000 --> 11:59.960
661
+ are they as much of a mess as it seems
662
+
663
+ 11:59.960 --> 12:03.520
664
+ from a perspective of an engineer, a mechanical engineer?
665
+
666
+ 12:03.520 --> 12:11.680
667
+ Yeah, they are much more prone to taking alternative routes, right?
668
+
669
+ 12:11.680 --> 12:18.160
670
+ So if you, we go back to printing a brain versus developing a brain,
671
+
672
+ 12:18.160 --> 12:20.440
673
+ of course, if you print a brain,
674
+
675
+ 12:20.440 --> 12:23.960
676
+ given that you start with the same building blocks, the same cells,
677
+
678
+ 12:23.960 --> 12:28.600
679
+ you could potentially print it the same way every time.
680
+
681
+ 12:28.600 --> 12:32.520
682
+ But that final brain may not work the same way
683
+
684
+ 12:32.520 --> 12:34.440
685
+ as a brain built during development does,
686
+
687
+ 12:34.440 --> 12:38.680
688
+ because the very same building blocks that you're using
689
+
690
+ 12:38.680 --> 12:41.440
691
+ developed in a completely different environment, right?
692
+
693
+ 12:41.440 --> 12:43.000
694
+ That was not the environment of the brain.
695
+
696
+ 12:43.000 --> 12:47.120
697
+ Therefore, they're going to be different just by definition.
698
+
699
+ 12:47.120 --> 12:51.840
700
+ So if you instead use development to build, let's say, a brain
701
+
702
+ 12:51.840 --> 12:55.840
703
+ organoid, which maybe we will be talking about in a few minutes.
704
+
705
+ 12:55.840 --> 12:57.000
706
+ Those things are fascinating.
707
+
708
+ 12:57.000 --> 13:01.960
709
+ Yes, so if you use processes of development,
710
+
711
+ 13:01.960 --> 13:06.480
712
+ then when you watch it, you can see that sometimes things can go wrong
713
+
714
+ 13:06.480 --> 13:07.520
715
+ in some organoids.
716
+
717
+ 13:07.520 --> 13:10.880
718
+ And by wrong, I mean different one organoid from the next.
719
+
720
+ 13:10.880 --> 13:14.840
721
+ While if you think about that embryo, it always goes right.
722
+
723
+ 13:14.840 --> 13:18.960
724
+ So it's this development, for as complex as it is,
725
+
726
+ 13:18.960 --> 13:23.680
727
+ every time a baby is born, it has, you know, with very few exceptions,
728
+
729
+ 13:23.680 --> 13:26.200
730
+ a brain that is like the next baby's.
731
+
732
+ 13:26.200 --> 13:31.320
733
+ But it's not the same if you develop it in a dish.
734
+
735
+ 13:31.320 --> 13:33.840
736
+ And first of all, we don't even develop a brain.
737
+
738
+ 13:33.840 --> 13:36.080
739
+ You develop something much simpler in the dish.
740
+
741
+ 13:36.080 --> 13:39.680
742
+ But there are more options for building things differently,
743
+
744
+ 13:39.680 --> 13:46.080
745
+ which really tells you that evolution has played a really
746
+
747
+ 13:46.080 --> 13:53.280
748
+ tight game here for how in the end the brain is built in vivo.
749
+
750
+ 13:53.280 --> 13:55.360
751
+ So just a quick maybe dumb question,
752
+
753
+ 13:55.360 --> 14:01.120
754
+ but it seems like the building process is not a dictatorship.
755
+
756
+ 14:01.120 --> 14:06.680
757
+ It seems like there's not a centralized high level mechanism
758
+
759
+ 14:06.680 --> 14:10.320
760
+ that says, OK, this cell built itself the wrong way.
761
+
762
+ 14:10.320 --> 14:11.560
763
+ I'm going to kill it.
764
+
765
+ 14:11.560 --> 14:15.480
766
+ It seems like there's a really strong distributed mechanism.
767
+
768
+ 14:15.480 --> 14:18.440
769
+ Is that in your sense for what you have?
770
+
771
+ 14:18.440 --> 14:20.920
772
+ There are a lot of possibilities, right?
773
+
774
+ 14:20.920 --> 14:25.080
775
+ And if you think about, for example, different species,
776
+
777
+ 14:25.080 --> 14:28.920
778
+ building their brain, each brain is a little bit different.
779
+
780
+ 14:28.920 --> 14:31.360
781
+ So the brain of a lizard is very different from that
782
+
783
+ 14:31.360 --> 14:36.560
784
+ of a chicken, from that of one of us, and so on and so forth.
785
+
786
+ 14:36.560 --> 14:40.960
787
+ And still is a brain, but it was built differently.
788
+
789
+ 14:40.960 --> 14:44.120
790
+ Starting from stem cells, they pretty much
791
+
792
+ 14:44.120 --> 14:46.040
793
+ had the same potential.
794
+
795
+ 14:46.040 --> 14:49.400
796
+ But in the end, evolution builds different brains
797
+
798
+ 14:49.400 --> 14:51.520
799
+ in different species, because that
800
+
801
+ 14:51.520 --> 14:54.040
802
+ serves in a way the purpose of that species
803
+
804
+ 14:54.040 --> 14:56.680
805
+ and the well being of that organism.
806
+
807
+ 14:56.680 --> 15:00.720
808
+ And so there are many possibilities,
809
+
810
+ 15:00.720 --> 15:04.880
811
+ but then there is a way, and you were talking about a code.
812
+
813
+ 15:04.880 --> 15:07.560
814
+ Nobody knows what the entire code of development is.
815
+
816
+ 15:07.560 --> 15:08.680
817
+ Of course, we don't.
818
+
819
+ 15:08.680 --> 15:13.400
820
+ We know bits and pieces of very specific aspects
821
+
822
+ 15:13.400 --> 15:15.680
823
+ of development of the brain, what genes are involved
824
+
825
+ 15:15.680 --> 15:18.520
826
+ to make certain cell types, how those two cells interact
827
+
828
+ 15:18.520 --> 15:21.480
829
+ to make the next level structure that we might know,
830
+
831
+ 15:21.480 --> 15:24.560
832
+ but the entirety of it, how it's so well controlled.
833
+
834
+ 15:24.560 --> 15:26.160
835
+ It's really mind blowing.
836
+
837
+ 15:26.160 --> 15:29.120
838
+ So in the first two months in the embryo,
839
+
840
+ 15:29.120 --> 15:32.720
841
+ or whatever, the first few weeks, few months.
842
+
843
+ 15:32.720 --> 15:37.120
844
+ So yeah, the building blocks are constructed,
845
+
846
+ 15:37.120 --> 15:40.040
847
+ the actual, the different regions of the brain,
848
+
849
+ 15:40.040 --> 15:42.760
850
+ I guess, in the nervous system.
851
+
852
+ 15:42.760 --> 15:46.520
853
+ Well, this continues way longer than just the first few months.
854
+
855
+ 15:46.520 --> 15:50.480
856
+ So over the very first few months,
857
+
858
+ 15:50.480 --> 15:52.080
859
+ you build a lot of these cells,
860
+
861
+ 15:52.080 --> 15:56.800
862
+ but then there is continuous building of new cell types
863
+
864
+ 15:56.800 --> 15:58.480
865
+ all the way through birth.
866
+
867
+ 15:58.480 --> 16:00.400
868
+ And then even postnatally,
869
+
870
+ 16:01.520 --> 16:04.000
871
+ I don't know if you've ever heard of myelin.
872
+
873
+ 16:04.000 --> 16:06.720
874
+ Myelin is this sort of insulation
875
+
876
+ 16:06.720 --> 16:09.800
877
+ that is built around the cables of the neurons
878
+
879
+ 16:09.800 --> 16:12.200
880
+ so that the electricity can go really fast from.
881
+
882
+ 16:12.200 --> 16:13.520
883
+ The axons, I guess they're called.
884
+
885
+ 16:13.520 --> 16:15.720
886
+ The axons are called axons, exactly.
887
+
888
+ 16:15.720 --> 16:20.720
889
+ And so as human beings, we myelinate ourselves
890
+
891
+ 16:22.920 --> 16:27.000
892
+ postnatally, a kid, a six year old kid,
893
+
894
+ 16:27.000 --> 16:29.720
895
+ has barely started the process of making
896
+
897
+ 16:29.720 --> 16:31.640
898
+ the mature oligodendrocytes,
899
+
900
+ 16:31.640 --> 16:33.400
901
+ which are the cells that then eventually
902
+
903
+ 16:33.400 --> 16:36.360
904
+ will wrap the axons into myelin.
905
+
906
+ 16:36.360 --> 16:38.720
907
+ And this will continue, believe it or not,
908
+
909
+ 16:38.720 --> 16:42.200
910
+ until we are about 25, 30 years old.
911
+
912
+ 16:42.200 --> 16:45.080
913
+ So there is a continuous process of maturation
914
+
915
+ 16:45.080 --> 16:46.600
916
+ and tweaking and additions,
917
+
918
+ 16:46.600 --> 16:51.040
919
+ and also in response to what we do.
920
+
921
+ 16:51.040 --> 16:53.960
922
+ I remember taking AP Biology in high school,
923
+
924
+ 16:53.960 --> 16:57.040
925
+ and in the textbook, it said that,
926
+
927
+ 16:57.040 --> 16:58.560
928
+ I'm going by memory here,
929
+
930
+ 16:58.560 --> 17:02.000
931
+ that scientists disagree on the purpose of myelin
932
+
933
+ 17:03.040 --> 17:04.720
934
+ in the brain.
935
+
936
+ 17:04.720 --> 17:06.400
937
+ Is that totally wrong?
938
+
939
+ 17:06.400 --> 17:10.000
940
+ So like, I guess it speeds up the,
941
+
942
+ 17:12.000 --> 17:13.200
943
+ okay, but I'd be wrong here,
944
+
945
+ 17:13.200 --> 17:15.280
946
+ but I guess it speeds up the electricity traveling
947
+
948
+ 17:15.280 --> 17:17.680
949
+ down the axon or something.
950
+
951
+ 17:17.680 --> 17:20.160
952
+ So that's the most sort of canonical,
953
+
954
+ 17:20.160 --> 17:21.760
955
+ and definitely that's the case.
956
+
957
+ 17:21.760 --> 17:24.880
958
+ So you have to imagine an axon,
959
+
960
+ 17:24.880 --> 17:27.680
961
+ and you can think about it as a cable of some type
962
+
963
+ 17:27.680 --> 17:29.520
964
+ with electricity going through.
965
+
966
+ 17:29.520 --> 17:34.400
967
+ And what myelin does by insulating the outside,
968
+
969
+ 17:34.400 --> 17:36.360
970
+ I should say there are tracts of myelin
971
+
972
+ 17:36.360 --> 17:39.600
973
+ and pieces of axons that are naked without myelin.
974
+
975
+ 17:39.600 --> 17:41.760
976
+ And so by having the insulation,
977
+
978
+ 17:41.760 --> 17:44.040
979
+ the electricity instead of going straight through the cable,
980
+
981
+ 17:44.040 --> 17:47.240
982
+ it will jump over a piece of myelin, right?
983
+
984
+ 17:47.240 --> 17:49.960
985
+ To the next naked little piece and jump again,
986
+
987
+ 17:49.960 --> 17:52.720
988
+ and therefore, that's the idea that you go faster.
989
+
990
+ 17:53.920 --> 17:58.720
991
+ And it was always thought that in order to build
992
+
993
+ 17:58.720 --> 18:01.840
994
+ a big brain, a big nervous system,
995
+
996
+ 18:01.840 --> 18:04.160
997
+ in order to have a nervous system
998
+
999
+ 18:04.160 --> 18:06.440
1000
+ that can do very complex type of things,
1001
+
1002
+ 18:06.440 --> 18:09.400
1003
+ then you need a lot of myelin because you wanna go fast
1004
+
1005
+ 18:09.400 --> 18:13.320
1006
+ with this information from point A to point B.
1007
+
1008
+ 18:13.320 --> 18:17.960
1009
+ Well, a few years ago, maybe five years ago or so,
1010
+
1011
+ 18:17.960 --> 18:20.680
1012
+ we discovered that some of the most evolved,
1013
+
1014
+ 18:20.680 --> 18:24.120
1015
+ which means the newest type of neurons that we have
1016
+
1017
+ 18:24.120 --> 18:26.520
1018
+ as non human primates, as human beings,
1019
+
1020
+ 18:26.520 --> 18:29.120
1021
+ in the top of our cerebral cortex,
1022
+
1023
+ 18:29.120 --> 18:30.920
1024
+ which should be the neurons that do some
1025
+
1026
+ 18:30.920 --> 18:33.200
1027
+ of the most complex things that we do.
1028
+
1029
+ 18:33.200 --> 18:37.080
1030
+ Well, those have axons that have very little myelin.
1031
+
1032
+ 18:37.080 --> 18:42.080
1033
+ Wow. And they have very interesting ways
1034
+
1035
+ 18:42.080 --> 18:44.400
1036
+ in which they put this myelin on their axons,
1037
+
1038
+ 18:44.400 --> 18:46.400
1039
+ you know, a little piece here, then a long track
1040
+
1041
+ 18:46.400 --> 18:48.680
1042
+ with no myelin, another chunk there,
1043
+
1044
+ 18:48.680 --> 18:50.600
1045
+ and some don't have myelin at all.
1046
+
1047
+ 18:50.600 --> 18:53.120
1048
+ So now you have to explain
1049
+
1050
+ 18:54.760 --> 18:57.960
1051
+ where we're going with evolution.
1052
+
1053
+ 18:57.960 --> 19:01.360
1054
+ And if you think about it, perhaps as an electrical engineer,
1055
+
1056
+ 19:02.800 --> 19:06.000
1057
+ when I looked at it, I initially thought,
1058
+
1059
+ 19:06.000 --> 19:07.560
1060
+ I'm a developmental neurobiologist,
1061
+
1062
+ 19:07.560 --> 19:10.880
1063
+ I thought maybe this is what we see now,
1064
+
1065
+ 19:10.880 --> 19:14.160
1066
+ but if we give evolution another few million years,
1067
+
1068
+ 19:14.160 --> 19:16.520
1069
+ we'll see a lot of myelin on these neurons too.
1070
+
1071
+ 19:16.520 --> 19:18.840
1072
+ But I actually think now that that's instead
1073
+
1074
+ 19:18.840 --> 19:22.000
1075
+ the future of the brain, less myelin,
1076
+
1077
+ 19:22.000 --> 19:24.720
1078
+ and may allow for more flexibility
1079
+
1080
+ 19:24.720 --> 19:26.720
1081
+ on what you do with your axons,
1082
+
1083
+ 19:26.720 --> 19:28.560
1084
+ and therefore more complicated
1085
+
1086
+ 19:28.560 --> 19:32.200
1087
+ and unpredictable type of functions,
1088
+
1089
+ 19:32.200 --> 19:34.320
1090
+ which is also a bit mind blowing.
1091
+
1092
+ 19:34.320 --> 19:38.480
1093
+ So it seems like it's controlling the timing of the signal.
1094
+
1095
+ 19:38.480 --> 19:40.160
1096
+ So there, in the timing,
1097
+
1098
+ 19:40.160 --> 19:43.320
1099
+ you can encode a lot of information.
1100
+
1101
+ 19:43.320 --> 19:44.680
1102
+ And so the brain...
1103
+
1104
+ 19:44.680 --> 19:48.600
1105
+ The timing, the chemistry of that little piece of axon,
1106
+
1107
+ 19:48.600 --> 19:52.160
1108
+ perhaps it's a dynamic process where the myelin can move.
1109
+
1110
+ 19:52.160 --> 19:57.160
1111
+ Now you see how many layers of variability you can add,
1112
+
1113
+ 19:57.520 --> 19:58.960
1114
+ and that's actually really good.
1115
+
1116
+ 19:58.960 --> 20:02.320
1117
+ If you're trying to come up with a new function
1118
+
1119
+ 20:02.320 --> 20:06.600
1120
+ or a new capability or something unpredictable in a way.
1121
+
1122
+ 20:06.600 --> 20:08.240
1123
+ So we're gonna jump around a little bit,
1124
+
1125
+ 20:08.240 --> 20:12.880
1126
+ but the old question of how much is nature
1127
+
1128
+ 20:12.880 --> 20:14.560
1129
+ and how much is nurture,
1130
+
1131
+ 20:14.560 --> 20:17.360
1132
+ in terms of this incredible thing
1133
+
1134
+ 20:17.360 --> 20:18.920
1135
+ after the development is over,
1136
+
1137
+ 20:20.280 --> 20:25.160
1138
+ we seem to be kind of somewhat smart, intelligent,
1139
+
1140
+ 20:26.160 --> 20:27.600
1141
+ cognition, consciousness,
1142
+
1143
+ 20:27.600 --> 20:30.680
1144
+ all these things are just incredible ability of reason
1145
+
1146
+ 20:30.680 --> 20:32.080
1147
+ and so on emerge.
1148
+
1149
+ 20:32.080 --> 20:34.960
1150
+ In your sense, how much is in the hardware,
1151
+
1152
+ 20:34.960 --> 20:39.320
1153
+ in the nature and how much is in the nurture,
1154
+
1155
+ 20:39.320 --> 20:41.040
1156
+ learned through our parents,
1157
+
1158
+ 20:41.040 --> 20:42.480
1159
+ through interacting with the environment, so on.
1160
+
1161
+ 20:42.480 --> 20:43.800
1162
+ It's really both, right?
1163
+
1164
+ 20:43.800 --> 20:45.040
1165
+ If you think about it.
1166
+
1167
+ 20:45.040 --> 20:48.040
1168
+ So we are born with a brain as babies
1169
+
1170
+ 20:48.040 --> 20:53.040
1171
+ that has most of its cells and most of its structures
1172
+
1173
+ 20:53.640 --> 20:57.920
1174
+ and that will take a few years to grow,
1175
+
1176
+ 20:57.920 --> 21:00.640
1177
+ to add more, to be better.
1178
+
1179
+ 21:00.640 --> 21:04.160
1180
+ But really then we have this 20 years
1181
+
1182
+ 21:04.160 --> 21:07.080
1183
+ of interacting with the environment around us.
1184
+
1185
+ 21:07.080 --> 21:10.800
1186
+ And so what that brain that was so perfectly built
1187
+
1188
+ 21:10.800 --> 21:15.800
1189
+ or imperfectly built due to our genetic cues
1190
+
1191
+ 21:16.480 --> 21:20.200
1192
+ will then be used to incorporate the environment
1193
+
1194
+ 21:20.200 --> 21:22.760
1195
+ in its further maturation and development.
1196
+
1197
+ 21:22.760 --> 21:27.000
1198
+ And so your experiences do shape your brain.
1199
+
1200
+ 21:27.000 --> 21:29.480
1201
+ I mean, we know that like if you and I
1202
+
1203
+ 21:29.480 --> 21:33.000
1204
+ may have had a different childhood or a different,
1205
+
1206
+ 21:33.000 --> 21:35.080
1207
+ we have been going to different schools,
1208
+
1209
+ 21:35.080 --> 21:36.480
1210
+ we have been learning different things
1211
+
1212
+ 21:36.480 --> 21:38.080
1213
+ and our brain is a little bit different
1214
+
1215
+ 21:38.080 --> 21:41.200
1216
+ because of that we behave differently because of that.
1217
+
1218
+ 21:41.200 --> 21:44.080
1219
+ And so especially postnatally,
1220
+
1221
+ 21:44.080 --> 21:46.040
1222
+ experience is extremely important.
1223
+
1224
+ 21:46.040 --> 21:48.800
1225
+ We are born with a plastic brain.
1226
+
1227
+ 21:48.800 --> 21:51.480
1228
+ What that means is a brain that is able to change
1229
+
1230
+ 21:51.480 --> 21:54.280
1231
+ in response to stimuli.
1232
+
1233
+ 21:54.280 --> 21:56.400
1234
+ They can be sensory.
1235
+
1236
+ 21:56.400 --> 22:01.040
1237
+ So perhaps some of the most illuminating studies
1238
+
1239
+ 22:01.040 --> 22:03.440
1240
+ that were done were studies in which
1241
+
1242
+ 22:03.440 --> 22:07.000
1243
+ the sensory organs were not working, right?
1244
+
1245
+ 22:07.000 --> 22:09.520
1246
+ If you are born with eyes that don't work,
1247
+
1248
+ 22:09.520 --> 22:12.520
1249
+ then your very brain, the piece of the brain
1250
+
1251
+ 22:12.520 --> 22:16.000
1252
+ that normally would process vision, the visual cortex
1253
+
1254
+ 22:17.240 --> 22:19.840
1255
+ develops postnatally differently
1256
+
1257
+ 22:19.840 --> 22:23.520
1258
+ and it might be used to do something different, right?
1259
+
1260
+ 22:23.520 --> 22:25.600
1261
+ So that's the most extreme.
1262
+
1263
+ 22:25.600 --> 22:27.480
1264
+ The plasticity of the brain, I guess,
1265
+
1266
+ 22:27.480 --> 22:29.480
1267
+ is the magic hardware that it,
1268
+
1269
+ 22:29.480 --> 22:32.960
1270
+ and then its flexibility in all forms
1271
+
1272
+ 22:32.960 --> 22:36.320
1273
+ is what enables the learning postnatally.
1274
+
1275
+ 22:36.320 --> 22:39.280
1276
+ Can you talk about organoids?
1277
+
1278
+ 22:39.280 --> 22:40.920
1279
+ What are they?
1280
+
1281
+ 22:40.920 --> 22:43.760
1282
+ And how can you use them to help us understand
1283
+
1284
+ 22:43.760 --> 22:45.760
1285
+ the brain and the development of the brain?
1286
+
1287
+ 22:45.760 --> 22:47.360
1288
+ This is very, very important.
1289
+
1290
+ 22:47.360 --> 22:49.920
1291
+ So the first thing I'd like to say,
1292
+
1293
+ 22:49.920 --> 22:51.440
1294
+ please keep this in the video.
1295
+
1296
+ 22:51.440 --> 22:56.040
1297
+ The first thing I'd like to say is that an organoid,
1298
+
1299
+ 22:56.040 --> 23:01.040
1300
+ a brain organoid, is not the same as a brain, okay?
1301
+
1302
+ 23:01.600 --> 23:03.640
1303
+ It's a fundamental distinction.
1304
+
1305
+ 23:03.640 --> 23:08.560
1306
+ It's a system, a cellular system,
1307
+
1308
+ 23:08.560 --> 23:12.200
1309
+ that one can develop in the culture dish
1310
+
1311
+ 23:12.200 --> 23:17.200
1312
+ starting from stem cells that will mimic some aspects
1313
+
1314
+ 23:17.200 --> 23:21.400
1315
+ of the development of the brain, but not all of it.
1316
+
1317
+ 23:21.400 --> 23:23.760
1318
+ They are very small, maximum,
1319
+
1320
+ 23:23.760 --> 23:27.920
1321
+ they become about four to five millimeters in diameters.
1322
+
1323
+ 23:27.920 --> 23:32.920
1324
+ They are much simpler than our brain, of course,
1325
+
1326
+ 23:33.400 --> 23:36.480
1327
+ but yet they are the only system
1328
+
1329
+ 23:36.480 --> 23:39.520
1330
+ where we can literally watch a process
1331
+
1332
+ 23:39.520 --> 23:42.560
1333
+ of human brain development unfold.
1334
+
1335
+ 23:42.560 --> 23:45.080
1336
+ And by watch, I mean study it.
1337
+
1338
+ 23:45.080 --> 23:48.000
1339
+ Remember when I told you that we can't understand
1340
+
1341
+ 23:48.000 --> 23:50.040
1342
+ everything about development in our own brain
1343
+
1344
+ 23:50.040 --> 23:51.560
1345
+ by studying a mouse?
1346
+
1347
+ 23:51.560 --> 23:53.600
1348
+ Well, we can't study the actual process
1349
+
1350
+ 23:53.600 --> 23:54.840
1351
+ of development of the human brain
1352
+
1353
+ 23:54.840 --> 23:56.320
1354
+ because it all happens in utero.
1355
+
1356
+ 23:56.320 --> 23:59.320
1357
+ So we will never have access to that process ever.
1358
+
1359
+ 24:00.400 --> 24:04.320
1360
+ And therefore, this is our next best thing,
1361
+
1362
+ 24:04.320 --> 24:08.400
1363
+ like a bunch of stem cells that can be coaxed
1364
+
1365
+ 24:08.400 --> 24:11.720
1366
+ into starting a process of neural tube formation.
1367
+
1368
+ 24:11.720 --> 24:14.680
1369
+ Remember that tube that is made by the embryo, right?
1370
+
1371
+ 24:14.680 --> 24:17.160
1372
+ And from there, a lot of the cell types
1373
+
1374
+ 24:17.160 --> 24:20.680
1375
+ that are present within the brain
1376
+
1377
+ 24:20.680 --> 24:24.960
1378
+ and you can simply watch it and study,
1379
+
1380
+ 24:24.960 --> 24:28.680
1381
+ but you can also think about diseases
1382
+
1383
+ 24:28.680 --> 24:30.880
1384
+ where development of the brain
1385
+
1386
+ 24:30.880 --> 24:34.200
1387
+ does not proceed normally, right, properly.
1388
+
1389
+ 24:34.200 --> 24:35.920
1390
+ Think about neurodevelopmental diseases
1391
+
1392
+ 24:35.920 --> 24:38.280
1393
+ that are many, many different types.
1394
+
1395
+ 24:38.280 --> 24:40.200
1396
+ Think about autism spectrum disorders,
1397
+
1398
+ 24:40.200 --> 24:42.640
1399
+ there are also many different types of autism.
1400
+
1401
+ 24:42.640 --> 24:45.320
1402
+ So there you could take a stem cell
1403
+
1404
+ 24:45.320 --> 24:47.520
1405
+ which really means either a sample of blood
1406
+
1407
+ 24:47.520 --> 24:50.960
1408
+ or a sample of skin from the patient,
1409
+
1410
+ 24:50.960 --> 24:54.360
1411
+ make a stem cell, and then with that stem cell,
1412
+
1413
+ 24:54.360 --> 24:57.480
1414
+ watch a process of formation of a brain organoid
1415
+
1416
+ 24:57.480 --> 25:00.640
1417
+ of that person, with that genetics,
1418
+
1419
+ 25:00.640 --> 25:02.240
1420
+ with that genetic code in it.
1421
+
1422
+ 25:02.240 --> 25:05.840
1423
+ And you can ask, what is this genetic code doing
1424
+
1425
+ 25:05.840 --> 25:08.800
1426
+ to some aspects of development of the brain?
1427
+
1428
+ 25:08.800 --> 25:12.040
1429
+ And for the first time, you may come to solutions
1430
+
1431
+ 25:12.040 --> 25:15.440
1432
+ like, what cells are involved in autism?
1433
+
1434
+ 25:15.440 --> 25:17.400
1435
+ So I have so many questions around this.
1436
+
1437
+ 25:17.400 --> 25:20.560
1438
+ So if you take this human stem cell
1439
+
1440
+ 25:20.560 --> 25:23.400
1441
+ for that particular person with that genetic code,
1442
+
1443
+ 25:23.400 --> 25:26.560
1444
+ how, and you try to build an organoid,
1445
+
1446
+ 25:26.560 --> 25:28.880
1447
+ how often will it look similar?
1448
+
1449
+ 25:28.880 --> 25:31.880
1450
+ What's the, yeah, so.
1451
+
1452
+ 25:31.880 --> 25:33.320
1453
+ Reproducibility.
1454
+
1455
+ 25:33.320 --> 25:37.360
1456
+ Yes, or how much variability is the flip side of that, yeah.
1457
+
1458
+ 25:37.360 --> 25:42.360
1459
+ So there is much more variability in building organoids
1460
+
1461
+ 25:42.360 --> 25:44.560
1462
+ than there is in building brain.
1463
+
1464
+ 25:44.560 --> 25:47.320
1465
+ It's really true that the majority of us,
1466
+
1467
+ 25:47.320 --> 25:49.600
1468
+ when we are born as babies,
1469
+
1470
+ 25:49.600 --> 25:52.480
1471
+ our brains look a lot like each other.
1472
+
1473
+ 25:52.480 --> 25:54.920
1474
+ This is the magic that the embryo does,
1475
+
1476
+ 25:54.920 --> 25:57.680
1477
+ where it builds a brain in the context of a body
1478
+
1479
+ 25:57.680 --> 26:01.280
1480
+ and there is very little variability there.
1481
+
1482
+ 26:01.280 --> 26:02.320
1483
+ There is disease, of course,
1484
+
1485
+ 26:02.320 --> 26:04.000
1486
+ but in general, little variability.
1487
+
1488
+ 26:04.000 --> 26:08.400
1489
+ When you build an organoid, we don't have the full code
1490
+
1491
+ 26:08.400 --> 26:09.520
1492
+ for how this is done.
1493
+
1494
+ 26:09.520 --> 26:13.440
1495
+ And so in part, the organoid somewhat builds itself
1496
+
1497
+ 26:13.440 --> 26:15.560
1498
+ because there are some structures of the brain
1499
+
1500
+ 26:15.560 --> 26:17.280
1501
+ that the cells know how to make.
1502
+
1503
+ 26:18.160 --> 26:21.840
1504
+ And another part comes from the investigator,
1505
+
1506
+ 26:21.840 --> 26:26.160
1507
+ the scientist, adding to the media factors
1508
+
1509
+ 26:26.160 --> 26:28.040
1510
+ that we know in the mouse, for example,
1511
+
1512
+ 26:28.040 --> 26:30.760
1513
+ would foster a certain step of development.
1514
+
1515
+ 26:30.760 --> 26:33.240
1516
+ But it's very limited.
1517
+
1518
+ 26:33.240 --> 26:36.160
1519
+ And so as a result,
1520
+
1521
+ 26:36.160 --> 26:38.200
1522
+ the kind of product you get in the end
1523
+
1524
+ 26:38.200 --> 26:39.720
1525
+ is much more reductionist.
1526
+
1527
+ 26:39.720 --> 26:42.680
1528
+ It's much more simple than what you get in vivo.
1529
+
1530
+ 26:42.680 --> 26:46.200
1531
+ It mimics early events of development as of today.
1532
+
1533
+ 26:46.200 --> 26:49.080
1534
+ And it doesn't build very complex type of anatomy
1535
+
1536
+ 26:49.080 --> 26:51.440
1537
+ and structure does not as of today,
1538
+
1539
+ 26:52.600 --> 26:54.920
1540
+ which happens instead in vivo.
1541
+
1542
+ 26:54.920 --> 26:59.120
1543
+ And also the variability that you see
1544
+
1545
+ 26:59.120 --> 27:02.800
1546
+ one organoid to the next tends to be higher
1547
+
1548
+ 27:02.800 --> 27:05.560
1549
+ than when you compare an embryo to the next.
1550
+
1551
+ 27:05.560 --> 27:08.960
1552
+ So, okay, then the next question is how hard
1553
+
1554
+ 27:08.960 --> 27:11.120
1555
+ and maybe another flip side of that expensive
1556
+
1557
+ 27:11.120 --> 27:14.960
1558
+ is it to go from one stem cell to an organoid?
1559
+
1560
+ 27:14.960 --> 27:16.760
1561
+ How many can you build in like,
1562
+
1563
+ 27:16.760 --> 27:18.480
1564
+ because it sounds very complicated.
1565
+
1566
+ 27:18.480 --> 27:23.480
1567
+ It's work, definitely, and it's money, definitely.
1568
+
1569
+ 27:23.480 --> 27:28.080
1570
+ But you can really grow a very high number
1571
+
1572
+ 27:28.080 --> 27:31.640
1573
+ of these organoids, you know, can go perhaps,
1574
+
1575
+ 27:31.640 --> 27:33.160
1576
+ I told you the maximum they become
1577
+
1578
+ 27:33.160 --> 27:34.800
1579
+ about five millimeters in diameter.
1580
+
1581
+ 27:34.800 --> 27:39.800
1582
+ So this is about the size of a tiny, tiny, you know, raisin
1583
+
1584
+ 27:40.800 --> 27:43.160
1585
+ or perhaps the seed of an apple.
1586
+
1587
+ 27:43.160 --> 27:47.560
1588
+ And so you can grow 50 to 100 of those
1589
+
1590
+ 27:47.560 --> 27:51.360
1591
+ inside one big bioreactors, which are these flasks
1592
+
1593
+ 27:51.360 --> 27:55.520
1594
+ where the media provides nutrients for the organoids.
1595
+
1596
+ 27:55.520 --> 28:00.520
1597
+ So the problem is not to grow more or less of them.
1598
+
1599
+ 28:01.760 --> 28:06.480
1600
+ It's really to figure out how to grow them in a way
1601
+
1602
+ 28:06.480 --> 28:08.440
1603
+ that they are more and more reproducible.
1604
+
1605
+ 28:08.440 --> 28:10.000
1606
+ For example, organoid to organoid,
1607
+
1608
+ 28:10.000 --> 28:13.200
1609
+ so they can be used to study a biological process
1610
+
1611
+ 28:13.200 --> 28:15.640
1612
+ because if you have too much of variability,
1613
+
1614
+ 28:15.640 --> 28:17.160
1615
+ then you never know if what you see
1616
+
1617
+ 28:17.160 --> 28:19.560
1618
+ is just an exception or really the rule.
1619
+
1620
+ 28:19.560 --> 28:22.160
1621
+ So what does an organoid look like?
1622
+
1623
+ 28:22.160 --> 28:25.120
1624
+ Are there different neurons already emerging?
1625
+
1626
+ 28:25.120 --> 28:27.520
1627
+ Is there, you know, well, first,
1628
+
1629
+ 28:27.520 --> 28:29.920
1630
+ can you tell me what kind of neurons are there?
1631
+
1632
+ 28:29.920 --> 28:30.920
1633
+ Yes.
1634
+
1635
+ 28:30.920 --> 28:35.560
1636
+ Are they sort of all the same?
1637
+
1638
+ 28:35.560 --> 28:37.480
1639
+ Are they not all the same?
1640
+
1641
+ 28:37.480 --> 28:39.560
1642
+ Is how much do we understand
1643
+
1644
+ 28:39.560 --> 28:42.440
1645
+ and how much of that variance
1646
+
1647
+ 28:42.440 --> 28:45.800
1648
+ if any can exist in organoids?
1649
+
1650
+ 28:45.800 --> 28:49.360
1651
+ Yes, so you could grow,
1652
+
1653
+ 28:49.360 --> 28:52.440
1654
+ I told you that the brain has different parts.
1655
+
1656
+ 28:52.440 --> 28:56.000
1657
+ So the cerebral cortex is on the top part of the brain,
1658
+
1659
+ 28:56.000 --> 28:57.960
1660
+ but there is another region called the striatum
1661
+
1662
+ 28:57.960 --> 28:59.960
1663
+ that is below the cortex and so on and so forth.
1664
+
1665
+ 28:59.960 --> 29:03.760
1666
+ All of these regions have different types of cells
1667
+
1668
+ 29:03.760 --> 29:05.040
1669
+ in the actual brain.
1670
+
1671
+ 29:05.040 --> 29:05.880
1672
+ Okay.
1673
+
1674
+ 29:05.880 --> 29:08.880
1675
+ And so scientists have been able to grow organoids
1676
+
1677
+ 29:08.880 --> 29:11.440
1678
+ that may mimic some aspects of development
1679
+
1680
+ 29:11.440 --> 29:13.920
1681
+ of these different regions of the brain.
1682
+
1683
+ 29:13.920 --> 29:16.480
1684
+ And so we are very interested in the cerebral cortex.
1685
+
1686
+ 29:16.480 --> 29:17.720
1687
+ That's the coolest part, right?
1688
+
1689
+ 29:17.720 --> 29:18.560
1690
+ Very cool.
1691
+
1692
+ 29:18.560 --> 29:20.880
1693
+ I agree with you.
1694
+
1695
+ 29:20.880 --> 29:23.880
1696
+ We wouldn't be here talking if we didn't have a cerebral cortex.
1697
+
1698
+ 29:23.880 --> 29:25.200
1699
+ It's also, I like to think,
1700
+
1701
+ 29:25.200 --> 29:27.600
1702
+ the part of the brain that really truly makes us human,
1703
+
1704
+ 29:27.600 --> 29:30.200
1705
+ the most evolved in recent evolution.
1706
+
1707
+ 29:30.200 --> 29:33.600
1708
+ And so in the attempt to make the cerebral cortex
1709
+
1710
+ 29:33.600 --> 29:37.200
1711
+ and by figuring out a way to have these organoids
1712
+
1713
+ 29:37.200 --> 29:40.240
1714
+ continue to grow and develop for extended periods of time,
1715
+
1716
+ 29:40.240 --> 29:42.400
1717
+ much like it happens in the real embryo,
1718
+
1719
+ 29:42.400 --> 29:44.240
1720
+ months and months in culture,
1721
+
1722
+ 29:44.240 --> 29:47.920
1723
+ then you can see that many different types
1724
+
1725
+ 29:47.920 --> 29:50.200
1726
+ of neurons of the cortex appear
1727
+
1728
+ 29:50.200 --> 29:52.200
1729
+ and at some point also the astrocytes,
1730
+
1731
+ 29:52.200 --> 29:57.200
1732
+ so the glia cells of the cerebral cortex also appear.
1733
+
1734
+ 29:57.640 --> 29:59.000
1735
+ What are these?
1736
+
1737
+ 29:59.000 --> 29:59.840
1738
+ Astrocytes.
1739
+
1740
+ 29:59.840 --> 30:00.680
1741
+ Astrocytes.
1742
+
1743
+ 30:00.680 --> 30:02.080
1744
+ The astrocytes are not neurons,
1745
+
1746
+ 30:02.080 --> 30:03.440
1747
+ so they're not nerve cells,
1748
+
1749
+ 30:03.440 --> 30:06.160
1750
+ but they play very important roles.
1751
+
1752
+ 30:06.160 --> 30:09.000
1753
+ One important role is to support the neuron,
1754
+
1755
+ 30:09.000 --> 30:11.880
1756
+ but of course they have much more active type of roles.
1757
+
1758
+ 30:11.880 --> 30:13.280
1759
+ They're very important, for example,
1760
+
1761
+ 30:13.280 --> 30:14.560
1762
+ to make the synapses,
1763
+
1764
+ 30:14.560 --> 30:17.640
1765
+ which are the point of contact and communication
1766
+
1767
+ 30:17.640 --> 30:21.480
1768
+ between two neurons, they...
1769
+
1770
+ 30:21.480 --> 30:25.680
1771
+ So all that chemistry fun that happens in the synapses
1772
+
1773
+ 30:25.680 --> 30:28.160
1774
+ happens because of these cells?
1775
+
1776
+ 30:28.160 --> 30:29.680
1777
+ Are they the medium in which?
1778
+
1779
+ 30:29.680 --> 30:32.000
1780
+ Happens because of the interactions,
1781
+
1782
+ 30:32.000 --> 30:34.760
1783
+ happens because you are making the cells
1784
+
1785
+ 30:34.760 --> 30:36.320
1786
+ and they have certain properties,
1787
+
1788
+ 30:36.320 --> 30:40.360
1789
+ including the ability to make neurotransmitters,
1790
+
1791
+ 30:40.360 --> 30:43.320
1792
+ which are the chemicals that are secreted to the synapses,
1793
+
1794
+ 30:43.320 --> 30:46.480
1795
+ including the ability of making these axons grow
1796
+
1797
+ 30:46.480 --> 30:49.240
1798
+ with their growth cones and so on and so forth.
1799
+
1800
+ 30:49.240 --> 30:51.400
1801
+ And then you have other cells around there
1802
+
1803
+ 30:51.400 --> 30:55.200
1804
+ that release chemicals or touch the neurons
1805
+
1806
+ 30:55.200 --> 30:57.200
1807
+ or interact with them in different ways
1808
+
1809
+ 30:57.200 --> 30:59.880
1810
+ to really foster this perfect process,
1811
+
1812
+ 30:59.880 --> 31:02.480
1813
+ in this case of synaptogenesis.
1814
+
1815
+ 31:02.480 --> 31:05.680
1816
+ And this does happen within organoids.
1817
+
1818
+ 31:05.680 --> 31:06.520
1819
+ Or with organoids.
1820
+
1821
+ 31:06.520 --> 31:09.760
1822
+ So the mechanical and the chemical stuff happens.
1823
+
1824
+ 31:09.760 --> 31:11.640
1825
+ The connectivity between neurons.
1826
+
1827
+ 31:11.640 --> 31:13.320
1828
+ This, in a way, is not surprising
1829
+
1830
+ 31:13.320 --> 31:18.160
1831
+ because scientists have been culturing neurons forever.
1832
+
1833
+ 31:18.160 --> 31:20.760
1834
+ And when you take a neuron, even a very young one,
1835
+
1836
+ 31:20.760 --> 31:23.480
1837
+ and you culture it, eventually finds another cell
1838
+
1839
+ 31:23.480 --> 31:26.960
1840
+ or another neuron to talk to, it will form a synapse.
1841
+
1842
+ 31:26.960 --> 31:28.520
1843
+ Are we talking about mice neurons?
1844
+
1845
+ 31:28.520 --> 31:29.600
1846
+ Are we talking about human neurons?
1847
+
1848
+ 31:29.600 --> 31:30.600
1849
+ It doesn't matter, both.
1850
+
1851
+ 31:30.600 --> 31:33.280
1852
+ So you can culture a neuron like a single neuron
1853
+
1854
+ 31:33.280 --> 31:37.920
1855
+ and give it a little friend and it starts interacting?
1856
+
1857
+ 31:37.920 --> 31:40.240
1858
+ Yes. So neurons are able to...
1859
+
1860
+ 31:40.240 --> 31:44.560
1861
+ It sounds... It's more simple than what it may sound to you.
1862
+
1863
+ 31:44.560 --> 31:48.320
1864
+ Neurons have molecular properties and structural properties
1865
+
1866
+ 31:48.320 --> 31:51.120
1867
+ that allow them to really communicate with other cells.
1868
+
1869
+ 31:51.120 --> 31:53.160
1870
+ And so if you put not one neuron,
1871
+
1872
+ 31:53.160 --> 31:55.120
1873
+ but if you put several neurons together,
1874
+
1875
+ 31:55.120 --> 32:00.240
1876
+ chances are that they will form synapses with each other.
1877
+
1878
+ 32:00.240 --> 32:01.120
1879
+ Okay, great.
1880
+
1881
+ 32:01.120 --> 32:03.360
1882
+ So an organoid is not a brain.
1883
+
1884
+ 32:03.360 --> 32:03.880
1885
+ No.
1886
+
1887
+ 32:03.880 --> 32:07.600
1888
+ But there's some...
1889
+
1890
+ 32:07.600 --> 32:10.440
1891
+ It's able to, especially what you're talking about,
1892
+
1893
+ 32:10.440 --> 32:15.120
1894
+ mimic some properties of the cerebral cortex, for example.
1895
+
1896
+ 32:15.120 --> 32:17.960
1897
+ So what can you understand about the brain
1898
+
1899
+ 32:17.960 --> 32:21.040
1900
+ by studying an organoid of the cerebral cortex?
1901
+
1902
+ 32:21.040 --> 32:26.400
1903
+ I can literally study all this incredible diversity of cell type,
1904
+
1905
+ 32:26.400 --> 32:29.040
1906
+ all these many, many different classes of cells.
1907
+
1908
+ 32:29.040 --> 32:30.760
1909
+ How are they made?
1910
+
1911
+ 32:30.760 --> 32:32.520
1912
+ How do they look like?
1913
+
1914
+ 32:32.520 --> 32:34.920
1915
+ What do they need to be made properly?
1916
+
1917
+ 32:34.920 --> 32:36.280
1918
+ And what goes wrong?
1919
+
1920
+ 32:36.280 --> 32:39.680
1921
+ If now the genetics of that stem cell
1922
+
1923
+ 32:39.680 --> 32:42.800
1924
+ that I used to make the organoid came from a patient
1925
+
1926
+ 32:42.800 --> 32:44.320
1927
+ with a neurodevelopmental disease,
1928
+
1929
+ 32:44.320 --> 32:47.600
1930
+ can I actually watch for the very first time
1931
+
1932
+ 32:47.600 --> 32:51.400
1933
+ what may have gone wrong years before in this kid
1934
+
1935
+ 32:51.400 --> 32:53.480
1936
+ when its own brain was being made?
1937
+
1938
+ 32:53.480 --> 32:54.720
1939
+ Think about that loop.
1940
+
1941
+ 32:54.720 --> 32:59.600
1942
+ In a way, it's a little tiny rudimentary window
1943
+
1944
+ 32:59.600 --> 33:04.240
1945
+ into the past, into the time when that brain,
1946
+
1947
+ 33:04.240 --> 33:07.680
1948
+ in a kid that had this neurodevelopmental disease,
1949
+
1950
+ 33:07.680 --> 33:10.120
1951
+ was being made.
1952
+
1953
+ 33:10.120 --> 33:12.880
1954
+ And I think that's unbelievably powerful
1955
+
1956
+ 33:12.880 --> 33:16.800
1957
+ because today we have no idea of what cell types,
1958
+
1959
+ 33:16.800 --> 33:20.880
1960
+ we barely know what brain regions are affected in these diseases.
1961
+
1962
+ 33:20.880 --> 33:23.720
1963
+ Now we have an experimental system
1964
+
1965
+ 33:23.720 --> 33:25.440
1966
+ that we can study in the lab
1967
+
1968
+ 33:25.440 --> 33:28.440
1969
+ and we can ask what are the cells affected?
1970
+
1971
+ 33:28.440 --> 33:31.840
1972
+ When, during development, things went wrong.
1973
+
1974
+ 33:31.840 --> 33:35.200
1975
+ What are the molecules among the many, many different molecules
1976
+
1977
+ 33:35.200 --> 33:36.600
1978
+ that control brain development?
1979
+
1980
+ 33:36.600 --> 33:39.720
1981
+ Which ones are the ones that really messed up here
1982
+
1983
+ 33:39.720 --> 33:42.160
1984
+ and we want perhaps to fix?
1985
+
1986
+ 33:42.160 --> 33:44.520
1987
+ And what is really the final product?
1988
+
1989
+ 33:44.520 --> 33:48.560
1990
+ Is it a less strong kind of circuit and brain?
1991
+
1992
+ 33:48.560 --> 33:50.560
1993
+ Is it a brain that lacks a cell type?
1994
+
1995
+ 33:50.560 --> 33:52.040
1996
+ Is it a, what is it?
1997
+
1998
+ 33:52.040 --> 33:54.920
1999
+ Because then we can think about treatment
2000
+
2001
+ 33:54.920 --> 33:59.360
2002
+ and care for these patients that is informed
2003
+
2004
+ 33:59.360 --> 34:02.040
2005
+ rather than just based on current diagnostics.
2006
+
2007
+ 34:02.040 --> 34:06.240
2008
+ So how hard is it to detect through the developmental process?
2009
+
2010
+ 34:06.240 --> 34:09.920
2011
+ It's a super exciting tool
2012
+
2013
+ 34:09.920 --> 34:15.160
2014
+ to see how different conditions develop.
2015
+
2016
+ 34:15.160 --> 34:17.640
2017
+ How hard is it to detect that, wait a minute,
2018
+
2019
+ 34:17.640 --> 34:20.760
2020
+ this is abnormal development.
2021
+
2022
+ 34:20.760 --> 34:22.080
2023
+ Yeah.
2024
+
2025
+ 34:22.080 --> 34:24.840
2026
+ That is, how hard is it, how much signal is there,
2027
+
2028
+ 34:24.840 --> 34:26.520
2029
+ how much of it is a mess?
2030
+
2031
+ 34:26.520 --> 34:29.520
2032
+ Because things can go wrong at multiple levels, right?
2033
+
2034
+ 34:29.520 --> 34:34.360
2035
+ You could have a cell that is born and built
2036
+
2037
+ 34:34.360 --> 34:36.280
2038
+ but then doesn't work properly
2039
+
2040
+ 34:36.280 --> 34:38.360
2041
+ or a cell that is not even born
2042
+
2043
+ 34:38.360 --> 34:40.760
2044
+ or a cell that doesn't interact with other cells differently
2045
+
2046
+ 34:40.760 --> 34:42.160
2047
+ and so on and so forth.
2048
+
2049
+ 34:42.160 --> 34:44.440
2050
+ So today we have technology
2051
+
2052
+ 34:44.440 --> 34:47.800
2053
+ that we did not have even five years ago
2054
+
2055
+ 34:47.800 --> 34:49.800
2056
+ that allows us to look, for example,
2057
+
2058
+ 34:49.800 --> 34:52.160
2059
+ at the molecular picture of a cell,
2060
+
2061
+ 34:52.160 --> 34:56.840
2062
+ of a single cell in a sea of cells with high precision.
2063
+
2064
+ 34:56.840 --> 34:58.920
2065
+ And so that molecular information
2066
+
2067
+ 34:58.920 --> 35:01.840
2068
+ where you compare many, many single cells
2069
+
2070
+ 35:01.840 --> 35:03.720
2071
+ for the genes that they produce
2072
+
2073
+ 35:03.720 --> 35:06.240
2074
+ between a control individual
2075
+
2076
+ 35:06.240 --> 35:10.200
2077
+ and an individual with a neurodevelopmental disease,
2078
+
2079
+ 35:10.200 --> 35:13.880
2080
+ that may tell you what is different, molecularly.
2081
+
2082
+ 35:13.880 --> 35:18.640
2083
+ Or you could see that some cells are not even made,
2084
+
2085
+ 35:18.640 --> 35:20.840
2086
+ for example, or that the process of maturation
2087
+
2088
+ 35:20.840 --> 35:22.680
2089
+ of the cells may be wrong.
2090
+
2091
+ 35:22.680 --> 35:25.080
2092
+ There are many different levels here
2093
+
2094
+ 35:26.040 --> 35:29.640
2095
+ and we can study the cells at the molecular level
2096
+
2097
+ 35:29.640 --> 35:33.440
2098
+ but also we can use the organoids to ask questions
2099
+
2100
+ 35:33.440 --> 35:35.360
2101
+ about the properties of the neurons,
2102
+
2103
+ 35:35.360 --> 35:37.400
2104
+ the functional properties,
2105
+
2106
+ 35:37.400 --> 35:39.000
2107
+ how they communicate with each other,
2108
+
2109
+ 35:39.000 --> 35:41.440
2110
+ how they respond to a stimulus and so on and so forth
2111
+
2112
+ 35:41.440 --> 35:46.440
2113
+ and we may get abnormalities there, right?
2114
+
2115
+ 35:46.440 --> 35:51.440
2116
+ And detect those, so how early is this work in the,
2117
+
2118
+ 35:51.920 --> 35:54.400
2119
+ maybe in the history of science?
2120
+
2121
+ 35:54.400 --> 35:59.400
2122
+ So, so, I mean, like, so if you were to,
2123
+
2124
+ 35:59.840 --> 36:04.840
2125
+ if you and I time travel a thousand years into the future,
2126
+
2127
+ 36:05.280 --> 36:10.040
2128
+ organoids seem to be, maybe I'm romanticizing the notion
2129
+
2130
+ 36:10.040 --> 36:12.880
2131
+ but you're building not a brain
2132
+
2133
+ 36:12.880 --> 36:15.800
2134
+ but something that has properties of a brain.
2135
+
2136
+ 36:15.800 --> 36:19.120
2137
+ So it feels like you might be getting close to,
2138
+
2139
+ 36:19.120 --> 36:23.320
2140
+ in the building process, to build is to understand.
2141
+
2142
+ 36:23.320 --> 36:28.320
2143
+ So how far are we in this understanding
2144
+
2145
+ 36:29.160 --> 36:30.360
2146
+ process of development?
2147
+
2148
+ 36:31.520 --> 36:34.320
2149
+ A thousand years from now, it's a long time from now.
2150
+
2151
+ 36:34.320 --> 36:36.560
2152
+ So if this planet is still gonna be here,
2153
+
2154
+ 36:36.560 --> 36:38.280
2155
+ a thousand years from now.
2156
+
2157
+ 36:38.280 --> 36:42.080
2158
+ So I mean, if, you know, like they write a book,
2159
+
2160
+ 36:42.080 --> 36:44.120
2161
+ obviously there'll be a chapter about you.
2162
+
2163
+ 36:44.120 --> 36:47.320
2164
+ That's probably the science fiction book today.
2165
+
2166
+ 36:47.320 --> 36:48.160
2167
+ Yeah, today.
2168
+
2169
+ 36:48.160 --> 36:50.840
2170
+ But I mean, I guess where we really understood
2171
+
2172
+ 36:50.840 --> 36:53.400
2173
+ very little about the brain a century ago,
2174
+
2175
+ 36:53.400 --> 36:55.920
2176
+ where I was a big fan in high school,
2177
+
2178
+ 36:55.920 --> 36:58.760
2179
+ reading Freud and so on, still am of psychiatry.
2180
+
2181
+ 36:59.680 --> 37:01.480
2182
+ I would say we still understand very little
2183
+
2184
+ 37:01.480 --> 37:04.720
2185
+ about the functional aspect of just,
2186
+
2187
+ 37:04.720 --> 37:07.760
2188
+ but how in the history of understanding
2189
+
2190
+ 37:07.760 --> 37:09.640
2191
+ the biology of the brain, the development,
2192
+
2193
+ 37:09.640 --> 37:11.240
2194
+ how far are we along?
2195
+
2196
+ 37:11.240 --> 37:12.960
2197
+ It's a very good question.
2198
+
2199
+ 37:12.960 --> 37:15.520
2200
+ And so this is just, of course, my opinion.
2201
+
2202
+ 37:15.520 --> 37:19.720
2203
+ I think that we did not have technology,
2204
+
2205
+ 37:19.720 --> 37:23.160
2206
+ even 10 years ago or certainly not 20 years ago,
2207
+
2208
+ 37:23.160 --> 37:27.760
2209
+ to even think about experimentally investigating
2210
+
2211
+ 37:27.760 --> 37:30.160
2212
+ the development of the human brain.
2213
+
2214
+ 37:30.160 --> 37:32.200
2215
+ So we've done a lot of work in science
2216
+
2217
+ 37:32.200 --> 37:35.480
2218
+ to study the brain on many other organisms.
2219
+
2220
+ 37:35.480 --> 37:39.600
2221
+ Now we have some technologies which I'll spell out
2222
+
2223
+ 37:39.600 --> 37:43.120
2224
+ that allow us to actually look at the real thing
2225
+
2226
+ 37:43.120 --> 37:45.040
2227
+ and look at the brain, at the human brain.
2228
+
2229
+ 37:45.040 --> 37:46.840
2230
+ So what are these technologies?
2231
+
2232
+ 37:46.840 --> 37:50.440
2233
+ There has been huge progress in stem cell biology.
2234
+
2235
+ 37:50.440 --> 37:54.080
2236
+ The moment someone figured out how to turn a skin cell
2237
+
2238
+ 37:54.080 --> 37:57.760
2239
+ into an embryonic stem cell, basically,
2240
+
2241
+ 37:57.760 --> 38:00.160
2242
+ and how that embryonic stem cell
2243
+
2244
+ 38:00.160 --> 38:02.480
2245
+ could begin a process of development again
2246
+
2247
+ 38:02.480 --> 38:04.000
2248
+ to, for example, make a brain,
2249
+
2250
+ 38:04.000 --> 38:06.040
2251
+ there was a huge, you know, advance.
2252
+
2253
+ 38:06.040 --> 38:08.200
2254
+ And in fact, there was a Nobel Prize for that.
2255
+
2256
+ 38:08.200 --> 38:10.440
2257
+ That started the field, really,
2258
+
2259
+ 38:10.440 --> 38:14.240
2260
+ of using stem cells to build organs.
2261
+
2262
+ 38:14.240 --> 38:17.080
2263
+ Now we can build on all the knowledge of development
2264
+
2265
+ 38:17.080 --> 38:18.560
2266
+ that we built over the many, many, many years
2267
+
2268
+ 38:18.560 --> 38:20.720
2269
+ to say, how do we make these stem cells?
2270
+
2271
+ 38:20.720 --> 38:22.680
2272
+ Now make more and more complex aspects
2273
+
2274
+ 38:22.680 --> 38:25.280
2275
+ of development of the human brain.
2276
+
2277
+ 38:25.280 --> 38:28.480
2278
+ So this field is young, the field of brain organoids,
2279
+
2280
+ 38:28.480 --> 38:30.120
2281
+ but it's moving fast.
2282
+
2283
+ 38:30.120 --> 38:32.560
2284
+ And it's moving fast in a very serious way
2285
+
2286
+ 38:32.560 --> 38:35.960
2287
+ that is rooted in labs with the right ethical framework
2288
+
2289
+ 38:35.960 --> 38:39.720
2290
+ and really building on, you know,
2291
+
2292
+ 38:39.720 --> 38:43.520
2293
+ solid science for what reality is and what is not.
2294
+
2295
+ 38:43.520 --> 38:46.120
2296
+ And, but it will go fast
2297
+
2298
+ 38:46.120 --> 38:49.120
2299
+ and it will be more and more powerful.
2300
+
2301
+ 38:49.120 --> 38:52.480
2302
+ We also have technology that allows us to basically study
2303
+
2304
+ 38:52.480 --> 38:54.640
2305
+ the properties of single cells
2306
+
2307
+ 38:54.640 --> 38:59.240
2308
+ across many, many millions of single cells,
2309
+
2310
+ 38:59.240 --> 39:02.160
2311
+ which we didn't have perhaps five years ago.
2312
+
2313
+ 39:02.160 --> 39:04.840
2314
+ So now with that, even an organoid
2315
+
2316
+ 39:04.840 --> 39:08.480
2317
+ that has millions of cells can be profiled in a way,
2318
+
2319
+ 39:08.480 --> 39:11.320
2320
+ looked at with very, very high resolution,
2321
+
2322
+ 39:11.320 --> 39:14.960
2323
+ at the single cell level to really understand what is going on.
2324
+
2325
+ 39:14.960 --> 39:17.520
2326
+ And you could do it in multiple stages of development
2327
+
2328
+ 39:17.520 --> 39:20.120
2329
+ and you can build your hypothesis and so on and so forth.
2330
+
2331
+ 39:20.120 --> 39:22.600
2332
+ So it's not gonna be a thousand years.
2333
+
2334
+ 39:22.600 --> 39:25.240
2335
+ It's gonna be a shorter amount of time.
2336
+
2337
+ 39:25.240 --> 39:29.480
2338
+ And I see this as sort of an exponential growth
2339
+
2340
+ 39:29.480 --> 39:33.560
2341
+ of this field enabled by these technologies
2342
+
2343
+ 39:33.560 --> 39:35.000
2344
+ that we didn't have before.
2345
+
2346
+ 39:35.000 --> 39:36.960
2347
+ And so we're gonna see something transformative
2348
+
2349
+ 39:36.960 --> 39:41.880
2350
+ that we didn't see at all in the prior thousand years.
2351
+
2352
+ 39:41.880 --> 39:44.640
2353
+ So I apologize for the crazy sci fi questions,
2354
+
2355
+ 39:44.640 --> 39:48.840
2356
+ but the developmental process is fascinating to watch
2357
+
2358
+ 39:48.840 --> 39:53.360
2359
+ and study, but how far are we away from
2360
+
2361
+ 39:53.360 --> 39:57.280
2362
+ and maybe how difficult is it to build
2363
+
2364
+ 39:57.280 --> 40:02.280
2365
+ not just an organoid, but a human brain from a stem cell?
2366
+
2367
+ 40:02.280 --> 40:05.640
2368
+ Yeah, first of all, that's not the goal
2369
+
2370
+ 40:05.640 --> 40:09.400
2371
+ for the majority of the serious scientists that work on this
2372
+
2373
+ 40:09.400 --> 40:14.160
2374
+ because you don't have to build the whole human brain
2375
+
2376
+ 40:14.160 --> 40:17.000
2377
+ to make this model useful for understanding
2378
+
2379
+ 40:17.000 --> 40:20.440
2380
+ how the brain develops or understanding disease.
2381
+
2382
+ 40:20.440 --> 40:22.440
2383
+ You don't have to build the whole thing.
2384
+
2385
+ 40:22.440 --> 40:25.200
2386
+ So let me just comment on that, it's fascinating.
2387
+
2388
+ 40:25.200 --> 40:29.200
2389
+ It shows to me the difference between you and I
2390
+
2391
+ 40:29.200 --> 40:32.240
2392
+ is you're actually trying to understand the beauty
2393
+
2394
+ 40:32.240 --> 40:35.520
2395
+ of the human brain and to use it to really help
2396
+
2397
+ 40:35.520 --> 40:38.800
2398
+ thousands or millions of people with disease and so on, right?
2399
+
2400
+ 40:38.800 --> 40:41.480
2401
+ From an artificial intelligence perspective,
2402
+
2403
+ 40:41.480 --> 40:45.600
2404
+ we're trying to build systems that we can put in robots
2405
+
2406
+ 40:45.600 --> 40:49.080
2407
+ and try to create systems that have echoes
2408
+
2409
+ 40:49.080 --> 40:52.360
2410
+ of the intelligence about reasoning about the world,
2411
+
2412
+ 40:52.360 --> 40:53.600
2413
+ navigating the world.
2414
+
2415
+ 40:53.600 --> 40:56.040
2416
+ It's different objectives, I think.
2417
+
2418
+ 40:56.040 --> 40:57.520
2419
+ Yeah, that's very much science fiction.
2420
+
2421
+ 40:57.520 --> 41:00.280
2422
+ Science fiction, but we operate in science fiction a little bit.
2423
+
2424
+ 41:00.280 --> 41:03.440
2425
+ But so on that point of building a brain,
2426
+
2427
+ 41:03.440 --> 41:05.800
2428
+ even though that is not the focus or interest,
2429
+
2430
+ 41:05.800 --> 41:08.520
2431
+ perhaps, of the community, how difficult is it?
2432
+
2433
+ 41:08.520 --> 41:11.200
2434
+ Is it truly science fiction at this point?
2435
+
2436
+ 41:11.200 --> 41:13.960
2437
+ I think the field will progress, like I said,
2438
+
2439
+ 41:13.960 --> 41:17.960
2440
+ and that the system will be more and more complex in a way,
2441
+
2442
+ 41:17.960 --> 41:18.720
2443
+ right?
2444
+
2445
+ 41:18.720 --> 41:23.880
2446
+ But there are properties that emerge from the human brain
2447
+
2448
+ 41:23.880 --> 41:26.640
2449
+ that have to do with the mind, that may have to do with consciousness,
2450
+
2451
+ 41:26.640 --> 41:29.840
2452
+ that may have to do with intelligence or whatever.
2453
+
2454
+ 41:29.840 --> 41:33.720
2455
+ We really don't understand even how they can emerge
2456
+
2457
+ 41:33.720 --> 41:36.880
2458
+ from an actual real brain, and therefore, we cannot measure
2459
+
2460
+ 41:36.880 --> 41:40.160
2461
+ or study in an organoid.
2462
+
2463
+ 41:40.160 --> 41:43.040
2464
+ So I think that this field, many, many years from now,
2465
+
2466
+ 41:43.040 --> 41:48.240
2467
+ may lead to the building of better neural circuits
2468
+
2469
+ 41:48.240 --> 41:50.640
2470
+ that really are built out of understanding of how
2471
+
2472
+ 41:50.640 --> 41:52.400
2473
+ this process really works.
2474
+
2475
+ 41:52.400 --> 41:57.000
2476
+ And it's hard to predict how complex this really will be.
2477
+
2478
+ 41:57.000 --> 42:01.200
2479
+ I really don't think we're so far from, it makes me laugh, really.
2480
+
2481
+ 42:01.200 --> 42:05.120
2482
+ It's really that far from building the human brain.
2483
+
2484
+ 42:05.120 --> 42:10.040
2485
+ But you're going to be building something that is always
2486
+
2487
+ 42:10.040 --> 42:14.800
2488
+ a bad version of it, but that may have really powerful properties
2489
+
2490
+ 42:14.800 --> 42:18.560
2491
+ and might be able to respond to stimuli
2492
+
2493
+ 42:18.560 --> 42:21.880
2494
+ or be used in certain contexts.
2495
+
2496
+ 42:21.880 --> 42:24.800
2497
+ And this is why I really think that there is no other way
2498
+
2499
+ 42:24.800 --> 42:28.200
2500
+ to do this science, but within the right ethical framework.
2501
+
2502
+ 42:28.200 --> 42:31.440
2503
+ Because where you're going with this is also,
2504
+
2505
+ 42:31.440 --> 42:34.160
2506
+ we can talk about science fiction and write that book,
2507
+
2508
+ 42:34.160 --> 42:36.600
2509
+ and we could today.
2510
+
2511
+ 42:36.600 --> 42:41.520
2512
+ But this work happens in a specific ethical framework
2513
+
2514
+ 42:41.520 --> 42:44.880
2515
+ that we don't decide just as scientists, but also as a society.
2516
+
2517
+ 42:44.880 --> 42:48.560
2518
+ So the ethical framework here is a fascinating one,
2519
+
2520
+ 42:48.560 --> 42:51.120
2521
+ is a complicated one.
2522
+
2523
+ 42:51.120 --> 42:55.680
2524
+ Do you have a sense, a grasp of how we think about ethically,
2525
+
2526
+ 42:55.680 --> 43:04.160
2527
+ building organoids from human stem cells to understand the brain?
2528
+
2529
+ 43:04.160 --> 43:09.720
2530
+ It seems like a tool for helping potentially millions of people
2531
+
2532
+ 43:09.720 --> 43:14.960
2533
+ cure diseases, or at least start to cure by understanding it.
2534
+
2535
+ 43:14.960 --> 43:20.560
2536
+ But is there more, is there gray areas that are ethical,
2537
+
2538
+ 43:20.560 --> 43:22.320
2539
+ that we have to think about ethically?
2540
+
2541
+ 43:22.320 --> 43:23.160
2542
+ Absolutely.
2543
+
2544
+ 43:23.160 --> 43:25.520
2545
+ We must think about that.
2546
+
2547
+ 43:25.520 --> 43:29.560
2548
+ Every discussion about the ethics of this
2549
+
2550
+ 43:29.560 --> 43:34.480
2551
+ needs to be based on actual data from the models that we have today
2552
+
2553
+ 43:34.480 --> 43:36.280
2554
+ and from the ones that we will have tomorrow.
2555
+
2556
+ 43:36.280 --> 43:37.800
2557
+ So it's a continuous conversation.
2558
+
2559
+ 43:37.800 --> 43:39.840
2560
+ It's not something that you decide now.
2561
+
2562
+ 43:39.840 --> 43:42.000
2563
+ Today, there is no issue, really.
2564
+
2565
+ 43:42.000 --> 43:47.240
2566
+ Very simple models that clearly can help you in many ways
2567
+
2568
+ 43:47.240 --> 43:49.880
2569
+ without much to think about.
2570
+
2571
+ 43:49.880 --> 43:52.200
2572
+ But tomorrow, we need to have another conversation,
2573
+
2574
+ 43:52.200 --> 43:53.160
2575
+ and so on and so forth.
2576
+
2577
+ 43:53.160 --> 43:57.120
2578
+ And so the way we do this is to actually really bring together
2579
+
2580
+ 43:57.120 --> 44:00.440
2581
+ constantly a group of people that are not only scientists,
2582
+
2583
+ 44:00.440 --> 44:04.160
2584
+ but also bioethicists, lawyers, philosophers, psychiatrists,
2585
+
2586
+ 44:04.160 --> 44:06.680
2587
+ and psychologists, and so on and so forth,
2588
+
2589
+ 44:06.680 --> 44:13.040
2590
+ to decide as a society, really, what we should
2591
+
2592
+ 44:13.040 --> 44:15.320
2593
+ and what we should not do.
2594
+
2595
+ 44:15.320 --> 44:17.600
2596
+ So that's the way to think about the ethics.
2597
+
2598
+ 44:17.600 --> 44:21.440
2599
+ Now, I also think, though, that as a scientist,
2600
+
2601
+ 44:21.440 --> 44:23.840
2602
+ I have a moral responsibility.
2603
+
2604
+ 44:23.840 --> 44:28.360
2605
+ So if you think about how transformative
2606
+
2607
+ 44:28.360 --> 44:32.640
2608
+ it could be for understanding and curing a neuropsychiatric
2609
+
2610
+ 44:32.640 --> 44:37.320
2611
+ disease, to be able to actually watch and study
2612
+
2613
+ 44:37.320 --> 44:41.480
2614
+ and treat with drugs the very brain of the patient
2615
+
2616
+ 44:41.480 --> 44:44.720
2617
+ that you are trying to study, how transformative
2618
+
2619
+ 44:44.720 --> 44:47.200
2620
+ at this moment in time this could be.
2621
+
2622
+ 44:47.200 --> 44:47.960
2623
+ We couldn't do it.
2624
+
2625
+ 44:47.960 --> 44:50.800
2626
+ Five years ago, we could do it now.
2627
+
2628
+ 44:50.800 --> 44:53.440
2629
+ Taking a stem cell of a particular patient
2630
+
2631
+ 44:53.440 --> 44:57.480
2632
+ and make an organoid, for as simple and different
2633
+
2634
+ 44:57.480 --> 45:01.160
2635
+ from the human brain, it still is his process
2636
+
2637
+ 45:01.160 --> 45:04.720
2638
+ of brain development with his or her genetics.
2639
+
2640
+ 45:04.720 --> 45:08.280
2641
+ And we could understand perhaps what is going wrong.
2642
+
2643
+ 45:08.280 --> 45:10.960
2644
+ Perhaps we could use it as a platform, as a cellular platform,
2645
+
2646
+ 45:10.960 --> 45:13.720
2647
+ to screen for drugs, to fix a process,
2648
+
2649
+ 45:13.720 --> 45:15.280
2650
+ and so on and so forth.
2651
+
2652
+ 45:15.280 --> 45:18.840
2653
+ So we could do it now, we couldn't do it five years ago.
2654
+
2655
+ 45:18.840 --> 45:20.480
2656
+ Should we not do it?
2657
+
2658
+ 45:20.480 --> 45:24.760
2659
+ What is the downside of doing it?
2660
+
2661
+ 45:24.760 --> 45:27.320
2662
+ I don't see a downside at this very moment.
2663
+
2664
+ 45:27.320 --> 45:30.880
2665
+ If we invited a lot of people, I'm sure there would be
2666
+
2667
+ 45:30.880 --> 45:33.440
2668
+ somebody who would argue against it,
2669
+
2670
+ 45:33.440 --> 45:37.680
2671
+ what would be the devil's advocate argument?
2672
+
2673
+ 45:39.680 --> 45:42.960
2674
+ So it's exactly perhaps what you alluded at
2675
+
2676
+ 45:42.960 --> 45:47.120
2677
+ with your question, that you are making a,
2678
+
2679
+ 45:47.120 --> 45:51.680
2680
+ enabling some process of formation of the brain
2681
+
2682
+ 45:51.680 --> 45:54.440
2683
+ that could be misused at some point,
2684
+
2685
+ 45:54.440 --> 45:59.080
2686
+ or that could be showing properties
2687
+
2688
+ 45:59.080 --> 46:03.960
2689
+ that ethically we don't wanna see in a tissue.
2690
+
2691
+ 46:03.960 --> 46:07.760
2692
+ So today, I repeat, today this is not an issue.
2693
+
2694
+ 46:07.760 --> 46:11.280
2695
+ And so you just gain dramatically from the science
2696
+
2697
+ 46:11.280 --> 46:13.720
2698
+ without, because the system is so simple
2699
+
2700
+ 46:13.720 --> 46:17.840
2701
+ and so different in a way from the actual brain.
2702
+
2703
+ 46:17.840 --> 46:20.000
2704
+ But because it is the brain,
2705
+
2706
+ 46:20.000 --> 46:23.960
2707
+ we have an obligation to really consider all of this, right?
2708
+
2709
+ 46:23.960 --> 46:27.160
2710
+ And again, it's a balanced conversation
2711
+
2712
+ 46:27.160 --> 46:30.360
2713
+ where we should put disease and betterment of humanity
2714
+
2715
+ 46:30.360 --> 46:32.440
2716
+ also on that plate.
2717
+
2718
+ 46:32.440 --> 46:35.440
2719
+ What do you think, at least historically,
2720
+
2721
+ 46:35.440 --> 46:37.280
2722
+ there was some politicization,
2723
+
2724
+ 46:37.280 --> 46:42.280
2725
+ politicization of embryonic stem cells,
2726
+
2727
+ 46:44.360 --> 46:45.960
2728
+ of stem cell research.
2729
+
2730
+ 46:47.160 --> 46:49.160
2731
+ Do you still see that out there?
2732
+
2733
+ 46:49.160 --> 46:53.600
2734
+ Is that still a force that we have to think about,
2735
+
2736
+ 46:53.600 --> 46:55.600
2737
+ especially in this larger discourse
2738
+
2739
+ 46:55.600 --> 46:57.600
2740
+ that we're having about the role of science
2741
+
2742
+ 46:57.600 --> 47:00.640
2743
+ in at least American society?
2744
+
2745
+ 47:00.640 --> 47:03.520
2746
+ Yeah, this is a very good question.
2747
+
2748
+ 47:03.520 --> 47:05.040
2749
+ It's very, very important.
2750
+
2751
+ 47:05.040 --> 47:08.480
2752
+ I see a very central role for scientists
2753
+
2754
+ 47:08.480 --> 47:12.040
2755
+ to inform decisions about what we should
2756
+
2757
+ 47:12.040 --> 47:14.440
2758
+ or should not do in society.
2759
+
2760
+ 47:14.440 --> 47:16.400
2761
+ And this is because the scientists
2762
+
2763
+ 47:16.400 --> 47:20.440
2764
+ have the firsthand look and understanding
2765
+
2766
+ 47:20.440 --> 47:23.520
2767
+ of really the work that they are doing.
2768
+
2769
+ 47:23.520 --> 47:26.080
2770
+ And again, this varies depending on
2771
+
2772
+ 47:26.080 --> 47:27.480
2773
+ what we're talking about here.
2774
+
2775
+ 47:27.480 --> 47:30.800
2776
+ So now we're talking about brain organoids.
2777
+
2778
+ 47:30.800 --> 47:33.800
2779
+ I think that the scientists need to be part
2780
+
2781
+ 47:33.800 --> 47:36.520
2782
+ of that conversation about what is,
2783
+
2784
+ 47:36.520 --> 47:38.040
2785
+ will be allowed in the future
2786
+
2787
+ 47:38.040 --> 47:40.840
2788
+ or not allowed in the future to do with the system.
2789
+
2790
+ 47:40.840 --> 47:43.400
2791
+ And I think that is very, very important
2792
+
2793
+ 47:43.400 --> 47:47.880
2794
+ because they bring reality of data to the conversation.
2795
+
2796
+ 47:48.880 --> 47:51.720
2797
+ And so they should have a voice.
2798
+
2799
+ 47:51.720 --> 47:53.400
2800
+ So data should have a voice.
2801
+
2802
+ 47:53.400 --> 47:55.200
2803
+ Data needs to have a voice.
2804
+
2805
+ 47:55.200 --> 47:59.360
2806
+ Because in not only data, we should also be good
2807
+
2808
+ 47:59.360 --> 48:04.240
2809
+ at communicating with non scientists the data.
2810
+
2811
+ 48:04.240 --> 48:06.840
2812
+ So there has been, oftentimes,
2813
+
2814
+ 48:06.840 --> 48:11.840
2815
+ there is a lot of discussion and excitement
2816
+
2817
+ 48:12.280 --> 48:16.320
2818
+ and fights about certain topics
2819
+
2820
+ 48:16.320 --> 48:19.320
2821
+ just because of the way they are described.
2822
+
2823
+ 48:19.320 --> 48:21.000
2824
+ I'll give you an example.
2825
+
2826
+ 48:21.000 --> 48:23.400
2827
+ If I called the same cellular system
2828
+
2829
+ 48:23.400 --> 48:27.080
2830
+ we just talked about, a brain organoid.
2831
+
2832
+ 48:27.080 --> 48:30.320
2833
+ Or if I called it a human mini brain,
2834
+
2835
+ 48:30.320 --> 48:34.600
2836
+ your reaction is gonna be very different to this.
2837
+
2838
+ 48:34.600 --> 48:37.760
2839
+ And so the way the systems are described,
2840
+
2841
+ 48:37.760 --> 48:40.720
2842
+ I mean, we and journalists alike
2843
+
2844
+ 48:40.720 --> 48:43.720
2845
+ need to be a bit careful that this debate
2846
+
2847
+ 48:43.720 --> 48:46.080
2848
+ is a real debate and informed by real data.
2849
+
2850
+ 48:46.080 --> 48:47.960
2851
+ That's all I'm asking.
2852
+
2853
+ 48:47.960 --> 48:49.600
2854
+ And yeah, the language matters here.
2855
+
2856
+ 48:49.600 --> 48:51.280
2857
+ So I work on autonomous vehicles
2858
+
2859
+ 48:51.280 --> 48:54.960
2860
+ and there the use of language could drastically
2861
+
2862
+ 48:54.960 --> 48:57.480
2863
+ change the interpretation and the way people feel
2864
+
2865
+ 48:57.480 --> 49:01.520
2866
+ about what is the right way to proceed forward.
2867
+
2868
+ 49:01.520 --> 49:04.720
2869
+ You are, as I've seen from a presentation,
2870
+
2871
+ 49:04.720 --> 49:06.240
2872
+ you're a parent.
2873
+
2874
+ 49:06.240 --> 49:09.840
2875
+ I saw you show a couple of pictures of your son.
2876
+
2877
+ 49:09.840 --> 49:11.440
2878
+ Is it just the one?
2879
+
2880
+ 49:11.440 --> 49:12.280
2881
+ Two.
2882
+
2883
+ 49:12.280 --> 49:13.120
2884
+ Two.
2885
+
2886
+ 49:13.120 --> 49:13.960
2887
+ Son and a daughter.
2888
+
2889
+ 49:13.960 --> 49:14.800
2890
+ Son and a daughter.
2891
+
2892
+ 49:14.800 --> 49:17.360
2893
+ So what have you learned from the human brain
2894
+
2895
+ 49:17.360 --> 49:20.120
2896
+ by raising two of them?
2897
+
2898
+ 49:20.120 --> 49:22.800
2899
+ More than I could ever learn in a lab.
2900
+
2901
+ 49:22.800 --> 49:25.600
2902
+ What have I learned?
2903
+
2904
+ 49:26.840 --> 49:28.640
2905
+ I've learned that children really have
2906
+
2907
+ 49:28.640 --> 49:31.520
2908
+ these amazing plastic minds, right?
2909
+
2910
+ 49:31.520 --> 49:35.880
2911
+ That we have a responsibility to, you know,
2912
+
2913
+ 49:35.880 --> 49:38.440
2914
+ foster their growth in good, healthy ways
2915
+
2916
+ 49:39.360 --> 49:42.320
2917
+ that keep them curious, that keep them adventurous,
2918
+
2919
+ 49:42.320 --> 49:45.920
2920
+ that doesn't raise them in fear of things.
2921
+
2922
+ 49:46.840 --> 49:48.920
2923
+ But also respecting who they are,
2924
+
2925
+ 49:48.920 --> 49:51.320
2926
+ which is in part, you know, coming from the genetics
2927
+
2928
+ 49:51.320 --> 49:53.840
2929
+ we talked about, my children are very different
2930
+
2931
+ 49:53.840 --> 49:55.240
2932
+ from each other despite the fact
2933
+
2934
+ 49:55.240 --> 49:57.840
2935
+ that they're the product of the same two parents.
2936
+
2937
+ 49:59.320 --> 50:03.080
2938
+ I also learned that what you do for them
2939
+
2940
+ 50:03.080 --> 50:04.280
2941
+ comes back to you.
2942
+
2943
+ 50:04.280 --> 50:05.840
2944
+ Like, you know, if you're a good parent,
2945
+
2946
+ 50:05.840 --> 50:09.800
2947
+ you're gonna, most of the time have, you know,
2948
+
2949
+ 50:09.800 --> 50:12.200
2950
+ perhaps decent kids at the end.
2951
+
2952
+ 50:12.200 --> 50:13.760
2953
+ So what do you think, just a quick comment,
2954
+
2955
+ 50:13.760 --> 50:17.760
2956
+ what do you think is the source of that difference?
2957
+
2958
+ 50:17.760 --> 50:20.960
2959
+ It's often the surprising thing for parents.
2960
+
2961
+ 50:20.960 --> 50:24.000
2962
+ I can't believe that our kids,
2963
+
2964
+ 50:25.640 --> 50:28.080
2965
+ they're so different, yet they came from the same parents.
2966
+
2967
+ 50:28.080 --> 50:29.640
2968
+ Well, they are genetically different.
2969
+
2970
+ 50:29.640 --> 50:31.920
2971
+ Even they came from the same two parents
2972
+
2973
+ 50:31.920 --> 50:33.640
2974
+ because the mixing of gametes,
2975
+
2976
+ 50:33.640 --> 50:35.720
2977
+ so when we know these genetics,
2978
+
2979
+ 50:35.720 --> 50:39.800
2980
+ creates every time a genetically different individual
2981
+
2982
+ 50:39.800 --> 50:43.760
2983
+ which will have a specific mix of genes
2984
+
2985
+ 50:43.760 --> 50:46.560
2986
+ that is a different mix every time from the two parents.
2987
+
2988
+ 50:46.560 --> 50:50.320
2989
+ And so they're not twins.
2990
+
2991
+ 50:50.320 --> 50:52.960
2992
+ They're genetically different.
2993
+
2994
+ 50:52.960 --> 50:55.320
2995
+ Just that little bit of variation.
2996
+
2997
+ 50:55.320 --> 50:58.320
2998
+ As you said, really from a biological perspective,
2999
+
3000
+ 50:58.320 --> 51:00.600
3001
+ the brains look pretty similar.
3002
+
3003
+ 51:00.600 --> 51:02.400
3004
+ Well, so let me clarify that.
3005
+
3006
+ 51:02.400 --> 51:05.400
3007
+ So the genetics you have, the genes that you have,
3008
+
3009
+ 51:05.400 --> 51:08.680
3010
+ that play that beautiful orchestrated symphony
3011
+
3012
+ 51:08.680 --> 51:12.040
3013
+ of development, different genes
3014
+
3015
+ 51:12.040 --> 51:13.920
3016
+ will play it slightly differently.
3017
+
3018
+ 51:13.920 --> 51:16.120
3019
+ It's like playing the same piece of music
3020
+
3021
+ 51:16.120 --> 51:17.960
3022
+ but with a different orchestra
3023
+
3024
+ 51:17.960 --> 51:20.000
3025
+ and a different director, right?
3026
+
3027
+ 51:20.000 --> 51:21.440
3028
+ The music will not come out.
3029
+
3030
+ 51:21.440 --> 51:25.400
3031
+ It will be still a piece by the same author
3032
+
3033
+ 51:25.400 --> 51:27.040
3034
+ but it will come out differently
3035
+
3036
+ 51:27.040 --> 51:28.920
3037
+ if it's played by the high school orchestra
3038
+
3039
+ 51:28.920 --> 51:33.440
3040
+ instead of the, instead of La Scala in Milan.
3041
+
3042
+ 51:34.680 --> 51:39.160
3043
+ And so you are born superficially with the same brain.
3044
+
3045
+ 51:39.160 --> 51:43.440
3046
+ It has the same cell types, similar patterns of connectivity
3047
+
3048
+ 51:43.440 --> 51:45.200
3049
+ but the properties of the cells
3050
+
3051
+ 51:45.200 --> 51:47.600
3052
+ and how the cells will then react to the environment
3053
+
3054
+ 51:47.600 --> 51:51.320
3055
+ as you experience your world will be also shaped
3056
+
3057
+ 51:51.320 --> 51:53.680
3058
+ by who genetically you are.
3059
+
3060
+ 51:53.680 --> 51:55.120
3061
+ Speaking just as a parent,
3062
+
3063
+ 51:55.120 --> 51:56.880
3064
+ this is not something that comes from my work.
3065
+
3066
+ 51:56.880 --> 51:58.840
3067
+ I think you can tell at birth
3068
+
3069
+ 51:58.840 --> 52:01.120
3070
+ that these kids are different
3071
+
3072
+ 52:01.120 --> 52:04.560
3073
+ and that they have a different personality in a way, right?
3074
+
3075
+ 52:05.560 --> 52:07.600
3076
+ So both is needed.
3077
+
3078
+ 52:07.600 --> 52:10.800
3079
+ The genetics as well as the nurturing afterwards.
3080
+
3081
+ 52:11.760 --> 52:14.600
3082
+ So you are one human with a brain
3083
+
3084
+ 52:14.600 --> 52:17.200
3085
+ sort of living through the whole mess of it.
3086
+
3087
+ 52:17.200 --> 52:21.000
3088
+ The human condition, full of love, maybe fear,
3089
+
3090
+ 52:21.000 --> 52:22.880
3091
+ ultimately mortal.
3092
+
3093
+ 52:23.880 --> 52:27.080
3094
+ How has studying the brain changed the way you see yourself?
3095
+
3096
+ 52:27.080 --> 52:29.880
3097
+ When you look in the mirror, when you think about your life,
3098
+
3099
+ 52:29.880 --> 52:31.880
3100
+ the fears, the love.
3101
+
3102
+ 52:31.880 --> 52:34.040
3103
+ When you see your own life, your own mortality.
3104
+
3105
+ 52:34.040 --> 52:36.840
3106
+ Yeah, that's a very good question.
3107
+
3108
+ 52:37.960 --> 52:43.160
3109
+ It's almost impossible to dissociate sometimes for me.
3110
+
3111
+ 52:43.160 --> 52:45.880
3112
+ Some of the things we do or some of the things
3113
+
3114
+ 52:45.880 --> 52:48.200
3115
+ that other people do from,
3116
+
3117
+ 52:48.200 --> 52:51.080
3118
+ oh, that's because that part of the brain
3119
+
3120
+ 52:51.960 --> 52:54.080
3121
+ is working in a certain way.
3122
+
3123
+ 52:54.080 --> 52:57.840
3124
+ Or thinking about a teenager,
3125
+
3126
+ 52:59.080 --> 53:01.800
3127
+ going through teenage years and being at times funny
3128
+
3129
+ 53:01.800 --> 53:03.560
3130
+ in the way they think.
3131
+
3132
+ 53:03.560 --> 53:07.200
3133
+ And impossible for me not to think it's because
3134
+
3135
+ 53:07.200 --> 53:10.480
3136
+ they're going through this period of time called
3137
+
3138
+ 53:10.480 --> 53:12.640
3139
+ critical periods of plasticity.
3140
+
3141
+ 53:12.640 --> 53:13.480
3142
+ Yeah.
3143
+
3144
+ 53:13.480 --> 53:16.400
3145
+ Where their synapses are being eliminated here and there
3146
+
3147
+ 53:16.400 --> 53:17.760
3148
+ and they're just confused.
3149
+
3150
+ 53:17.760 --> 53:22.280
3151
+ And so from that comes perhaps a different take
3152
+
3153
+ 53:22.280 --> 53:27.280
3154
+ on that behavior or maybe I can justify scientifically
3155
+
3156
+ 53:28.080 --> 53:30.120
3157
+ in some sort of way.
3158
+
3159
+ 53:30.120 --> 53:32.280
3160
+ I also look at humanity in general
3161
+
3162
+ 53:32.280 --> 53:37.040
3163
+ and I am amazed by what we can do
3164
+
3165
+ 53:37.040 --> 53:39.960
3166
+ and the kind of ideas that we can come up with.
3167
+
3168
+ 53:39.960 --> 53:42.840
3169
+ And I cannot stop thinking about
3170
+
3171
+ 53:42.840 --> 53:46.440
3172
+ how the brain is continuing to evolve.
3173
+
3174
+ 53:46.440 --> 53:47.360
3175
+ I don't know if you do this,
3176
+
3177
+ 53:47.360 --> 53:49.720
3178
+ but I think about the next brain sometimes.
3179
+
3180
+ 53:49.720 --> 53:51.080
3181
+ Where are we going with this?
3182
+
3183
+ 53:51.080 --> 53:53.920
3184
+ Like what are the features of this brain
3185
+
3186
+ 53:53.920 --> 53:57.920
3187
+ that evolution is really playing with
3188
+
3189
+ 53:57.920 --> 54:02.920
3190
+ to get us in the future, the new brain?
3191
+
3192
+ 54:03.080 --> 54:04.280
3193
+ It's not over, right?
3194
+
3195
+ 54:04.280 --> 54:07.200
3196
+ It's a work in progress.
3197
+
3198
+ 54:07.200 --> 54:09.280
3199
+ So let me just a quick comment on that.
3200
+
3201
+ 54:09.280 --> 54:14.280
3202
+ Do you see, do you think there's a lot of fascination
3203
+
3204
+ 54:14.520 --> 54:16.240
3205
+ and hope for artificial intelligence
3206
+
3207
+ 54:16.240 --> 54:17.960
3208
+ of creating artificial brains?
3209
+
3210
+ 54:17.960 --> 54:20.320
3211
+ You said the next brain.
3212
+
3213
+ 54:20.320 --> 54:23.600
3214
+ When you imagine over a period of a thousand years
3215
+
3216
+ 54:23.600 --> 54:25.760
3217
+ the evolution of the human brain,
3218
+
3219
+ 54:25.760 --> 54:28.920
3220
+ do you sometimes, envisioning that future,
3221
+
3222
+ 54:28.920 --> 54:31.440
3223
+ see an artificial one?
3224
+
3225
+ 54:31.440 --> 54:34.280
3226
+ Artificial intelligence as it is hoped by many,
3227
+
3228
+ 54:34.280 --> 54:36.840
3229
+ not hoped, thought by many people
3230
+
3231
+ 54:36.840 --> 54:39.120
3232
+ would be actually the next evolutionary step
3233
+
3234
+ 54:39.120 --> 54:40.680
3235
+ in the development of humans.
3236
+
3237
+ 54:40.680 --> 54:45.480
3238
+ Yeah, I think in a way that will happen, right?
3239
+
3240
+ 54:45.480 --> 54:48.760
3241
+ It's almost like a part of the way we evolve.
3242
+
3243
+ 54:48.760 --> 54:51.400
3244
+ We evolve in the world that we created,
3245
+
3246
+ 54:51.400 --> 54:55.520
3247
+ that we interact with, that shape us as we grow up
3248
+
3249
+ 54:55.520 --> 54:56.840
3250
+ and so on and so forth.
3251
+
3252
+ 54:58.440 --> 55:01.120
3253
+ Sometimes I think about something that may sound silly,
3254
+
3255
+ 55:01.120 --> 55:04.720
3256
+ but think about the use of cell phones.
3257
+
3258
+ 55:04.720 --> 55:07.240
3259
+ Part of me thinks that somehow in their brain
3260
+
3261
+ 55:07.240 --> 55:09.160
3262
+ there will be a region of the cortex
3263
+
3264
+ 55:09.160 --> 55:13.720
3265
+ that is attuned to that tool.
3266
+
3267
+ 55:13.720 --> 55:16.600
3268
+ And this comes from a lot of studies
3269
+
3270
+ 55:16.600 --> 55:21.000
3271
+ in model organisms where really the cortex
3272
+
3273
+ 55:21.000 --> 55:24.280
3274
+ especially adapts to the kind of things you have to do.
3275
+
3276
+ 55:24.280 --> 55:28.640
3277
+ So if we need to move our fingers in a very specific way,
3278
+
3279
+ 55:28.640 --> 55:31.040
3280
+ we have a part of our cortex that allows us to do
3281
+
3282
+ 55:31.040 --> 55:33.080
3283
+ this kind of very precise movement.
3284
+
3285
+ 55:34.440 --> 55:37.000
3286
+ An owl that has to see very, very far away
3287
+
3288
+ 55:37.000 --> 55:39.440
3289
+ with big eyes, the visual cortex, very big.
3290
+
3291
+ 55:39.440 --> 55:43.280
3292
+ It's the brain attunes to your environment.
3293
+
3294
+ 55:43.280 --> 55:47.600
3295
+ So the brain will attune to the technologies
3296
+
3297
+ 55:47.600 --> 55:51.200
3298
+ that we will have and will be shaped by it.
3299
+
3300
+ 55:51.200 --> 55:53.000
3301
+ So the cortex very well may be.
3302
+
3303
+ 55:53.000 --> 55:54.640
3304
+ Will be shaped by it.
3305
+
3306
+ 55:54.640 --> 55:57.360
3307
+ In artificial intelligence, it may merge with it,
3308
+
3309
+ 55:57.360 --> 56:01.240
3310
+ it may get enveloped and adjusted.
3311
+
3312
+ 56:01.240 --> 56:04.200
3313
+ Even if it's not a merge of the kind of,
3314
+
3315
+ 56:04.200 --> 56:07.000
3316
+ oh, let's have a synthetic element together
3317
+
3318
+ 56:07.000 --> 56:08.800
3319
+ with a biological one.
3320
+
3321
+ 56:08.800 --> 56:11.840
3322
+ The very space around us, the fact, for example,
3323
+
3324
+ 56:11.840 --> 56:15.280
3325
+ think about we put on some goggles of virtual reality
3326
+
3327
+ 56:15.280 --> 56:18.840
3328
+ and we physically are surfing the ocean, right?
3329
+
3330
+ 56:18.840 --> 56:21.800
3331
+ Like I've done it and you have all these emotions
3332
+
3333
+ 56:21.800 --> 56:26.800
3334
+ that come to you, your brain placed you in that reality.
3335
+
3336
+ 56:27.200 --> 56:29.720
3337
+ And it was able to do it like that
3338
+
3339
+ 56:29.720 --> 56:31.160
3340
+ just by putting the goggles on.
3341
+
3342
+ 56:31.160 --> 56:36.040
3343
+ It didn't take thousands of years of adapting to this.
3344
+
3345
+ 56:36.040 --> 56:39.360
3346
+ The brain is plastic, so adapts to new technology.
3347
+
3348
+ 56:39.360 --> 56:41.840
3349
+ So you could do it from the outside
3350
+
3351
+ 56:41.840 --> 56:46.840
3352
+ by simply hijacking some sensory capacities that we have.
3353
+
3354
+ 56:47.680 --> 56:51.640
3355
+ So clearly over recent evolution,
3356
+
3357
+ 56:51.640 --> 56:54.040
3358
+ the cerebral cortex has been a part of the brain
3359
+
3360
+ 56:54.040 --> 56:56.040
3361
+ that has known the most evolution.
3362
+
3363
+ 56:56.040 --> 57:00.840
3364
+ So we have put a lot of chips on evolving
3365
+
3366
+ 57:00.840 --> 57:02.640
3367
+ this specific part of the brain
3368
+
3369
+ 57:02.640 --> 57:06.000
3370
+ and the evolution of cortex is plasticity.
3371
+
3372
+ 57:06.000 --> 57:10.320
3373
+ It's this ability to change in response to things.
3374
+
3375
+ 57:10.320 --> 57:13.840
3376
+ So yes, they will integrate, whether we want it or not.
3377
+
3378
+ 57:15.000 --> 57:18.200
3379
+ Well, there's no better way to end it, Paola.
3380
+
3381
+ 57:18.200 --> 57:19.520
3382
+ Thank you so much for talking to me.
3383
+
3384
+ 57:19.520 --> 57:20.360
3385
+ You're very welcome.
3386
+
3387
+ 57:20.360 --> 57:21.200
3388
+ That's great.
3389
+
3390
+ 57:21.200 --> 57:31.200
3391
+ Thank you.
3392
+
vtt/episode_033_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_034_small.vtt ADDED
@@ -0,0 +1,1529 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:04.640
4
+ The following is a conversation with Pamela McCorduck. She's an author who has written
5
+
6
+ 00:04.640 --> 00:08.400
7
+ on the history and the philosophical significance of artificial intelligence.
8
+
9
+ 00:09.040 --> 00:17.440
10
+ Her books include Machines Who Think in 1979, The Fifth Generation in 1983, with Ed Feigenbaum,
11
+
12
+ 00:17.440 --> 00:22.960
13
+ who's considered to be the father of expert systems, The Edge of Chaos, The Futures of Women,
14
+
15
+ 00:22.960 --> 00:28.960
16
+ and many more books. I came across her work in an unusual way by stumbling in a quote from
17
+
18
+ 00:28.960 --> 00:35.280
19
+ Machines Who Think that is something like, artificial intelligence began with the ancient
20
+
21
+ 00:35.280 --> 00:41.920
22
+ wish to forge the gods. That was a beautiful way to draw a connecting line between our societal
23
+
24
+ 00:41.920 --> 00:48.720
25
+ relationship with AI from the grounded day to day science, math, and engineering to popular stories
26
+
27
+ 00:48.720 --> 00:55.520
28
+ and science fiction and myths of automatons that go back for centuries. Through her literary work,
29
+
30
+ 00:55.520 --> 01:00.400
31
+ she has spent a lot of time with the seminal figures of artificial intelligence,
32
+
33
+ 01:00.400 --> 01:07.760
34
+ including the founding fathers of AI from the 1956 Dartmouth summer workshop where the field
35
+
36
+ 01:07.760 --> 01:13.600
37
+ was launched. I reached out to Pamela for a conversation in hopes of getting a sense of
38
+
39
+ 01:13.600 --> 01:18.400
40
+ what those early days were like and how their dreams continued to reverberate
41
+
42
+ 01:18.400 --> 01:23.840
43
+ through the work of our community today. I often don't know where the conversation may take us,
44
+
45
+ 01:23.840 --> 01:29.600
46
+ but I jump in and see. Having no constraints, rules, or goals is a wonderful way to discover new
47
+
48
+ 01:29.600 --> 01:36.320
49
+ ideas. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
50
+
51
+ 01:36.320 --> 01:41.600
52
+ give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter
53
+
54
+ 01:41.600 --> 01:49.680
55
+ at Lex Fridman, spelled F R I D M A N. And now here's my conversation with Pamela McCorduck.
56
+
57
+ 01:49.680 --> 01:58.640
58
+ In 1979, your book, Machines Who Think, was published. In it, you interview some of the early
59
+
60
+ 01:58.640 --> 02:06.320
61
+ AI pioneers and explore the idea that AI was born not out of maybe math and computer science,
62
+
63
+ 02:06.320 --> 02:14.960
64
+ but out of myth and legend. So tell me if you could the story of how you first arrived at the
65
+
66
+ 02:14.960 --> 02:22.400
67
+ book, the journey of beginning to write it. I had been a novelist. I'd published two novels.
68
+
69
+ 02:23.120 --> 02:31.760
70
+ And I was sitting under the portal at Stanford one day in the house we were renting for the
71
+
72
+ 02:31.760 --> 02:37.760
73
+ summer. And I thought, I should write a novel about these weird people in AI, I know. And then I
74
+
75
+ 02:37.760 --> 02:44.240
76
+ thought, ah, don't write a novel, write a history. Simple. Just go around, you know, interview them,
77
+
78
+ 02:44.240 --> 02:50.400
79
+ splice it together. Voila, instant book. Ha, ha, ha. It was much harder than that.
80
+
81
+ 02:50.400 --> 02:58.320
82
+ But nobody else was doing it. And so I thought, well, this is a great opportunity. And there were
83
+
84
+ 02:59.760 --> 03:06.240
85
+ people who, John McCarthy, for example, thought it was a nutty idea. There were much, you know,
86
+
87
+ 03:06.240 --> 03:11.920
88
+ the field had not evolved yet, so on. And he had some mathematical thing he thought I should write
89
+
90
+ 03:11.920 --> 03:18.480
91
+ instead. And I said, no, John, I am not a woman in search of a project. I'm, this is what I want
92
+
93
+ 03:18.480 --> 03:24.000
94
+ to do. I hope you'll cooperate. And he said, oh, mutter, mutter, well, okay, it's your, your time.
95
+
96
+ 03:24.960 --> 03:31.280
97
+ What was the pitch for the, I mean, such a young field at that point. How do you write
98
+
99
+ 03:31.280 --> 03:37.520
100
+ a personal history of a field that's so young? I said, this is wonderful. The founders of the
101
+
102
+ 03:37.520 --> 03:43.120
103
+ field are alive and kicking and able to talk about what they're doing. Did they sound or feel like
104
+
105
+ 03:43.120 --> 03:48.560
106
+ founders at the time? Did they know that they've been found, that they've founded something? Oh,
107
+
108
+ 03:48.560 --> 03:55.120
109
+ yeah, they knew what they were doing was very important, very. What they, what I now see in
110
+
111
+ 03:55.120 --> 04:04.080
112
+ retrospect is that they were at the height of their research careers. And it's humbling to me
113
+
114
+ 04:04.080 --> 04:09.280
115
+ that they took time out from all the things that they had to do as a consequence of being there.
116
+
117
+ 04:10.320 --> 04:14.560
118
+ And to talk to this woman who said, I think I'm going to write a book about you.
119
+
120
+ 04:14.560 --> 04:23.840
121
+ No, it was amazing, just amazing. So who, who stands out to you? Maybe looking 63 years ago,
122
+
123
+ 04:23.840 --> 04:31.040
124
+ the Dartmouth conference. So Marvin Minsky was there. McCarthy was there. Claude Shannon,
125
+
126
+ 04:31.040 --> 04:36.960
127
+ Allen Newell, Herb Simon, some of the folks you've mentioned. Right. Then there's other characters,
128
+
129
+ 04:36.960 --> 04:44.720
130
+ right? One of your coauthors. He wasn't at Dartmouth. He wasn't at Dartmouth, but I mean.
131
+
132
+ 04:44.720 --> 04:50.880
133
+ He was a, I think an undergraduate then. And, and of course, Joe Traub. I mean,
134
+
135
+ 04:50.880 --> 04:58.720
136
+ all of these are players, not at Dartmouth then, but in that era. Right. It's same you and so on.
137
+
138
+ 04:58.720 --> 05:03.680
139
+ So who are the characters, if you could paint a picture that stand out to you from memory,
140
+
141
+ 05:03.680 --> 05:07.200
142
+ those people you've interviewed and maybe not people that were just in the,
143
+
144
+ 05:08.400 --> 05:13.760
145
+ in the, the atmosphere, in the atmosphere. Of course, the four founding fathers were
146
+
147
+ 05:13.760 --> 05:17.040
148
+ extraordinary guys. They really were. Who are the founding fathers?
149
+
150
+ 05:18.560 --> 05:22.480
151
+ Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy,
152
+
153
+ 05:22.480 --> 05:26.240
154
+ they were the four who were not only at the Dartmouth conference,
155
+
156
+ 05:26.240 --> 05:31.200
157
+ but Newell and Simon arrived there with a working program called the Logic Theorist.
158
+
159
+ 05:31.200 --> 05:38.400
160
+ Everybody else had great ideas about how they might do it, but they weren't going to do it yet.
161
+
162
+ 05:41.040 --> 05:48.720
163
+ And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe,
164
+
165
+ 05:50.080 --> 05:54.960
166
+ because I had been Ed Feigenbaum's assistant at Stanford. And before that,
167
+
168
+ 05:54.960 --> 06:01.520
169
+ I had worked on a book edited by Feigenbaum and Julian Feldman called
170
+
171
+ 06:02.800 --> 06:09.200
172
+ Computers and Thought. It was the first textbook of readings of AI. And they, they only did it
173
+
174
+ 06:09.200 --> 06:13.120
175
+ because they were trying to teach AI to people at Berkeley. And there was nothing, you know,
176
+
177
+ 06:13.120 --> 06:17.600
178
+ you'd have to send them to this journal and that journal. This was not the internet where you could
179
+
180
+ 06:17.600 --> 06:26.080
181
+ go look at an article. So I was fascinated from the get go by AI. I was an English major, you know,
182
+
183
+ 06:26.080 --> 06:33.200
184
+ what did I know? And yet I was fascinated. And that's why you saw that historical,
185
+
186
+ 06:33.200 --> 06:40.320
187
+ that literary background, which I think is very much a part of the continuum of AI that
188
+
189
+ 06:40.320 --> 06:48.000
190
+ the AI grew out of that same impulse. Was that, yeah, that traditional? What, what was, what drew
191
+
192
+ 06:48.000 --> 06:54.800
193
+ you to AI? How did you even think of it back, back then? What, what was the possibilities,
194
+
195
+ 06:54.800 --> 07:03.200
196
+ the dreams? What was interesting to you? The idea of intelligence outside the human cranium,
197
+
198
+ 07:03.200 --> 07:08.000
199
+ this was a phenomenal idea. And even when I finished machines who think,
200
+
201
+ 07:08.000 --> 07:15.040
202
+ I didn't know if they were going to succeed. In fact, the final chapter is very wishy washy,
203
+
204
+ 07:15.040 --> 07:25.200
205
+ frankly. But it did succeed, the field did. Yeah. Yeah. So was there the idea that AI began with
206
+
207
+ 07:25.200 --> 07:32.000
208
+ the wish to forge the gods? So the spiritual component that we crave to create this other
209
+
210
+ 07:32.000 --> 07:40.880
211
+ thing greater than ourselves? For those guys, I don't think so. Newell and Simon were cognitive
212
+
213
+ 07:40.880 --> 07:49.840
214
+ psychologists. What they wanted was to simulate aspects of human intelligence. And they found
215
+
216
+ 07:49.840 --> 07:57.600
217
+ they could do it on the computer. Minsky just thought it was a really cool thing to do.
218
+
219
+ 07:57.600 --> 08:07.120
220
+ Likewise, McCarthy. McCarthy had got the idea in 1949 when, when he was a Caltech student. And
221
+
222
+ 08:08.560 --> 08:15.520
223
+ he listened to somebody's lecture. It's in my book, I forget who it was. And he thought,
224
+
225
+ 08:15.520 --> 08:20.480
226
+ oh, that would be fun to do. How do we do that? And he took a very mathematical approach.
227
+
228
+ 08:20.480 --> 08:28.800
229
+ Minsky was hybrid. And Newell and Simon were very much cognitive psychology. How can we
230
+
231
+ 08:28.800 --> 08:37.280
232
+ simulate various things about human cognition? What happened over the many years is, of course,
233
+
234
+ 08:37.280 --> 08:42.480
235
+ our definition of intelligence expanded tremendously. I mean, these days,
236
+
237
+ 08:43.920 --> 08:48.960
238
+ biologists are comfortable talking about the intelligence of cell, the intelligence of the
239
+
240
+ 08:48.960 --> 08:57.840
241
+ brain, not just human brain, but the intelligence of any kind of brain, cephalopods. I mean,
242
+
243
+ 08:59.520 --> 09:05.840
244
+ an octopus is really intelligent by any, we wouldn't have thought of that in the 60s,
245
+
246
+ 09:05.840 --> 09:12.560
247
+ even the 70s. So all these things have worked in. And I did hear one behavioral
248
+
249
+ 09:12.560 --> 09:20.640
250
+ primatologist, Frans de Waal, say AI taught us the questions to ask.
251
+
252
+ 09:22.800 --> 09:27.760
253
+ Yeah, this is what happens, right? It's when you try to build it, is when you start to actually
254
+
255
+ 09:27.760 --> 09:35.360
256
+ ask questions, if it puts a mirror to ourselves. So you were there in the middle of it. It seems
257
+
258
+ 09:35.360 --> 09:41.920
259
+ like not many people were asking the questions that you were trying to look at this field,
260
+
261
+ 09:41.920 --> 09:48.480
262
+ the way you were. I was solo. When I went to get funding for this, because I needed somebody to
263
+
264
+ 09:48.480 --> 09:59.840
265
+ transcribe the interviews and I needed travel expenses, I went to every thing you could think of,
266
+
267
+ 09:59.840 --> 10:11.280
268
+ the NSF, the DARPA. There was an Air Force place that doled out money. And each of them said,
269
+
270
+ 10:11.840 --> 10:18.640
271
+ well, that was very interesting. That's a very interesting idea. But we'll think about it.
272
+
273
+ 10:19.200 --> 10:24.320
274
+ And the National Science Foundation actually said to me in plain English,
275
+
276
+ 10:24.320 --> 10:30.960
277
+ hey, you're only a writer. You're not an historian of science. And I said, yeah, that's true. But
278
+
279
+ 10:30.960 --> 10:35.360
280
+ the historians of science will be crawling all over this field. I'm writing for the general
281
+
282
+ 10:35.360 --> 10:44.000
283
+ audience. So I thought, and they still wouldn't budge. I finally got a private grant without
284
+
285
+ 10:44.000 --> 10:51.440
286
+ knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man, and he liked what he called
287
+
288
+ 10:51.440 --> 10:56.880
289
+ crackpot ideas. And he considered this a crackpot idea. This a crackpot idea. And he was willing to
290
+
291
+ 10:56.880 --> 11:04.240
292
+ support it. I am ever grateful. Let me say that. You know, some would say that a history of science
293
+
294
+ 11:04.240 --> 11:09.360
295
+ approach to AI, or even just a history or anything like the book that you've written,
296
+
297
+ 11:09.360 --> 11:16.640
298
+ hasn't been written since. Maybe I'm not familiar. But it's certainly not many.
299
+
300
+ 11:16.640 --> 11:24.000
301
+ If we think about bigger than just these couple of decades, a few decades, what are the roots
302
+
303
+ 11:25.120 --> 11:32.160
304
+ of AI? Oh, they go back so far. Yes, of course, there's all the legendary stuff, the
305
+
306
+ 11:32.800 --> 11:42.160
307
+ Golem and the early robots of the 20th century. But they go back much further than that. If
308
+
309
+ 11:42.160 --> 11:50.320
310
+ you read Homer, Homer has robots in the Iliad. And a classical scholar was pointing out to me
311
+
312
+ 11:50.320 --> 11:55.440
313
+ just a few months ago. Well, you said you just read the Odyssey. The Odyssey is full of robots.
314
+
315
+ 11:55.440 --> 12:01.520
316
+ It is? I said. Yeah, how do you think Odysseus's ship gets from one place to another? He
317
+
318
+ 12:01.520 --> 12:09.040
319
+ doesn't have the crew people to do that, the crew men. Yeah, it's magic. It's robots. Oh, I thought.
320
+
321
+ 12:09.040 --> 12:17.680
322
+ How interesting. So we've had this notion of AI for a long time. And then toward the end of the
323
+
324
+ 12:17.680 --> 12:24.000
325
+ 19th century, the beginning of the 20th century, there were scientists who actually tried to
326
+
327
+ 12:24.000 --> 12:29.280
328
+ make this happen some way or another, not successfully, they didn't have the technology
329
+
330
+ 12:29.280 --> 12:40.160
331
+ for it. And of course, Babbage, in the 1850s and 60s, he saw that what he was building was capable
332
+
333
+ 12:40.160 --> 12:47.040
334
+ of intelligent behavior. And he, when he ran out of funding, the British government finally said,
335
+
336
+ 12:47.040 --> 12:53.360
337
+ that's enough. He and Lady Lovelace decided, oh, well, why don't we make, you know, why don't we
338
+
339
+ 12:53.360 --> 13:00.560
340
+ play the ponies with this? He had other ideas for raising money too. But if we actually reach back
341
+
342
+ 13:00.560 --> 13:07.280
343
+ once again, I think people don't actually really know that robots do appear or ideas of robots.
344
+
345
+ 13:07.280 --> 13:14.240
346
+ You talk about the Hellenic and the Hebraic points of view. Oh, yes. Can you tell me about each?
347
+
348
+ 13:15.040 --> 13:22.480
349
+ I defined it this way, the Hellenic point of view is robots are great. You know, they're party help,
350
+
351
+ 13:22.480 --> 13:30.320
352
+ they help this guy, Hephaestus, this God Hephaestus in his forge. I presume he made them to help him,
353
+
354
+ 13:31.200 --> 13:38.560
355
+ and so on and so forth. And they welcome the whole idea of robots. The Hebraic view has to do with,
356
+
357
+ 13:39.360 --> 13:46.800
358
+ I think it's the second commandment, thou shalt not make any graven image. In other words, you
359
+
360
+ 13:46.800 --> 13:54.480
361
+ better not start imitating humans, because that's just forbidden. It's the second commandment.
362
+
363
+ 13:55.520 --> 14:05.840
364
+ And a lot of the reaction to artificial intelligence has been a sense that this is
365
+
366
+ 14:05.840 --> 14:16.800
367
+ this is somehow wicked. This is somehow blasphemous. We shouldn't be going there. Now, you can say,
368
+
369
+ 14:16.800 --> 14:21.600
370
+ yeah, but there're going to be some downsides. And I say, yes, there are. But blasphemy is not one of
371
+
372
+ 14:21.600 --> 14:29.520
373
+ them. You know, there's a kind of fear that feels to be almost primal. Are there religious roots to
374
+
375
+ 14:29.520 --> 14:36.160
376
+ that? Because so much of our society has religious roots. And so there is a feeling of, like you
377
+
378
+ 14:36.160 --> 14:44.080
379
+ said, blasphemy of creating the other, of creating something, you know, it doesn't have to be artificial
380
+
381
+ 14:44.080 --> 14:50.480
382
+ intelligence. It's creating life in general. It's the Frankenstein idea. There's the annotated
383
+
384
+ 14:50.480 --> 14:58.080
385
+ Frankenstein on my coffee table. It's a tremendous novel. It really is just beautifully perceptive.
386
+
387
+ 14:58.080 --> 15:06.800
388
+ Yes, we do fear this and we have good reason to fear it, but because it can get out of hand.
389
+
390
+ 15:06.800 --> 15:11.360
391
+ Maybe you can speak to that fear, the psychology, if you thought about it, you know,
392
+
393
+ 15:11.360 --> 15:16.160
394
+ there's a practical set of fears, concerns in the short term, you can think of, if we actually
395
+
396
+ 15:16.160 --> 15:22.720
397
+ think about artificial intelligence systems, you can think about bias or discrimination in
398
+
399
+ 15:22.720 --> 15:32.160
400
+ algorithms, or you can think about how social networks have algorithms that recommend the
401
+
402
+ 15:32.160 --> 15:37.680
403
+ content you see, thereby these algorithms control the behavior of the masses. There's these concerns.
404
+
405
+ 15:38.240 --> 15:45.040
406
+ But to me, it feels like the fear that people have is deeper than that. So have you thought about
407
+
408
+ 15:45.040 --> 15:53.440
409
+ the psychology of it? I think in a superficial way I have. There is this notion that if we
410
+
411
+ 15:55.680 --> 16:01.200
412
+ produce a machine that can think, it will outthink us and therefore replace us.
413
+
414
+ 16:02.000 --> 16:11.840
415
+ I guess that's a primal fear of almost kind of a kind of mortality. So around the time you said
416
+
417
+ 16:11.840 --> 16:21.920
418
+ you worked at Stanford with Ed Feigenbaum. So let's look at that one person throughout his
419
+
420
+ 16:21.920 --> 16:31.600
421
+ history, clearly a key person, one of the many in the history of AI. How has he changed in general
422
+
423
+ 16:31.600 --> 16:36.480
424
+ around him? How has Stanford changed in the last, how many years are we talking about here?
425
+
426
+ 16:36.480 --> 16:44.720
427
+ Oh, since 65. So maybe it doesn't have to be about him. It could be bigger, but because he was a
428
+
429
+ 16:44.720 --> 16:51.360
430
+ key person in expert systems, for example, how are these folks who you've interviewed
431
+
432
+ 16:53.040 --> 16:58.400
433
+ in the 70s, 79, changed through the decades?
434
+
435
+ 16:58.400 --> 17:10.720
436
+ In Ed's case, I know him well. We are dear friends. We see each other every month or so.
437
+
438
+ 17:11.520 --> 17:16.160
439
+ He told me that when Machines Who Think first came out, he really thought all the front
440
+
441
+ 17:16.160 --> 17:26.000
442
+ matter was kind of baloney. And 10 years later, he said, no, I see what you're getting at. Yes,
443
+
444
+ 17:26.000 --> 17:31.520
445
+ this is an impulse that has been, this has been a human impulse for thousands of years
446
+
447
+ 17:32.160 --> 17:36.640
448
+ to create something outside the human cranium that has intelligence.
449
+
450
+ 17:41.120 --> 17:47.520
451
+ I think it's very hard when you're down at the algorithmic level, and you're just trying to
452
+
453
+ 17:47.520 --> 17:53.840
454
+ make something work, which is hard enough to step back and think of the big picture.
455
+
456
+ 17:53.840 --> 18:02.000
457
+ It reminds me of when I was in Santa Fe, I knew a lot of archaeologists, which was a hobby of mine,
458
+
459
+ 18:02.800 --> 18:08.000
460
+ and I would say, yeah, yeah, well, you can look at the shards and say, oh,
461
+
462
+ 18:08.000 --> 18:14.160
463
+ this came from this tribe and this came from this trade route and so on. But what about the big
464
+
465
+ 18:14.160 --> 18:21.600
466
+ picture? And a very distinguished archaeologist said to me, they don't think that way. You do
467
+
468
+ 18:21.600 --> 18:27.920
469
+ know they're trying to match the shard to where it came from. That's, you know, where did
470
+
471
+ 18:27.920 --> 18:34.480
472
+ this corn, the remainder of this corn come from? Was it grown here? Was it grown elsewhere? And I
473
+
474
+ 18:34.480 --> 18:44.560
475
+ think this is part of the AI, any scientific field. You're so busy doing the hard work. And it is
476
+
477
+ 18:44.560 --> 18:49.920
478
+ hard work that you don't step back and say, oh, well, now let's talk about the, you know,
479
+
480
+ 18:49.920 --> 18:56.720
481
+ the general meaning of all this. Yes. So none of the, even Minsky and McCarthy,
482
+
483
+ 18:58.080 --> 19:04.880
484
+ they, oh, those guys did. Yeah. The founding fathers did early on or pretty early on. Well,
485
+
486
+ 19:04.880 --> 19:11.200
487
+ they had, but in a different way from how I looked at it, the two cognitive psychologists,
488
+
489
+ 19:11.200 --> 19:20.960
490
+ Newell and Simon, they wanted to imagine reforming cognitive psychology so that we would really,
491
+
492
+ 19:20.960 --> 19:31.520
493
+ really understand the brain. Yeah. Minsky was more speculative. And John McCarthy saw it as,
494
+
495
+ 19:32.960 --> 19:40.080
496
+ I think I'm doing, doing him right by this. He really saw it as a great boon for human beings to
497
+
498
+ 19:40.080 --> 19:49.440
499
+ have this technology. And that was reason enough to do it. And he had wonderful, wonderful fables
500
+
501
+ 19:50.240 --> 19:57.920
502
+ about how if you do the mathematics, you will see that these things are really good for human beings.
503
+
504
+ 19:57.920 --> 20:05.280
505
+ And if you had a technological objection, he had an answer, a technological answer. But here's how
506
+
507
+ 20:05.280 --> 20:10.480
508
+ we could get over that. And then blah, blah, blah, blah. And one of his favorite things was
509
+
510
+ 20:10.480 --> 20:15.680
511
+ what he called the literary problem, which of course, he presented to me several times.
512
+
513
+ 20:16.400 --> 20:23.680
514
+ That is, everything in literature, there are conventions in literature. One of the conventions
515
+
516
+ 20:23.680 --> 20:37.040
517
+ is that you have a villain and a hero. And the hero in most literature is human. And the villain
518
+
519
+ 20:37.040 --> 20:42.000
520
+ in most literature is a machine. And he said, no, that's just not the way it's going to be.
521
+
522
+ 20:42.560 --> 20:48.000
523
+ But that's the way we're used to it. So when we tell stories about AI, it's always with this
524
+
525
+ 20:48.000 --> 20:57.760
526
+ paradigm. I thought, yeah, he's right. Looking back, the classics, R.U.R. is certainly the machines
527
+
528
+ 20:57.760 --> 21:07.040
529
+ trying to overthrow the humans. Frankenstein is different. Frankenstein is a creature.
530
+
531
+ 21:08.480 --> 21:14.560
532
+ He never has a name. Frankenstein, of course, is the guy who created him, the human Dr. Frankenstein.
533
+
534
+ 21:14.560 --> 21:23.120
535
+ And this creature wants to be loved, wants to be accepted. And it is only when Frankenstein
536
+
537
+ 21:24.720 --> 21:32.720
538
+ turns his head, in fact, runs the other way. And the creature is without love
539
+
540
+ 21:34.400 --> 21:38.560
541
+ that he becomes the monster that he later becomes.
542
+
543
+ 21:39.680 --> 21:43.840
544
+ So who's the villain in Frankenstein? It's unclear, right?
545
+
546
+ 21:43.840 --> 21:45.520
547
+ Oh, it is unclear. Yeah.
548
+
549
+ 21:45.520 --> 21:54.320
550
+ It's really the people who drive him, by driving him away, they bring out the worst.
551
+
552
+ 21:54.320 --> 22:00.800
553
+ That's right. They give him no human solace. And he is driven away, you're right.
554
+
555
+ 22:03.040 --> 22:10.160
556
+ He becomes, at one point, the friend of a blind man. And he serves this blind man,
557
+
558
+ 22:10.160 --> 22:16.640
559
+ and they become very friendly. But when the sighted people of the blind man's family come in,
560
+
561
+ 22:18.640 --> 22:26.000
562
+ you got a monster here. So it's very didactic in its way. And what I didn't know is that Mary Shelley
563
+
564
+ 22:26.000 --> 22:33.440
565
+ and Percy Shelley were great readers of the literature surrounding abolition in the United
566
+
567
+ 22:33.440 --> 22:41.200
568
+ States, the abolition of slavery. And they picked that up wholesale. You are making monsters of
569
+
570
+ 22:41.200 --> 22:45.680
571
+ these people because you won't give them the respect and love that they deserve.
572
+
573
+ 22:46.800 --> 22:54.880
574
+ Do you have, if we get philosophical for a second, do you worry that once we create
575
+
576
+ 22:54.880 --> 22:59.840
577
+ machines that are a little bit more intelligent? Let's look at Roomba, the vacuum cleaner,
578
+
579
+ 22:59.840 --> 23:05.360
580
+ that this darker part of human nature where we abuse
581
+
582
+ 23:07.760 --> 23:12.400
583
+ the other, somebody who's different, will come out?
584
+
585
+ 23:13.520 --> 23:22.640
586
+ I don't worry about it. I could imagine it happening. But I think that what AI has to offer
587
+
588
+ 23:22.640 --> 23:32.480
589
+ the human race will be so attractive that people will be won over. So you have looked deep into
590
+
591
+ 23:32.480 --> 23:40.080
592
+ these people, had deep conversations, and it's interesting to get a sense of stories of the
593
+
594
+ 23:40.080 --> 23:44.480
595
+ way they were thinking and the way it was changed, the way your own thinking about AI has changed.
596
+
597
+ 23:44.480 --> 23:53.360
598
+ As you mentioned, McCarthy, what about the years at CMU, Carnegie Mellon, with Joe?
599
+
600
+ 23:53.360 --> 24:02.800
601
+ Sure. Joe was not in AI. He was in algorithmic complexity.
602
+
603
+ 24:03.440 --> 24:09.040
604
+ Was there always a line between AI and computer science, for example? Is AI its own place of
605
+
606
+ 24:09.040 --> 24:15.920
607
+ outcasts? Was that the feeling? There was a kind of outcast period for AI.
608
+
609
+ 24:15.920 --> 24:28.720
610
+ For instance, in 1974, the new field was hardly 10 years old. The new field of computer science
611
+
612
+ 24:28.720 --> 24:33.200
613
+ was asked by the National Science Foundation, I believe, but it may have been the National
614
+
615
+ 24:33.200 --> 24:42.720
616
+ Academies, I can't remember, to tell our fellow scientists where computer science is and what
617
+
618
+ 24:42.720 --> 24:52.880
619
+ it means. And they wanted to leave out AI. And they only agreed to put it in because Don Knuth
620
+
621
+ 24:52.880 --> 24:59.760
622
+ said, hey, this is important. You can't just leave that out. Really? Don? Don Knuth, yes.
623
+
624
+ 24:59.760 --> 25:06.480
625
+ I talked to Mr. Nietzsche. Out of all the people. Yes. But you see, an AI person couldn't have made
626
+
627
+ 25:06.480 --> 25:10.880
628
+ that argument. He wouldn't have been believed, but Knuth was believed. Yes.
629
+
630
+ 25:10.880 --> 25:18.160
631
+ So Joe Traub worked on the real stuff. Joe was working on algorithmic complexity,
632
+
633
+ 25:18.160 --> 25:24.800
634
+ but he would say in plain English again and again, the smartest people I know are in AI.
635
+
636
+ 25:24.800 --> 25:32.320
637
+ Really? Oh, yes. No question. Anyway, Joe loved these guys. What happened was that
638
+
639
+ 25:34.080 --> 25:40.160
640
+ I guess it was as I started to write Machines Who Think, Herb Simon and I became very close
641
+
642
+ 25:40.160 --> 25:46.000
643
+ friends. He would walk past our house on Northumberland Street every day after work.
644
+
645
+ 25:46.560 --> 25:52.160
646
+ And I would just be putting my cover on my typewriter and I would lean out the door and say,
647
+
648
+ 25:52.160 --> 25:58.800
649
+ Herb, would you like a sherry? And Herb almost always would like a sherry. So he'd stop in
650
+
651
+ 25:59.440 --> 26:06.000
652
+ and we'd talk for an hour, two hours. My journal says we talked this afternoon for three hours.
653
+
654
+ 26:06.720 --> 26:11.520
655
+ What was on his mind at the time in terms of on the AI side of things?
656
+
657
+ 26:12.160 --> 26:15.120
658
+ We didn't talk too much about AI. We talked about other things. Just life.
659
+
660
+ 26:15.120 --> 26:24.480
661
+ We both love literature and Herb had read Proust in the original French twice all the way through.
662
+
663
+ 26:25.280 --> 26:31.280
664
+ I can't. I read it in English in translation. So we talked about literature. We talked about
665
+
666
+ 26:31.280 --> 26:37.120
667
+ languages. We talked about music because he loved music. We talked about art because he was
668
+
669
+ 26:37.120 --> 26:45.840
670
+ he was actually enough of a painter that he had to give it up because he was afraid it was interfering
671
+
672
+ 26:45.840 --> 26:53.120
673
+ with his research and so on. So no, it was really just chit chat, but it was very warm.
674
+
675
+ 26:54.000 --> 26:59.840
676
+ So one summer I said to Herb, you know, my students have all the really interesting
677
+
678
+ 26:59.840 --> 27:04.480
679
+ conversations. I was teaching at the University of Pittsburgh then in the English department.
680
+
681
+ 27:04.480 --> 27:08.880
682
+ And, you know, they get to talk about the meaning of life and that kind of thing.
683
+
684
+ 27:08.880 --> 27:15.200
685
+ And what do I have? I have university meetings where we talk about the photocopying budget and,
686
+
687
+ 27:15.200 --> 27:20.160
688
+ you know, whether the course on romantic poetry should be one semester or two.
689
+
690
+ 27:21.200 --> 27:25.760
691
+ So Herb laughed. He said, yes, I know what you mean. He said, but, you know, you could do something
692
+
693
+ 27:25.760 --> 27:33.920
694
+ about that. Dot, that was his wife, Dot and I used to have a salon at the University of Chicago every
695
+
696
+ 27:33.920 --> 27:42.400
697
+ Sunday night. And we would have essentially an open house. And people knew it wasn't for a small
698
+
699
+ 27:42.400 --> 27:51.440
700
+ talk. It was really for some topic of depth. He said, but my advice would be that you choose
701
+
702
+ 27:51.440 --> 27:59.200
703
+ the topic ahead of time. Fine, I said. So the following, we exchanged mail over the summer.
704
+
705
+ 27:59.200 --> 28:09.120
706
+ That was US post in those days because you didn't have personal email. And I decided I would organize
707
+
708
+ 28:09.120 --> 28:16.880
709
+ it. And there would be eight of us, Alan Nolan, his wife, Herb Simon, and his wife, Dorothea.
710
+
711
+ 28:16.880 --> 28:27.040
712
+ There was a novelist in town, a man named Mark Harris. He had just arrived and his wife, Josephine.
713
+
714
+ 28:27.840 --> 28:33.040
715
+ Mark was most famous then for a novel called Bang the Drum Slowly, which was about baseball.
716
+
717
+ 28:34.160 --> 28:43.360
718
+ And Joe and me, so eight people. And we met monthly and we just sank our teeth into really
719
+
720
+ 28:43.360 --> 28:52.160
721
+ hard topics. And it was great fun. How have your own views around artificial intelligence changed
722
+
723
+ 28:53.200 --> 28:57.520
724
+ in through the process of writing Machines Who Think and afterwards the ripple effects?
725
+
726
+ 28:58.240 --> 29:04.400
727
+ I was a little skeptical that this whole thing would work out. It didn't matter. To me, it was
728
+
729
+ 29:04.400 --> 29:16.160
730
+ so audacious. This whole thing being AI generally. And in some ways, it hasn't worked out the way I
731
+
732
+ 29:16.160 --> 29:25.760
733
+ expected so far. That is to say, there is this wonderful lot of apps, thanks to deep learning
734
+
735
+ 29:25.760 --> 29:35.680
736
+ and so on. But those are algorithmic. And in the part of symbolic processing,
737
+
738
+ 29:36.640 --> 29:45.600
739
+ there is very little yet. And that's a field that lies waiting for industrious graduate students.
740
+
741
+ 29:46.800 --> 29:53.040
742
+ Maybe you can tell me some figures that popped up in your life in the 80s with expert systems,
743
+
744
+ 29:53.040 --> 30:01.840
745
+ where there was the symbolic AI possibilities of what most people think of as AI. If you dream
746
+
747
+ 30:01.840 --> 30:08.000
748
+ of the possibilities of AI, it's really expert systems. And those hit a few walls and there
749
+
750
+ 30:08.000 --> 30:12.960
751
+ were challenges there. And I think, yes, they will reemerge again with some new breakthroughs and so
752
+
753
+ 30:12.960 --> 30:18.640
754
+ on. But what did that feel like, both the possibility and the winter that followed, the
755
+
756
+ 30:18.640 --> 30:25.520
757
+ slowdown in research? This whole thing about AI winter is, to me, a crock.
758
+
759
+ 30:26.160 --> 30:33.200
760
+ It's no winters. Because I look at the basic research that was being done in the 80s, which is
761
+
762
+ 30:33.200 --> 30:39.520
763
+ supposed to be, my God, it was really important. It was laying down things that nobody had thought
764
+
765
+ 30:39.520 --> 30:44.880
766
+ about before. But it was basic research. You couldn't monetize it. Hence the winter.
767
+
768
+ 30:44.880 --> 30:53.680
769
+ Science research goes and fits and starts. It isn't this nice, smooth,
770
+
771
+ 30:54.240 --> 30:59.200
772
+ oh, this follows this, follows this. No, it just doesn't work that way.
773
+
774
+ 30:59.200 --> 31:03.600
775
+ Well, the interesting thing, the way winters happen, it's never the fault of the researchers.
776
+
777
+ 31:04.480 --> 31:11.920
778
+ It's the some source of hype, over promising. Well, no, let me take that back. Sometimes it
779
+
780
+ 31:11.920 --> 31:17.200
781
+ is the fault of the researchers. Sometimes certain researchers might overpromise the
782
+
783
+ 31:17.200 --> 31:23.760
784
+ possibilities. They themselves believe that we're just a few years away, sort of just recently talked
785
+
786
+ 31:23.760 --> 31:30.240
787
+ to Elon Musk and he believes he'll have an autonomous vehicle in a year and he believes it.
788
+
789
+ 31:30.240 --> 31:33.520
790
+ A year? A year, yeah, would have mass deployment of a time.
791
+
792
+ 31:33.520 --> 31:38.640
793
+ For the record, this is 2019 right now. So he's talking 2020.
794
+
795
+ 31:38.640 --> 31:44.800
796
+ To do the impossible, you really have to believe it. And I think what's going to happen when you
797
+
798
+ 31:44.800 --> 31:49.520
799
+ believe it, because there's a lot of really brilliant people around him, is some good stuff
800
+
801
+ 31:49.520 --> 31:54.640
802
+ will come out of it. Some unexpected brilliant breakthroughs will come out of it. When you
803
+
804
+ 31:54.640 --> 31:59.520
805
+ really believe it, when you work that hard. I believe that and I believe autonomous vehicles
806
+
807
+ 31:59.520 --> 32:05.280
808
+ will come. I just don't believe it'll be in a year. I wish. But nevertheless, there's
809
+
810
+ 32:05.280 --> 32:11.680
811
+ autonomous vehicles is a good example. There's a feeling many companies have promised by 2021,
812
+
813
+ 32:11.680 --> 32:18.000
814
+ by 2022 for GM. Basically, every single automotive company has promised they'll
815
+
816
+ 32:18.000 --> 32:22.480
817
+ have autonomous vehicles. So that kind of overpromise is what leads to the winter.
818
+
819
+ 32:23.040 --> 32:28.320
820
+ Because we'll come to those dates, there won't be autonomous vehicles, and there'll be a feeling,
821
+
822
+ 32:28.960 --> 32:34.160
823
+ well, wait a minute, if we took your word at that time, that means we just spent billions of
824
+
825
+ 32:34.160 --> 32:41.600
826
+ dollars, had made no money. And there's a counter response to where everybody gives up on it.
827
+
828
+ 32:41.600 --> 32:49.600
829
+ Sort of intellectually, at every level, the hope just dies. And all that's left is a few basic
830
+
831
+ 32:49.600 --> 32:56.800
832
+ researchers. So you're uncomfortable with some aspects of this idea. Well, it's the difference
833
+
834
+ 32:56.800 --> 33:04.160
835
+ between science and commerce. So you think science, science goes on the way it does?
836
+
837
+ 33:06.480 --> 33:14.800
838
+ Science can really be killed by not getting proper funding or timely funding. I think
839
+
840
+ 33:14.800 --> 33:22.080
841
+ Great Britain was a perfect example of that. The Lighthill report in the 1960s.
842
+
843
+ 33:22.080 --> 33:27.360
844
+ It essentially said, there's no use of Great Britain putting any money into this.
845
+
846
+ 33:27.360 --> 33:35.600
847
+ It's going nowhere. And this was all about social factions in Great Britain.
848
+
849
+ 33:36.960 --> 33:44.400
850
+ Edinburgh hated Cambridge, and Cambridge hated Manchester, and somebody else can write that
851
+
852
+ 33:44.400 --> 33:53.760
853
+ story. But it really did have a hard effect on research there. Now, they've come roaring back
854
+
855
+ 33:53.760 --> 34:01.360
856
+ with DeepMind. But that's one guy and his visionaries around him.
857
+
858
+ 34:01.360 --> 34:08.320
859
+ But just to push on that, it's kind of interesting, you have this dislike of the idea of an AI winter.
860
+
861
+ 34:08.320 --> 34:15.440
862
+ Where's that coming from? Where were you? Oh, because I just don't think it's true.
863
+
864
+ 34:16.560 --> 34:21.360
865
+ There was a particular period of time. It's a romantic notion, certainly.
866
+
867
+ 34:21.360 --> 34:32.960
868
+ Yeah, well, I admire science, perhaps more than I admire commerce. Commerce is fine. Hey,
869
+
870
+ 34:32.960 --> 34:42.960
871
+ you know, we all got to live. But science has a much longer view than commerce,
872
+
873
+ 34:44.080 --> 34:54.000
874
+ and continues almost regardless. It can't continue totally regardless, but it almost
875
+
876
+ 34:54.000 --> 34:59.600
877
+ regardless of what's saleable and what's not, what's monetizable and what's not.
878
+
879
+ 34:59.600 --> 35:05.840
880
+ So the winter is just something that happens on the commerce side, and the science marches.
881
+
882
+ 35:07.200 --> 35:12.560
883
+ That's a beautifully optimistic inspired message. I agree with you. I think
884
+
885
+ 35:13.760 --> 35:19.440
886
+ if we look at the key people that work in AI, they work in key scientists in most disciplines,
887
+
888
+ 35:19.440 --> 35:25.360
889
+ they continue working out of the love for science. You can always scrape up some funding
890
+
891
+ 35:25.360 --> 35:33.120
892
+ to stay alive, and they continue working diligently. But there certainly is a huge
893
+
894
+ 35:33.120 --> 35:39.840
895
+ amount of funding now, and there's a concern on the AI side and deep learning. There's a concern
896
+
897
+ 35:39.840 --> 35:46.160
898
+ that we might, with over promising, hit another slowdown in funding, which does affect the number
899
+
900
+ 35:46.160 --> 35:51.280
901
+ of students, you know, that kind of thing. Yeah, it does. So the kind of ideas you had
902
+
903
+ 35:51.280 --> 35:56.400
904
+ in Machines Who Think, did you continue that curiosity through the decades that followed?
905
+
906
+ 35:56.400 --> 36:04.800
907
+ Yes, I did. And what was your view, historical view of how AI community evolved, the conversations
908
+
909
+ 36:04.800 --> 36:11.520
910
+ about it, the work? Has it persisted the same way from its birth? No, of course not. It's just
911
+
912
+ 36:11.520 --> 36:21.360
913
+ we were just talking. The symbolic AI really kind of dried up and it all became algorithmic.
914
+
915
+ 36:22.400 --> 36:29.520
916
+ I remember a young AI student telling me what he was doing, and I had been away from the field
917
+
918
+ 36:29.520 --> 36:34.080
919
+ long enough. I'd gotten involved with complexity at the Santa Fe Institute.
920
+
921
+ 36:34.080 --> 36:40.960
922
+ I thought, algorithms, yeah, they're in the service of, but they're not the main event.
923
+
924
+ 36:41.680 --> 36:49.200
925
+ No, they became the main event. That surprised me. And we all know the downside of this. We
926
+
927
+ 36:49.200 --> 36:58.240
928
+ all know that if you're using an algorithm to make decisions based on a gazillion human decisions
929
+
930
+ 36:58.240 --> 37:05.040
931
+ baked into it, are all the mistakes that humans make, the bigotries, the short sightedness,
932
+
933
+ 37:06.000 --> 37:14.000
934
+ so on and so on. So you mentioned Santa Fe Institute. So you've written the novel Edge
935
+
936
+ 37:14.000 --> 37:21.200
937
+ of Chaos, but it's inspired by the ideas of complexity, a lot of which have been extensively
938
+
939
+ 37:21.200 --> 37:30.160
940
+ explored at the Santa Fe Institute. It's another fascinating topic of just sort of
941
+
942
+ 37:31.040 --> 37:37.600
943
+ emergent complexity from chaos. Nobody knows how it happens really, but it seems to wear
944
+
945
+ 37:37.600 --> 37:44.160
946
+ all the interesting stuff that does happen. So how do first, not your novel, but just
947
+
948
+ 37:44.160 --> 37:49.520
949
+ complexity in general in the work at Santa Fe fit into the bigger puzzle of the history of AI?
950
+
951
+ 37:49.520 --> 37:53.520
952
+ Or it may be even your personal journey through that.
953
+
954
+ 37:54.480 --> 38:03.040
955
+ One of the last projects I did concerning AI in particular was looking at the work of
956
+
957
+ 38:03.040 --> 38:12.960
958
+ Harold Cohen, the painter. And Harold was deeply involved with AI. He was a painter first.
959
+
960
+ 38:12.960 --> 38:27.600
961
+ And what his project, Aaron, which was a lifelong project, did, was reflect his own cognitive
962
+
963
+ 38:27.600 --> 38:34.800
964
+ processes. Okay. Harold and I, even though I wrote a book about it, we had a lot of friction between
965
+
966
+ 38:34.800 --> 38:44.560
967
+ us. And I went, I thought, this is it, you know, the book died. It was published and fell into a
968
+
969
+ 38:44.560 --> 38:53.040
970
+ ditch. This is it. I'm finished. It's time for me to do something different. By chance,
971
+
972
+ 38:53.040 --> 38:59.280
973
+ this was a sabbatical year for my husband. And we spent two months at the Santa Fe Institute
974
+
975
+ 38:59.280 --> 39:09.200
976
+ and two months at Caltech. And then the spring semester in Munich, Germany. Okay. Those two
977
+
978
+ 39:09.200 --> 39:19.520
979
+ months at the Santa Fe Institute were so restorative for me. And I began to, the institute was very
980
+
981
+ 39:19.520 --> 39:26.240
982
+ small then. It was in some kind of office complex on old Santa Fe trail. Everybody kept their door
983
+
984
+ 39:26.240 --> 39:33.440
985
+ open. So you could crack your head on a problem. And if you finally didn't get it, you could walk
986
+
987
+ 39:33.440 --> 39:42.480
988
+ in to see Stuart Kauffman or any number of people and say, I don't get this. Can you explain?
989
+
990
+ 39:43.680 --> 39:51.120
991
+ And one of the people that I was talking to about complex adaptive systems was Murray Gell-Mann.
992
+
993
+ 39:51.120 --> 39:58.960
994
+ And I told Murray what Harold Cohen had done. And I said, you know, this sounds to me
995
+
996
+ 39:58.960 --> 40:06.080
997
+ like a complex adaptive system. And he said, yeah, it is. Well, what do you know? Harold's
998
+
999
+ 40:06.080 --> 40:12.560
1000
+ Aaron had all these kissing cousins all over the world in science and in economics and so on and
1001
+
1002
+ 40:12.560 --> 40:21.200
1003
+ so forth. I was so relieved. I thought, okay, your instincts are okay. You're doing the right thing. I
1004
+
1005
+ 40:21.200 --> 40:25.920
1006
+ didn't have the vocabulary. And that was one of the things that the Santa Fe Institute gave me.
1007
+
1008
+ 40:25.920 --> 40:31.040
1009
+ If I could have rewritten that book, no, it had just come out. I couldn't rewrite it. I would have
1010
+
1011
+ 40:31.040 --> 40:37.680
1012
+ had a vocabulary to explain what Aaron was doing. Okay. So I got really interested in
1013
+
1014
+ 40:37.680 --> 40:47.440
1015
+ what was going on at the Institute. The people were again, bright and funny and willing to explain
1016
+
1017
+ 40:47.440 --> 40:54.800
1018
+ anything to this amateur. George Cowan, who was then the head of the Institute, said he thought it
1019
+
1020
+ 40:54.800 --> 41:02.160
1021
+ might be a nice idea if I wrote a book about the Institute. And I thought about it. And I had my
1022
+
1023
+ 41:02.160 --> 41:08.960
1024
+ eye on some other project. God knows what. And I said, oh, I'm sorry, George. Yeah, I'd really love
1025
+
1026
+ 41:08.960 --> 41:13.840
1027
+ to do it. But, you know, just not going to work for me at this moment. And he said, oh, too bad.
1028
+
1029
+ 41:13.840 --> 41:18.560
1030
+ I think it would make an interesting book. Well, he was right and I was wrong. I wish I'd done it.
1031
+
1032
+ 41:18.560 --> 41:24.080
1033
+ But that's interesting. I hadn't thought about that, that that was a road not taken that I wish
1034
+
1035
+ 41:24.080 --> 41:32.400
1036
+ I'd taken. Well, you know what? That's just on that point. It's quite brave for you as a writer,
1037
+
1038
+ 41:32.400 --> 41:39.680
1039
+ as sort of coming from a world of literature, the literary thinking and historical thinking. I mean,
1040
+
1041
+ 41:39.680 --> 41:52.640
1042
+ just from that world and bravely talking to quite, I assume, large egos in AI or in complexity and so
1043
+
1044
+ 41:52.640 --> 42:00.560
1045
+ on. How'd you do it? Like, where did you? I mean, I suppose they could be intimidated of you as well,
1046
+
1047
+ 42:00.560 --> 42:06.160
1048
+ because it's two different worlds. I never picked up that anybody was intimidated by me.
1049
+
1050
+ 42:06.160 --> 42:11.040
1051
+ But how were you brave enough? Where did you find the guts to just dumb, dumb luck? I mean,
1052
+
1053
+ 42:11.680 --> 42:16.160
1054
+ this is an interesting rock to turn over. I'm going to write a book about and you know,
1055
+
1056
+ 42:16.160 --> 42:21.840
1057
+ people have enough patience with writers, if they think they're going to end up at a book
1058
+
1059
+ 42:21.840 --> 42:27.840
1060
+ that they let you flail around and so on. It's well, but they also look if the writer has.
1061
+
1062
+ 42:27.840 --> 42:33.440
1063
+ There's like, if there's a sparkle in their eye, if they get it. Yeah, sure. Right. When were you
1064
+
1065
+ 42:33.440 --> 42:44.480
1066
+ at the Santa Fe Institute? The time I'm talking about is 1990. Yeah, 1990, 1991, 1992. But we then,
1067
+
1068
+ 42:44.480 --> 42:49.920
1069
+ because Joe was an external faculty member, we're in Santa Fe every summer, we bought a house there.
1070
+
1071
+ 42:49.920 --> 42:55.600
1072
+ And I didn't have that much to do with the Institute anymore. I was writing my novels,
1073
+
1074
+ 42:55.600 --> 43:04.320
1075
+ I was doing whatever I was doing. But I loved the Institute and I loved
1076
+
1077
+ 43:06.720 --> 43:12.960
1078
+ the, again, the audacity of the ideas. That really appeals to me.
1079
+
1080
+ 43:12.960 --> 43:22.160
1081
+ I think that there's this feeling, much like in great institutes of neuroscience, for example,
1082
+
1083
+ 43:23.040 --> 43:29.840
1084
+ that they're in it for the long game of understanding something fundamental about
1085
+
1086
+ 43:29.840 --> 43:36.800
1087
+ reality and nature. And that's really exciting. So if we start not to look a little bit more recently,
1088
+
1089
+ 43:36.800 --> 43:49.280
1090
+ how AI is really popular today. How is this world, you mentioned algorithmic, but in general,
1091
+
1092
+ 43:50.080 --> 43:54.320
1093
+ is the spirit of the people, the kind of conversations you hear through the grapevine
1094
+
1095
+ 43:54.320 --> 44:00.160
1096
+ and so on, is that different than the roots that you remember? No, the same kind of excitement,
1097
+
1098
+ 44:00.160 --> 44:07.120
1099
+ the same kind of, this is really going to make a difference in the world. And it will, it has.
1100
+
1101
+ 44:07.120 --> 44:13.280
1102
+ A lot of folks, especially young, 20 years old or something, they think we've just found something
1103
+
1104
+ 44:14.080 --> 44:21.040
1105
+ special here. We're going to change the world tomorrow. On a time scale, do you have
1106
+
1107
+ 44:22.000 --> 44:27.120
1108
+ a sense of what, of the time scale at which breakthroughs in AI happen?
1109
+
1110
+ 44:27.120 --> 44:39.040
1111
+ I really don't, because look at deep learning. That was, Geoffrey Hinton came up with the algorithm
1112
+
1113
+ 44:39.920 --> 44:48.960
1114
+ in 86. But it took all these years for the technology to be good enough to actually
1115
+
1116
+ 44:48.960 --> 44:57.680
1117
+ be applicable. So no, I can't predict that at all. I can't, I wouldn't even try.
1118
+
1119
+ 44:58.320 --> 45:05.440
1120
+ Well, let me ask you to, not to try to predict, but to speak to the, I'm sure in the 60s,
1121
+
1122
+ 45:06.000 --> 45:10.320
1123
+ as it continues now, there's people that think, let's call it, we can call it this
1124
+
1125
+ 45:11.040 --> 45:17.120
1126
+ fun word, the singularity. When there's a phase shift, there's some profound feeling where
1127
+
1128
+ 45:17.120 --> 45:23.040
1129
+ we're all really surprised by what's able to be achieved. I'm sure those dreams are there.
1130
+
1131
+ 45:23.040 --> 45:29.200
1132
+ I remember reading quotes in the 60s and those continued. How have your own views,
1133
+
1134
+ 45:29.200 --> 45:34.960
1135
+ maybe if you look back, about the timeline of a singularity changed?
1136
+
1137
+ 45:37.040 --> 45:45.760
1138
+ Well, I'm not a big fan of the singularity as Ray Kurzweil has presented it.
1139
+
1140
+ 45:45.760 --> 45:52.480
1141
+ But how would you define the Ray Kurzweil? How would you, how do you think of singularity in
1142
+
1143
+ 45:52.480 --> 45:58.880
1144
+ those? If I understand Kurzweil's view, it's sort of, there's going to be this moment when
1145
+
1146
+ 45:58.880 --> 46:06.320
1147
+ machines are smarter than humans and, you know, game over. However, the game over is,
1148
+
1149
+ 46:06.320 --> 46:10.800
1150
+ I mean, do they put us on a reservation? Do they, et cetera, et cetera. And
1151
+
1152
+ 46:10.800 --> 46:19.840
1153
+ first of all, machines are smarter than humans in some ways, all over the place. And they have been
1154
+
1155
+ 46:19.840 --> 46:27.280
1156
+ since adding machines were invented. So it's not, it's not going to come like some great
1157
+
1158
+ 46:27.280 --> 46:34.320
1159
+ Oedipal crossroads, you know, where they meet each other and our offspring, Oedipus says,
1160
+
1161
+ 46:34.320 --> 46:41.040
1162
+ you're dead. It's just not going to happen. Yeah. So it's already game over with calculators,
1163
+
1164
+ 46:41.040 --> 46:47.920
1165
+ right? They're already out to do much better at basic arithmetic than us. But, you know,
1166
+
1167
+ 46:47.920 --> 46:55.840
1168
+ there's a human like intelligence. And that's not the ones that destroy us. But, you know,
1169
+
1170
+ 46:55.840 --> 47:01.520
1171
+ somebody that you can have as a, as a friend, you can have deep connections with that kind of
1172
+
1173
+ 47:01.520 --> 47:07.680
1174
+ passing the Turing test and beyond, those kinds of ideas. Have you dreamt of those?
1175
+
1176
+ 47:07.680 --> 47:08.880
1177
+ Oh, yes, yes, yes.
1178
+
1179
+ 47:08.880 --> 47:10.160
1180
+ Those possibilities.
1181
+
1182
+ 47:10.160 --> 47:17.520
1183
+ In a book I wrote with Ed Feigenbaum, there's a little story called the geriatric robot. And
1184
+
1185
+ 47:18.880 --> 47:24.240
1186
+ how I came up with the geriatric robot is a story in itself. But here's, here's what the
1187
+
1188
+ 47:24.240 --> 47:29.520
1189
+ geriatric robot does. It doesn't just clean you up and feed you and wheel you out into the sun.
1190
+
1191
+ 47:29.520 --> 47:42.720
1192
+ Its great advantage is, it listens. It says, tell me again about the great coup of 73.
1193
+
1194
+ 47:43.520 --> 47:52.080
1195
+ Tell me again about how awful or how wonderful your grandchildren are and so on and so forth.
1196
+
1197
+ 47:53.040 --> 47:59.440
1198
+ And it isn't hanging around to inherit your money. It isn't hanging around because it can't get
1199
+
1200
+ 47:59.440 --> 48:08.320
1201
+ any other job. This is its job and so on and so forth. Well, I would love something like that.
1202
+
1203
+ 48:09.120 --> 48:15.600
1204
+ Yeah. I mean, for me, that deeply excites me. So I think there's a lot of us.
1205
+
1206
+ 48:15.600 --> 48:20.880
1207
+ Lex, you got to know it was a joke. I dreamed it up because I needed to talk to college students
1208
+
1209
+ 48:20.880 --> 48:26.880
1210
+ and I needed to give them some idea of what AI might be. And they were rolling in the aisles
1211
+
1212
+ 48:26.880 --> 48:32.160
1213
+ as I elaborated and elaborated and elaborated. When it went into the book,
1214
+
1215
+ 48:34.320 --> 48:40.240
1216
+ they took my hide off in the New York Review of Books. This is just what we've thought about
1217
+
1218
+ 48:40.240 --> 48:44.400
1219
+ these people in AI. They're inhuman. Oh, come on. Get over it.
1220
+
1221
+ 48:45.120 --> 48:49.120
1222
+ Don't you think that's a good thing for the world that AI could potentially...
1223
+
1224
+ 48:49.120 --> 48:58.800
1225
+ Why? I do. Absolutely. And furthermore, I want... I'm pushing 80 now. By the time I need help
1226
+
1227
+ 48:59.360 --> 49:04.160
1228
+ like that, I also want it to roll itself in a corner and shut the fuck up.
1229
+
1230
+ 49:06.960 --> 49:09.920
1231
+ Let me linger on that point. Do you really, though?
1232
+
1233
+ 49:09.920 --> 49:11.040
1234
+ Yeah, I do. Here's what.
1235
+
1236
+ 49:11.040 --> 49:15.120
1237
+ But you wanted to push back a little bit a little.
1238
+
1239
+ 49:15.120 --> 49:21.520
1240
+ But I have watched my friends go through the whole issue around having help in the house.
1241
+
1242
+ 49:22.480 --> 49:29.760
1243
+ And some of them have been very lucky and had fabulous help. And some of them have had people
1244
+
1245
+ 49:29.760 --> 49:35.120
1246
+ in the house who want to keep the television going on all day, who want to talk on their phones all day.
1247
+
1248
+ 49:35.760 --> 49:38.960
1249
+ No. So basically... Just roll yourself in the corner.
1250
+
1251
+ 49:38.960 --> 49:45.760
1252
+ Unfortunately, us humans, when we're assistants, we care... We're still...
1253
+
1254
+ 49:45.760 --> 49:48.400
1255
+ Even when we're assisting others, we care about ourselves more.
1256
+
1257
+ 49:48.400 --> 49:49.280
1258
+ Of course.
1259
+
1260
+ 49:49.280 --> 49:56.800
1261
+ And so you create more frustration. And a robot AI assistant can really optimize
1262
+
1263
+ 49:57.360 --> 50:03.040
1264
+ the experience for you. I was just speaking to the point... You actually bring up a very,
1265
+
1266
+ 50:03.040 --> 50:07.120
1267
+ very good point. But I was speaking to the fact that us humans are a little complicated,
1268
+
1269
+ 50:07.120 --> 50:14.560
1270
+ that we don't necessarily want a perfect servant. I don't... Maybe you disagree with that.
1271
+
1272
+ 50:14.560 --> 50:21.360
1273
+ But there's... I think there's a push and pull with humans.
1274
+
1275
+ 50:22.240 --> 50:28.080
1276
+ A little tension, a little mystery that, of course, that's really difficult for you to get right.
1277
+
1278
+ 50:28.080 --> 50:35.120
1279
+ But I do sense, especially in today with social media, that people are getting more and more
1280
+
1281
+ 50:35.120 --> 50:42.000
1282
+ lonely, even young folks. And sometimes, especially young folks, that loneliness,
1283
+
1284
+ 50:42.000 --> 50:47.760
1285
+ there's a longing for connection. And AI can help alleviate some of that loneliness.
1286
+
1287
+ 50:48.560 --> 50:52.880
1288
+ Some. Just somebody who listens. Like in person.
1289
+
1290
+ 50:54.640 --> 50:54.960
1291
+ That...
1292
+
1293
+ 50:54.960 --> 50:55.680
1294
+ So to speak.
1295
+
1296
+ 50:56.240 --> 51:00.080
1297
+ So to speak, yeah. So to speak.
1298
+
1299
+ 51:00.080 --> 51:07.120
1300
+ Yeah, that to me is really exciting. But so if we look at that level of intelligence,
1301
+
1302
+ 51:07.120 --> 51:12.560
1303
+ which is exceptionally difficult to achieve, actually, as the singularity, or whatever,
1304
+
1305
+ 51:12.560 --> 51:18.320
1306
+ that's the human level bar, that people have dreamt of that too. Turing dreamt of it.
1307
+
1308
+ 51:19.520 --> 51:26.320
1309
+ He had a date, a timeline. How has your own timeline evolved over time?
1310
+
1311
+ 51:26.320 --> 51:27.520
1312
+ I don't even think about it.
1313
+
1314
+ 51:27.520 --> 51:28.240
1315
+ You don't even think?
1316
+
1317
+ 51:28.240 --> 51:35.520
1318
+ No. Just this field has been so full of surprises for me.
1319
+
1320
+ 51:35.520 --> 51:39.120
1321
+ That you're just taking in and see a fun bunch of basic science?
1322
+
1323
+ 51:39.120 --> 51:45.840
1324
+ Yeah, I just can't. Maybe that's because I've been around the field long enough to think,
1325
+
1326
+ 51:45.840 --> 51:52.240
1327
+ you know, don't go that way. Herb Simon was terrible about making these predictions of
1328
+
1329
+ 51:52.240 --> 51:53.840
1330
+ when this and that would happen.
1331
+
1332
+ 51:53.840 --> 51:54.320
1333
+ Right.
1334
+
1335
+ 51:54.320 --> 51:58.400
1336
+ And he was a sensible guy.
1337
+
1338
+ 51:58.400 --> 52:01.600
1339
+ Yeah. And his quotes are often used, right, as a...
1340
+
1341
+ 52:01.600 --> 52:02.880
1342
+ That's a legend, yeah.
1343
+
1344
+ 52:02.880 --> 52:13.840
1345
+ Yeah. Do you have concerns about AI, the existential threats that many people like Elon Musk and
1346
+
1347
+ 52:13.840 --> 52:16.160
1348
+ Sam Harris and others that are thinking about?
1349
+
1350
+ 52:16.160 --> 52:21.200
1351
+ Oh, yeah, yeah. That takes up a half a chapter in my book.
1352
+
1353
+ 52:21.200 --> 52:27.120
1354
+ I call it the male gaze.
1355
+
1356
+ 52:27.120 --> 52:35.200
1357
+ Well, you hear me out. The male gaze is actually a term from film criticism.
1358
+
1359
+ 52:36.240 --> 52:40.640
1360
+ And I'm blocking on the woman who dreamed this up.
1361
+
1362
+ 52:41.280 --> 52:48.880
1363
+ But she pointed out how most movies were made from the male point of view, that women were
1364
+
1365
+ 52:48.880 --> 52:55.520
1366
+ objects, not subjects. They didn't have any agency and so on and so forth.
1367
+
1368
+ 52:56.080 --> 53:00.560
1369
+ So when Elon and his pals, Hawking and so on,
1370
+
1371
+ 53:00.560 --> 53:05.760
1372
+ okay, AI is going to eat our lunch and our dinner and our midnight snack too,
1373
+
1374
+ 53:06.640 --> 53:11.600
1375
+ I thought, what? And I said to Ed Feigenbaum, oh, this is the first guy.
1376
+
1377
+ 53:11.600 --> 53:14.880
1378
+ First, these guys have always been the smartest guy on the block.
1379
+
1380
+ 53:14.880 --> 53:20.880
1381
+ And here comes something that might be smarter. Ooh, let's stamp it out before it takes over.
1382
+
1383
+ 53:20.880 --> 53:23.360
1384
+ And Ed laughed. He said, I didn't think about it that way.
1385
+
1386
+ 53:24.080 --> 53:30.320
1387
+ But I did. I did. And it is the male gaze.
1388
+
1389
+ 53:32.000 --> 53:37.120
1390
+ Okay, suppose these things do have agency. Well, let's wait and see what happens.
1391
+
1392
+ 53:37.120 --> 53:47.040
1393
+ Can we imbue them with ethics? Can we imbue them with a sense of empathy?
1394
+
1395
+ 53:48.560 --> 53:54.960
1396
+ Or are they just going to be, I don't know, we've had centuries of guys like that?
1397
+
1398
+ 53:55.760 --> 54:03.600
1399
+ That's interesting that the ego, the male gaze is immediately threatened.
1400
+
1401
+ 54:03.600 --> 54:12.320
1402
+ And so you can't think in a patient, calm way of how the tech could evolve.
1403
+
1404
+ 54:13.280 --> 54:21.520
1405
+ Speaking of which, in your '96 book, The Future of Women, I think at the time and now, certainly now,
1406
+
1407
+ 54:21.520 --> 54:27.760
1408
+ I mean, I'm sorry, maybe at the time, but I'm more cognizant of now, is extremely relevant.
1409
+
1410
+ 54:27.760 --> 54:34.160
1411
+ And you and Nancy Ramsey talk about four possible futures of women in science and tech.
1412
+
1413
+ 54:35.120 --> 54:42.400
1414
+ So if we look at the decades before and after the book was released, can you tell a history,
1415
+
1416
+ 54:43.120 --> 54:50.560
1417
+ sorry, of women in science and tech and how it has evolved? How have things changed? Where do we
1418
+
1419
+ 54:50.560 --> 55:01.520
1420
+ stand? Not enough. They have not changed enough. The way that women are ground down in computing is
1421
+
1422
+ 55:02.320 --> 55:09.680
1423
+ simply unbelievable. But what are the four possible futures for women in tech from the book?
1424
+
1425
+ 55:10.800 --> 55:16.720
1426
+ What you're really looking at are various aspects of the present. So for each of those,
1427
+
1428
+ 55:16.720 --> 55:22.800
1429
+ you could say, oh, yeah, we do have backlash. Look at what's happening with abortion and so on and so
1430
+
1431
+ 55:22.800 --> 55:31.280
1432
+ forth. We have one step forward, one step back. The golden age of equality was the hardest chapter
1433
+
1434
+ 55:31.280 --> 55:37.040
1435
+ to write. And I used something from the Santa Fe Institute, which is the sand pile effect,
1436
+
1437
+ 55:37.760 --> 55:44.480
1438
+ that you drop sand very slowly onto a pile, and it grows and it grows and it grows until
1439
+
1440
+ 55:44.480 --> 55:56.640
1441
+ suddenly it just breaks apart. And in a way, MeToo has done that. That was the last drop of sand
1442
+
1443
+ 55:56.640 --> 56:02.800
1444
+ that broke everything apart. That was a perfect example of the sand pile effect. And that made
1445
+
1446
+ 56:02.800 --> 56:07.440
1447
+ me feel good. It didn't change all of society, but it really woke a lot of people up.
1448
+
1449
+ 56:07.440 --> 56:15.760
1450
+ But are you in general optimistic about maybe after MeToo? MeToo is about a very specific kind
1451
+
1452
+ 56:15.760 --> 56:21.920
1453
+ of thing. Boy, solve that and you'll solve everything. But are you in general optimistic
1454
+
1455
+ 56:21.920 --> 56:27.600
1456
+ about the future? Yes, I'm a congenital optimist. I can't help it.
1457
+
1458
+ 56:28.400 --> 56:35.600
1459
+ What about AI? What are your thoughts about the future of AI? Of course, I get asked,
1460
+
1461
+ 56:35.600 --> 56:41.280
1462
+ what do you worry about? And the one thing I worry about is the things we can't anticipate.
1463
+
1464
+ 56:44.320 --> 56:47.440
1465
+ There's going to be something out of that field that we will just say,
1466
+
1467
+ 56:47.440 --> 56:59.040
1468
+ we weren't prepared for that. I am generally optimistic. When I first took up being interested
1469
+
1470
+ 56:59.040 --> 57:06.560
1471
+ in AI, like most people in the field, more intelligence was like more virtue. What could be
1472
+
1473
+ 57:06.560 --> 57:15.760
1474
+ bad? And in a way, I still believe that, but I realize that my notion of intelligence has
1475
+
1476
+ 57:15.760 --> 57:22.240
1477
+ broadened. There are many kinds of intelligence, and we need to imbue our machines with those many
1478
+
1479
+ 57:22.240 --> 57:32.720
1480
+ kinds. So you've now just finished or in the process of finishing the book, even working on
1481
+
1482
+ 57:32.720 --> 57:40.160
1483
+ the memoir. How have you changed? I know it's just writing, but how have you changed the process?
1484
+
1485
+ 57:40.800 --> 57:48.880
1486
+ If you look back, what kind of stuff did it bring up to you that surprised you looking at the entirety
1487
+
1488
+ 57:48.880 --> 58:01.200
1489
+ of it all? The biggest thing, and it really wasn't a surprise, is how lucky I was, oh my, to be,
1490
+
1491
+ 58:03.680 --> 58:08.880
1492
+ to have access to the beginning of a scientific field that is going to change the world.
1493
+
1494
+ 58:08.880 --> 58:20.880
1495
+ How did I luck out? And yes, of course, my view of things has widened a lot.
1496
+
1497
+ 58:23.040 --> 58:32.080
1498
+ If I can get back to one feminist part of our conversation, without knowing it, it really
1499
+
1500
+ 58:32.080 --> 58:40.400
1501
+ was subconscious. I wanted AI to succeed because I was so tired of hearing that intelligence was
1502
+
1503
+ 58:40.400 --> 58:47.920
1504
+ inside the male cranium. And I thought if there was something out there that wasn't a male
1505
+
1506
+ 58:49.360 --> 58:57.120
1507
+ thinking and doing well, then that would put a lie to this whole notion of intelligence resides
1508
+
1509
+ 58:57.120 --> 59:04.560
1510
+ in the male cranium. I did not know that until one night, Harold Cohen and I were
1511
+
1512
+ 59:05.760 --> 59:12.560
1513
+ having a glass of wine, maybe two, and he said, what drew you to AI? And I said, oh, you know,
1514
+
1515
+ 59:12.560 --> 59:18.080
1516
+ smartest people I knew, great project, blah, blah, blah. And I said, and I wanted something
1517
+
1518
+ 59:18.080 --> 59:30.640
1519
+ besides male smarts. And it just bubbled up out of me, Lex. It's brilliant, actually. So AI really
1520
+
1521
+ 59:32.160 --> 59:36.080
1522
+ humbles all of us and humbles the people that need to be humbled the most.
1523
+
1524
+ 59:37.040 --> 59:43.440
1525
+ Let's hope. Oh, wow, that is so beautiful. Pamela, thank you so much for talking to
1526
+
1527
+ 59:43.440 --> 59:49.440
1528
+ us. Oh, it's been a great pleasure. Thank you.
1529
+
vtt/episode_035_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_036_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_037_large.vtt ADDED
@@ -0,0 +1,3590 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.080
4
+ The following is a conversation with Vijay Kumar.
5
+
6
+ 00:03.080 --> 00:05.760
7
+ He's one of the top roboticists in the world,
8
+
9
+ 00:05.760 --> 00:08.760
10
+ a professor at the University of Pennsylvania,
11
+
12
+ 00:08.760 --> 00:12.880
13
+ the dean of Penn Engineering, former director of the GRASP Lab,
14
+
15
+ 00:12.880 --> 00:15.300
16
+ or the General Robotics Automation Sensing
17
+
18
+ 00:15.300 --> 00:17.560
19
+ and Perception Laboratory at Penn,
20
+
21
+ 00:17.560 --> 00:22.560
22
+ that was established back in 1979, that's 40 years ago.
23
+
24
+ 00:22.600 --> 00:25.280
25
+ Vijay is perhaps best known for his work
26
+
27
+ 00:25.280 --> 00:28.520
28
+ in multi robot systems, robot swarms,
29
+
30
+ 00:28.520 --> 00:30.880
31
+ and micro aerial vehicles,
32
+
33
+ 00:30.880 --> 00:34.020
34
+ robots that elegantly cooperate in flight
35
+
36
+ 00:34.020 --> 00:36.200
37
+ under all the uncertainty and challenges
38
+
39
+ 00:36.200 --> 00:38.760
40
+ that the real world conditions present.
41
+
42
+ 00:38.760 --> 00:41.920
43
+ This is the Artificial Intelligence Podcast.
44
+
45
+ 00:41.920 --> 00:44.320
46
+ If you enjoy it, subscribe on YouTube,
47
+
48
+ 00:44.320 --> 00:47.560
49
+ give it five stars on iTunes, support on Patreon,
50
+
51
+ 00:47.560 --> 00:49.500
52
+ or simply connect with me on Twitter
53
+
54
+ 00:49.500 --> 00:53.280
55
+ at Lex Fridman, spelled F R I D M A N.
56
+
57
+ 00:53.280 --> 00:58.280
58
+ And now, here's my conversation with Vijay Kumar.
59
+
60
+ 00:58.700 --> 01:01.080
61
+ What is the first robot you've ever built
62
+
63
+ 01:01.080 --> 01:02.840
64
+ or were a part of building?
65
+
66
+ 01:02.840 --> 01:04.760
67
+ Way back when I was in graduate school,
68
+
69
+ 01:04.760 --> 01:06.760
70
+ I was part of a fairly big project
71
+
72
+ 01:06.760 --> 01:11.760
73
+ that involved building a very large hexapod.
74
+
75
+ 01:12.040 --> 01:16.700
76
+ It weighed close to 7,000 pounds,
77
+
78
+ 01:17.520 --> 01:21.620
79
+ and it was powered by hydraulic actuation,
80
+
81
+ 01:21.620 --> 01:26.620
82
+ or it was actuated by hydraulics with 18 motors,
83
+
84
+ 01:27.720 --> 01:32.720
85
+ hydraulic motors, each controlled by an Intel 8085 processor
86
+
87
+ 01:34.160 --> 01:36.680
88
+ and an 8086 coprocessor.
89
+
90
+ 01:38.120 --> 01:43.120
91
+ And so imagine this huge monster that had 18 joints,
92
+
93
+ 01:44.800 --> 01:46.960
94
+ each controlled by an independent computer,
95
+
96
+ 01:46.960 --> 01:49.320
97
+ and there was a 19th computer that actually did
98
+
99
+ 01:49.320 --> 01:52.320
100
+ the coordination between these 18 joints.
101
+
102
+ 01:52.320 --> 01:53.720
103
+ So I was part of this project,
104
+
105
+ 01:53.720 --> 01:58.720
106
+ and my thesis work was how do you coordinate the 18 legs?
107
+
108
+ 02:02.080 --> 02:06.320
109
+ And in particular, the pressures in the hydraulic cylinders
110
+
111
+ 02:06.320 --> 02:09.200
112
+ to get efficient locomotion.
113
+
114
+ 02:09.200 --> 02:11.640
115
+ It sounds like a giant mess.
116
+
117
+ 02:11.640 --> 02:14.440
118
+ So how difficult is it to make all the motors communicate?
119
+
120
+ 02:14.440 --> 02:17.600
121
+ Presumably, you have to send signals hundreds of times
122
+
123
+ 02:17.600 --> 02:18.440
124
+ a second, or at least.
125
+
126
+ 02:18.440 --> 02:19.880
127
+ So this was not my work,
128
+
129
+ 02:19.880 --> 02:23.960
130
+ but the folks who worked on this wrote what I believe
131
+
132
+ 02:23.960 --> 02:26.640
133
+ to be the first multiprocessor operating system.
134
+
135
+ 02:26.640 --> 02:30.320
136
+ This was in the 80s, and you had to make sure
137
+
138
+ 02:30.320 --> 02:32.800
139
+ that obviously messages got across
140
+
141
+ 02:32.800 --> 02:34.640
142
+ from one joint to another.
143
+
144
+ 02:34.640 --> 02:37.960
145
+ You have to remember the clock speeds on those computers
146
+
147
+ 02:37.960 --> 02:39.660
148
+ were about half a megahertz.
149
+
150
+ 02:39.660 --> 02:42.180
151
+ Right, the 80s.
152
+
153
+ 02:42.180 --> 02:45.320
154
+ So not to romanticize the notion,
155
+
156
+ 02:45.320 --> 02:49.700
157
+ but how did it make you feel to see that robot move?
158
+
159
+ 02:51.080 --> 02:52.280
160
+ It was amazing.
161
+
162
+ 02:52.280 --> 02:55.280
163
+ In hindsight, it looks like, well, we built this thing
164
+
165
+ 02:55.280 --> 02:57.320
166
+ which really should have been much smaller.
167
+
168
+ 02:57.320 --> 02:59.160
169
+ And of course, today's robots are much smaller.
170
+
171
+ 02:59.160 --> 03:03.120
172
+ You look at Boston Dynamics or Ghost Robotics,
173
+
174
+ 03:03.120 --> 03:04.780
175
+ a spinoff from Penn.
176
+
177
+ 03:06.080 --> 03:10.080
178
+ But back then, you were stuck with the substrate you had,
179
+
180
+ 03:10.080 --> 03:13.720
181
+ the compute you had, so things were unnecessarily big.
182
+
183
+ 03:13.720 --> 03:18.040
184
+ But at the same time, and this is just human psychology,
185
+
186
+ 03:18.040 --> 03:20.400
187
+ somehow bigger means grander.
188
+
189
+ 03:21.600 --> 03:23.640
190
+ People never had the same appreciation
191
+
192
+ 03:23.640 --> 03:26.360
193
+ for nanotechnology or nanodevices
194
+
195
+ 03:26.360 --> 03:30.160
196
+ as they do for the Space Shuttle or the Boeing 747.
197
+
198
+ 03:30.160 --> 03:32.760
199
+ Yeah, you've actually done quite a good job
200
+
201
+ 03:32.760 --> 03:36.000
202
+ at illustrating that small is beautiful
203
+
204
+ 03:36.000 --> 03:37.760
205
+ in terms of robotics.
206
+
207
+ 03:37.760 --> 03:42.600
208
+ So what is on that topic is the most beautiful
209
+
210
+ 03:42.600 --> 03:46.200
211
+ or elegant robot in motion that you've ever seen?
212
+
213
+ 03:46.200 --> 03:47.880
214
+ Not to pick favorites or whatever,
215
+
216
+ 03:47.880 --> 03:51.000
217
+ but something that just inspires you that you remember.
218
+
219
+ 03:51.000 --> 03:54.000
220
+ Well, I think the thing that I'm most proud of
221
+
222
+ 03:54.000 --> 03:57.200
223
+ that my students have done is really think about
224
+
225
+ 03:57.200 --> 04:00.360
226
+ small UAVs that can maneuver in constrained spaces
227
+
228
+ 04:00.360 --> 04:03.640
229
+ and in particular, their ability to coordinate
230
+
231
+ 04:03.640 --> 04:06.760
232
+ with each other and form three dimensional patterns.
233
+
234
+ 04:06.760 --> 04:08.920
235
+ So once you can do that,
236
+
237
+ 04:08.920 --> 04:13.920
238
+ you can essentially create 3D objects in the sky
239
+
240
+ 04:14.960 --> 04:17.680
241
+ and you can deform these objects on the fly.
242
+
243
+ 04:17.680 --> 04:21.560
244
+ So in some sense, your toolbox of what you can create
245
+
246
+ 04:21.560 --> 04:23.400
247
+ has suddenly got enhanced.
248
+
249
+ 04:25.240 --> 04:27.800
250
+ And before that, we did the two dimensional version of this.
251
+
252
+ 04:27.800 --> 04:31.680
253
+ So we had ground robots forming patterns and so on.
254
+
255
+ 04:31.680 --> 04:34.960
256
+ So that was not as impressive, that was not as beautiful.
257
+
258
+ 04:34.960 --> 04:36.560
259
+ But if you do it in 3D,
260
+
261
+ 04:36.560 --> 04:40.240
262
+ suspended in midair, and you've got to go back to 2011
263
+
264
+ 04:40.240 --> 04:43.040
265
+ when we did this, now it's actually pretty standard
266
+
267
+ 04:43.040 --> 04:45.600
268
+ to do these things eight years later.
269
+
270
+ 04:45.600 --> 04:47.680
271
+ But back then it was a big accomplishment.
272
+
273
+ 04:47.680 --> 04:50.280
274
+ So the distributed cooperation
275
+
276
+ 04:50.280 --> 04:53.480
277
+ is where beauty emerges in your eyes?
278
+
279
+ 04:53.480 --> 04:55.800
280
+ Well, I think beauty to an engineer is very different
281
+
282
+ 04:55.800 --> 04:59.400
283
+ from beauty to someone who's looking at robots
284
+
285
+ 04:59.400 --> 05:01.240
286
+ from the outside, if you will.
287
+
288
+ 05:01.240 --> 05:04.800
289
+ But what I meant there, so before we said that grand,
290
+
291
+ 05:04.800 --> 05:09.800
292
+ so before we said that grand is associated with size.
293
+
294
+ 05:10.520 --> 05:13.720
295
+ And another way of thinking about this
296
+
297
+ 05:13.720 --> 05:15.600
298
+ is just the physical shape
299
+
300
+ 05:15.600 --> 05:18.400
301
+ and the idea that you can get physical shapes in midair
302
+
303
+ 05:18.400 --> 05:21.560
304
+ and have them deform, that's beautiful.
305
+
306
+ 05:21.560 --> 05:23.040
307
+ But the individual components,
308
+
309
+ 05:23.040 --> 05:24.880
310
+ the agility is beautiful too, right?
311
+
312
+ 05:24.880 --> 05:25.720
313
+ That is true too.
314
+
315
+ 05:25.720 --> 05:28.480
316
+ So then how quickly can you actually manipulate
317
+
318
+ 05:28.480 --> 05:29.560
319
+ these three dimensional shapes
320
+
321
+ 05:29.560 --> 05:31.280
322
+ and the individual components?
323
+
324
+ 05:31.280 --> 05:32.240
325
+ Yes, you're right.
326
+
327
+ 05:32.240 --> 05:36.760
328
+ But by the way, you said UAV, unmanned aerial vehicle.
329
+
330
+ 05:36.760 --> 05:41.760
331
+ What's a good term for drones, UAVs, quadcopters?
332
+
333
+ 05:41.840 --> 05:44.560
334
+ Is there a term that's being standardized?
335
+
336
+ 05:44.560 --> 05:45.440
337
+ I don't know if there is.
338
+
339
+ 05:45.440 --> 05:47.920
340
+ Everybody wants to use the word drones.
341
+
342
+ 05:47.920 --> 05:51.080
343
+ And I've often said this, drones to me is a pejorative word.
344
+
345
+ 05:51.080 --> 05:53.960
346
+ It signifies something that's dumb,
347
+
348
+ 05:53.960 --> 05:56.360
349
+ that's pre programmed, that does one little thing
350
+
351
+ 05:56.360 --> 05:58.600
352
+ and robots are anything but drones.
353
+
354
+ 05:58.600 --> 06:00.680
355
+ So I actually don't like that word,
356
+
357
+ 06:00.680 --> 06:02.960
358
+ but that's what everybody uses.
359
+
360
+ 06:02.960 --> 06:04.880
361
+ You could call it unpiloted.
362
+
363
+ 06:04.880 --> 06:05.800
364
+ Unpiloted.
365
+
366
+ 06:05.800 --> 06:08.120
367
+ But even unpiloted could be radio controlled,
368
+
369
+ 06:08.120 --> 06:11.560
370
+ could be remotely controlled in many different ways.
371
+
372
+ 06:11.560 --> 06:12.960
373
+ And I think the right word is,
374
+
375
+ 06:12.960 --> 06:15.040
376
+ thinking about it as an aerial robot.
377
+
378
+ 06:15.040 --> 06:19.080
379
+ You also say agile, autonomous, aerial robot, right?
380
+
381
+ 06:19.080 --> 06:22.160
382
+ Yeah, so agility is an attribute, but they don't have to be.
383
+
384
+ 06:23.080 --> 06:24.800
385
+ So what biological system,
386
+
387
+ 06:24.800 --> 06:27.200
388
+ because you've also drawn a lot of inspiration with those.
389
+
390
+ 06:27.200 --> 06:30.360
391
+ I've seen bees and ants that you've talked about.
392
+
393
+ 06:30.360 --> 06:35.240
394
+ What living creatures have you found to be most inspiring
395
+
396
+ 06:35.240 --> 06:38.520
397
+ as an engineer, instructive in your work in robotics?
398
+
399
+ 06:38.520 --> 06:43.440
400
+ To me, so ants are really quite incredible creatures, right?
401
+
402
+ 06:43.440 --> 06:47.880
403
+ So you, I mean, the individuals arguably are very simple
404
+
405
+ 06:47.880 --> 06:52.360
406
+ in how they're built and yet they're incredibly resilient
407
+
408
+ 06:52.360 --> 06:53.960
409
+ as a population.
410
+
411
+ 06:53.960 --> 06:56.760
412
+ And as individuals, they're incredibly robust.
413
+
414
+ 06:56.760 --> 07:00.600
415
+ So, if you take an ant, it's six legs,
416
+
417
+ 07:00.600 --> 07:04.120
418
+ you remove one leg, it still works just fine.
419
+
420
+ 07:04.120 --> 07:05.760
421
+ And it moves along.
422
+
423
+ 07:05.760 --> 07:08.720
424
+ And I don't know that he even realizes it's lost a leg.
425
+
426
+ 07:09.760 --> 07:12.520
427
+ So that's the robustness at the individual ant level.
428
+
429
+ 07:13.400 --> 07:15.360
430
+ But then you look about this instinct
431
+
432
+ 07:15.360 --> 07:17.680
433
+ for self preservation of the colonies
434
+
435
+ 07:17.680 --> 07:20.400
436
+ and they adapt in so many amazing ways.
437
+
438
+ 07:20.400 --> 07:25.400
439
+ You know, transcending gaps by just chaining themselves
440
+
441
+ 07:26.800 --> 07:29.600
442
+ together when you have a flood,
443
+
444
+ 07:29.600 --> 07:32.360
445
+ being able to recruit other teammates
446
+
447
+ 07:32.360 --> 07:34.320
448
+ to carry big morsels of food,
449
+
450
+ 07:35.760 --> 07:38.760
451
+ and then going out in different directions looking for food,
452
+
453
+ 07:38.760 --> 07:43.160
454
+ and then being able to demonstrate consensus,
455
+
456
+ 07:43.160 --> 07:47.040
457
+ even though they don't communicate directly with each other
458
+
459
+ 07:47.040 --> 07:49.080
460
+ the way we communicate with each other.
461
+
462
+ 07:49.080 --> 07:51.880
463
+ In some sense, they also know how to do democracy,
464
+
465
+ 07:51.880 --> 07:53.640
466
+ probably better than what we do.
467
+
468
+ 07:53.640 --> 07:57.000
469
+ Yeah, somehow it's even democracy is emergent.
470
+
471
+ 07:57.000 --> 07:59.120
472
+ It seems like all of the phenomena that we see
473
+
474
+ 07:59.120 --> 08:00.480
475
+ is all emergent.
476
+
477
+ 08:00.480 --> 08:03.560
478
+ It seems like there's no centralized communicator.
479
+
480
+ 08:03.560 --> 08:06.520
481
+ There is, so I think a lot is made about that word,
482
+
483
+ 08:06.520 --> 08:09.640
484
+ emergent, and it means lots of things to different people.
485
+
486
+ 08:09.640 --> 08:10.680
487
+ But you're absolutely right.
488
+
489
+ 08:10.680 --> 08:13.040
490
+ I think as an engineer, you think about
491
+
492
+ 08:13.040 --> 08:17.720
493
+ what element, elemental behaviors
494
+
495
+ 08:17.720 --> 08:21.320
496
+ were primitives you could synthesize
497
+
498
+ 08:21.320 --> 08:25.240
499
+ so that the whole looks incredibly powerful,
500
+
501
+ 08:25.240 --> 08:26.520
502
+ incredibly synergistic,
503
+
504
+ 08:26.520 --> 08:29.520
505
+ the whole definitely being greater than some of the parts,
506
+
507
+ 08:29.520 --> 08:31.480
508
+ and ants are living proof of that.
509
+
510
+ 08:32.480 --> 08:34.960
511
+ So when you see these beautiful swarms
512
+
513
+ 08:34.960 --> 08:37.520
514
+ where there's biological systems of robots,
515
+
516
+ 08:38.520 --> 08:40.200
517
+ do you sometimes think of them
518
+
519
+ 08:40.200 --> 08:44.640
520
+ as a single individual living intelligent organism?
521
+
522
+ 08:44.640 --> 08:47.400
523
+ So it's the same as thinking of our human beings
524
+
525
+ 08:47.400 --> 08:51.160
526
+ are human civilization as one organism,
527
+
528
+ 08:51.160 --> 08:52.960
529
+ or do you still, as an engineer,
530
+
531
+ 08:52.960 --> 08:54.600
532
+ think about the individual components
533
+
534
+ 08:54.600 --> 08:55.440
535
+ and all the engineering
536
+
537
+ 08:55.440 --> 08:57.320
538
+ that went into the individual components?
539
+
540
+ 08:57.320 --> 08:58.640
541
+ Well, that's very interesting.
542
+
543
+ 08:58.640 --> 09:01.480
544
+ So again, philosophically as engineers,
545
+
546
+ 09:01.480 --> 09:05.400
547
+ what we wanna do is to go beyond
548
+
549
+ 09:05.400 --> 09:08.280
550
+ the individual components, the individual units,
551
+
552
+ 09:08.280 --> 09:11.520
553
+ and think about it as a unit, as a cohesive unit,
554
+
555
+ 09:11.520 --> 09:15.120
556
+ without worrying about the individual components.
557
+
558
+ 09:15.120 --> 09:17.760
559
+ If you start obsessing about
560
+
561
+ 09:17.760 --> 09:22.120
562
+ the individual building blocks and what they do,
563
+
564
+ 09:23.320 --> 09:27.960
565
+ you inevitably will find it hard to scale up.
566
+
567
+ 09:27.960 --> 09:29.000
568
+ Just mathematically,
569
+
570
+ 09:29.000 --> 09:31.600
571
+ just think about individual things you wanna model,
572
+
573
+ 09:31.600 --> 09:34.040
574
+ and if you want to have 10 of those,
575
+
576
+ 09:34.040 --> 09:36.440
577
+ then you essentially are taking Cartesian products
578
+
579
+ 09:36.440 --> 09:39.320
580
+ of 10 things, and that makes it really complicated.
581
+
582
+ 09:39.320 --> 09:41.840
583
+ Then to do any kind of synthesis or design
584
+
585
+ 09:41.840 --> 09:44.200
586
+ in that high dimension space is really hard.
587
+
588
+ 09:44.200 --> 09:45.800
589
+ So the right way to do this
590
+
591
+ 09:45.800 --> 09:49.040
592
+ is to think about the individuals in a clever way
593
+
594
+ 09:49.040 --> 09:51.120
595
+ so that at the higher level,
596
+
597
+ 09:51.120 --> 09:53.400
598
+ when you look at lots and lots of them,
599
+
600
+ 09:53.400 --> 09:55.320
601
+ abstractly, you can think of them
602
+
603
+ 09:55.320 --> 09:57.120
604
+ in some low dimensional space.
605
+
606
+ 09:57.120 --> 09:58.680
607
+ So what does that involve?
608
+
609
+ 09:58.680 --> 10:02.160
610
+ For the individual, do you have to try to make
611
+
612
+ 10:02.160 --> 10:05.160
613
+ the way they see the world as local as possible?
614
+
615
+ 10:05.160 --> 10:06.440
616
+ And the other thing,
617
+
618
+ 10:06.440 --> 10:09.560
619
+ do you just have to make them robust to collisions?
620
+
621
+ 10:09.560 --> 10:10.880
622
+ Like you said with the ants,
623
+
624
+ 10:10.880 --> 10:15.320
625
+ if something fails, the whole swarm doesn't fail.
626
+
627
+ 10:15.320 --> 10:17.760
628
+ Right, I think as engineers, we do this.
629
+
630
+ 10:17.760 --> 10:19.760
631
+ I mean, you think about, we build planes,
632
+
633
+ 10:19.760 --> 10:21.240
634
+ or we build iPhones,
635
+
636
+ 10:22.240 --> 10:26.280
637
+ and we know that by taking individual components,
638
+
639
+ 10:26.280 --> 10:30.080
640
+ well engineered components with well specified interfaces
641
+
642
+ 10:30.080 --> 10:31.680
643
+ that behave in a predictable way,
644
+
645
+ 10:31.680 --> 10:33.560
646
+ you can build complex systems.
647
+
648
+ 10:34.440 --> 10:36.880
649
+ So that's ingrained, I would claim,
650
+
651
+ 10:36.880 --> 10:39.400
652
+ in most engineers' thinking,
653
+
654
+ 10:39.400 --> 10:41.600
655
+ and it's true for computer scientists as well.
656
+
657
+ 10:41.600 --> 10:44.760
658
+ I think what's different here is that you want
659
+
660
+ 10:44.760 --> 10:49.480
661
+ the individuals to be robust in some sense,
662
+
663
+ 10:49.480 --> 10:52.000
664
+ as we do in these other settings,
665
+
666
+ 10:52.000 --> 10:54.480
667
+ but you also want some degree of resiliency
668
+
669
+ 10:54.480 --> 10:56.320
670
+ for the population.
671
+
672
+ 10:56.320 --> 11:00.560
673
+ And so you really want them to be able to reestablish
674
+
675
+ 11:02.040 --> 11:03.840
676
+ communication with their neighbors.
677
+
678
+ 11:03.840 --> 11:08.840
679
+ You want them to rethink their strategy for group behavior.
680
+
681
+ 11:08.840 --> 11:10.760
682
+ You want them to reorganize.
683
+
684
+ 11:12.200 --> 11:15.920
685
+ And that's where I think a lot of the challenges lie.
686
+
687
+ 11:15.920 --> 11:18.160
688
+ So just at a high level,
689
+
690
+ 11:18.160 --> 11:20.880
691
+ what does it take for a bunch of,
692
+
693
+ 11:22.200 --> 11:24.440
694
+ what should we call them, flying robots,
695
+
696
+ 11:24.440 --> 11:26.680
697
+ to create a formation?
698
+
699
+ 11:26.680 --> 11:28.680
700
+ Just for people who are not familiar
701
+
702
+ 11:28.680 --> 11:32.760
703
+ with robotics in general, how much information is needed?
704
+
705
+ 11:32.760 --> 11:35.840
706
+ How do you even make it happen
707
+
708
+ 11:35.840 --> 11:39.520
709
+ without a centralized controller?
710
+
711
+ 11:39.520 --> 11:41.080
712
+ So, I mean, there are a couple of different ways
713
+
714
+ 11:41.080 --> 11:43.160
715
+ of looking at this.
716
+
717
+ 11:43.160 --> 11:45.680
718
+ If you are a purist,
719
+
720
+ 11:45.680 --> 11:50.680
721
+ you think of it as a way of recreating what nature does.
722
+
723
+ 11:53.560 --> 11:58.440
724
+ So nature forms groups for several reasons,
725
+
726
+ 11:58.440 --> 12:02.000
727
+ but mostly it's because of this instinct
728
+
729
+ 12:02.000 --> 12:05.680
730
+ that organisms have of preserving their colonies,
731
+
732
+ 12:05.680 --> 12:09.520
733
+ their population, which means what?
734
+
735
+ 12:09.520 --> 12:12.920
736
+ You need shelter, you need food, you need to procreate,
737
+
738
+ 12:12.920 --> 12:14.760
739
+ and that's basically it.
740
+
741
+ 12:14.760 --> 12:18.440
742
+ So the kinds of interactions you see are all organic.
743
+
744
+ 12:18.440 --> 12:19.760
745
+ They're all local.
746
+
747
+ 12:20.760 --> 12:24.080
748
+ And the only information that they share,
749
+
750
+ 12:24.080 --> 12:27.520
751
+ and mostly it's indirectly, is to, again,
752
+
753
+ 12:27.520 --> 12:30.000
754
+ preserve the herd or the flock,
755
+
756
+ 12:30.000 --> 12:35.000
757
+ or the swarm, and either by looking for new sources of food
758
+
759
+ 12:37.480 --> 12:39.440
760
+ or looking for new shelters, right?
761
+
762
+ 12:39.440 --> 12:40.280
763
+ Right.
764
+
765
+ 12:41.240 --> 12:45.360
766
+ As engineers, when we build swarms, we have a mission.
767
+
768
+ 12:46.560 --> 12:51.560
769
+ And when you think of a mission, and it involves mobility,
770
+
771
+ 12:52.480 --> 12:55.000
772
+ most often it's described in some kind
773
+
774
+ 12:55.000 --> 12:56.880
775
+ of a global coordinate system.
776
+
777
+ 12:56.880 --> 12:59.440
778
+ As a human, as an operator, as a commander,
779
+
780
+ 12:59.440 --> 13:03.560
781
+ or as a collaborator, I have my coordinate system,
782
+
783
+ 13:03.560 --> 13:06.640
784
+ and I want the robots to be consistent with that.
785
+
786
+ 13:07.600 --> 13:11.240
787
+ So I might think of it slightly differently.
788
+
789
+ 13:11.240 --> 13:15.440
790
+ I might want the robots to recognize that coordinate system,
791
+
792
+ 13:15.440 --> 13:17.720
793
+ which means not only do they have to think locally
794
+
795
+ 13:17.720 --> 13:19.600
796
+ in terms of who their immediate neighbors are,
797
+
798
+ 13:19.600 --> 13:20.920
799
+ but they have to be cognizant
800
+
801
+ 13:20.920 --> 13:24.040
802
+ of what the global environment is.
803
+
804
+ 13:24.040 --> 13:27.040
805
+ They have to be cognizant of what the global environment
806
+
807
+ 13:27.040 --> 13:28.280
808
+ looks like.
809
+
810
+ 13:28.280 --> 13:31.040
811
+ So if I say, surround this building
812
+
813
+ 13:31.040 --> 13:33.240
814
+ and protect this from intruders,
815
+
816
+ 13:33.240 --> 13:35.600
817
+ well, they're immediately in a building centered
818
+
819
+ 13:35.600 --> 13:37.040
820
+ coordinate system, and I have to tell them
821
+
822
+ 13:37.040 --> 13:38.680
823
+ where the building is.
824
+
825
+ 13:38.680 --> 13:40.040
826
+ And they're globally collaborating
827
+
828
+ 13:40.040 --> 13:41.280
829
+ on the map of that building.
830
+
831
+ 13:41.280 --> 13:44.160
832
+ They're maintaining some kind of global,
833
+
834
+ 13:44.160 --> 13:45.480
835
+ not just in the frame of the building,
836
+
837
+ 13:45.480 --> 13:49.000
838
+ but there's information that's ultimately being built up
839
+
840
+ 13:49.000 --> 13:53.280
841
+ explicitly as opposed to kind of implicitly,
842
+
843
+ 13:53.280 --> 13:54.360
844
+ like nature might.
845
+
846
+ 13:54.360 --> 13:55.200
847
+ Correct, correct.
848
+
849
+ 13:55.200 --> 13:57.680
850
+ So in some sense, nature is very, very sophisticated,
851
+
852
+ 13:57.680 --> 14:01.880
853
+ but the tasks that nature solves or needs to solve
854
+
855
+ 14:01.880 --> 14:05.160
856
+ are very different from the kind of engineered tasks,
857
+
858
+ 14:05.160 --> 14:09.760
859
+ artificial tasks that we are forced to address.
860
+
861
+ 14:09.760 --> 14:12.520
862
+ And again, there's nothing preventing us
863
+
864
+ 14:12.520 --> 14:15.160
865
+ from solving these other problems,
866
+
867
+ 14:15.160 --> 14:16.600
868
+ but ultimately it's about impact.
869
+
870
+ 14:16.600 --> 14:19.360
871
+ You want these swarms to do something useful.
872
+
873
+ 14:19.360 --> 14:24.360
874
+ And so you're kind of driven into this very unnatural,
875
+
876
+ 14:24.640 --> 14:25.480
877
+ if you will.
878
+
879
+ 14:25.480 --> 14:29.160
880
+ Unnatural, meaning not like how nature does, setting.
881
+
882
+ 14:29.160 --> 14:31.920
883
+ And it's probably a little bit more expensive
884
+
885
+ 14:31.920 --> 14:33.760
886
+ to do it the way nature does,
887
+
888
+ 14:33.760 --> 14:37.560
889
+ because nature is less sensitive
890
+
891
+ 14:37.560 --> 14:39.480
892
+ to the loss of the individual.
893
+
894
+ 14:39.480 --> 14:42.280
895
+ And cost wise in robotics,
896
+
897
+ 14:42.280 --> 14:45.480
898
+ I think you're more sensitive to losing individuals.
899
+
900
+ 14:45.480 --> 14:49.000
901
+ I think that's true, although if you look at the price
902
+
903
+ 14:49.000 --> 14:51.520
904
+ to performance ratio of robotic components,
905
+
906
+ 14:51.520 --> 14:54.720
907
+ it's coming down dramatically, right?
908
+
909
+ 14:54.720 --> 14:56.040
910
+ It continues to come down.
911
+
912
+ 14:56.040 --> 14:58.920
913
+ So I think we're asymptotically approaching the point
914
+
915
+ 14:58.920 --> 14:59.960
916
+ where we would get, yeah,
917
+
918
+ 14:59.960 --> 15:05.040
919
+ the cost of individuals would really become insignificant.
920
+
921
+ 15:05.040 --> 15:07.640
922
+ So let's step back at a high level view,
923
+
924
+ 15:07.640 --> 15:12.480
925
+ the impossible question of what kind of, as an overview,
926
+
927
+ 15:12.480 --> 15:14.400
928
+ what kind of autonomous flying vehicles
929
+
930
+ 15:14.400 --> 15:16.200
931
+ are there in general?
932
+
933
+ 15:16.200 --> 15:19.720
934
+ I think the ones that receive a lot of notoriety
935
+
936
+ 15:19.720 --> 15:22.560
937
+ are obviously the military vehicles.
938
+
939
+ 15:22.560 --> 15:26.280
940
+ Military vehicles are controlled by a base station,
941
+
942
+ 15:26.280 --> 15:29.640
943
+ but have a lot of human supervision.
944
+
945
+ 15:29.640 --> 15:31.800
946
+ But they have limited autonomy,
947
+
948
+ 15:31.800 --> 15:34.760
949
+ which is the ability to go from point A to point B.
950
+
951
+ 15:34.760 --> 15:37.080
952
+ And even the more sophisticated now,
953
+
954
+ 15:37.080 --> 15:40.400
955
+ sophisticated vehicles can do autonomous takeoff
956
+
957
+ 15:40.400 --> 15:41.760
958
+ and landing.
959
+
960
+ 15:41.760 --> 15:44.360
961
+ And those usually have wings and they're heavy.
962
+
963
+ 15:44.360 --> 15:45.360
964
+ Usually they're wings,
965
+
966
+ 15:45.360 --> 15:47.440
967
+ but then there's nothing preventing us from doing this
968
+
969
+ 15:47.440 --> 15:49.000
970
+ for helicopters as well.
971
+
972
+ 15:49.000 --> 15:52.480
973
+ There are many military organizations
974
+
975
+ 15:52.480 --> 15:56.560
976
+ that have autonomous helicopters in the same vein.
977
+
978
+ 15:56.560 --> 16:00.080
979
+ And by the way, you look at autopilots and airplanes
980
+
981
+ 16:00.080 --> 16:02.840
982
+ and it's actually very similar.
983
+
984
+ 16:02.840 --> 16:07.160
985
+ In fact, one interesting question we can ask is,
986
+
987
+ 16:07.160 --> 16:12.120
988
+ if you look at all the air safety violations,
989
+
990
+ 16:12.120 --> 16:14.080
991
+ all the crashes that occurred,
992
+
993
+ 16:14.080 --> 16:18.640
994
+ would they have happened if the plane were truly autonomous?
995
+
996
+ 16:18.640 --> 16:21.960
997
+ And I think you'll find that in many of the cases,
998
+
999
+ 16:21.960 --> 16:24.600
1000
+ because of pilot error, we made silly decisions.
1001
+
1002
+ 16:24.600 --> 16:26.960
1003
+ And so in some sense, even in air traffic,
1004
+
1005
+ 16:26.960 --> 16:29.800
1006
+ commercial air traffic, there's a lot of applications,
1007
+
1008
+ 16:29.800 --> 16:33.960
1009
+ although we only see autonomy being enabled
1010
+
1011
+ 16:33.960 --> 16:38.960
1012
+ at very high altitudes when the plane is an autopilot.
1013
+
1014
+ 16:38.960 --> 16:41.960
1015
+ The plane is an autopilot.
1016
+
1017
+ 16:41.960 --> 16:42.800
1018
+ There's still a role for the human
1019
+
1020
+ 16:42.800 --> 16:47.640
1021
+ and that kind of autonomy is, you're kind of implying,
1022
+
1023
+ 16:47.640 --> 16:48.680
1024
+ I don't know what the right word is,
1025
+
1026
+ 16:48.680 --> 16:53.480
1027
+ but it's a little dumber than it could be.
1028
+
1029
+ 16:53.480 --> 16:55.720
1030
+ Right, so in the lab, of course,
1031
+
1032
+ 16:55.720 --> 16:59.200
1033
+ we can afford to be a lot more aggressive.
1034
+
1035
+ 16:59.200 --> 17:04.200
1036
+ And the question we try to ask is,
1037
+
1038
+ 17:04.200 --> 17:09.200
1039
+ can we make robots that will be able to make decisions
1040
+
1041
+ 17:10.360 --> 17:13.680
1042
+ without any kind of external infrastructure?
1043
+
1044
+ 17:13.680 --> 17:14.880
1045
+ So what does that mean?
1046
+
1047
+ 17:14.880 --> 17:16.960
1048
+ So the most common piece of infrastructure
1049
+
1050
+ 17:16.960 --> 17:19.640
1051
+ that airplanes use today is GPS.
1052
+
1053
+ 17:20.560 --> 17:25.160
1054
+ GPS is also the most brittle form of information.
1055
+
1056
+ 17:26.680 --> 17:30.480
1057
+ If you have driven in a city, try to use GPS navigation,
1058
+
1059
+ 17:30.480 --> 17:32.760
1060
+ in tall buildings, you immediately lose GPS.
1061
+
1062
+ 17:32.760 --> 17:36.280
1063
+ And so that's not a very sophisticated way
1064
+
1065
+ 17:36.280 --> 17:37.840
1066
+ of building autonomy.
1067
+
1068
+ 17:37.840 --> 17:39.560
1069
+ I think the second piece of infrastructure
1070
+
1071
+ 17:39.560 --> 17:41.920
1072
+ they rely on is communications.
1073
+
1074
+ 17:41.920 --> 17:46.200
1075
+ Again, it's very easy to jam communications.
1076
+
1077
+ 17:47.360 --> 17:51.320
1078
+ In fact, if you use wifi, you know that wifi signals
1079
+
1080
+ 17:51.320 --> 17:53.520
1081
+ drop out, cell signals drop out.
1082
+
1083
+ 17:53.520 --> 17:56.800
1084
+ So to rely on something like that is not good.
1085
+
1086
+ 17:58.560 --> 18:01.200
1087
+ The third form of infrastructure we use,
1088
+
1089
+ 18:01.200 --> 18:02.920
1090
+ and I hate to call it infrastructure,
1091
+
1092
+ 18:02.920 --> 18:06.360
1093
+ but it is that, in the sense of robots, is people.
1094
+
1095
+ 18:06.360 --> 18:08.640
1096
+ So you could rely on somebody to pilot you.
1097
+
1098
+ 18:09.960 --> 18:11.600
1099
+ And so the question you wanna ask is,
1100
+
1101
+ 18:11.600 --> 18:14.760
1102
+ if there are no pilots, there's no communications
1103
+
1104
+ 18:14.760 --> 18:18.720
1105
+ with any base station, if there's no knowledge of position,
1106
+
1107
+ 18:18.720 --> 18:21.640
1108
+ and if there's no a priori map,
1109
+
1110
+ 18:21.640 --> 18:24.880
1111
+ a priori knowledge of what the environment looks like,
1112
+
1113
+ 18:24.880 --> 18:28.240
1114
+ a priori model of what might happen in the future,
1115
+
1116
+ 18:28.240 --> 18:29.560
1117
+ can robots navigate?
1118
+
1119
+ 18:29.560 --> 18:31.480
1120
+ So that is true autonomy.
1121
+
1122
+ 18:31.480 --> 18:34.160
1123
+ So that's true autonomy, and we're talking about,
1124
+
1125
+ 18:34.160 --> 18:36.880
1126
+ you mentioned like military application of drones.
1127
+
1128
+ 18:36.880 --> 18:38.320
1129
+ Okay, so what else is there?
1130
+
1131
+ 18:38.320 --> 18:42.080
1132
+ You talk about agile, autonomous flying robots,
1133
+
1134
+ 18:42.080 --> 18:45.680
1135
+ aerial robots, so that's a different kind of,
1136
+
1137
+ 18:45.680 --> 18:48.160
1138
+ it's not winged, it's not big, at least it's small.
1139
+
1140
+ 18:48.160 --> 18:50.840
1141
+ So I use the word agility mostly,
1142
+
1143
+ 18:50.840 --> 18:53.520
1144
+ or at least we're motivated to do agile robots,
1145
+
1146
+ 18:53.520 --> 18:58.000
1147
+ mostly because robots can operate
1148
+
1149
+ 18:58.000 --> 19:01.120
1150
+ and should be operating in constrained environments.
1151
+
1152
+ 19:02.120 --> 19:06.960
1153
+ And if you want to operate the way a global hawk operates,
1154
+
1155
+ 19:06.960 --> 19:09.120
1156
+ I mean, the kinds of conditions in which you operate
1157
+
1158
+ 19:09.120 --> 19:10.760
1159
+ are very, very restrictive.
1160
+
1161
+ 19:11.760 --> 19:13.720
1162
+ If you wanna go inside a building,
1163
+
1164
+ 19:13.720 --> 19:15.600
1165
+ for example, for search and rescue,
1166
+
1167
+ 19:15.600 --> 19:18.120
1168
+ or to locate an active shooter,
1169
+
1170
+ 19:18.120 --> 19:22.120
1171
+ or you wanna navigate under the canopy in an orchard
1172
+
1173
+ 19:22.120 --> 19:23.880
1174
+ to look at health of plants,
1175
+
1176
+ 19:23.880 --> 19:28.240
1177
+ or to look for, to count fruits,
1178
+
1179
+ 19:28.240 --> 19:31.240
1180
+ to measure the tree trunks.
1181
+
1182
+ 19:31.240 --> 19:33.240
1183
+ These are things we do, by the way.
1184
+
1185
+ 19:33.240 --> 19:35.400
1186
+ There's some cool agriculture stuff you've shown
1187
+
1188
+ 19:35.400 --> 19:37.080
1189
+ in the past, it's really awesome.
1190
+
1191
+ 19:37.080 --> 19:40.360
1192
+ So in those kinds of settings, you do need that agility.
1193
+
1194
+ 19:40.360 --> 19:42.560
1195
+ Agility does not necessarily mean
1196
+
1197
+ 19:42.560 --> 19:45.440
1198
+ you break records for the 100 meters dash.
1199
+
1200
+ 19:45.440 --> 19:48.000
1201
+ What it really means is you see the unexpected
1202
+
1203
+ 19:48.000 --> 19:51.480
1204
+ and you're able to maneuver in a safe way,
1205
+
1206
+ 19:51.480 --> 19:55.400
1207
+ and in a way that gets you the most information
1208
+
1209
+ 19:55.400 --> 19:57.640
1210
+ about the thing you're trying to do.
1211
+
1212
+ 19:57.640 --> 20:00.440
1213
+ By the way, you may be the only person
1214
+
1215
+ 20:00.440 --> 20:04.200
1216
+ who, in a TED Talk, has used a math equation,
1217
+
1218
+ 20:04.200 --> 20:07.600
1219
+ which is amazing, people should go see one of your TED Talks.
1220
+
1221
+ 20:07.600 --> 20:08.800
1222
+ Actually, it's very interesting,
1223
+
1224
+ 20:08.800 --> 20:12.400
1225
+ because the TED curator, Chris Anderson,
1226
+
1227
+ 20:12.400 --> 20:15.360
1228
+ told me, you can't show math.
1229
+
1230
+ 20:15.360 --> 20:18.200
1231
+ And I thought about it, but that's who I am.
1232
+
1233
+ 20:18.200 --> 20:20.760
1234
+ I mean, that's our work.
1235
+
1236
+ 20:20.760 --> 20:25.760
1237
+ And so I felt compelled to give the audience a taste
1238
+
1239
+ 20:25.760 --> 20:27.640
1240
+ for at least some math.
1241
+
1242
+ 20:27.640 --> 20:32.640
1243
+ So on that point, simply, what does it take
1244
+
1245
+ 20:32.880 --> 20:37.360
1246
+ to make a thing with four motors fly, a quadcopter,
1247
+
1248
+ 20:37.360 --> 20:40.640
1249
+ one of these little flying robots?
1250
+
1251
+ 20:41.760 --> 20:43.960
1252
+ How hard is it to make it fly?
1253
+
1254
+ 20:43.960 --> 20:46.560
1255
+ How do you coordinate the four motors?
1256
+
1257
+ 20:46.560 --> 20:51.560
1258
+ How do you convert those motors into actual movement?
1259
+
1260
+ 20:52.600 --> 20:54.800
1261
+ So this is an interesting question.
1262
+
1263
+ 20:54.800 --> 20:58.080
1264
+ We've been trying to do this since 2000.
1265
+
1266
+ 20:58.080 --> 21:00.560
1267
+ It is a commentary on the sensors
1268
+
1269
+ 21:00.560 --> 21:02.080
1270
+ that were available back then,
1271
+
1272
+ 21:02.080 --> 21:04.280
1273
+ the computers that were available back then.
1274
+
1275
+ 21:05.560 --> 21:10.280
1276
+ And a number of things happened between 2000 and 2007.
1277
+
1278
+ 21:11.520 --> 21:14.120
1279
+ One is the advances in computing,
1280
+
1281
+ 21:14.120 --> 21:16.760
1282
+ which is, so we all know about Moore's Law,
1283
+
1284
+ 21:16.760 --> 21:19.680
1285
+ but I think 2007 was a tipping point,
1286
+
1287
+ 21:19.680 --> 21:22.720
1288
+ the year of the iPhone, the year of the cloud.
1289
+
1290
+ 21:22.720 --> 21:24.640
1291
+ Lots of things happened in 2007.
1292
+
1293
+ 21:25.600 --> 21:27.600
1294
+ But going back even further,
1295
+
1296
+ 21:27.600 --> 21:31.360
1297
+ inertial measurement units as a sensor really matured.
1298
+
1299
+ 21:31.360 --> 21:33.040
1300
+ Again, lots of reasons for that.
1301
+
1302
+ 21:33.920 --> 21:35.400
1303
+ Certainly, there's a lot of federal funding,
1304
+
1305
+ 21:35.400 --> 21:37.360
1306
+ particularly DARPA in the US,
1307
+
1308
+ 21:38.320 --> 21:42.760
1309
+ but they didn't anticipate this boom in IMUs.
1310
+
1311
+ 21:42.760 --> 21:46.560
1312
+ But if you look, subsequently what happened
1313
+
1314
+ 21:46.560 --> 21:50.040
1315
+ is that every car manufacturer had to put an airbag in,
1316
+
1317
+ 21:50.040 --> 21:52.600
1318
+ which meant you had to have an accelerometer on board.
1319
+
1320
+ 21:52.600 --> 21:55.000
1321
+ And so that drove down the price to performance ratio.
1322
+
1323
+ 21:55.000 --> 21:56.880
1324
+ Wow, I should know this.
1325
+
1326
+ 21:56.880 --> 21:57.960
1327
+ That's very interesting.
1328
+
1329
+ 21:57.960 --> 21:59.360
1330
+ That's very interesting, the connection there.
1331
+
1332
+ 21:59.360 --> 22:01.320
1333
+ And that's why research is very,
1334
+
1335
+ 22:01.320 --> 22:03.280
1336
+ it's very hard to predict the outcomes.
1337
+
1338
+ 22:04.840 --> 22:07.640
1339
+ And again, the federal government spent a ton of money
1340
+
1341
+ 22:07.640 --> 22:12.280
1342
+ on things that they thought were useful for resonators,
1343
+
1344
+ 22:12.280 --> 22:16.840
1345
+ but it ended up enabling these small UAVs, which is great,
1346
+
1347
+ 22:16.840 --> 22:18.520
1348
+ because I could have never raised that much money
1349
+
1350
+ 22:18.520 --> 22:20.760
1351
+ and sold this project,
1352
+
1353
+ 22:20.760 --> 22:22.200
1354
+ hey, we want to build these small UAVs.
1355
+
1356
+ 22:22.200 --> 22:25.440
1357
+ Can you actually fund the development of low cost IMUs?
1358
+
1359
+ 22:25.440 --> 22:27.600
1360
+ So why do you need an IMU on an IMU?
1361
+
1362
+ 22:27.600 --> 22:31.000
1363
+ So I'll come back to that.
1364
+
1365
+ 22:31.000 --> 22:33.320
1366
+ So in 2007, 2008, we were able to build these.
1367
+
1368
+ 22:33.320 --> 22:35.200
1369
+ And then the question you're asking was a good one.
1370
+
1371
+ 22:35.200 --> 22:40.240
1372
+ How do you coordinate the motors to develop this?
1373
+
1374
+ 22:40.240 --> 22:43.880
1375
+ But over the last 10 years, everything is commoditized.
1376
+
1377
+ 22:43.880 --> 22:46.240
1378
+ A high school kid today can pick up
1379
+
1380
+ 22:46.240 --> 22:50.560
1381
+ a Raspberry Pi kit and build this.
1382
+
1383
+ 22:50.560 --> 22:53.200
1384
+ All the low levels functionality is all automated.
1385
+
1386
+ 22:54.160 --> 22:56.360
1387
+ But basically at some level,
1388
+
1389
+ 22:56.360 --> 23:01.360
1390
+ you have to drive the motors at the right RPMs,
1391
+
1392
+ 23:01.360 --> 23:03.680
1393
+ the right velocity,
1394
+
1395
+ 23:04.560 --> 23:07.480
1396
+ in order to generate the right amount of thrust,
1397
+
1398
+ 23:07.480 --> 23:10.360
1399
+ in order to position it and orient it in a way
1400
+
1401
+ 23:10.360 --> 23:12.840
1402
+ that you need to in order to fly.
1403
+
1404
+ 23:13.800 --> 23:16.680
1405
+ The feedback that you get is from onboard sensors,
1406
+
1407
+ 23:16.680 --> 23:18.400
1408
+ and the IMU is an important part of it.
1409
+
1410
+ 23:18.400 --> 23:23.400
1411
+ The IMU tells you what the acceleration is,
1412
+
1413
+ 23:23.840 --> 23:26.400
1414
+ as well as what the angular velocity is.
1415
+
1416
+ 23:26.400 --> 23:29.200
1417
+ And those are important pieces of information.
1418
+
1419
+ 23:30.440 --> 23:34.200
1420
+ In addition to that, you need some kind of local position
1421
+
1422
+ 23:34.200 --> 23:37.480
1423
+ or velocity information.
1424
+
1425
+ 23:37.480 --> 23:39.360
1426
+ For example, when we walk,
1427
+
1428
+ 23:39.360 --> 23:41.560
1429
+ we implicitly have this information
1430
+
1431
+ 23:41.560 --> 23:45.840
1432
+ because we kind of know what our stride length is.
1433
+
1434
+ 23:46.720 --> 23:51.480
1435
+ We also are looking at images fly past our retina,
1436
+
1437
+ 23:51.480 --> 23:54.280
1438
+ if you will, and so we can estimate velocity.
1439
+
1440
+ 23:54.280 --> 23:56.360
1441
+ We also have accelerometers in our head,
1442
+
1443
+ 23:56.360 --> 23:59.160
1444
+ and we're able to integrate all these pieces of information
1445
+
1446
+ 23:59.160 --> 24:02.360
1447
+ to determine where we are as we walk.
1448
+
1449
+ 24:02.360 --> 24:04.320
1450
+ And so robots have to do something very similar.
1451
+
1452
+ 24:04.320 --> 24:08.160
1453
+ You need an IMU, you need some kind of a camera
1454
+
1455
+ 24:08.160 --> 24:11.640
1456
+ or other sensor that's measuring velocity,
1457
+
1458
+ 24:12.560 --> 24:15.800
1459
+ and then you need some kind of a global reference frame
1460
+
1461
+ 24:15.800 --> 24:19.520
1462
+ if you really want to think about doing something
1463
+
1464
+ 24:19.520 --> 24:21.280
1465
+ in a world coordinate system.
1466
+
1467
+ 24:21.280 --> 24:23.680
1468
+ And so how do you estimate your position
1469
+
1470
+ 24:23.680 --> 24:25.160
1471
+ with respect to that global reference frame?
1472
+
1473
+ 24:25.160 --> 24:26.560
1474
+ That's important as well.
1475
+
1476
+ 24:26.560 --> 24:29.520
1477
+ So coordinating the RPMs of the four motors
1478
+
1479
+ 24:29.520 --> 24:32.640
1480
+ is what allows you to, first of all, fly and hover,
1481
+
1482
+ 24:32.640 --> 24:35.600
1483
+ and then you can change the orientation
1484
+
1485
+ 24:35.600 --> 24:37.600
1486
+ and the velocity and so on.
1487
+
1488
+ 24:37.600 --> 24:38.440
1489
+ Exactly, exactly.
1490
+
1491
+ 24:38.440 --> 24:40.320
1492
+ So it's a bunch of degrees of freedom
1493
+
1494
+ 24:40.320 --> 24:41.160
1495
+ that you're controlling.
1496
+
1497
+ 24:41.160 --> 24:42.200
1498
+ There's six degrees of freedom,
1499
+
1500
+ 24:42.200 --> 24:44.920
1501
+ but you only have four inputs, the four motors.
1502
+
1503
+ 24:44.920 --> 24:49.920
1504
+ And it turns out to be a remarkably versatile configuration.
1505
+
1506
+ 24:50.920 --> 24:53.080
1507
+ You think at first, well, I only have four motors,
1508
+
1509
+ 24:53.080 --> 24:55.000
1510
+ how do I go sideways?
1511
+
1512
+ 24:55.000 --> 24:57.280
1513
+ But it's not too hard to say, well, if I tilt myself,
1514
+
1515
+ 24:57.280 --> 25:00.440
1516
+ I can go sideways, and then you have four motors
1517
+
1518
+ 25:00.440 --> 25:03.320
1519
+ pointing up, how do I rotate in place
1520
+
1521
+ 25:03.320 --> 25:05.360
1522
+ about a vertical axis?
1523
+
1524
+ 25:05.360 --> 25:07.800
1525
+ Well, you rotate them at different speeds
1526
+
1527
+ 25:07.800 --> 25:09.720
1528
+ and that generates reaction moments
1529
+
1530
+ 25:09.720 --> 25:11.520
1531
+ and that allows you to turn.
1532
+
1533
+ 25:11.520 --> 25:14.960
1534
+ So it's actually a pretty, it's an optimal configuration
1535
+
1536
+ 25:14.960 --> 25:17.040
1537
+ from an engineer standpoint.
1538
+
1539
+ 25:18.360 --> 25:23.360
1540
+ It's very simple, very cleverly done, and very versatile.
1541
+
1542
+ 25:23.360 --> 25:27.240
1543
+ So if you could step back to a time,
1544
+
1545
+ 25:27.240 --> 25:30.000
1546
+ so I've always known flying robots as,
1547
+
1548
+ 25:31.040 --> 25:35.760
1549
+ to me, it was natural that a quadcopter should fly.
1550
+
1551
+ 25:35.760 --> 25:37.880
1552
+ But when you first started working with it,
1553
+
1554
+ 25:38.800 --> 25:42.000
1555
+ how surprised are you that you can make,
1556
+
1557
+ 25:42.000 --> 25:45.520
1558
+ do so much with the four motors?
1559
+
1560
+ 25:45.520 --> 25:47.600
1561
+ How surprising is it that you can make this thing fly,
1562
+
1563
+ 25:47.600 --> 25:49.760
1564
+ first of all, that you can make it hover,
1565
+
1566
+ 25:49.760 --> 25:52.000
1567
+ that you can add control to it?
1568
+
1569
+ 25:52.000 --> 25:55.080
1570
+ Firstly, this is not, the four motor configuration
1571
+
1572
+ 25:55.080 --> 25:56.400
1573
+ is not ours.
1574
+
1575
+ 25:56.400 --> 25:59.320
1576
+ You can, it has at least a hundred year history.
1577
+
1578
+ 26:00.320 --> 26:04.160
1579
+ And various people, various people tried to get quadrotors
1580
+
1581
+ 26:04.160 --> 26:06.840
1582
+ to fly without much success.
1583
+
1584
+ 26:08.480 --> 26:10.760
1585
+ As I said, we've been working on this since 2000.
1586
+
1587
+ 26:10.760 --> 26:14.400
1588
+ Our first designs were, well, this is way too complicated.
1589
+
1590
+ 26:14.400 --> 26:18.480
1591
+ Why not we try to get an omnidirectional flying robot?
1592
+
1593
+ 26:18.480 --> 26:21.760
1594
+ So our early designs, we had eight rotors.
1595
+
1596
+ 26:21.760 --> 26:25.200
1597
+ And so these eight rotors were arranged uniformly
1598
+
1599
+ 26:26.600 --> 26:28.000
1600
+ on a sphere, if you will.
1601
+
1602
+ 26:28.000 --> 26:30.440
1603
+ So you can imagine a symmetric configuration.
1604
+
1605
+ 26:30.440 --> 26:33.280
1606
+ And so you should be able to fly anywhere.
1607
+
1608
+ 26:33.280 --> 26:36.240
1609
+ But the real challenge we had is the strength to weight ratio
1610
+
1611
+ 26:36.240 --> 26:37.080
1612
+ is not enough.
1613
+
1614
+ 26:37.080 --> 26:39.680
1615
+ And of course, we didn't have the sensors and so on.
1616
+
1617
+ 26:40.520 --> 26:43.040
1618
+ So everybody knew, or at least the people
1619
+
1620
+ 26:43.040 --> 26:44.800
1621
+ who worked with rotorcrafts knew,
1622
+
1623
+ 26:44.800 --> 26:46.520
1624
+ four rotors will get it done.
1625
+
1626
+ 26:47.520 --> 26:49.400
1627
+ So that was not our idea.
1628
+
1629
+ 26:49.400 --> 26:52.800
1630
+ But it took a while before we could actually do
1631
+
1632
+ 26:52.800 --> 26:56.920
1633
+ the onboard sensing and the computation that was needed
1634
+
1635
+ 26:56.920 --> 27:01.000
1636
+ for the kinds of agile maneuvering that we wanted to do
1637
+
1638
+ 27:01.000 --> 27:03.000
1639
+ in our little aerial robots.
1640
+
1641
+ 27:03.000 --> 27:07.560
1642
+ And that only happened between 2007 and 2009 in our lab.
1643
+
1644
+ 27:07.560 --> 27:09.960
1645
+ Yeah, and you have to send the signal
1646
+
1647
+ 27:09.960 --> 27:12.480
1648
+ maybe a hundred times a second.
1649
+
1650
+ 27:12.480 --> 27:15.960
1651
+ So the compute there, everything has to come down in price.
1652
+
1653
+ 27:15.960 --> 27:20.960
1654
+ And what are the steps of getting from point A to point B?
1655
+
1656
+ 27:21.720 --> 27:25.200
1657
+ So we just talked about like local control.
1658
+
1659
+ 27:25.200 --> 27:30.200
1660
+ But if all the kind of cool dancing in the air
1661
+
1662
+ 27:30.840 --> 27:34.520
1663
+ that I've seen you show, how do you make it happen?
1664
+
1665
+ 27:34.520 --> 27:37.360
1666
+ How do you make a trajectory?
1667
+
1668
+ 27:37.360 --> 27:40.520
1669
+ First of all, okay, figure out a trajectory.
1670
+
1671
+ 27:40.520 --> 27:41.680
1672
+ So plan a trajectory.
1673
+
1674
+ 27:41.680 --> 27:44.400
1675
+ And then how do you make that trajectory happen?
1676
+
1677
+ 27:44.400 --> 27:47.280
1678
+ Yeah, I think planning is a very fundamental problem
1679
+
1680
+ 27:47.280 --> 27:48.120
1681
+ in robotics.
1682
+
1683
+ 27:48.120 --> 27:50.800
1684
+ I think 10 years ago it was an esoteric thing,
1685
+
1686
+ 27:50.800 --> 27:53.040
1687
+ but today with self driving cars,
1688
+
1689
+ 27:53.040 --> 27:55.840
1690
+ everybody can understand this basic idea
1691
+
1692
+ 27:55.840 --> 27:57.920
1693
+ that a car sees a whole bunch of things
1694
+
1695
+ 27:57.920 --> 28:00.320
1696
+ and it has to keep a lane or maybe make a right turn
1697
+
1698
+ 28:00.320 --> 28:01.280
1699
+ or switch lanes.
1700
+
1701
+ 28:01.280 --> 28:02.680
1702
+ It has to plan a trajectory.
1703
+
1704
+ 28:02.680 --> 28:03.560
1705
+ It has to be safe.
1706
+
1707
+ 28:03.560 --> 28:04.840
1708
+ It has to be efficient.
1709
+
1710
+ 28:04.840 --> 28:06.640
1711
+ So everybody's familiar with that.
1712
+
1713
+ 28:06.640 --> 28:10.240
1714
+ That's kind of the first step that you have to think about
1715
+
1716
+ 28:10.240 --> 28:14.800
1717
+ when you say autonomy.
1718
+
1719
+ 28:14.800 --> 28:19.120
1720
+ And so for us, it's about finding smooth motions,
1721
+
1722
+ 28:19.120 --> 28:21.320
1723
+ motions that are safe.
1724
+
1725
+ 28:21.320 --> 28:22.880
1726
+ So we think about these two things.
1727
+
1728
+ 28:22.880 --> 28:24.680
1729
+ One is optimality, one is safety.
1730
+
1731
+ 28:24.680 --> 28:27.200
1732
+ Clearly you cannot compromise safety.
1733
+
1734
+ 28:28.440 --> 28:31.360
1735
+ So you're looking for safe, optimal motions.
1736
+
1737
+ 28:31.360 --> 28:34.480
1738
+ The other thing you have to think about is
1739
+
1740
+ 28:34.480 --> 28:38.160
1741
+ can you actually compute a reasonable trajectory
1742
+
1743
+ 28:38.160 --> 28:40.760
1744
+ in a small amount of time?
1745
+
1746
+ 28:40.760 --> 28:42.280
1747
+ Cause you have a time budget.
1748
+
1749
+ 28:42.280 --> 28:45.160
1750
+ So the optimal becomes suboptimal,
1751
+
1752
+ 28:45.160 --> 28:50.160
1753
+ but in our lab we focus on synthesizing smooth trajectories
1754
+
1755
+ 28:51.160 --> 28:53.000
1756
+ that satisfy all the constraints.
1757
+
1758
+ 28:53.000 --> 28:57.120
1759
+ In other words, don't violate any safety constraints
1760
+
1761
+ 28:58.440 --> 29:02.880
1762
+ and is as efficient as possible.
1763
+
1764
+ 29:02.880 --> 29:04.360
1765
+ And when I say efficient,
1766
+
1767
+ 29:04.360 --> 29:06.600
1768
+ it could mean I want to get from point A to point B
1769
+
1770
+ 29:06.600 --> 29:08.360
1771
+ as quickly as possible,
1772
+
1773
+ 29:08.360 --> 29:11.840
1774
+ or I want to get to it as gracefully as possible,
1775
+
1776
+ 29:12.840 --> 29:15.960
1777
+ or I want to consume as little energy as possible.
1778
+
1779
+ 29:15.960 --> 29:18.240
1780
+ But always staying within the safety constraints.
1781
+
1782
+ 29:18.240 --> 29:22.800
1783
+ But yes, always finding a safe trajectory.
1784
+
1785
+ 29:22.800 --> 29:25.040
1786
+ So there's a lot of excitement and progress
1787
+
1788
+ 29:25.040 --> 29:27.360
1789
+ in the field of machine learning
1790
+
1791
+ 29:27.360 --> 29:29.360
1792
+ and reinforcement learning
1793
+
1794
+ 29:29.360 --> 29:32.200
1795
+ and the neural network variant of that
1796
+
1797
+ 29:32.200 --> 29:33.920
1798
+ with deep reinforcement learning.
1799
+
1800
+ 29:33.920 --> 29:36.360
1801
+ Do you see a role of machine learning
1802
+
1803
+ 29:36.360 --> 29:40.560
1804
+ in, so a lot of the success of flying robots
1805
+
1806
+ 29:40.560 --> 29:42.320
1807
+ did not rely on machine learning,
1808
+
1809
+ 29:42.320 --> 29:45.040
1810
+ except for maybe a little bit of the perception
1811
+
1812
+ 29:45.040 --> 29:46.600
1813
+ on the computer vision side.
1814
+
1815
+ 29:46.600 --> 29:48.440
1816
+ On the control side and the planning,
1817
+
1818
+ 29:48.440 --> 29:50.400
1819
+ do you see there's a role in the future
1820
+
1821
+ 29:50.400 --> 29:51.680
1822
+ for machine learning?
1823
+
1824
+ 29:51.680 --> 29:53.800
1825
+ So let me disagree a little bit with you.
1826
+
1827
+ 29:53.800 --> 29:56.800
1828
+ I think we never perhaps called out in my work,
1829
+
1830
+ 29:56.800 --> 29:57.720
1831
+ called out learning,
1832
+
1833
+ 29:57.720 --> 30:00.600
1834
+ but even this very simple idea of being able to fly
1835
+
1836
+ 30:00.600 --> 30:02.200
1837
+ through a constrained space.
1838
+
1839
+ 30:02.200 --> 30:05.680
1840
+ The first time you try it, you'll invariably,
1841
+
1842
+ 30:05.680 --> 30:08.440
1843
+ you might get it wrong if the task is challenging.
1844
+
1845
+ 30:08.440 --> 30:12.200
1846
+ And the reason is to get it perfectly right,
1847
+
1848
+ 30:12.200 --> 30:14.600
1849
+ you have to model everything in the environment.
1850
+
1851
+ 30:15.600 --> 30:19.960
1852
+ And flying is notoriously hard to model.
1853
+
1854
+ 30:19.960 --> 30:24.960
1855
+ There are aerodynamic effects that we constantly discover.
1856
+
1857
+ 30:26.520 --> 30:29.440
1858
+ Even just before I was talking to you,
1859
+
1860
+ 30:29.440 --> 30:33.440
1861
+ I was talking to a student about how blades flap
1862
+
1863
+ 30:33.440 --> 30:35.320
1864
+ when they fly.
1865
+
1866
+ 30:35.320 --> 30:40.320
1867
+ And that ends up changing how a rotorcraft
1868
+
1869
+ 30:40.880 --> 30:43.960
1870
+ is accelerated in the angular direction.
1871
+
1872
+ 30:43.960 --> 30:46.360
1873
+ Does he use like micro flaps or something?
1874
+
1875
+ 30:46.360 --> 30:47.280
1876
+ It's not micro flaps.
1877
+
1878
+ 30:47.280 --> 30:49.640
1879
+ So we assume that each blade is rigid,
1880
+
1881
+ 30:49.640 --> 30:51.720
1882
+ but actually it flaps a little bit.
1883
+
1884
+ 30:51.720 --> 30:52.880
1885
+ It bends.
1886
+
1887
+ 30:52.880 --> 30:53.720
1888
+ Interesting, yeah.
1889
+
1890
+ 30:53.720 --> 30:56.040
1891
+ And so the models rely on the fact,
1892
+
1893
+ 30:56.040 --> 30:58.640
1894
+ on the assumption that they're not rigid.
1895
+
1896
+ 30:58.640 --> 31:00.640
1897
+ On the assumption that they're actually rigid,
1898
+
1899
+ 31:00.640 --> 31:02.240
1900
+ but that's not true.
1901
+
1902
+ 31:02.240 --> 31:03.720
1903
+ If you're flying really quickly,
1904
+
1905
+ 31:03.720 --> 31:06.920
1906
+ these effects become significant.
1907
+
1908
+ 31:06.920 --> 31:09.240
1909
+ If you're flying close to the ground,
1910
+
1911
+ 31:09.240 --> 31:12.160
1912
+ you get pushed off by the ground, right?
1913
+
1914
+ 31:12.160 --> 31:14.920
1915
+ Something which every pilot knows when he tries to land
1916
+
1917
+ 31:14.920 --> 31:18.000
1918
+ or she tries to land, this is called a ground effect.
1919
+
1920
+ 31:18.920 --> 31:21.000
1921
+ Something very few pilots think about
1922
+
1923
+ 31:21.000 --> 31:23.040
1924
+ is what happens when you go close to a ceiling
1925
+
1926
+ 31:23.040 --> 31:25.320
1927
+ or you get sucked into a ceiling.
1928
+
1929
+ 31:25.320 --> 31:26.880
1930
+ There are very few aircrafts
1931
+
1932
+ 31:26.880 --> 31:29.520
1933
+ that fly close to any kind of ceiling.
1934
+
1935
+ 31:29.520 --> 31:33.520
1936
+ Likewise, when you go close to a wall,
1937
+
1938
+ 31:33.520 --> 31:35.720
1939
+ there are these wall effects.
1940
+
1941
+ 31:35.720 --> 31:37.680
1942
+ And if you've gone on a train
1943
+
1944
+ 31:37.680 --> 31:39.600
1945
+ and you pass another train that's traveling
1946
+
1947
+ 31:39.600 --> 31:42.400
1948
+ in the opposite direction, you feel the buffeting.
1949
+
1950
+ 31:42.400 --> 31:45.400
1951
+ And so these kinds of microclimates
1952
+
1953
+ 31:45.400 --> 31:47.880
1954
+ affect our UAV significantly.
1955
+
1956
+ 31:47.880 --> 31:48.720
1957
+ So if you want...
1958
+
1959
+ 31:48.720 --> 31:50.640
1960
+ And they're impossible to model, essentially.
1961
+
1962
+ 31:50.640 --> 31:52.480
1963
+ I wouldn't say they're impossible to model,
1964
+
1965
+ 31:52.480 --> 31:54.880
1966
+ but the level of sophistication you would need
1967
+
1968
+ 31:54.880 --> 31:58.600
1969
+ in the model and the software would be tremendous.
1970
+
1971
+ 32:00.000 --> 32:02.920
1972
+ Plus, to get everything right would be awfully tedious.
1973
+
1974
+ 32:02.920 --> 32:05.080
1975
+ So the way we do this is over time,
1976
+
1977
+ 32:05.080 --> 32:09.000
1978
+ we figure out how to adapt to these conditions.
1979
+
1980
+ 32:10.360 --> 32:13.160
1981
+ So early on, we use the form of learning
1982
+
1983
+ 32:13.160 --> 32:15.760
1984
+ that we call iterative learning.
1985
+
1986
+ 32:15.760 --> 32:18.600
1987
+ So this idea, if you want to perform a task,
1988
+
1989
+ 32:18.600 --> 32:22.120
1990
+ there are a few things that you need to change
1991
+
1992
+ 32:22.120 --> 32:24.960
1993
+ and iterate over a few parameters
1994
+
1995
+ 32:24.960 --> 32:29.280
1996
+ that over time you can figure out.
1997
+
1998
+ 32:29.280 --> 32:33.400
1999
+ So I could call it policy gradient reinforcement learning,
2000
+
2001
+ 32:33.400 --> 32:34.920
2002
+ but actually it was just iterative learning.
2003
+
2004
+ 32:34.920 --> 32:36.000
2005
+ Iterative learning.
2006
+
2007
+ 32:36.000 --> 32:37.800
2008
+ And so this was there way back.
2009
+
2010
+ 32:37.800 --> 32:39.440
2011
+ I think what's interesting is,
2012
+
2013
+ 32:39.440 --> 32:41.640
2014
+ if you look at autonomous vehicles today,
2015
+
2016
+ 32:43.120 --> 32:45.680
2017
+ learning occurs, could occur in two pieces.
2018
+
2019
+ 32:45.680 --> 32:47.960
2020
+ One is perception, understanding the world.
2021
+
2022
+ 32:47.960 --> 32:50.080
2023
+ Second is action, taking actions.
2024
+
2025
+ 32:50.080 --> 32:52.240
2026
+ Everything that I've seen that is successful
2027
+
2028
+ 32:52.240 --> 32:54.360
2029
+ is on the perception side of things.
2030
+
2031
+ 32:54.360 --> 32:55.400
2032
+ So in computer vision,
2033
+
2034
+ 32:55.400 --> 32:57.840
2035
+ we've made amazing strides in the last 10 years.
2036
+
2037
+ 32:57.840 --> 33:01.640
2038
+ So recognizing objects, actually detecting objects,
2039
+
2040
+ 33:01.640 --> 33:06.400
2041
+ classifying them and tagging them in some sense,
2042
+
2043
+ 33:06.400 --> 33:07.440
2044
+ annotating them.
2045
+
2046
+ 33:07.440 --> 33:09.640
2047
+ This is all done through machine learning.
2048
+
2049
+ 33:09.640 --> 33:12.160
2050
+ On the action side, on the other hand,
2051
+
2052
+ 33:12.160 --> 33:13.720
2053
+ I don't know of any examples
2054
+
2055
+ 33:13.720 --> 33:15.560
2056
+ where there are fielded systems
2057
+
2058
+ 33:15.560 --> 33:17.560
2059
+ where we actually learn
2060
+
2061
+ 33:17.560 --> 33:20.560
2062
+ the right behavior.
2063
+
2064
+ 33:20.560 --> 33:22.760
2065
+ Outside of single demonstration is successful.
2066
+
2067
+ 33:22.760 --> 33:24.640
2068
+ In the laboratory, this is the holy grail.
2069
+
2070
+ 33:24.640 --> 33:26.040
2071
+ Can you do end to end learning?
2072
+
2073
+ 33:26.040 --> 33:28.800
2074
+ Can you go from pixels to motor currents?
2075
+
2076
+ 33:30.200 --> 33:31.600
2077
+ This is really, really hard.
2078
+
2079
+ 33:32.800 --> 33:35.080
2080
+ And I think if you go forward,
2081
+
2082
+ 33:35.080 --> 33:37.600
2083
+ the right way to think about these things
2084
+
2085
+ 33:37.600 --> 33:40.720
2086
+ is data driven approaches,
2087
+
2088
+ 33:40.720 --> 33:42.400
2089
+ learning based approaches,
2090
+
2091
+ 33:42.400 --> 33:45.280
2092
+ in concert with model based approaches,
2093
+
2094
+ 33:45.280 --> 33:47.320
2095
+ which is the traditional way of doing things.
2096
+
2097
+ 33:47.320 --> 33:48.720
2098
+ So I think there's a piece,
2099
+
2100
+ 33:48.720 --> 33:51.400
2101
+ there's a role for each of these methodologies.
2102
+
2103
+ 33:51.400 --> 33:52.440
2104
+ So what do you think,
2105
+
2106
+ 33:52.440 --> 33:53.880
2107
+ just jumping out on topic
2108
+
2109
+ 33:53.880 --> 33:56.200
2110
+ since you mentioned autonomous vehicles,
2111
+
2112
+ 33:56.200 --> 33:58.480
2113
+ what do you think are the limits on the perception side?
2114
+
2115
+ 33:58.480 --> 34:01.080
2116
+ So I've talked to Elon Musk
2117
+
2118
+ 34:01.080 --> 34:03.320
2119
+ and there on the perception side,
2120
+
2121
+ 34:03.320 --> 34:05.960
2122
+ they're using primarily computer vision
2123
+
2124
+ 34:05.960 --> 34:08.080
2125
+ to perceive the environment.
2126
+
2127
+ 34:08.080 --> 34:09.760
2128
+ In your work with,
2129
+
2130
+ 34:09.760 --> 34:12.560
2131
+ because you work with the real world a lot
2132
+
2133
+ 34:12.560 --> 34:13.720
2134
+ and the physical world,
2135
+
2136
+ 34:13.720 --> 34:15.800
2137
+ what are the limits of computer vision?
2138
+
2139
+ 34:15.800 --> 34:18.000
2140
+ Do you think we can solve autonomous vehicles
2141
+
2142
+ 34:19.160 --> 34:20.880
2143
+ on the perception side,
2144
+
2145
+ 34:20.880 --> 34:24.240
2146
+ focusing on vision alone and machine learning?
2147
+
2148
+ 34:24.240 --> 34:27.480
2149
+ So, we also have a spinoff company,
2150
+
2151
+ 34:27.480 --> 34:31.840
2152
+ Exyn Technologies that works underground in mines.
2153
+
2154
+ 34:31.840 --> 34:35.600
2155
+ So you go into mines, they're dark, they're dirty.
2156
+
2157
+ 34:36.480 --> 34:38.600
2158
+ You fly in a dirty area,
2159
+
2160
+ 34:38.600 --> 34:41.120
2161
+ there's stuff you kick up from by the propellers,
2162
+
2163
+ 34:41.120 --> 34:42.720
2164
+ the downwash kicks up dust.
2165
+
2166
+ 34:42.720 --> 34:45.520
2167
+ I challenge you to get a computer vision algorithm
2168
+
2169
+ 34:45.520 --> 34:46.680
2170
+ to work there.
2171
+
2172
+ 34:46.680 --> 34:49.600
2173
+ So we use LIDARs in that setting.
2174
+
2175
+ 34:51.200 --> 34:55.360
2176
+ Indoors and even outdoors when we fly through fields,
2177
+
2178
+ 34:55.360 --> 34:57.120
2179
+ I think there's a lot of potential
2180
+
2181
+ 34:57.120 --> 34:59.960
2182
+ for just solving the problem using computer vision alone.
2183
+
2184
+ 35:01.240 --> 35:02.760
2185
+ But I think the bigger question is,
2186
+
2187
+ 35:02.760 --> 35:06.160
2188
+ can you actually solve
2189
+
2190
+ 35:06.160 --> 35:09.440
2191
+ or can you actually identify all the corner cases
2192
+
2193
+ 35:09.440 --> 35:13.920
2194
+ using a single sensing modality and using learning alone?
2195
+
2196
+ 35:13.920 --> 35:15.400
2197
+ So what's your intuition there?
2198
+
2199
+ 35:15.400 --> 35:17.920
2200
+ So look, if you have a corner case
2201
+
2202
+ 35:17.920 --> 35:20.000
2203
+ and your algorithm doesn't work,
2204
+
2205
+ 35:20.000 --> 35:23.200
2206
+ your instinct is to go get data about the corner case
2207
+
2208
+ 35:23.200 --> 35:26.640
2209
+ and patch it up, learn how to deal with that corner case.
2210
+
2211
+ 35:27.640 --> 35:32.040
2212
+ But at some point, this is gonna saturate,
2213
+
2214
+ 35:32.040 --> 35:34.200
2215
+ this approach is not viable.
2216
+
2217
+ 35:34.200 --> 35:38.000
2218
+ So today, computer vision algorithms can detect
2219
+
2220
+ 35:38.000 --> 35:41.360
2221
+ 90% of the objects or can detect objects 90% of the time,
2222
+
2223
+ 35:41.360 --> 35:43.920
2224
+ classify them 90% of the time.
2225
+
2226
+ 35:43.920 --> 35:47.960
2227
+ Cats on the internet probably can do 95%, I don't know.
2228
+
2229
+ 35:47.960 --> 35:52.520
2230
+ But to get from 90% to 99%, you need a lot more data.
2231
+
2232
+ 35:52.520 --> 35:54.480
2233
+ And then I tell you, well, that's not enough
2234
+
2235
+ 35:54.480 --> 35:56.680
2236
+ because I have a safety critical application,
2237
+
2238
+ 35:56.680 --> 36:00.160
2239
+ I wanna go from 99% to 99.9%.
2240
+
2241
+ 36:00.160 --> 36:01.600
2242
+ That's even more data.
2243
+
2244
+ 36:01.600 --> 36:08.600
2245
+ So I think if you look at wanting accuracy on the X axis
2246
+
2247
+ 36:09.600 --> 36:14.080
2248
+ and look at the amount of data on the Y axis,
2249
+
2250
+ 36:14.080 --> 36:16.440
2251
+ I believe that curve is an exponential curve.
2252
+
2253
+ 36:16.440 --> 36:19.480
2254
+ Wow, okay, it's even hard if it's linear.
2255
+
2256
+ 36:19.480 --> 36:20.800
2257
+ It's hard if it's linear, totally,
2258
+
2259
+ 36:20.800 --> 36:22.560
2260
+ but I think it's exponential.
2261
+
2262
+ 36:22.560 --> 36:24.120
2263
+ And the other thing you have to think about
2264
+
2265
+ 36:24.120 --> 36:29.600
2266
+ is that this process is a very, very power hungry process
2267
+
2268
+ 36:29.600 --> 36:32.880
2269
+ to run data farms or servers.
2270
+
2271
+ 36:32.880 --> 36:34.600
2272
+ Power, do you mean literally power?
2273
+
2274
+ 36:34.600 --> 36:36.600
2275
+ Literally power, literally power.
2276
+
2277
+ 36:36.600 --> 36:41.760
2278
+ So in 2014, five years ago, and I don't have more recent data,
2279
+
2280
+ 36:41.760 --> 36:48.360
2281
+ 2% of US electricity consumption was from data farms.
2282
+
2283
+ 36:48.360 --> 36:52.080
2284
+ So we think about this as an information science
2285
+
2286
+ 36:52.080 --> 36:54.240
2287
+ and information processing problem.
2288
+
2289
+ 36:54.240 --> 36:57.840
2290
+ Actually, it is an energy processing problem.
2291
+
2292
+ 36:57.840 --> 37:00.440
2293
+ And so unless we figured out better ways of doing this,
2294
+
2295
+ 37:00.440 --> 37:02.440
2296
+ I don't think this is viable.
2297
+
2298
+ 37:02.440 --> 37:06.600
2299
+ So talking about driving, which is a safety critical application
2300
+
2301
+ 37:06.600 --> 37:10.440
2302
+ and some aspect of flight is safety critical,
2303
+
2304
+ 37:10.440 --> 37:12.960
2305
+ maybe philosophical question, maybe an engineering one,
2306
+
2307
+ 37:12.960 --> 37:15.000
2308
+ what problem do you think is harder to solve,
2309
+
2310
+ 37:15.000 --> 37:18.120
2311
+ autonomous driving or autonomous flight?
2312
+
2313
+ 37:18.120 --> 37:19.920
2314
+ That's a really interesting question.
2315
+
2316
+ 37:19.920 --> 37:25.440
2317
+ I think autonomous flight has several advantages
2318
+
2319
+ 37:25.440 --> 37:29.360
2320
+ that autonomous driving doesn't have.
2321
+
2322
+ 37:29.360 --> 37:32.400
2323
+ So look, if I want to go from point A to point B,
2324
+
2325
+ 37:32.400 --> 37:34.320
2326
+ I have a very, very safe trajectory.
2327
+
2328
+ 37:34.320 --> 37:36.800
2329
+ Go vertically up to a maximum altitude,
2330
+
2331
+ 37:36.800 --> 37:39.480
2332
+ fly horizontally to just about the destination,
2333
+
2334
+ 37:39.480 --> 37:42.400
2335
+ and then come down vertically.
2336
+
2337
+ 37:42.400 --> 37:45.400
2338
+ This is preprogrammed.
2339
+
2340
+ 37:45.400 --> 37:48.040
2341
+ The equivalent of that is very hard to find
2342
+
2343
+ 37:48.040 --> 37:51.560
2344
+ in the self driving car world because you're on the ground,
2345
+
2346
+ 37:51.560 --> 37:53.560
2347
+ you're in a two dimensional surface,
2348
+
2349
+ 37:53.560 --> 37:56.680
2350
+ and the trajectories on the two dimensional surface
2351
+
2352
+ 37:56.680 --> 38:00.200
2353
+ are more likely to encounter obstacles.
2354
+
2355
+ 38:00.200 --> 38:03.280
2356
+ I mean this in an intuitive sense, but mathematically true.
2357
+
2358
+ 38:03.280 --> 38:06.360
2359
+ That's mathematically as well, that's true.
2360
+
2361
+ 38:06.360 --> 38:10.040
2362
+ There's another option in the 2D space of platooning,
2363
+
2364
+ 38:10.040 --> 38:11.640
2365
+ or because there's so many obstacles,
2366
+
2367
+ 38:11.640 --> 38:13.280
2368
+ you can connect with those obstacles
2369
+
2370
+ 38:13.280 --> 38:14.560
2371
+ and all these kind of options.
2372
+
2373
+ 38:14.560 --> 38:16.560
2374
+ Sure, but those exist in the three dimensional space as well.
2375
+
2376
+ 38:16.560 --> 38:17.560
2377
+ So they do.
2378
+
2379
+ 38:17.560 --> 38:21.800
2380
+ So the question also implies how difficult are obstacles
2381
+
2382
+ 38:21.800 --> 38:23.800
2383
+ in the three dimensional space in flight?
2384
+
2385
+ 38:23.800 --> 38:25.600
2386
+ So that's the downside.
2387
+
2388
+ 38:25.600 --> 38:26.920
2389
+ I think in three dimensional space,
2390
+
2391
+ 38:26.920 --> 38:29.080
2392
+ you're modeling three dimensional world,
2393
+
2394
+ 38:29.080 --> 38:31.280
2395
+ not just because you want to avoid it,
2396
+
2397
+ 38:31.280 --> 38:33.040
2398
+ but you want to reason about it,
2399
+
2400
+ 38:33.040 --> 38:35.360
2401
+ and you want to work in the three dimensional environment,
2402
+
2403
+ 38:35.360 --> 38:37.480
2404
+ and that's significantly harder.
2405
+
2406
+ 38:37.480 --> 38:38.920
2407
+ So that's one disadvantage.
2408
+
2409
+ 38:38.920 --> 38:41.040
2410
+ I think the second disadvantage is of course,
2411
+
2412
+ 38:41.040 --> 38:43.200
2413
+ anytime you fly, you have to put up
2414
+
2415
+ 38:43.200 --> 38:46.560
2416
+ with the peculiarities of aerodynamics
2417
+
2418
+ 38:46.560 --> 38:48.720
2419
+ and their complicated environments.
2420
+
2421
+ 38:48.720 --> 38:49.800
2422
+ How do you negotiate that?
2423
+
2424
+ 38:49.800 --> 38:51.880
2425
+ So that's always a problem.
2426
+
2427
+ 38:51.880 --> 38:55.240
2428
+ Do you see a time in the future where there is,
2429
+
2430
+ 38:55.240 --> 38:58.720
2431
+ you mentioned there's agriculture applications.
2432
+
2433
+ 38:58.720 --> 39:01.680
2434
+ So there's a lot of applications of flying robots,
2435
+
2436
+ 39:01.680 --> 39:03.040
2437
+ but do you see a time in the future
2438
+
2439
+ 39:03.040 --> 39:05.360
2440
+ where there's tens of thousands,
2441
+
2442
+ 39:05.360 --> 39:08.160
2443
+ or maybe hundreds of thousands of delivery drones
2444
+
2445
+ 39:08.160 --> 39:12.160
2446
+ that fill the sky, delivery flying robots?
2447
+
2448
+ 39:12.160 --> 39:14.200
2449
+ I think there's a lot of potential
2450
+
2451
+ 39:14.200 --> 39:15.920
2452
+ for the last mile delivery.
2453
+
2454
+ 39:15.920 --> 39:19.240
2455
+ And so in crowded cities, I don't know,
2456
+
2457
+ 39:19.240 --> 39:21.400
2458
+ if you go to a place like Hong Kong,
2459
+
2460
+ 39:21.400 --> 39:24.400
2461
+ just crossing the river can take half an hour,
2462
+
2463
+ 39:24.400 --> 39:29.400
2464
+ and while a drone can just do it in five minutes at most.
2465
+
2466
+ 39:29.400 --> 39:34.400
2467
+ I think you look at delivery of supplies to remote villages.
2468
+
2469
+ 39:35.800 --> 39:38.680
2470
+ I work with a nonprofit called Weave Robotics.
2471
+
2472
+ 39:38.680 --> 39:40.920
2473
+ So they work in the Peruvian Amazon,
2474
+
2475
+ 39:40.920 --> 39:44.680
2476
+ where the only highways that are available
2477
+
2478
+ 39:44.680 --> 39:47.440
2479
+ are the only highways or rivers.
2480
+
2481
+ 39:47.440 --> 39:52.440
2482
+ And to get from point A to point B may take five hours,
2483
+
2484
+ 39:52.960 --> 39:55.600
2485
+ while with a drone, you can get there in 30 minutes.
2486
+
2487
+ 39:56.680 --> 39:59.880
2488
+ So just delivering drugs,
2489
+
2490
+ 39:59.880 --> 40:04.880
2491
+ retrieving samples for testing vaccines,
2492
+
2493
+ 40:05.160 --> 40:07.120
2494
+ I think there's huge potential here.
2495
+
2496
+ 40:07.120 --> 40:09.960
2497
+ So I think the challenges are not technological,
2498
+
2499
+ 40:09.960 --> 40:12.040
2500
+ but the challenge is economical.
2501
+
2502
+ 40:12.040 --> 40:15.560
2503
+ The one thing I'll tell you that nobody thinks about
2504
+
2505
+ 40:15.560 --> 40:18.920
2506
+ is the fact that we've not made huge strides
2507
+
2508
+ 40:18.920 --> 40:20.840
2509
+ in battery technology.
2510
+
2511
+ 40:20.840 --> 40:23.520
2512
+ Yes, it's true, batteries are becoming less expensive
2513
+
2514
+ 40:23.520 --> 40:26.240
2515
+ because we have these mega factories that are coming up,
2516
+
2517
+ 40:26.240 --> 40:28.800
2518
+ but they're all based on lithium based technologies.
2519
+
2520
+ 40:28.800 --> 40:31.480
2521
+ And if you look at the energy density
2522
+
2523
+ 40:31.480 --> 40:33.240
2524
+ and the power density,
2525
+
2526
+ 40:33.240 --> 40:38.000
2527
+ those are two fundamentally limiting numbers.
2528
+
2529
+ 40:38.000 --> 40:39.680
2530
+ So power density is important
2531
+
2532
+ 40:39.680 --> 40:42.480
2533
+ because for a UAV to take off vertically into the air,
2534
+
2535
+ 40:42.480 --> 40:46.360
2536
+ which most drones do, they don't have a runway,
2537
+
2538
+ 40:46.360 --> 40:50.240
2539
+ you consume roughly 200 watts per kilo at the small size.
2540
+
2541
+ 40:51.560 --> 40:53.920
2542
+ That's a lot, right?
2543
+
2544
+ 40:53.920 --> 40:57.520
2545
+ In contrast, the human brain consumes less than 80 watts,
2546
+
2547
+ 40:57.520 --> 40:58.920
2548
+ the whole of the human brain.
2549
+
2550
+ 40:59.920 --> 41:03.600
2551
+ So just imagine just lifting yourself into the air
2552
+
2553
+ 41:03.600 --> 41:06.000
2554
+ is like two or three light bulbs,
2555
+
2556
+ 41:06.000 --> 41:07.840
2557
+ which makes no sense to me.
2558
+
2559
+ 41:07.840 --> 41:10.440
2560
+ Yeah, so you're going to have to at scale
2561
+
2562
+ 41:10.440 --> 41:12.880
2563
+ solve the energy problem then,
2564
+
2565
+ 41:12.880 --> 41:17.880
2566
+ charging the batteries, storing the energy and so on.
2567
+
2568
+ 41:18.920 --> 41:20.680
2569
+ And then the storage is the second problem,
2570
+
2571
+ 41:20.680 --> 41:22.960
2572
+ but storage limits the range.
2573
+
2574
+ 41:22.960 --> 41:27.960
2575
+ But you have to remember that you have to burn
2576
+
2577
+ 41:28.680 --> 41:31.600
2578
+ a lot of it per given time.
2579
+
2580
+ 41:31.600 --> 41:32.920
2581
+ So the burning is another problem.
2582
+
2583
+ 41:32.920 --> 41:34.640
2584
+ Which is a power question.
2585
+
2586
+ 41:34.640 --> 41:38.640
2587
+ Yes, and do you think just your intuition,
2588
+
2589
+ 41:38.640 --> 41:43.640
2590
+ there are breakthroughs in batteries on the horizon?
2591
+
2592
+ 41:44.960 --> 41:46.440
2593
+ How hard is that problem?
2594
+
2595
+ 41:46.440 --> 41:47.600
2596
+ Look, there are a lot of companies
2597
+
2598
+ 41:47.600 --> 41:52.600
2599
+ that are promising flying cars that are autonomous
2600
+
2601
+ 41:53.880 --> 41:55.120
2602
+ and that are clean.
2603
+
2604
+ 41:59.400 --> 42:01.680
2605
+ I think they're over promising.
2606
+
2607
+ 42:01.680 --> 42:04.800
2608
+ The autonomy piece is doable.
2609
+
2610
+ 42:04.800 --> 42:07.040
2611
+ The clean piece, I don't think so.
2612
+
2613
+ 42:08.000 --> 42:11.840
2614
+ There's another company that I work with called Jetoptera.
2615
+
2616
+ 42:11.840 --> 42:14.360
2617
+ They make small jet engines.
2618
+
2619
+ 42:15.760 --> 42:18.080
2620
+ And they can get up to 50 miles an hour very easily
2621
+
2622
+ 42:18.080 --> 42:19.960
2623
+ and lift 50 kilos.
2624
+
2625
+ 42:19.960 --> 42:22.840
2626
+ But they're jet engines, they're efficient,
2627
+
2628
+ 42:23.920 --> 42:26.320
2629
+ they're a little louder than electric vehicles,
2630
+
2631
+ 42:26.320 --> 42:28.960
2632
+ but they can build flying cars.
2633
+
2634
+ 42:28.960 --> 42:32.440
2635
+ So your sense is that there's a lot of pieces
2636
+
2637
+ 42:32.440 --> 42:33.520
2638
+ that have come together.
2639
+
2640
+ 42:33.520 --> 42:37.360
2641
+ So on this crazy question,
2642
+
2643
+ 42:37.360 --> 42:39.720
2644
+ if you look at companies like Kitty Hawk,
2645
+
2646
+ 42:39.720 --> 42:42.080
2647
+ working on electric, so the clean,
2648
+
2649
+ 42:43.880 --> 42:45.840
2650
+ talking to Sebastian Thrun, right?
2651
+
2652
+ 42:45.840 --> 42:48.840
2653
+ It's a crazy dream, you know?
2654
+
2655
+ 42:48.840 --> 42:52.080
2656
+ But you work with flight a lot.
2657
+
2658
+ 42:52.080 --> 42:55.760
2659
+ You've mentioned before that manned flights
2660
+
2661
+ 42:55.760 --> 43:00.760
2662
+ or carrying a human body is very difficult to do.
2663
+
2664
+ 43:01.640 --> 43:04.240
2665
+ So how crazy is flying cars?
2666
+
2667
+ 43:04.240 --> 43:05.400
2668
+ Do you think there'll be a day
2669
+
2670
+ 43:05.400 --> 43:10.400
2671
+ when we have vertical takeoff and landing vehicles
2672
+
2673
+ 43:11.080 --> 43:14.040
2674
+ that are sufficiently affordable
2675
+
2676
+ 43:14.960 --> 43:17.440
2677
+ that we're going to see a huge amount of them?
2678
+
2679
+ 43:17.440 --> 43:19.680
2680
+ And they would look like something like we dream of
2681
+
2682
+ 43:19.680 --> 43:21.080
2683
+ when we think about flying cars.
2684
+
2685
+ 43:21.080 --> 43:22.200
2686
+ Yeah, like the Jetsons.
2687
+
2688
+ 43:22.200 --> 43:23.160
2689
+ The Jetsons, yeah.
2690
+
2691
+ 43:23.160 --> 43:25.560
2692
+ So look, there are a lot of smart people working on this
2693
+
2694
+ 43:25.560 --> 43:29.640
2695
+ and you never say something is not possible
2696
+
2697
+ 43:29.640 --> 43:32.200
2698
+ when you have people like Sebastian Thrun working on it.
2699
+
2700
+ 43:32.200 --> 43:35.160
2701
+ So I totally think it's viable.
2702
+
2703
+ 43:35.160 --> 43:38.240
2704
+ I question, again, the electric piece.
2705
+
2706
+ 43:38.240 --> 43:39.520
2707
+ The electric piece, yeah.
2708
+
2709
+ 43:39.520 --> 43:41.680
2710
+ And again, for short distances, you can do it.
2711
+
2712
+ 43:41.680 --> 43:43.640
2713
+ And there's no reason to suggest
2714
+
2715
+ 43:43.640 --> 43:45.840
2716
+ that these all just have to be rotorcrafts.
2717
+
2718
+ 43:45.840 --> 43:46.920
2719
+ You take off vertically,
2720
+
2721
+ 43:46.920 --> 43:49.680
2722
+ but then you morph into a forward flight.
2723
+
2724
+ 43:49.680 --> 43:51.600
2725
+ I think there are a lot of interesting designs.
2726
+
2727
+ 43:51.600 --> 43:56.040
2728
+ The question to me is, are these economically viable?
2729
+
2730
+ 43:56.040 --> 43:59.160
2731
+ And if you agree to do this with fossil fuels,
2732
+
2733
+ 43:59.160 --> 44:01.960
2734
+ it instantly immediately becomes viable.
2735
+
2736
+ 44:01.960 --> 44:03.480
2737
+ That's a real challenge.
2738
+
2739
+ 44:03.480 --> 44:06.560
2740
+ Do you think it's possible for robots and humans
2741
+
2742
+ 44:06.560 --> 44:08.840
2743
+ to collaborate successfully on tasks?
2744
+
2745
+ 44:08.840 --> 44:13.640
2746
+ So a lot of robotics folks that I talk to and work with,
2747
+
2748
+ 44:13.640 --> 44:18.000
2749
+ I mean, humans just add a giant mess to the picture.
2750
+
2751
+ 44:18.000 --> 44:20.320
2752
+ So it's best to remove them from consideration
2753
+
2754
+ 44:20.320 --> 44:22.400
2755
+ when solving specific tasks.
2756
+
2757
+ 44:22.400 --> 44:23.600
2758
+ It's very difficult to model.
2759
+
2760
+ 44:23.600 --> 44:26.000
2761
+ There's just a source of uncertainty.
2762
+
2763
+ 44:26.000 --> 44:31.000
2764
+ In your work with these agile flying robots,
2765
+
2766
+ 44:32.560 --> 44:35.680
2767
+ do you think there's a role for collaboration with humans?
2768
+
2769
+ 44:35.680 --> 44:38.600
2770
+ Or is it best to model tasks in a way
2771
+
2772
+ 44:38.600 --> 44:43.400
2773
+ that doesn't have a human in the picture?
2774
+
2775
+ 44:43.400 --> 44:46.760
2776
+ Well, I don't think we should ever think about robots
2777
+
2778
+ 44:46.760 --> 44:48.120
2779
+ without human in the picture.
2780
+
2781
+ 44:48.120 --> 44:50.960
2782
+ Ultimately, robots are there because we want them
2783
+
2784
+ 44:50.960 --> 44:54.360
2785
+ to solve problems for humans.
2786
+
2787
+ 44:54.360 --> 44:58.280
2788
+ But there's no general solution to this problem.
2789
+
2790
+ 44:58.280 --> 45:00.000
2791
+ I think if you look at human interaction
2792
+
2793
+ 45:00.000 --> 45:02.400
2794
+ and how humans interact with robots,
2795
+
2796
+ 45:02.400 --> 45:05.280
2797
+ you know, we think of these in sort of three different ways.
2798
+
2799
+ 45:05.280 --> 45:07.600
2800
+ One is the human commanding the robot.
2801
+
2802
+ 45:08.880 --> 45:12.880
2803
+ The second is the human collaborating with the robot.
2804
+
2805
+ 45:12.880 --> 45:15.520
2806
+ So for example, we work on how a robot
2807
+
2808
+ 45:15.520 --> 45:18.720
2809
+ can actually pick up things with a human and carry things.
2810
+
2811
+ 45:18.720 --> 45:20.880
2812
+ That's like true collaboration.
2813
+
2814
+ 45:20.880 --> 45:25.000
2815
+ And third, we think about humans as bystanders,
2816
+
2817
+ 45:25.000 --> 45:27.240
2818
+ self driving cars, what's the human's role
2819
+
2820
+ 45:27.240 --> 45:30.320
2821
+ and how do self driving cars
2822
+
2823
+ 45:30.320 --> 45:32.920
2824
+ acknowledge the presence of humans?
2825
+
2826
+ 45:32.920 --> 45:35.840
2827
+ So I think all of these things are different scenarios.
2828
+
2829
+ 45:35.840 --> 45:38.480
2830
+ It depends on what kind of humans, what kind of task.
2831
+
2832
+ 45:39.640 --> 45:41.840
2833
+ And I think it's very difficult to say
2834
+
2835
+ 45:41.840 --> 45:45.520
2836
+ that there's a general theory that we all have for this.
2837
+
2838
+ 45:45.520 --> 45:48.440
2839
+ But at the same time, it's also silly to say
2840
+
2841
+ 45:48.440 --> 45:52.000
2842
+ that we should think about robots independent of humans.
2843
+
2844
+ 45:52.000 --> 45:55.760
2845
+ So to me, human robot interaction
2846
+
2847
+ 45:55.760 --> 45:59.760
2848
+ is almost a mandatory aspect of everything we do.
2849
+
2850
+ 45:59.760 --> 46:02.440
2851
+ Yes, but to which degree, so your thoughts,
2852
+
2853
+ 46:02.440 --> 46:05.240
2854
+ if we jump to autonomous vehicles, for example,
2855
+
2856
+ 46:05.240 --> 46:08.680
2857
+ there's a big debate between what's called
2858
+
2859
+ 46:08.680 --> 46:10.640
2860
+ level two and level four.
2861
+
2862
+ 46:10.640 --> 46:13.680
2863
+ So semi autonomous and autonomous vehicles.
2864
+
2865
+ 46:13.680 --> 46:16.440
2866
+ And so the Tesla approach currently at least
2867
+
2868
+ 46:16.440 --> 46:18.960
2869
+ has a lot of collaboration between human and machine.
2870
+
2871
+ 46:18.960 --> 46:22.040
2872
+ So the human is supposed to actively supervise
2873
+
2874
+ 46:22.040 --> 46:23.880
2875
+ the operation of the robot.
2876
+
2877
+ 46:23.880 --> 46:28.880
2878
+ Part of the safety definition of how safe a robot is
2879
+
2880
+ 46:29.160 --> 46:32.880
2881
+ in that case is how effective is the human in monitoring it.
2882
+
2883
+ 46:32.880 --> 46:37.880
2884
+ Do you think that's ultimately not a good approach
2885
+
2886
+ 46:37.880 --> 46:42.360
2887
+ in sort of having a human in the picture,
2888
+
2889
+ 46:42.360 --> 46:47.360
2890
+ not as a bystander or part of the infrastructure,
2891
+
2892
+ 46:47.400 --> 46:50.000
2893
+ but really as part of what's required
2894
+
2895
+ 46:50.000 --> 46:51.560
2896
+ to make the system safe?
2897
+
2898
+ 46:51.560 --> 46:53.720
2899
+ This is harder than it sounds.
2900
+
2901
+ 46:53.720 --> 46:58.200
2902
+ I think, you know, if you, I mean,
2903
+
2904
+ 46:58.200 --> 47:01.360
2905
+ I'm sure you've driven before in highways and so on.
2906
+
2907
+ 47:01.360 --> 47:06.120
2908
+ It's really very hard to have to relinquish control
2909
+
2910
+ 47:06.120 --> 47:10.440
2911
+ to a machine and then take over when needed.
2912
+
2913
+ 47:10.440 --> 47:12.280
2914
+ So I think Tesla's approach is interesting
2915
+
2916
+ 47:12.280 --> 47:14.800
2917
+ because it allows you to periodically establish
2918
+
2919
+ 47:14.800 --> 47:18.520
2920
+ some kind of contact with the car.
2921
+
2922
+ 47:18.520 --> 47:20.640
2923
+ Toyota, on the other hand, is thinking about
2924
+
2925
+ 47:20.640 --> 47:24.800
2926
+ shared autonomy or collaborative autonomy as a paradigm.
2927
+
2928
+ 47:24.800 --> 47:27.480
2929
+ If I may argue, these are very, very simple ways
2930
+
2931
+ 47:27.480 --> 47:29.680
2932
+ of human robot collaboration,
2933
+
2934
+ 47:29.680 --> 47:31.880
2935
+ because the task is pretty boring.
2936
+
2937
+ 47:31.880 --> 47:35.000
2938
+ You sit in a vehicle, you go from point A to point B.
2939
+
2940
+ 47:35.000 --> 47:37.360
2941
+ I think the more interesting thing to me is,
2942
+
2943
+ 47:37.360 --> 47:38.760
2944
+ for example, search and rescue.
2945
+
2946
+ 47:38.760 --> 47:41.980
2947
+ I've got human first responders, robot first responders.
2948
+
2949
+ 47:43.160 --> 47:45.120
2950
+ I gotta do something.
2951
+
2952
+ 47:45.120 --> 47:46.000
2953
+ It's important.
2954
+
2955
+ 47:46.000 --> 47:47.800
2956
+ I have to do it in two minutes.
2957
+
2958
+ 47:47.800 --> 47:49.240
2959
+ The building is burning.
2960
+
2961
+ 47:49.240 --> 47:50.440
2962
+ There's been an explosion.
2963
+
2964
+ 47:50.440 --> 47:51.360
2965
+ It's collapsed.
2966
+
2967
+ 47:51.360 --> 47:52.800
2968
+ How do I do it?
2969
+
2970
+ 47:52.800 --> 47:54.740
2971
+ I think to me, those are the interesting things
2972
+
2973
+ 47:54.740 --> 47:57.160
2974
+ where it's very, very unstructured.
2975
+
2976
+ 47:57.160 --> 47:58.480
2977
+ And what's the role of the human?
2978
+
2979
+ 47:58.480 --> 48:00.200
2980
+ What's the role of the robot?
2981
+
2982
+ 48:00.200 --> 48:02.440
2983
+ Clearly, there's lots of interesting challenges
2984
+
2985
+ 48:02.440 --> 48:03.440
2986
+ and as a field,
2987
+
2988
+ 48:03.440 --> 48:05.760
2989
+ I think we're gonna make a lot of progress in this area.
2990
+
2991
+ 48:05.760 --> 48:07.600
2992
+ Yeah, it's an exciting form of collaboration.
2993
+
2994
+ 48:07.600 --> 48:08.440
2995
+ You're right.
2996
+
2997
+ 48:08.440 --> 48:11.120
2998
+ In autonomous driving, the main enemy
2999
+
3000
+ 48:11.120 --> 48:13.120
3001
+ is just boredom of the human.
3002
+
3003
+ 48:13.120 --> 48:13.960
3004
+ Yes.
3005
+
3006
+ 48:13.960 --> 48:15.680
3007
+ As opposed to in rescue operations,
3008
+
3009
+ 48:15.680 --> 48:18.360
3010
+ it's literally life and death.
3011
+
3012
+ 48:18.360 --> 48:22.080
3013
+ And the collaboration enables
3014
+
3015
+ 48:22.080 --> 48:23.820
3016
+ the effective completion of the mission.
3017
+
3018
+ 48:23.820 --> 48:24.760
3019
+ So it's exciting.
3020
+
3021
+ 48:24.760 --> 48:27.400
3022
+ In some sense, we're also doing this.
3023
+
3024
+ 48:27.400 --> 48:30.520
3025
+ You think about the human driving a car
3026
+
3027
+ 48:30.520 --> 48:33.800
3028
+ and almost invariably, the human's trying
3029
+
3030
+ 48:33.800 --> 48:35.000
3031
+ to estimate the state of the car,
3032
+
3033
+ 48:35.000 --> 48:37.280
3034
+ they estimate the state of the environment and so on.
3035
+
3036
+ 48:37.280 --> 48:40.120
3037
+ But what if the car were to estimate the state of the human?
3038
+
3039
+ 48:40.120 --> 48:41.960
3040
+ So for example, I'm sure you have a smartphone
3041
+
3042
+ 48:41.960 --> 48:44.580
3043
+ and the smartphone tries to figure out what you're doing
3044
+
3045
+ 48:44.580 --> 48:48.320
3046
+ and send you reminders and oftentimes telling you
3047
+
3048
+ 48:48.320 --> 48:49.540
3049
+ to drive to a certain place,
3050
+
3051
+ 48:49.540 --> 48:51.400
3052
+ although you have no intention of going there
3053
+
3054
+ 48:51.400 --> 48:53.880
3055
+ because it thinks that that's where you should be
3056
+
3057
+ 48:53.880 --> 48:56.240
3058
+ because of some Gmail calendar entry
3059
+
3060
+ 48:57.520 --> 48:58.960
3061
+ or something like that.
3062
+
3063
+ 48:58.960 --> 49:01.600
3064
+ And it's trying to constantly figure out who you are,
3065
+
3066
+ 49:01.600 --> 49:02.740
3067
+ what you're doing.
3068
+
3069
+ 49:02.740 --> 49:04.200
3070
+ If a car were to do that,
3071
+
3072
+ 49:04.200 --> 49:06.840
3073
+ maybe that would make the driver safer
3074
+
3075
+ 49:06.840 --> 49:08.160
3076
+ because the car is trying to figure out
3077
+
3078
+ 49:08.160 --> 49:09.760
3079
+ is the driver paying attention,
3080
+
3081
+ 49:09.760 --> 49:11.600
3082
+ looking at his or her eyes,
3083
+
3084
+ 49:12.480 --> 49:14.400
3085
+ looking at saccadic movements.
3086
+
3087
+ 49:14.400 --> 49:16.480
3088
+ So I think the potential is there,
3089
+
3090
+ 49:16.480 --> 49:18.600
3091
+ but from the reverse side,
3092
+
3093
+ 49:18.600 --> 49:21.640
3094
+ it's not robot modeling, but it's human modeling.
3095
+
3096
+ 49:21.640 --> 49:22.880
3097
+ It's more on the human, right.
3098
+
3099
+ 49:22.880 --> 49:25.320
3100
+ And I think the robots can do a very good job
3101
+
3102
+ 49:25.320 --> 49:29.120
3103
+ of modeling humans if you really think about the framework
3104
+
3105
+ 49:29.120 --> 49:32.640
3106
+ that you have a human sitting in a cockpit,
3107
+
3108
+ 49:32.640 --> 49:35.820
3109
+ surrounded by sensors, all staring at him,
3110
+
3111
+ 49:35.820 --> 49:37.860
3112
+ in addition to be staring outside,
3113
+
3114
+ 49:37.860 --> 49:39.160
3115
+ but also staring at him.
3116
+
3117
+ 49:39.160 --> 49:40.960
3118
+ I think there's a real synergy there.
3119
+
3120
+ 49:40.960 --> 49:42.360
3121
+ Yeah, I love that problem
3122
+
3123
+ 49:42.360 --> 49:45.560
3124
+ because it's the new 21st century form of psychology,
3125
+
3126
+ 49:45.560 --> 49:48.520
3127
+ actually AI enabled psychology.
3128
+
3129
+ 49:48.520 --> 49:51.280
3130
+ A lot of people have sci fi inspired fears
3131
+
3132
+ 49:51.280 --> 49:54.080
3133
+ of walking robots like those from Boston Dynamics.
3134
+
3135
+ 49:54.080 --> 49:56.480
3136
+ If you just look at shows on Netflix and so on,
3137
+
3138
+ 49:56.480 --> 49:59.040
3139
+ or flying robots like those you work with,
3140
+
3141
+ 49:59.920 --> 50:03.160
3142
+ how would you, how do you think about those fears?
3143
+
3144
+ 50:03.160 --> 50:05.040
3145
+ How would you alleviate those fears?
3146
+
3147
+ 50:05.040 --> 50:09.040
3148
+ Do you have inklings, echoes of those same concerns?
3149
+
3150
+ 50:09.040 --> 50:11.760
3151
+ You know, anytime we develop a technology
3152
+
3153
+ 50:11.760 --> 50:14.160
3154
+ meaning to have positive impact in the world,
3155
+
3156
+ 50:14.160 --> 50:15.780
3157
+ there's always the worry that,
3158
+
3159
+ 50:17.440 --> 50:21.000
3160
+ you know, somebody could subvert those technologies
3161
+
3162
+ 50:21.000 --> 50:23.280
3163
+ and use it in an adversarial setting.
3164
+
3165
+ 50:23.280 --> 50:25.280
3166
+ And robotics is no exception, right?
3167
+
3168
+ 50:25.280 --> 50:29.280
3169
+ So I think it's very easy to weaponize robots.
3170
+
3171
+ 50:29.280 --> 50:30.880
3172
+ I think we talk about swarms.
3173
+
3174
+ 50:31.720 --> 50:33.960
3175
+ One thing I worry a lot about is,
3176
+
3177
+ 50:33.960 --> 50:35.880
3178
+ so, you know, for us to get swarms to work
3179
+
3180
+ 50:35.880 --> 50:38.280
3181
+ and do something reliably, it's really hard.
3182
+
3183
+ 50:38.280 --> 50:42.040
3184
+ But suppose I have this challenge
3185
+
3186
+ 50:42.040 --> 50:44.360
3187
+ of trying to destroy something,
3188
+
3189
+ 50:44.360 --> 50:45.720
3190
+ and I have a swarm of robots,
3191
+
3192
+ 50:45.720 --> 50:47.280
3193
+ where only one out of the swarm
3194
+
3195
+ 50:47.280 --> 50:48.920
3196
+ needs to get to its destination.
3197
+
3198
+ 50:48.920 --> 50:52.640
3199
+ So that suddenly becomes a lot more doable.
3200
+
3201
+ 50:52.640 --> 50:54.720
3202
+ And so I worry about, you know,
3203
+
3204
+ 50:54.720 --> 50:56.920
3205
+ this general idea of using autonomy
3206
+
3207
+ 50:56.920 --> 50:58.600
3208
+ with lots and lots of agents.
3209
+
3210
+ 51:00.040 --> 51:01.320
3211
+ I mean, having said that, look,
3212
+
3213
+ 51:01.320 --> 51:03.760
3214
+ a lot of this technology is not very mature.
3215
+
3216
+ 51:03.760 --> 51:05.520
3217
+ My favorite saying is that
3218
+
3219
+ 51:06.560 --> 51:10.520
3220
+ if somebody had to develop this technology,
3221
+
3222
+ 51:10.520 --> 51:12.320
3223
+ wouldn't you rather the good guys do it?
3224
+
3225
+ 51:12.320 --> 51:13.880
3226
+ So the good guys have a good understanding
3227
+
3228
+ 51:13.880 --> 51:15.560
3229
+ of the technology, so they can figure out
3230
+
3231
+ 51:15.560 --> 51:18.320
3232
+ how this technology is being used in a bad way,
3233
+
3234
+ 51:18.320 --> 51:21.360
3235
+ or could be used in a bad way and try to defend against it.
3236
+
3237
+ 51:21.360 --> 51:22.760
3238
+ So we think a lot about that.
3239
+
3240
+ 51:22.760 --> 51:25.400
3241
+ So we have, we're doing research
3242
+
3243
+ 51:25.400 --> 51:28.240
3244
+ on how to defend against swarms, for example.
3245
+
3246
+ 51:28.240 --> 51:29.600
3247
+ That's interesting.
3248
+
3249
+ 51:29.600 --> 51:32.960
3250
+ There's in fact a report by the National Academies
3251
+
3252
+ 51:32.960 --> 51:35.520
3253
+ on counter UAS technologies.
3254
+
3255
+ 51:36.680 --> 51:38.200
3256
+ This is a real threat,
3257
+
3258
+ 51:38.200 --> 51:40.320
3259
+ but we're also thinking about how to defend against this
3260
+
3261
+ 51:40.320 --> 51:42.920
3262
+ and knowing how swarms work.
3263
+
3264
+ 51:42.920 --> 51:47.160
3265
+ Knowing how autonomy works is, I think, very important.
3266
+
3267
+ 51:47.160 --> 51:49.280
3268
+ So it's not just politicians?
3269
+
3270
+ 51:49.280 --> 51:51.640
3271
+ Do you think engineers have a role in this discussion?
3272
+
3273
+ 51:51.640 --> 51:52.480
3274
+ Absolutely.
3275
+
3276
+ 51:52.480 --> 51:55.280
3277
+ I think the days where politicians
3278
+
3279
+ 51:55.280 --> 51:57.680
3280
+ can be agnostic to technology are gone.
3281
+
3282
+ 51:59.200 --> 52:02.640
3283
+ I think every politician needs to be
3284
+
3285
+ 52:03.840 --> 52:05.680
3286
+ literate in technology.
3287
+
3288
+ 52:05.680 --> 52:08.640
3289
+ And I often say technology is the new liberal art.
3290
+
3291
+ 52:09.800 --> 52:12.920
3292
+ Understanding how technology will change your life,
3293
+
3294
+ 52:12.920 --> 52:14.480
3295
+ I think is important.
3296
+
3297
+ 52:14.480 --> 52:18.080
3298
+ And every human being needs to understand that.
3299
+
3300
+ 52:18.080 --> 52:20.160
3301
+ And maybe we can elect some engineers
3302
+
3303
+ 52:20.160 --> 52:22.720
3304
+ to office as well on the other side.
3305
+
3306
+ 52:22.720 --> 52:24.840
3307
+ What are the biggest open problems in robotics?
3308
+
3309
+ 52:24.840 --> 52:27.760
3310
+ And you said we're in the early days in some sense.
3311
+
3312
+ 52:27.760 --> 52:31.040
3313
+ What are the problems we would like to solve in robotics?
3314
+
3315
+ 52:31.040 --> 52:32.520
3316
+ I think there are lots of problems, right?
3317
+
3318
+ 52:32.520 --> 52:36.440
3319
+ But I would phrase it in the following way.
3320
+
3321
+ 52:36.440 --> 52:39.520
3322
+ If you look at the robots we're building,
3323
+
3324
+ 52:39.520 --> 52:43.160
3325
+ they're still very much tailored towards
3326
+
3327
+ 52:43.160 --> 52:46.520
3328
+ doing specific tasks and specific settings.
3329
+
3330
+ 52:46.520 --> 52:49.480
3331
+ I think the question of how do you get them to operate
3332
+
3333
+ 52:49.480 --> 52:51.080
3334
+ in much broader settings
3335
+
3336
+ 52:53.560 --> 52:58.040
3337
+ where things can change in unstructured environments
3338
+
3339
+ 52:58.040 --> 52:59.160
3340
+ is up in the air.
3341
+
3342
+ 52:59.160 --> 53:01.200
3343
+ So think of self driving cars.
3344
+
3345
+ 53:02.920 --> 53:05.680
3346
+ Today, we can build a self driving car in a parking lot.
3347
+
3348
+ 53:05.680 --> 53:09.000
3349
+ We can do level five autonomy in a parking lot.
3350
+
3351
+ 53:10.040 --> 53:13.240
3352
+ But can you do a level five autonomy
3353
+
3354
+ 53:13.240 --> 53:16.840
3355
+ in the streets of Napoli in Italy or Mumbai in India?
3356
+
3357
+ 53:16.840 --> 53:17.760
3358
+ No.
3359
+
3360
+ 53:17.760 --> 53:22.400
3361
+ So in some sense, when we think about robotics,
3362
+
3363
+ 53:22.400 --> 53:25.120
3364
+ we have to think about where they're functioning,
3365
+
3366
+ 53:25.120 --> 53:27.760
3367
+ what kind of environment, what kind of a task.
3368
+
3369
+ 53:27.760 --> 53:29.800
3370
+ We have no understanding
3371
+
3372
+ 53:29.800 --> 53:32.800
3373
+ of how to put both those things together.
3374
+
3375
+ 53:32.800 --> 53:34.000
3376
+ So we're in the very early days
3377
+
3378
+ 53:34.000 --> 53:35.920
3379
+ of applying it to the physical world.
3380
+
3381
+ 53:35.920 --> 53:38.800
3382
+ And I was just in Naples actually.
3383
+
3384
+ 53:38.800 --> 53:42.200
3385
+ And there's levels of difficulty and complexity
3386
+
3387
+ 53:42.200 --> 53:45.880
3388
+ depending on which area you're applying it to.
3389
+
3390
+ 53:45.880 --> 53:46.720
3391
+ I think so.
3392
+
3393
+ 53:46.720 --> 53:49.320
3394
+ And we don't have a systematic way of understanding that.
3395
+
3396
+ 53:51.040 --> 53:53.800
3397
+ Everybody says, just because a computer
3398
+
3399
+ 53:53.800 --> 53:56.520
3400
+ can now beat a human at any board game,
3401
+
3402
+ 53:56.520 --> 53:59.920
3403
+ we certainly know something about intelligence.
3404
+
3405
+ 53:59.920 --> 54:01.360
3406
+ That's not true.
3407
+
3408
+ 54:01.360 --> 54:04.400
3409
+ A computer board game is very, very structured.
3410
+
3411
+ 54:04.400 --> 54:08.480
3412
+ It is the equivalent of working in a Henry Ford factory
3413
+
3414
+ 54:08.480 --> 54:11.680
3415
+ where things, parts come, you assemble, move on.
3416
+
3417
+ 54:11.680 --> 54:14.120
3418
+ It's a very, very, very structured setting.
3419
+
3420
+ 54:14.120 --> 54:15.680
3421
+ That's the easiest thing.
3422
+
3423
+ 54:15.680 --> 54:17.040
3424
+ And we know how to do that.
3425
+
3426
+ 54:18.400 --> 54:20.400
3427
+ So you've done a lot of incredible work
3428
+
3429
+ 54:20.400 --> 54:23.720
3430
+ at the UPenn, University of Pennsylvania, GraspLab.
3431
+
3432
+ 54:23.720 --> 54:26.560
3433
+ You're now Dean of Engineering at UPenn.
3434
+
3435
+ 54:26.560 --> 54:31.320
3436
+ What advice do you have for a new bright eyed undergrad
3437
+
3438
+ 54:31.320 --> 54:34.640
3439
+ interested in robotics or AI or engineering?
3440
+
3441
+ 54:34.640 --> 54:36.560
3442
+ Well, I think there's really three things.
3443
+
3444
+ 54:36.560 --> 54:40.600
3445
+ One is you have to get used to the idea
3446
+
3447
+ 54:40.600 --> 54:42.840
3448
+ that the world will not be the same in five years
3449
+
3450
+ 54:42.840 --> 54:45.160
3451
+ or four years whenever you graduate, right?
3452
+
3453
+ 54:45.160 --> 54:46.120
3454
+ Which is really hard to do.
3455
+
3456
+ 54:46.120 --> 54:48.960
3457
+ So this thing about predicting the future,
3458
+
3459
+ 54:48.960 --> 54:50.520
3460
+ every one of us needs to be trying
3461
+
3462
+ 54:50.520 --> 54:52.360
3463
+ to predict the future always.
3464
+
3465
+ 54:53.280 --> 54:54.960
3466
+ Not because you'll be any good at it,
3467
+
3468
+ 54:54.960 --> 54:56.440
3469
+ but by thinking about it,
3470
+
3471
+ 54:56.440 --> 55:00.880
3472
+ I think you sharpen your senses and you become smarter.
3473
+
3474
+ 55:00.880 --> 55:02.080
3475
+ So that's number one.
3476
+
3477
+ 55:02.080 --> 55:05.760
3478
+ Number two, it's a corollary of the first piece,
3479
+
3480
+ 55:05.760 --> 55:09.360
3481
+ which is you really don't know what's gonna be important.
3482
+
3483
+ 55:09.360 --> 55:12.080
3484
+ So this idea that I'm gonna specialize in something
3485
+
3486
+ 55:12.080 --> 55:15.320
3487
+ which will allow me to go in a particular direction,
3488
+
3489
+ 55:15.320 --> 55:16.480
3490
+ it may be interesting,
3491
+
3492
+ 55:16.480 --> 55:18.480
3493
+ but it's important also to have this breadth
3494
+
3495
+ 55:18.480 --> 55:20.360
3496
+ so you have this jumping off point.
3497
+
3498
+ 55:22.000 --> 55:23.000
3499
+ I think the third thing,
3500
+
3501
+ 55:23.000 --> 55:25.360
3502
+ and this is where I think Penn excels.
3503
+
3504
+ 55:25.360 --> 55:27.240
3505
+ I mean, we teach engineering,
3506
+
3507
+ 55:27.240 --> 55:29.960
3508
+ but it's always in the context of the liberal arts.
3509
+
3510
+ 55:29.960 --> 55:32.360
3511
+ It's always in the context of society.
3512
+
3513
+ 55:32.360 --> 55:35.840
3514
+ As engineers, we cannot afford to lose sight of that.
3515
+
3516
+ 55:35.840 --> 55:37.640
3517
+ So I think that's important.
3518
+
3519
+ 55:37.640 --> 55:39.960
3520
+ But I think one thing that people underestimate
3521
+
3522
+ 55:39.960 --> 55:40.920
3523
+ when they do robotics
3524
+
3525
+ 55:40.920 --> 55:43.440
3526
+ is the importance of mathematical foundations,
3527
+
3528
+ 55:43.440 --> 55:46.880
3529
+ the importance of representations.
3530
+
3531
+ 55:47.720 --> 55:50.040
3532
+ Not everything can just be solved
3533
+
3534
+ 55:50.040 --> 55:52.440
3535
+ by looking for ROS packages on the internet
3536
+
3537
+ 55:52.440 --> 55:56.280
3538
+ or to find a deep neural network that works.
3539
+
3540
+ 55:56.280 --> 55:59.080
3541
+ I think the representation question is key,
3542
+
3543
+ 55:59.080 --> 56:00.400
3544
+ even to machine learning,
3545
+
3546
+ 56:00.400 --> 56:05.400
3547
+ where if you ever hope to achieve or get to explainable AI,
3548
+
3549
+ 56:05.400 --> 56:07.760
3550
+ somehow there need to be representations
3551
+
3552
+ 56:07.760 --> 56:09.080
3553
+ that you can understand.
3554
+
3555
+ 56:09.080 --> 56:11.120
3556
+ So if you wanna do robotics,
3557
+
3558
+ 56:11.120 --> 56:12.680
3559
+ you should also do mathematics.
3560
+
3561
+ 56:12.680 --> 56:15.080
3562
+ And you said liberal arts, a little literature.
3563
+
3564
+ 56:16.160 --> 56:17.200
3565
+ If you wanna build a robot,
3566
+
3567
+ 56:17.200 --> 56:19.320
3568
+ it should be reading Dostoyevsky.
3569
+
3570
+ 56:19.320 --> 56:20.360
3571
+ I agree with that.
3572
+
3573
+ 56:20.360 --> 56:21.200
3574
+ Very good.
3575
+
3576
+ 56:21.200 --> 56:23.560
3577
+ So Vijay, thank you so much for talking today.
3578
+
3579
+ 56:23.560 --> 56:24.400
3580
+ It was an honor.
3581
+
3582
+ 56:24.400 --> 56:25.240
3583
+ Thank you.
3584
+
3585
+ 56:25.240 --> 56:26.200
3586
+ It was just a very exciting conversation.
3587
+
3588
+ 56:26.200 --> 56:46.200
3589
+ Thank you.
3590
+
vtt/episode_037_small.vtt ADDED
@@ -0,0 +1,3143 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:03.160
4
+ The following is a conversation with Vijay Kumar.
5
+
6
+ 00:03.160 --> 00:05.800
7
+ He's one of the top roboticists in the world,
8
+
9
+ 00:05.800 --> 00:08.760
10
+ a professor at the University of Pennsylvania,
11
+
12
+ 00:08.760 --> 00:10.680
13
+ a Dean of Penn Engineering,
14
+
15
+ 00:10.680 --> 00:12.880
16
+ former director of Grasp Lab,
17
+
18
+ 00:12.880 --> 00:15.320
19
+ or the General Robotics Automation Sensing
20
+
21
+ 00:15.320 --> 00:17.560
22
+ and Perception Laboratory at Penn,
23
+
24
+ 00:17.560 --> 00:22.560
25
+ that was established back in 1979, that's 40 years ago.
26
+
27
+ 00:22.600 --> 00:24.720
28
+ Vijay is perhaps best known
29
+
30
+ 00:24.720 --> 00:28.520
31
+ for his work in multi robot systems, robot swarms,
32
+
33
+ 00:28.520 --> 00:30.880
34
+ and micro aerial vehicles.
35
+
36
+ 00:30.880 --> 00:34.040
37
+ Robots that elegantly cooperate in flight
38
+
39
+ 00:34.040 --> 00:36.200
40
+ under all the uncertainty and challenges
41
+
42
+ 00:36.200 --> 00:38.760
43
+ that the real world conditions present.
44
+
45
+ 00:38.760 --> 00:41.960
46
+ This is the Artificial Intelligence Podcast.
47
+
48
+ 00:41.960 --> 00:44.320
49
+ If you enjoy it, subscribe on YouTube,
50
+
51
+ 00:44.320 --> 00:46.080
52
+ give it five stars on iTunes,
53
+
54
+ 00:46.080 --> 00:47.560
55
+ support it on Patreon,
56
+
57
+ 00:47.560 --> 00:49.480
58
+ or simply connect with me on Twitter
59
+
60
+ 00:49.480 --> 00:53.280
61
+ at Lex Fridman spelled F R I D M A N.
62
+
63
+ 00:53.280 --> 00:57.560
64
+ And now here's my conversation with Vijay Kumar.
65
+
66
+ 00:58.680 --> 01:01.080
67
+ What is the first robot you've ever built
68
+
69
+ 01:01.080 --> 01:02.840
70
+ or were a part of building?
71
+
72
+ 01:02.840 --> 01:04.760
73
+ Way back when I was in graduate school,
74
+
75
+ 01:04.760 --> 01:06.760
76
+ I was part of a fairly big project
77
+
78
+ 01:06.760 --> 01:11.760
79
+ that involved building a very large hexapod.
80
+
81
+ 01:12.000 --> 01:17.000
82
+ This weighed close to 7,000 pounds.
83
+
84
+ 01:17.520 --> 01:21.600
85
+ And it was powered by hydraulic actuation,
86
+
87
+ 01:21.600 --> 01:26.600
88
+ or was actuated by hydraulics with 18 motors,
89
+
90
+ 01:27.760 --> 01:32.760
91
+ hydraulic motors, each controlled by an Intel 8085 processor
92
+
93
+ 01:34.200 --> 01:36.680
94
+ and an 8086 co processor.
95
+
96
+ 01:38.160 --> 01:43.160
97
+ And so imagine this huge monster that had 18 joints,
98
+
99
+ 01:44.840 --> 01:47.000
100
+ each controlled by an independent computer.
101
+
102
+ 01:47.000 --> 01:48.560
103
+ And there was a 19th computer
104
+
105
+ 01:48.560 --> 01:50.160
106
+ that actually did the coordination
107
+
108
+ 01:50.160 --> 01:52.360
109
+ between these 18 joints.
110
+
111
+ 01:52.360 --> 01:53.760
112
+ So as part of this project,
113
+
114
+ 01:53.760 --> 01:57.960
115
+ and my thesis work was,
116
+
117
+ 01:57.960 --> 02:01.080
118
+ how do you coordinate the 18 legs?
119
+
120
+ 02:02.120 --> 02:06.360
121
+ And in particular, the pressures in the hydraulic cylinders
122
+
123
+ 02:06.360 --> 02:09.240
124
+ to get efficient locomotion.
125
+
126
+ 02:09.240 --> 02:11.680
127
+ It sounds like a giant mess.
128
+
129
+ 02:11.680 --> 02:14.480
130
+ So how difficult is it to make all the motors communicate?
131
+
132
+ 02:14.480 --> 02:16.880
133
+ Presumably you have to send signals
134
+
135
+ 02:16.880 --> 02:18.720
136
+ hundreds of times a second, or at least...
137
+
138
+ 02:18.720 --> 02:22.800
139
+ This was not my work, but the folks who worked on this
140
+
141
+ 02:22.800 --> 02:24.200
142
+ wrote what I believe to be
143
+
144
+ 02:24.200 --> 02:26.640
145
+ the first multiprocessor operating system.
146
+
147
+ 02:26.640 --> 02:27.960
148
+ This was in the 80s.
149
+
150
+ 02:29.080 --> 02:32.240
151
+ And you had to make sure that obviously messages
152
+
153
+ 02:32.240 --> 02:34.640
154
+ got across from one joint to another.
155
+
156
+ 02:34.640 --> 02:37.960
157
+ You have to remember the clock speeds on those computers
158
+
159
+ 02:37.960 --> 02:39.640
160
+ were about half a megahertz.
161
+
162
+ 02:39.640 --> 02:40.480
163
+ Right.
164
+
165
+ 02:40.480 --> 02:42.200
166
+ So the 80s.
167
+
168
+ 02:42.200 --> 02:45.320
169
+ So not to romanticize the notion,
170
+
171
+ 02:45.320 --> 02:47.960
172
+ but how did it make you feel to make,
173
+
174
+ 02:47.960 --> 02:49.680
175
+ to see that robot move?
176
+
177
+ 02:51.040 --> 02:52.240
178
+ It was amazing.
179
+
180
+ 02:52.240 --> 02:54.160
181
+ In hindsight, it looks like, well,
182
+
183
+ 02:54.160 --> 02:57.240
184
+ we built the thing which really should have been much smaller.
185
+
186
+ 02:57.240 --> 02:59.080
187
+ And of course, today's robots are much smaller.
188
+
189
+ 02:59.080 --> 03:02.200
190
+ You look at, you know, Boston Dynamics,
191
+
192
+ 03:02.200 --> 03:04.720
193
+ or Ghost Robotics, a spin off from Penn.
194
+
195
+ 03:06.000 --> 03:08.640
196
+ But back then, you were stuck
197
+
198
+ 03:08.640 --> 03:11.120
199
+ with the substrate you had, the compute you had,
200
+
201
+ 03:11.120 --> 03:13.640
202
+ so things were unnecessarily big.
203
+
204
+ 03:13.640 --> 03:18.000
205
+ But at the same time, and this is just human psychology,
206
+
207
+ 03:18.000 --> 03:20.380
208
+ somehow bigger means grander.
209
+
210
+ 03:21.280 --> 03:23.600
211
+ You know, people never have the same appreciation
212
+
213
+ 03:23.600 --> 03:26.320
214
+ for nanotechnology or nano devices
215
+
216
+ 03:26.320 --> 03:30.120
217
+ as they do for the space shuttle or the Boeing 747.
218
+
219
+ 03:30.120 --> 03:32.720
220
+ Yeah, you've actually done quite a good job
221
+
222
+ 03:32.720 --> 03:35.960
223
+ at illustrating that small is beautiful
224
+
225
+ 03:35.960 --> 03:37.680
226
+ in terms of robotics.
227
+
228
+ 03:37.680 --> 03:42.520
229
+ So what, on that topic, is the most beautiful
230
+
231
+ 03:42.520 --> 03:46.200
232
+ or elegant robot motion that you've ever seen?
233
+
234
+ 03:46.200 --> 03:47.880
235
+ Not to pick favorites or whatever,
236
+
237
+ 03:47.880 --> 03:51.000
238
+ but something that just inspires you that you remember.
239
+
240
+ 03:51.000 --> 03:54.000
241
+ Well, I think the thing that I'm most proud of
242
+
243
+ 03:54.000 --> 03:57.200
244
+ that my students have done is really think about
245
+
246
+ 03:57.200 --> 04:00.360
247
+ small UAVs that can maneuver in constrained spaces
248
+
249
+ 04:00.360 --> 04:03.640
250
+ and in particular, their ability to coordinate
251
+
252
+ 04:03.640 --> 04:06.760
253
+ with each other and form three dimensional patterns.
254
+
255
+ 04:06.760 --> 04:08.920
256
+ So once you can do that,
257
+
258
+ 04:08.920 --> 04:13.920
259
+ you can essentially create 3D objects in the sky
260
+
261
+ 04:17.000 --> 04:19.800
262
+ and you can deform these objects on the fly.
263
+
264
+ 04:19.800 --> 04:23.520
265
+ So in some sense, your toolbox of what you can create
266
+
267
+ 04:23.520 --> 04:25.300
268
+ has suddenly got enhanced.
269
+
270
+ 04:27.400 --> 04:29.920
271
+ And before that, we did the two dimensional version of this.
272
+
273
+ 04:29.920 --> 04:33.760
274
+ So we had ground robots forming patterns and so on.
275
+
276
+ 04:33.760 --> 04:37.080
277
+ So that was not as impressive, that was not as beautiful.
278
+
279
+ 04:37.080 --> 04:40.480
280
+ But if you do it in 3D, suspend it in midair
281
+
282
+ 04:40.480 --> 04:43.640
283
+ and you've got to go back to 2011 when we did this.
284
+
285
+ 04:43.640 --> 04:45.960
286
+ Now it's actually pretty standard to do these things
287
+
288
+ 04:45.960 --> 04:49.800
289
+ eight years later, but back then it was a big accomplishment.
290
+
291
+ 04:49.800 --> 04:52.440
292
+ So the distributed cooperation
293
+
294
+ 04:52.440 --> 04:55.640
295
+ is where beauty emerges in your eyes?
296
+
297
+ 04:55.640 --> 04:57.960
298
+ Well, I think beauty to an engineer is very different
299
+
300
+ 04:57.960 --> 05:01.520
301
+ from beauty to someone who's looking at robots
302
+
303
+ 05:01.520 --> 05:03.400
304
+ from the outside, if you will.
305
+
306
+ 05:03.400 --> 05:07.920
307
+ But what I meant there, so before we said that grand
308
+
309
+ 05:07.920 --> 05:10.480
310
+ is associated with size.
311
+
312
+ 05:10.480 --> 05:13.640
313
+ And another way of thinking about this
314
+
315
+ 05:13.640 --> 05:16.480
316
+ is just the physical shape and the idea
317
+
318
+ 05:16.480 --> 05:18.320
319
+ that you can create physical shapes in midair
320
+
321
+ 05:18.320 --> 05:21.520
322
+ and have them deform, that's beautiful.
323
+
324
+ 05:21.520 --> 05:23.000
325
+ But the individual components,
326
+
327
+ 05:23.000 --> 05:24.840
328
+ the agility is beautiful too, right?
329
+
330
+ 05:24.840 --> 05:25.680
331
+ That is true too.
332
+
333
+ 05:25.680 --> 05:28.400
334
+ So then how quickly can you actually manipulate
335
+
336
+ 05:28.400 --> 05:29.560
337
+ these three dimensional shapes
338
+
339
+ 05:29.560 --> 05:31.200
340
+ and the individual components?
341
+
342
+ 05:31.200 --> 05:32.200
343
+ Yes, you're right.
344
+
345
+ 05:32.200 --> 05:36.760
346
+ Oh, by the way, said UAV, unmanned aerial vehicle.
347
+
348
+ 05:36.760 --> 05:41.760
349
+ What's a good term for drones, UAVs, quadcopters?
350
+
351
+ 05:41.840 --> 05:44.520
352
+ Is there a term that's being standardized?
353
+
354
+ 05:44.520 --> 05:45.440
355
+ I don't know if there is.
356
+
357
+ 05:45.440 --> 05:47.880
358
+ Everybody wants to use the word drones.
359
+
360
+ 05:47.880 --> 05:49.760
361
+ And I've often said that drones to me
362
+
363
+ 05:49.760 --> 05:51.000
364
+ is a pejorative word.
365
+
366
+ 05:51.000 --> 05:53.960
367
+ It signifies something that's dumb,
368
+
369
+ 05:53.960 --> 05:56.320
370
+ a preprogrammed thing that does one little thing
371
+
372
+ 05:56.320 --> 05:58.600
373
+ and robots are anything but drones.
374
+
375
+ 05:58.600 --> 06:00.680
376
+ So I actually don't like that word,
377
+
378
+ 06:00.680 --> 06:02.960
379
+ but that's what everybody uses.
380
+
381
+ 06:02.960 --> 06:04.880
382
+ You could call it unpiloted.
383
+
384
+ 06:04.880 --> 06:05.800
385
+ Unpiloted.
386
+
387
+ 06:05.800 --> 06:08.080
388
+ But even unpiloted could be radio controlled,
389
+
390
+ 06:08.080 --> 06:10.480
391
+ could be remotely controlled in many different ways.
392
+
393
+ 06:11.560 --> 06:12.680
394
+ And I think the right word
395
+
396
+ 06:12.680 --> 06:15.040
397
+ is thinking about it as an aerial robot.
398
+
399
+ 06:15.040 --> 06:19.080
400
+ You also say agile, autonomous aerial robot, right?
401
+
402
+ 06:19.080 --> 06:20.600
403
+ Yeah, so agility is an attribute,
404
+
405
+ 06:20.600 --> 06:22.200
406
+ but they don't have to be.
407
+
408
+ 06:23.080 --> 06:24.800
409
+ So what biological system,
410
+
411
+ 06:24.800 --> 06:26.880
412
+ because you've also drawn a lot of inspiration
413
+
414
+ 06:26.880 --> 06:28.640
415
+ from those, I've seen bees and ants
416
+
417
+ 06:28.640 --> 06:32.400
418
+ that you've talked about, what living creatures
419
+
420
+ 06:32.400 --> 06:35.280
421
+ have you found to be most inspiring
422
+
423
+ 06:35.280 --> 06:38.560
424
+ as an engineer, instructive in your work in robotics?
425
+
426
+ 06:38.560 --> 06:43.480
427
+ To me, so ants are really quite incredible creatures, right?
428
+
429
+ 06:43.480 --> 06:47.920
430
+ So you, I mean, the individuals arguably are very simple
431
+
432
+ 06:47.920 --> 06:52.400
433
+ in how they're built, and yet they're incredibly resilient
434
+
435
+ 06:52.400 --> 06:54.000
436
+ as a population.
437
+
438
+ 06:54.000 --> 06:56.800
439
+ And as individuals, they're incredibly robust.
440
+
441
+ 06:56.800 --> 07:01.800
442
+ So, if you take an ant with six legs, you remove one leg,
443
+
444
+ 07:02.080 --> 07:04.160
445
+ it still works just fine.
446
+
447
+ 07:04.160 --> 07:05.800
448
+ And it moves along,
449
+
450
+ 07:05.800 --> 07:08.800
451
+ and I don't know that he even realizes it's lost a leg.
452
+
453
+ 07:09.800 --> 07:13.480
454
+ So that's the robustness at the individual ant level.
455
+
456
+ 07:13.480 --> 07:15.400
457
+ But then you look about this instinct
458
+
459
+ 07:15.400 --> 07:17.760
460
+ for self preservation of the colonies,
461
+
462
+ 07:17.760 --> 07:20.440
463
+ and they adapt in so many amazing ways,
464
+
465
+ 07:20.440 --> 07:25.440
466
+ you know, transcending gaps by just chaining themselves
467
+
468
+ 07:27.680 --> 07:30.400
469
+ together when you have a flood,
470
+
471
+ 07:30.400 --> 07:33.160
472
+ being able to recruit other teammates
473
+
474
+ 07:33.160 --> 07:35.200
475
+ to carry big morsels of food,
476
+
477
+ 07:36.560 --> 07:38.240
478
+ and then going out in different directions,
479
+
480
+ 07:38.240 --> 07:39.560
481
+ looking for food,
482
+
483
+ 07:39.560 --> 07:43.040
484
+ and then being able to demonstrate consensus,
485
+
486
+ 07:43.880 --> 07:47.240
487
+ even though they don't communicate directly
488
+
489
+ 07:47.240 --> 07:50.400
490
+ with each other the way we communicate with each other,
491
+
492
+ 07:50.400 --> 07:53.840
493
+ in some sense, they also know how to do democracy,
494
+
495
+ 07:53.840 --> 07:55.480
496
+ probably better than what we do.
497
+
498
+ 07:55.480 --> 07:59.000
499
+ Yeah, somehow, even democracy is emergent.
500
+
501
+ 07:59.000 --> 08:00.640
502
+ It seems like all of the phenomena
503
+
504
+ 08:00.640 --> 08:02.480
505
+ that we see is all emergent.
506
+
507
+ 08:02.480 --> 08:05.600
508
+ It seems like there's no centralized communicator.
509
+
510
+ 08:05.600 --> 08:09.840
511
+ There is, so I think a lot is made about that word, emergent,
512
+
513
+ 08:09.840 --> 08:11.560
514
+ and it means lots of things to different people,
515
+
516
+ 08:11.560 --> 08:12.600
517
+ but you're absolutely right.
518
+
519
+ 08:12.600 --> 08:17.600
520
+ I think as an engineer, you think about what element,
521
+
522
+ 08:17.600 --> 08:22.600
523
+ elemental behaviors, what primitives you could synthesize
524
+
525
+ 08:22.720 --> 08:26.640
526
+ so that the whole looks incredibly powerful,
527
+
528
+ 08:26.640 --> 08:27.920
529
+ incredibly synergistic,
530
+
531
+ 08:27.920 --> 08:30.960
532
+ the whole definitely being greater than some of the parts,
533
+
534
+ 08:30.960 --> 08:33.760
535
+ and ants are living proof of that.
536
+
537
+ 08:33.760 --> 08:36.280
538
+ So when you see these beautiful swarms
539
+
540
+ 08:36.280 --> 08:38.800
541
+ where there's biological systems of robots,
542
+
543
+ 08:39.920 --> 08:41.560
544
+ do you sometimes think of them
545
+
546
+ 08:41.560 --> 08:45.920
547
+ as a single individual living intelligent organism?
548
+
549
+ 08:45.920 --> 08:49.400
550
+ So it's the same as thinking of our human civilization
551
+
552
+ 08:49.400 --> 08:52.920
553
+ as one organism, or do you still, as an engineer,
554
+
555
+ 08:52.920 --> 08:54.560
556
+ think about the individual components
557
+
558
+ 08:54.560 --> 08:55.840
559
+ and all the engineering that went into
560
+
561
+ 08:55.840 --> 08:57.280
562
+ the individual components?
563
+
564
+ 08:57.280 --> 08:58.600
565
+ Well, that's very interesting.
566
+
567
+ 08:58.600 --> 09:01.440
568
+ So again, philosophically, as engineers,
569
+
570
+ 09:01.440 --> 09:06.440
571
+ what we want to do is to go beyond the individual components,
572
+
573
+ 09:06.800 --> 09:10.240
574
+ the individual units, and think about it as a unit,
575
+
576
+ 09:10.240 --> 09:11.480
577
+ as a cohesive unit,
578
+
579
+ 09:11.480 --> 09:14.240
580
+ without worrying about the individual components.
581
+
582
+ 09:14.240 --> 09:19.240
583
+ If you start obsessing about the individual building blocks
584
+
585
+ 09:19.680 --> 09:24.680
586
+ and what they do, you inevitably will find it hard
587
+
588
+ 09:26.400 --> 09:27.920
589
+ to scale up.
590
+
591
+ 09:27.920 --> 09:28.960
592
+ Just mathematically,
593
+
594
+ 09:28.960 --> 09:31.560
595
+ just think about individual things you want to model,
596
+
597
+ 09:31.560 --> 09:34.000
598
+ and if you want to have 10 of those,
599
+
600
+ 09:34.000 --> 09:36.400
601
+ then you essentially are taking Cartesian products
602
+
603
+ 09:36.400 --> 09:39.280
604
+ of 10 things, and that makes it really complicated
605
+
606
+ 09:39.280 --> 09:41.800
607
+ than to do any kind of synthesis or design
608
+
609
+ 09:41.800 --> 09:44.160
610
+ in that high dimension space is really hard.
611
+
612
+ 09:44.160 --> 09:45.840
613
+ So the right way to do this
614
+
615
+ 09:45.840 --> 09:49.080
616
+ is to think about the individuals in a clever way
617
+
618
+ 09:49.080 --> 09:51.160
619
+ so that at the higher level,
620
+
621
+ 09:51.160 --> 09:53.400
622
+ when you look at lots and lots of them,
623
+
624
+ 09:53.400 --> 09:55.320
625
+ abstractly, you can think of them
626
+
627
+ 09:55.320 --> 09:57.120
628
+ in some low dimensional space.
629
+
630
+ 09:57.120 --> 09:58.680
631
+ So what does that involve?
632
+
633
+ 09:58.680 --> 10:02.160
634
+ For the individual, you have to try to make
635
+
636
+ 10:02.160 --> 10:05.160
637
+ the way they see the world as local as possible,
638
+
639
+ 10:05.160 --> 10:06.440
640
+ and the other thing,
641
+
642
+ 10:06.440 --> 10:09.560
643
+ do you just have to make them robust to collisions?
644
+
645
+ 10:09.560 --> 10:10.880
646
+ Like you said, with the ants,
647
+
648
+ 10:10.880 --> 10:15.320
649
+ if something fails, the whole swarm doesn't fail.
650
+
651
+ 10:15.320 --> 10:17.760
652
+ Right, I think as engineers, we do this.
653
+
654
+ 10:17.760 --> 10:18.840
655
+ I mean, you know, think about,
656
+
657
+ 10:18.840 --> 10:21.280
658
+ we build planes or we build iPhones,
659
+
660
+ 10:22.240 --> 10:26.280
661
+ and we know that by taking individual components,
662
+
663
+ 10:26.280 --> 10:27.600
664
+ well engineered components,
665
+
666
+ 10:27.600 --> 10:30.080
667
+ with well specified interfaces
668
+
669
+ 10:30.080 --> 10:31.680
670
+ that behave in a predictable way,
671
+
672
+ 10:31.680 --> 10:33.560
673
+ you can build complex systems.
674
+
675
+ 10:34.400 --> 10:36.840
676
+ So that's ingrained I would claim
677
+
678
+ 10:36.840 --> 10:39.400
679
+ in most engineers thinking,
680
+
681
+ 10:39.400 --> 10:41.600
682
+ and it's true for computer scientists as well.
683
+
684
+ 10:41.600 --> 10:44.760
685
+ I think what's different here is that you want
686
+
687
+ 10:44.760 --> 10:49.480
688
+ the individuals to be robust in some sense,
689
+
690
+ 10:49.480 --> 10:52.000
691
+ as we do in these other settings,
692
+
693
+ 10:52.000 --> 10:54.480
694
+ but you also want some degree of resiliency
695
+
696
+ 10:54.480 --> 10:56.320
697
+ for the population.
698
+
699
+ 10:56.320 --> 10:58.720
700
+ And so you really want them to be able
701
+
702
+ 10:58.720 --> 11:03.720
703
+ to reestablish communication with their neighbors.
704
+
705
+ 11:03.840 --> 11:07.320
706
+ You want them to rethink their strategy
707
+
708
+ 11:07.320 --> 11:09.040
709
+ for group behavior.
710
+
711
+ 11:09.040 --> 11:11.000
712
+ You want them to reorganize.
713
+
714
+ 11:12.440 --> 11:16.120
715
+ And that's where I think a lot of the challenges lie.
716
+
717
+ 11:16.120 --> 11:18.400
718
+ So just at a high level,
719
+
720
+ 11:18.400 --> 11:21.120
721
+ what does it take for a bunch of,
722
+
723
+ 11:22.400 --> 11:23.560
724
+ what should we call them,
725
+
726
+ 11:23.560 --> 11:26.920
727
+ flying robots to create a formation?
728
+
729
+ 11:26.920 --> 11:30.400
730
+ Just for people who are not familiar with robotics
731
+
732
+ 11:30.400 --> 11:33.000
733
+ in general, how much information is needed?
734
+
735
+ 11:33.000 --> 11:36.040
736
+ How do you even make it happen
737
+
738
+ 11:36.040 --> 11:39.720
739
+ without a centralized controller?
740
+
741
+ 11:39.720 --> 11:41.320
742
+ So I mean, there are a couple of different ways
743
+
744
+ 11:41.320 --> 11:43.400
745
+ of looking at this.
746
+
747
+ 11:43.400 --> 11:48.400
748
+ If you are a purist, you think of it as a way
749
+
750
+ 11:50.040 --> 11:52.160
751
+ of recreating what nature does.
752
+
753
+ 11:53.800 --> 11:58.680
754
+ So nature forms groups for several reasons,
755
+
756
+ 11:58.680 --> 12:02.200
757
+ but mostly it's because of this instinct
758
+
759
+ 12:02.200 --> 12:07.200
760
+ that organisms have of preserving their colonies,
761
+
762
+ 12:07.280 --> 12:11.200
763
+ their population, which means what?
764
+
765
+ 12:11.200 --> 12:14.640
766
+ You need shelter, you need food, you need to procreate,
767
+
768
+ 12:14.640 --> 12:16.480
769
+ and that's basically it.
770
+
771
+ 12:16.480 --> 12:20.120
772
+ So the kinds of interactions you see are all organic.
773
+
774
+ 12:20.120 --> 12:21.320
775
+ They're all local.
776
+
777
+ 12:22.320 --> 12:25.760
778
+ And the only information that they share,
779
+
780
+ 12:25.760 --> 12:27.800
781
+ and mostly it's indirectly,
782
+
783
+ 12:27.800 --> 12:31.040
784
+ is to again preserve the herd or the flock
785
+
786
+ 12:31.040 --> 12:36.040
787
+ or the swarm and either by looking for new sources of food
788
+
789
+ 12:39.400 --> 12:41.240
790
+ or looking for new shelters, right?
791
+
792
+ 12:42.960 --> 12:47.200
793
+ As engineers, when we build swarms, we have a mission.
794
+
795
+ 12:48.280 --> 12:50.760
796
+ And when you think of a mission,
797
+
798
+ 12:52.080 --> 12:54.360
799
+ and it involves mobility,
800
+
801
+ 12:54.360 --> 12:56.840
802
+ most often it's described in some kind
803
+
804
+ 12:56.840 --> 12:58.800
805
+ of a global coordinate system.
806
+
807
+ 12:58.800 --> 13:03.080
808
+ As a human, as an operator, as a commander,
809
+
810
+ 13:03.080 --> 13:07.120
811
+ or as a collaborator, I have my coordinate system
812
+
813
+ 13:07.120 --> 13:10.160
814
+ and I want the robots to be consistent with that.
815
+
816
+ 13:11.120 --> 13:14.720
817
+ So I might think of it slightly differently.
818
+
819
+ 13:14.720 --> 13:18.960
820
+ I might want the robots to recognize that coordinate system,
821
+
822
+ 13:18.960 --> 13:21.360
823
+ which means not only do they have to think locally
824
+
825
+ 13:21.360 --> 13:23.160
826
+ in terms of who their immediate neighbors are,
827
+
828
+ 13:23.160 --> 13:24.640
829
+ but they have to be cognizant
830
+
831
+ 13:24.640 --> 13:28.320
832
+ of what the global environment looks like.
833
+
834
+ 13:28.320 --> 13:31.080
835
+ So if I go, if I say surround this building
836
+
837
+ 13:31.080 --> 13:33.280
838
+ and protect this from intruders,
839
+
840
+ 13:33.280 --> 13:35.160
841
+ well, they're immediately in a building
842
+
843
+ 13:35.160 --> 13:36.520
844
+ centered coordinate system
845
+
846
+ 13:36.520 --> 13:38.720
847
+ and I have to tell them where the building is.
848
+
849
+ 13:38.720 --> 13:40.080
850
+ And they're globally collaborating
851
+
852
+ 13:40.080 --> 13:41.360
853
+ on the map of that building.
854
+
855
+ 13:41.360 --> 13:44.240
856
+ They're maintaining some kind of global,
857
+
858
+ 13:44.240 --> 13:45.560
859
+ not just in the frame of the building,
860
+
861
+ 13:45.560 --> 13:49.040
862
+ but there's information that's ultimately being built up
863
+
864
+ 13:49.040 --> 13:53.320
865
+ explicitly as opposed to kind of implicitly,
866
+
867
+ 13:53.320 --> 13:54.400
868
+ like nature might.
869
+
870
+ 13:54.400 --> 13:55.240
871
+ Correct, correct.
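To make the contrast above concrete, here is a minimal sketch of how a formation can emerge from purely local rules, with no central controller and no shared global frame. This is the editor's illustration rather than code from any of the labs mentioned, and every name and number in it is an assumption for the example: each robot is assumed to know only the relative positions of its current neighbors and a desired offset to each of them.

```python
import numpy as np

def formation_step(positions, neighbors, offsets, gain=0.5):
    """One synchronous update of a consensus-style formation rule.

    positions: (N, 2) array of planar robot positions.
    neighbors: dict robot_id -> list of neighbor ids (local links only).
    offsets:   dict (i, j) -> where robot i wants to sit relative to robot j.
    Each robot moves toward the average position its neighbors imply for it;
    a robot that loses all its links simply holds position, so one failure
    does not bring down the group.
    """
    new_positions = positions.copy()
    for i, nbrs in neighbors.items():
        if not nbrs:
            continue
        targets = [positions[j] + offsets[(i, j)] for j in nbrs]
        new_positions[i] += gain * (np.mean(targets, axis=0) - positions[i])
    return new_positions

# Three robots converging to a line with 1 m spacing (0 -- 1 -- 2).
pos = np.array([[0.0, 0.0], [2.0, 1.5], [-1.0, 2.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
offs = {(0, 1): np.array([-1.0, 0.0]), (1, 0): np.array([1.0, 0.0]),
        (1, 2): np.array([-1.0, 0.0]), (2, 1): np.array([1.0, 0.0])}
for _ in range(60):
    pos = formation_step(pos, nbrs, offs)
print(np.round(pos, 2))  # neighbor spacing approaches the desired 1 m
```

Nothing in the update refers to a world frame; adding one, as in the "surround this building" mission, is exactly where the extra engineered information described here has to come in.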
872
+
873
+ 13:55.240 --> 13:57.720
874
+ So in some sense, nature is very, very sophisticated,
875
+
876
+ 13:57.720 --> 14:00.480
877
+ but the tasks that nature solves
878
+
879
+ 14:00.480 --> 14:03.040
880
+ or needs to solve are very different
881
+
882
+ 14:03.040 --> 14:05.160
883
+ from the kind of engineered tasks,
884
+
885
+ 14:05.160 --> 14:09.800
886
+ artificial tasks that we are forced to address.
887
+
888
+ 14:09.800 --> 14:12.560
889
+ And again, there's nothing preventing us
890
+
891
+ 14:12.560 --> 14:15.200
892
+ from solving these other problems,
893
+
894
+ 14:15.200 --> 14:16.640
895
+ but ultimately it's about impact.
896
+
897
+ 14:16.640 --> 14:19.400
898
+ You want these swarms to do something useful.
899
+
900
+ 14:19.400 --> 14:24.400
901
+ And so you're kind of driven into this very unnatural,
902
+
903
+ 14:24.400 --> 14:27.840
904
+ if you will, unnatural meaning, not like how nature does,
905
+
906
+ 14:27.840 --> 14:29.000
907
+ setting.
908
+
909
+ 14:29.000 --> 14:31.720
910
+ And it's probably a little bit more expensive
911
+
912
+ 14:31.720 --> 14:33.560
913
+ to do it the way nature does,
914
+
915
+ 14:33.560 --> 14:38.560
916
+ because nature is less sensitive to the loss of the individual
917
+
918
+ 14:39.280 --> 14:42.080
919
+ and cost wise in robotics,
920
+
921
+ 14:42.080 --> 14:45.280
922
+ I think you're more sensitive to losing individuals.
923
+
924
+ 14:45.280 --> 14:48.800
925
+ I think that's true, although if you look at the price
926
+
927
+ 14:48.800 --> 14:51.320
928
+ to performance ratio of robotic components,
929
+
930
+ 14:51.320 --> 14:53.640
931
+ it's coming down dramatically.
932
+
933
+ 14:53.640 --> 14:54.480
934
+ I'm interested.
935
+
936
+ 14:54.480 --> 14:56.040
937
+ Right, it continues to come down.
938
+
939
+ 14:56.040 --> 14:58.920
940
+ So I think we're asymptotically approaching the point
941
+
942
+ 14:58.920 --> 14:59.960
943
+ where we would get, yeah,
944
+
945
+ 14:59.960 --> 15:05.080
946
+ the cost of individuals would really become insignificant.
947
+
948
+ 15:05.080 --> 15:07.600
949
+ So let's step back at a high level view,
950
+
951
+ 15:07.600 --> 15:11.680
952
+ the impossible question of what kind of,
953
+
954
+ 15:11.680 --> 15:14.400
955
+ as an overview, what kind of autonomous flying vehicles
956
+
957
+ 15:14.400 --> 15:16.200
958
+ are there in general?
959
+
960
+ 15:16.200 --> 15:19.720
961
+ I think the ones that receive a lot of notoriety
962
+
963
+ 15:19.720 --> 15:22.560
964
+ are obviously the military vehicles.
965
+
966
+ 15:22.560 --> 15:26.280
967
+ Military vehicles are controlled by a base station,
968
+
969
+ 15:26.280 --> 15:29.640
970
+ but have a lot of human supervision,
971
+
972
+ 15:29.640 --> 15:31.800
973
+ but have limited autonomy,
974
+
975
+ 15:31.800 --> 15:34.760
976
+ which is the ability to go from point A to point B,
977
+
978
+ 15:34.760 --> 15:38.320
979
+ and even the more sophisticated vehicles
980
+
981
+ 15:38.320 --> 15:41.760
982
+ can do autonomous takeoff and landing.
983
+
984
+ 15:41.760 --> 15:44.400
985
+ And those usually have wings and they're heavy?
986
+
987
+ 15:44.400 --> 15:45.360
988
+ Usually they're winged,
989
+
990
+ 15:45.360 --> 15:47.440
991
+ but there's nothing preventing us from doing this
992
+
993
+ 15:47.440 --> 15:49.000
994
+ for helicopters as well.
995
+
996
+ 15:49.000 --> 15:53.440
997
+ There are many military organizations that have
998
+
999
+ 15:53.440 --> 15:56.560
1000
+ autonomous helicopters in the same vein.
1001
+
1002
+ 15:56.560 --> 16:00.080
1003
+ And by the way, you look at autopilots and airplanes,
1004
+
1005
+ 16:00.080 --> 16:02.800
1006
+ and it's actually very similar.
1007
+
1008
+ 16:02.800 --> 16:07.160
1009
+ In fact, one interesting question we can ask is,
1010
+
1011
+ 16:07.160 --> 16:12.120
1012
+ if you look at all the air safety violations,
1013
+
1014
+ 16:12.120 --> 16:14.080
1015
+ all the crashes that occurred,
1016
+
1017
+ 16:14.080 --> 16:16.880
1018
+ would they have happened if the plane
1019
+
1020
+ 16:16.880 --> 16:20.200
1021
+ were truly autonomous, and I think you'll find
1022
+
1023
+ 16:20.200 --> 16:21.960
1024
+ that in many of the cases,
1025
+
1026
+ 16:21.960 --> 16:24.600
1027
+ because of pilot error, we made silly decisions.
1028
+
1029
+ 16:24.600 --> 16:26.920
1030
+ And so in some sense, even in air traffic,
1031
+
1032
+ 16:26.920 --> 16:29.760
1033
+ commercial air traffic, there's a lot of applications,
1034
+
1035
+ 16:29.760 --> 16:33.920
1036
+ although we only see autonomy being enabled
1037
+
1038
+ 16:33.920 --> 16:38.920
1039
+ at very high altitudes when the plane is an autopilot.
1040
+
1041
+ 16:41.160 --> 16:42.520
1042
+ There's still a role for the human,
1043
+
1044
+ 16:42.520 --> 16:47.520
1045
+ and that kind of autonomy is, you're kind of implying,
1046
+
1047
+ 16:47.640 --> 16:48.680
1048
+ I don't know what the right word is,
1049
+
1050
+ 16:48.680 --> 16:52.600
1051
+ but it's a little dumber than it could be.
1052
+
1053
+ 16:53.480 --> 16:55.720
1054
+ Right, so in the lab, of course,
1055
+
1056
+ 16:55.720 --> 16:59.200
1057
+ we can afford to be a lot more aggressive.
1058
+
1059
+ 16:59.200 --> 17:04.200
1060
+ And the question we try to ask is,
1061
+
1062
+ 17:04.600 --> 17:09.600
1063
+ can we make robots that will be able to make decisions
1064
+
1065
+ 17:09.600 --> 17:13.680
1066
+ without any kind of external infrastructure?
1067
+
1068
+ 17:13.680 --> 17:14.880
1069
+ So what does that mean?
1070
+
1071
+ 17:14.880 --> 17:16.960
1072
+ So the most common piece of infrastructure
1073
+
1074
+ 17:16.960 --> 17:19.640
1075
+ that airplanes use today is GPS.
1076
+
1077
+ 17:20.560 --> 17:25.160
1078
+ GPS is also the most brittle form of information.
1079
+
1080
+ 17:26.680 --> 17:30.480
1081
+ If you've driven in a city, try to use GPS navigation,
1082
+
1083
+ 17:30.480 --> 17:32.760
1084
+ tall buildings, you immediately lose GPS.
1085
+
1086
+ 17:33.720 --> 17:36.320
1087
+ And so that's not a very sophisticated way
1088
+
1089
+ 17:36.320 --> 17:37.880
1090
+ of building autonomy.
1091
+
1092
+ 17:37.880 --> 17:39.560
1093
+ I think the second piece of infrastructure
1094
+
1095
+ 17:39.560 --> 17:41.920
1096
+ that I rely on is communications.
1097
+
1098
+ 17:41.920 --> 17:46.200
1099
+ Again, it's very easy to jam communications.
1100
+
1101
+ 17:47.400 --> 17:49.680
1102
+ In fact, if you use Wi Fi,
1103
+
1104
+ 17:49.680 --> 17:51.880
1105
+ you know that Wi Fi signals drop out,
1106
+
1107
+ 17:51.880 --> 17:53.560
1108
+ cell signals drop out.
1109
+
1110
+ 17:53.560 --> 17:56.840
1111
+ So to rely on something like that is not good.
1112
+
1113
+ 17:58.600 --> 18:01.240
1114
+ The third form of infrastructure we use,
1115
+
1116
+ 18:01.240 --> 18:02.960
1117
+ and I hate to call it infrastructure,
1118
+
1119
+ 18:02.960 --> 18:06.400
1120
+ but it is that in the sense of robots, it's people.
1121
+
1122
+ 18:06.400 --> 18:08.760
1123
+ So you could rely on somebody to pilot you.
1124
+
1125
+ 18:08.760 --> 18:09.960
1126
+ Right.
1127
+
1128
+ 18:09.960 --> 18:11.600
1129
+ And so the question you wanna ask is
1130
+
1131
+ 18:11.600 --> 18:13.400
1132
+ if there are no pilots,
1133
+
1134
+ 18:13.400 --> 18:16.200
1135
+ if there's no communications with any base station,
1136
+
1137
+ 18:16.200 --> 18:18.720
1138
+ if there's no knowledge of position,
1139
+
1140
+ 18:18.720 --> 18:21.640
1141
+ and if there's no a priori map,
1142
+
1143
+ 18:21.640 --> 18:24.880
1144
+ a priori knowledge of what the environment looks like,
1145
+
1146
+ 18:24.880 --> 18:28.280
1147
+ a priori model of what might happen in the future.
1148
+
1149
+ 18:28.280 --> 18:29.560
1150
+ Can robots navigate?
1151
+
1152
+ 18:29.560 --> 18:31.440
1153
+ So that is true autonomy.
1154
+
1155
+ 18:31.440 --> 18:33.240
1156
+ So that's true autonomy.
1157
+
1158
+ 18:33.240 --> 18:35.040
1159
+ And we're talking about, you mentioned
1160
+
1161
+ 18:35.040 --> 18:36.880
1162
+ like military applications and drones.
1163
+
1164
+ 18:36.880 --> 18:38.280
1165
+ Okay, so what else is there?
1166
+
1167
+ 18:38.280 --> 18:43.280
1168
+ You talk about agile autonomous flying robots, aerial robots.
1169
+
1170
+ 18:43.480 --> 18:46.320
1171
+ So that's a different kind of, it's not winged,
1172
+
1173
+ 18:46.320 --> 18:48.120
1174
+ it's not big, at least it's small.
1175
+
1176
+ 18:48.120 --> 18:50.800
1177
+ So I use the word agility mostly,
1178
+
1179
+ 18:50.800 --> 18:53.480
1180
+ or at least we're motivated to do agile robots,
1181
+
1182
+ 18:53.480 --> 18:57.960
1183
+ mostly because robots can operate
1184
+
1185
+ 18:57.960 --> 19:01.120
1186
+ and should be operating in constrained environments.
1187
+
1188
+ 19:02.120 --> 19:06.960
1189
+ And if you want to operate the way a global hawk operates,
1190
+
1191
+ 19:06.960 --> 19:09.120
1192
+ I mean, the kinds of conditions in which you operate
1193
+
1194
+ 19:09.120 --> 19:10.760
1195
+ are very, very restrictive.
1196
+
1197
+ 19:11.760 --> 19:13.720
1198
+ If you wanna go inside a building,
1199
+
1200
+ 19:13.720 --> 19:15.600
1201
+ for example, for search and rescue,
1202
+
1203
+ 19:15.600 --> 19:18.120
1204
+ or to locate an active shooter,
1205
+
1206
+ 19:18.120 --> 19:22.120
1207
+ or you wanna navigate under the canopy in an orchard
1208
+
1209
+ 19:22.120 --> 19:23.880
1210
+ to look at health of plants,
1211
+
1212
+ 19:23.880 --> 19:28.880
1213
+ or to count fruits to measure the tree trunks.
1214
+
1215
+ 19:31.240 --> 19:33.240
1216
+ These are things we do, by the way.
1217
+
1218
+ 19:33.240 --> 19:35.920
1219
+ Yeah, some cool agriculture stuff you've shown in the past,
1220
+
1221
+ 19:35.920 --> 19:36.760
1222
+ it's really awesome.
1223
+
1224
+ 19:36.760 --> 19:40.400
1225
+ So in those kinds of settings, you do need that agility.
1226
+
1227
+ 19:40.400 --> 19:42.560
1228
+ Agility does not necessarily mean
1229
+
1230
+ 19:42.560 --> 19:45.440
1231
+ you break records for the 100 meters dash.
1232
+
1233
+ 19:45.440 --> 19:48.040
1234
+ What it really means is you see the unexpected
1235
+
1236
+ 19:48.040 --> 19:51.520
1237
+ and you're able to maneuver in a safe way,
1238
+
1239
+ 19:51.520 --> 19:55.440
1240
+ and in a way that gets you the most information
1241
+
1242
+ 19:55.440 --> 19:57.720
1243
+ about the thing you're trying to do.
1244
+
1245
+ 19:57.720 --> 20:00.520
1246
+ By the way, you may be the only person
1247
+
1248
+ 20:00.520 --> 20:04.280
1249
+ who in a TED talk has used a math equation,
1250
+
1251
+ 20:04.280 --> 20:07.720
1252
+ which is amazing, people should go see one of your TED talks.
1253
+
1254
+ 20:07.720 --> 20:08.840
1255
+ Actually, it's very interesting
1256
+
1257
+ 20:08.840 --> 20:13.560
1258
+ because the TED curator, Chris Anderson, told me,
1259
+
1260
+ 20:13.560 --> 20:15.400
1261
+ you can't show math.
1262
+
1263
+ 20:15.400 --> 20:18.240
1264
+ And I thought about it, but that's who I am.
1265
+
1266
+ 20:18.240 --> 20:20.800
1267
+ I mean, that's our work.
1268
+
1269
+ 20:20.800 --> 20:25.800
1270
+ And so I felt compelled to give the audience a taste
1271
+
1272
+ 20:25.800 --> 20:27.680
1273
+ for at least some math.
1274
+
1275
+ 20:27.680 --> 20:32.680
1276
+ So on that point, simply, what does it take
1277
+
1278
+ 20:32.680 --> 20:37.120
1279
+ to make a thing with four motors fly, a quadcopter,
1280
+
1281
+ 20:37.120 --> 20:40.400
1282
+ one of these little flying robots?
1283
+
1284
+ 20:41.560 --> 20:43.800
1285
+ How hard is it to make it fly?
1286
+
1287
+ 20:43.800 --> 20:46.360
1288
+ How do you coordinate the four motors?
1289
+
1290
+ 20:47.360 --> 20:52.360
1291
+ How do you convert those motors into actual movement?
1292
+
1293
+ 20:52.400 --> 20:54.600
1294
+ So this is an interesting question.
1295
+
1296
+ 20:54.600 --> 20:57.840
1297
+ We've been trying to do this since 2000.
1298
+
1299
+ 20:57.840 --> 21:00.360
1300
+ It is a commentary on the sensors
1301
+
1302
+ 21:00.360 --> 21:01.880
1303
+ that were available back then,
1304
+
1305
+ 21:01.880 --> 21:04.320
1306
+ and the computers that were available back then.
1307
+
1308
+ 21:05.640 --> 21:08.080
1309
+ And a number of things happened
1310
+
1311
+ 21:08.080 --> 21:10.320
1312
+ between 2000 and 2007.
1313
+
1314
+ 21:11.640 --> 21:15.560
1315
+ One is the advances in computing, which is,
1316
+
1317
+ 21:15.560 --> 21:16.840
1318
+ so we all know about Moore's Law,
1319
+
1320
+ 21:16.840 --> 21:19.760
1321
+ but I think 2007 was a tipping point,
1322
+
1323
+ 21:19.760 --> 21:22.800
1324
+ the year of the iPhone, the year of the cloud.
1325
+
1326
+ 21:22.800 --> 21:24.720
1327
+ Lots of things happened in 2007.
1328
+
1329
+ 21:25.680 --> 21:27.640
1330
+ But going back even further,
1331
+
1332
+ 21:27.640 --> 21:31.440
1333
+ inertial measurement units as a sensor really matured.
1334
+
1335
+ 21:31.440 --> 21:33.040
1336
+ Again, lots of reasons for that.
1337
+
1338
+ 21:34.000 --> 21:35.480
1339
+ Certainly there's a lot of federal funding,
1340
+
1341
+ 21:35.480 --> 21:37.440
1342
+ particularly DARPA in the US,
1343
+
1344
+ 21:38.360 --> 21:42.840
1345
+ but they didn't anticipate this boom in IMUs.
1346
+
1347
+ 21:43.800 --> 21:46.080
1348
+ But if you look subsequently,
1349
+
1350
+ 21:46.080 --> 21:49.000
1351
+ what happened is that every car manufacturer
1352
+
1353
+ 21:49.000 --> 21:50.120
1354
+ had to put an airbag in,
1355
+
1356
+ 21:50.120 --> 21:52.720
1357
+ which meant you had to have an accelerometer on board.
1358
+
1359
+ 21:52.720 --> 21:54.080
1360
+ And so that drove down the price
1361
+
1362
+ 21:54.080 --> 21:56.280
1363
+ to performance ratio of the sensors.
1364
+
1365
+ 21:56.280 --> 21:57.960
1366
+ I should know this, that's very interesting.
1367
+
1368
+ 21:57.960 --> 21:59.480
1369
+ It's very interesting, the connection there.
1370
+
1371
+ 21:59.480 --> 22:03.160
1372
+ And that's why research is very hard to predict the outcomes.
1373
+
1374
+ 22:04.920 --> 22:07.760
1375
+ And again, the federal government spent a ton of money
1376
+
1377
+ 22:07.760 --> 22:12.360
1378
+ on things that they thought were useful for resonators,
1379
+
1380
+ 22:12.360 --> 22:16.920
1381
+ but it ended up enabling these small UAVs, which is great,
1382
+
1383
+ 22:16.920 --> 22:18.600
1384
+ because I could have never raised that much money
1385
+
1386
+ 22:18.600 --> 22:20.800
1387
+ and told, sold this project,
1388
+
1389
+ 22:20.800 --> 22:22.280
1390
+ hey, we want to build these small UAVs.
1391
+
1392
+ 22:22.280 --> 22:25.520
1393
+ Can you actually fund the development of low cost IMUs?
1394
+
1395
+ 22:25.520 --> 22:27.720
1396
+ So why do you need an IMU on a UAV?
1397
+
1398
+ 22:27.720 --> 22:30.440
1399
+ So I'll come back to that,
1400
+
1401
+ 22:30.440 --> 22:33.400
1402
+ but so in 2007, 2008, we were able to build these.
1403
+
1404
+ 22:33.400 --> 22:35.280
1405
+ And then the question you're asking was a good one,
1406
+
1407
+ 22:35.280 --> 22:40.280
1408
+ how do you coordinate the motors to develop this?
1409
+
1410
+ 22:40.320 --> 22:43.920
1411
+ But over the last 10 years, everything is commoditized.
1412
+
1413
+ 22:43.920 --> 22:47.920
1414
+ A high school kid today can pick up a Raspberry Pi kit
1415
+
1416
+ 22:49.520 --> 22:50.600
1417
+ and build this,
1418
+
1419
+ 22:50.600 --> 22:53.240
1420
+ all the low levels functionality is all automated.
1421
+
1422
+ 22:53.240 --> 22:58.240
1423
+ But basically at some level, you have to drive the motors
1424
+
1425
+ 22:59.160 --> 23:03.660
1426
+ at the right RPMs, the right velocity,
1427
+
1428
+ 23:04.560 --> 23:07.480
1429
+ in order to generate the right amount of thrust
1430
+
1431
+ 23:07.480 --> 23:09.960
1432
+ in order to position it and orient it
1433
+
1434
+ 23:09.960 --> 23:12.840
1435
+ in a way that you need to in order to fly.
1436
+
1437
+ 23:13.800 --> 23:16.680
1438
+ The feedback that you get is from onboard sensors
1439
+
1440
+ 23:16.680 --> 23:18.400
1441
+ and the IMU is an important part of it.
1442
+
1443
+ 23:18.400 --> 23:23.400
1444
+ The IMU tells you what the acceleration is
1445
+
1446
+ 23:23.840 --> 23:26.400
1447
+ as well as what the angular velocity is.
1448
+
1449
+ 23:26.400 --> 23:29.200
1450
+ And those are important pieces of information.
1451
+
1452
+ 23:30.440 --> 23:34.200
1453
+ In addition to that, you need some kind of local position
1454
+
1455
+ 23:34.200 --> 23:36.480
1456
+ or velocity information.
1457
+
1458
+ 23:37.440 --> 23:39.320
1459
+ For example, when we walk,
1460
+
1461
+ 23:39.320 --> 23:41.520
1462
+ we implicitly have this information
1463
+
1464
+ 23:41.520 --> 23:45.800
1465
+ because we kind of know how, what our stride length is.
1466
+
1467
+ 23:45.800 --> 23:50.800
1468
+ We also are looking at images fly past our retina,
1469
+
1470
+ 23:51.440 --> 23:54.240
1471
+ if you will, and so we can estimate velocity.
1472
+
1473
+ 23:54.240 --> 23:56.280
1474
+ We also have accelerometers in our head
1475
+
1476
+ 23:56.280 --> 23:59.120
1477
+ and we're able to integrate all these pieces of information
1478
+
1479
+ 23:59.120 --> 24:02.320
1480
+ to determine where we are as we walk.
1481
+
1482
+ 24:02.320 --> 24:04.320
1483
+ And so robots have to do something very similar.
1484
+
1485
+ 24:04.320 --> 24:08.160
1486
+ You need an IMU, you need some kind of a camera
1487
+
1488
+ 24:08.160 --> 24:11.600
1489
+ or other sensor that's measuring velocity.
1490
+
1491
+ 24:11.600 --> 24:15.800
1492
+ And then you need some kind of a global reference frame
1493
+
1494
+ 24:15.800 --> 24:19.480
1495
+ if you really want to think about doing something
1496
+
1497
+ 24:19.480 --> 24:21.280
1498
+ in a world coordinate system.
1499
+
1500
+ 24:21.280 --> 24:23.680
1501
+ And so how do you estimate your position
1502
+
1503
+ 24:23.680 --> 24:25.160
1504
+ with respect to that global reference frame?
1505
+
1506
+ 24:25.160 --> 24:26.560
1507
+ That's important as well.
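A rough sketch of the estimation idea just described, reduced to one dimension with made-up noise figures (the editor's illustration, not the lab's estimator): integrate the IMU at high rate to propagate velocity and position, and use an occasional drift-free velocity measurement, say from a camera, to pull the integration back toward reality.

```python
import numpy as np

def propagate(x, v, accel_meas, dt):
    """Dead-reckon 1-D position and velocity from one IMU acceleration sample."""
    x_new = x + v * dt + 0.5 * accel_meas * dt**2
    v_new = v + accel_meas * dt
    return x_new, v_new

def fuse_velocity(v_pred, v_meas, alpha=0.2):
    """Complementary-filter style correction: pull the integrated velocity
    toward a slower but drift-free velocity measurement (e.g. from a camera)."""
    return (1.0 - alpha) * v_pred + alpha * v_meas

rng = np.random.default_rng(0)
dt, x, v = 0.005, 0.0, 0.0            # 200 Hz IMU, start at rest
true_v = 0.0                          # the vehicle is actually hovering
for k in range(2000):                 # 10 seconds of flight
    accel = 0.1 + rng.normal(0.0, 0.05)   # true accel is 0; 0.1 m/s^2 bias plus noise
    x, v = propagate(x, v, accel, dt)
    if k % 40 == 0:                   # 5 Hz visual velocity update
        v = fuse_velocity(v, true_v + rng.normal(0.0, 0.01))
# Without the 5 Hz correction, the bias alone would give about 1 m/s of drift.
print(f"velocity error after 10 s: {abs(v - true_v):.3f} m/s")
```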
1508
+
1509
+ 24:26.560 --> 24:29.520
1510
+ So coordinating the RPMs of the four motors
1511
+
1512
+ 24:29.520 --> 24:32.680
1513
+ is what allows you to, first of all, fly and hover
1514
+
1515
+ 24:32.680 --> 24:35.640
1516
+ and then you can change the orientation
1517
+
1518
+ 24:35.640 --> 24:37.640
1519
+ and the velocity and so on.
1520
+
1521
+ 24:37.640 --> 24:38.480
1522
+ Exactly, exactly.
1523
+
1524
+ 24:38.480 --> 24:40.360
1525
+ So there's a bunch of degrees of freedom
1526
+
1527
+ 24:40.360 --> 24:42.240
1528
+ or there's six degrees of freedom
1529
+
1530
+ 24:42.240 --> 24:44.960
1531
+ but you only have four inputs, the four motors.
1532
+
1533
+ 24:44.960 --> 24:49.960
1534
+ And it turns out to be a remarkably versatile configuration.
1535
+
1536
+ 24:50.960 --> 24:53.120
1537
+ You think at first, well, I only have four motors,
1538
+
1539
+ 24:53.120 --> 24:55.040
1540
+ how do I go sideways?
1541
+
1542
+ 24:55.040 --> 24:56.360
1543
+ But it's not too hard to say, well,
1544
+
1545
+ 24:56.360 --> 24:59.200
1546
+ if I tilt myself, I can go sideways.
1547
+
1548
+ 24:59.200 --> 25:01.200
1549
+ And then you have four motors pointing up,
1550
+
1551
+ 25:01.200 --> 25:05.400
1552
+ how do I rotate in place about a vertical axis?
1553
+
1554
+ 25:05.400 --> 25:07.840
1555
+ Well, you rotate them at different speeds
1556
+
1557
+ 25:07.840 --> 25:09.760
1558
+ and that generates reaction moments
1559
+
1560
+ 25:09.760 --> 25:11.560
1561
+ and that allows you to turn.
1562
+
1563
+ 25:11.560 --> 25:13.400
1564
+ So it's actually a pretty,
1565
+
1566
+ 25:13.400 --> 25:17.040
1567
+ it's an optimal configuration from an engineer's standpoint.
1568
+
1569
+ 25:17.960 --> 25:22.960
1570
+ It's very simple, very cleverly done and very versatile.
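To illustrate the four-inputs-versus-six-degrees-of-freedom point, here is a small sketch of the standard thrust and moment allocation for a plus-configuration quadrotor. It is the editor's illustration, not the speakers' controller; the arm length, drag ratio, and sign conventions are assumptions.

```python
import numpy as np

# Plus configuration: rotors 0 and 2 sit on the x arm, rotors 1 and 3 on the y arm.
L = 0.17    # arm length in meters (assumed, for illustration)
KM = 0.016  # drag-torque-to-thrust ratio in meters (assumed)

# [total thrust, roll moment, pitch moment, yaw moment] = A @ [f0, f1, f2, f3]
A = np.array([
    [1.0,  1.0,  1.0,  1.0],   # all four rotor thrusts add up to the total
    [0.0,   -L,  0.0,    L],   # roll: imbalance between the y-arm rotors
    [  L,  0.0,   -L,  0.0],   # pitch: imbalance between the x-arm rotors
    [ KM,  -KM,   KM,  -KM],   # yaw: opposite spin directions give a net drag torque
])

def mix(thrust, roll_m, pitch_m, yaw_m):
    """Invert the allocation: desired wrench -> per-rotor thrusts (RPMs follow from f ~ k*w^2)."""
    return np.linalg.solve(A, np.array([thrust, roll_m, pitch_m, yaw_m]))

print(mix(9.81, 0.0, 0.0, 0.0))    # hover for a 1 kg vehicle: four equal thrusts of ~2.45 N
print(mix(9.81, 0.0, 0.0, 0.05))   # yaw command: one rotor pair speeds up, the other slows down
```

With only four independent inputs, position and attitude cannot be commanded separately, which is why the vehicle tilts to translate and uses the opposite spin directions of its rotor pairs to yaw, exactly as described above.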
1571
+
1572
+ 25:23.800 --> 25:26.520
1573
+ So if you could step back to a time,
1574
+
1575
+ 25:27.320 --> 25:30.120
1576
+ so I've always known flying robots as,
1577
+
1578
+ 25:31.120 --> 25:35.840
1579
+ to me it was natural that the quadcopters should fly.
1580
+
1581
+ 25:35.840 --> 25:38.040
1582
+ But when you first started working with it,
1583
+
1584
+ 25:38.040 --> 25:42.040
1585
+ how surprised are you that you can make,
1586
+
1587
+ 25:42.040 --> 25:45.560
1588
+ do so much with the four motors?
1589
+
1590
+ 25:45.560 --> 25:47.640
1591
+ How surprising is that you can make this thing fly,
1592
+
1593
+ 25:47.640 --> 25:49.800
1594
+ first of all, that you can make it hover,
1595
+
1596
+ 25:49.800 --> 25:52.040
1597
+ then you can add control to it?
1598
+
1599
+ 25:52.920 --> 25:55.800
1600
+ Firstly, this is not, the four motor configuration
1601
+
1602
+ 25:55.800 --> 26:00.120
1603
+ is not ours, it has at least a hundred year history.
1604
+
1605
+ 26:01.080 --> 26:02.480
1606
+ And various people,
1607
+
1608
+ 26:02.480 --> 26:06.280
1609
+ various people tried to get quadrotors to fly
1610
+
1611
+ 26:06.280 --> 26:08.120
1612
+ without much success.
1613
+
1614
+ 26:09.240 --> 26:11.560
1615
+ As I said, we've been working on this since 2000.
1616
+
1617
+ 26:11.560 --> 26:13.360
1618
+ Our first designs were,
1619
+
1620
+ 26:13.360 --> 26:15.160
1621
+ well, this is way too complicated.
1622
+
1623
+ 26:15.160 --> 26:19.160
1624
+ Why not we try to get an omnidirectional flying robot?
1625
+
1626
+ 26:19.160 --> 26:22.760
1627
+ So our early designs, we had eight rotors.
1628
+
1629
+ 26:22.760 --> 26:26.080
1630
+ And so these eight rotors were arranged uniformly
1631
+
1632
+ 26:27.520 --> 26:28.880
1633
+ on a sphere, if you will.
1634
+
1635
+ 26:28.880 --> 26:31.360
1636
+ So you can imagine a symmetric configuration
1637
+
1638
+ 26:31.360 --> 26:34.160
1639
+ and so you should be able to fly anywhere.
1640
+
1641
+ 26:34.160 --> 26:36.280
1642
+ But the real challenge we had is the strength
1643
+
1644
+ 26:36.280 --> 26:37.880
1645
+ to weight ratio is not enough,
1646
+
1647
+ 26:37.880 --> 26:41.240
1648
+ and of course we didn't have the sensors and so on.
1649
+
1650
+ 26:41.240 --> 26:43.840
1651
+ So everybody knew, or at least the people
1652
+
1653
+ 26:43.840 --> 26:45.680
1654
+ who worked with rotor crafts knew,
1655
+
1656
+ 26:45.680 --> 26:47.320
1657
+ four rotors would get it done.
1658
+
1659
+ 26:48.280 --> 26:50.200
1660
+ So that was not our idea.
1661
+
1662
+ 26:50.200 --> 26:53.480
1663
+ But it took a while before we could actually do
1664
+
1665
+ 26:53.480 --> 26:56.520
1666
+ the onboard sensing and the computation
1667
+
1668
+ 26:56.520 --> 27:00.400
1669
+ that was needed for the kinds of agile maneuvering
1670
+
1671
+ 27:00.400 --> 27:03.800
1672
+ that we wanted to do in our little aerial robots.
1673
+
1674
+ 27:03.800 --> 27:08.320
1675
+ And that only happened between 2007 and 2009 in our lab.
1676
+
1677
+ 27:08.320 --> 27:10.680
1678
+ Yeah, and you have to send the signal
1679
+
1680
+ 27:10.680 --> 27:13.200
1681
+ maybe a hundred times a second.
1682
+
1683
+ 27:13.200 --> 27:15.400
1684
+ So the compute there is everything
1685
+
1686
+ 27:15.400 --> 27:16.720
1687
+ has to come down in price.
1688
+
1689
+ 27:16.720 --> 27:21.720
1690
+ And what are the steps of getting from point A to point B?
1691
+
1692
+ 27:22.320 --> 27:25.840
1693
+ So we just talked about like local control,
1694
+
1695
+ 27:25.840 --> 27:30.840
1696
+ but if all the kind of cool dancing in the air
1697
+
1698
+ 27:30.840 --> 27:34.480
1699
+ that I've seen you show, how do you make it happen?
1700
+
1701
+ 27:34.480 --> 27:38.840
1702
+ Make it trajectory, first of all, okay,
1703
+
1704
+ 27:38.840 --> 27:41.600
1705
+ figure out a trajectory, so plan a trajectory,
1706
+
1707
+ 27:41.600 --> 27:44.320
1708
+ and then how do you make that trajectory happen?
1709
+
1710
+ 27:44.320 --> 27:47.320
1711
+ I think planning is a very fundamental problem in robotics.
1712
+
1713
+ 27:47.320 --> 27:50.120
1714
+ I think 10 years ago it was an esoteric thing,
1715
+
1716
+ 27:50.120 --> 27:52.360
1717
+ but today with self driving cars,
1718
+
1719
+ 27:52.360 --> 27:55.160
1720
+ everybody can understand this basic idea
1721
+
1722
+ 27:55.160 --> 27:57.320
1723
+ that a car sees a whole bunch of things
1724
+
1725
+ 27:57.320 --> 27:59.720
1726
+ and it has to keep a lane or maybe make a right turn
1727
+
1728
+ 27:59.720 --> 28:02.160
1729
+ or switch lanes, it has to plan a trajectory,
1730
+
1731
+ 28:02.160 --> 28:04.320
1732
+ it has to be safe, it has to be efficient.
1733
+
1734
+ 28:04.320 --> 28:06.120
1735
+ So everybody's familiar with that.
1736
+
1737
+ 28:06.120 --> 28:07.400
1738
+ That's kind of the first step
1739
+
1740
+ 28:07.400 --> 28:12.400
1741
+ that you have to think about when you say autonomy.
1742
+
1743
+ 28:14.320 --> 28:18.600
1744
+ And so for us, it's about finding smooth motions,
1745
+
1746
+ 28:18.600 --> 28:20.760
1747
+ motions that are safe.
1748
+
1749
+ 28:20.760 --> 28:22.360
1750
+ So we think about these two things.
1751
+
1752
+ 28:22.360 --> 28:24.160
1753
+ One is optimality, one is safety.
1754
+
1755
+ 28:24.160 --> 28:26.720
1756
+ Clearly you cannot compromise safety.
1757
+
1758
+ 28:26.720 --> 28:30.160
1759
+ So you're looking for safe, optimal motions.
1760
+
1761
+ 28:30.160 --> 28:33.160
1762
+ The other thing you have to think about
1763
+
1764
+ 28:33.160 --> 28:37.360
1765
+ is can you actually compute a reasonable trajectory
1766
+
1767
+ 28:37.360 --> 28:41.560
1768
+ in a small amount of time, because you have a time budget.
1769
+
1770
+ 28:41.560 --> 28:44.360
1771
+ So the optimal becomes suboptimal,
1772
+
1773
+ 28:44.360 --> 28:49.360
1774
+ but in our lab we focus on synthesizing smooth trajectory
1775
+
1776
+ 28:50.560 --> 28:52.360
1777
+ that satisfy all the constraints.
1778
+
1779
+ 28:52.360 --> 28:57.200
1780
+ In other words, don't violate any safety constraints
1781
+
1782
+ 28:57.200 --> 29:02.200
1783
+ and is as efficient as possible.
1784
+
1785
+ 29:02.200 --> 29:04.600
1786
+ And when I say efficient, it could mean
1787
+
1788
+ 29:04.600 --> 29:07.760
1789
+ I want to get from point A to point B as quickly as possible
1790
+
1791
+ 29:07.760 --> 29:11.200
1792
+ or I want to get to it as gracefully as possible
1793
+
1794
+ 29:11.200 --> 29:15.360
1795
+ or I want to consume as little energy as possible.
1796
+
1797
+ 29:15.360 --> 29:17.600
1798
+ But always staying within the safety constraints.
1799
+
1800
+ 29:17.600 --> 29:22.440
1801
+ But yes, always finding a safe trajectory.
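One common way to make "smooth, efficient, and within the safety constraints" concrete is sketched below, in the spirit of minimum-jerk or minimum-snap planning. It is the editor's simplification with invented limits, not necessarily the formulation used in the lab: fix a time budget, fit a smooth polynomial from A to B with zero boundary velocity and acceleration, and check it against the limits before executing.

```python
import numpy as np

def min_jerk(x0, xT, T, n=200):
    """Quintic minimum-jerk profile from x0 to xT in time T, with zero velocity
    and acceleration at both ends. Returns time, position, velocity, acceleration."""
    t = np.linspace(0.0, T, n)
    s = t / T
    pos = x0 + (xT - x0) * (10*s**3 - 15*s**4 + 6*s**5)
    vel = (xT - x0) / T * (30*s**2 - 60*s**3 + 30*s**4)
    acc = (xT - x0) / T**2 * (60*s - 180*s**2 + 120*s**3)
    return t, pos, vel, acc

# Go 5 m in a 2 s time budget, then verify (assumed) safety limits before executing.
V_MAX, A_MAX = 6.0, 8.0                      # illustrative limits, not real vehicle data
t, p, v, a = min_jerk(0.0, 5.0, 2.0)
feasible = np.max(np.abs(v)) <= V_MAX and np.max(np.abs(a)) <= A_MAX
print(np.max(np.abs(v)), np.max(np.abs(a)), feasible)
```

If the check fails, a planner would typically stretch the time budget or reroute, trading optimality for safety, which is the suboptimal-but-safe trade-off described above.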
1802
+
1803
+ 29:22.440 --> 29:24.440
1804
+ So there's a lot of excitement and progress
1805
+
1806
+ 29:24.440 --> 29:26.440
1807
+ in the field of machine learning.
1808
+
1809
+ 29:26.440 --> 29:31.440
1810
+ And reinforcement learning and the neural network variant
1811
+
1812
+ 29:31.440 --> 29:33.440
1813
+ of that with deeper reinforcement learning.
1814
+
1815
+ 29:33.440 --> 29:37.440
1816
+ Do you see a role of machine learning in...
1817
+
1818
+ 29:37.440 --> 29:40.040
1819
+ So a lot of the success with flying robots
1820
+
1821
+ 29:40.040 --> 29:41.840
1822
+ did not rely on machine learning,
1823
+
1824
+ 29:41.840 --> 29:44.440
1825
+ except for maybe a little bit of the perception
1826
+
1827
+ 29:44.440 --> 29:46.040
1828
+ on the computer vision side.
1829
+
1830
+ 29:46.040 --> 29:48.040
1831
+ On the control side and the planning,
1832
+
1833
+ 29:48.040 --> 29:50.040
1834
+ do you see there's a role in the future
1835
+
1836
+ 29:50.040 --> 29:51.040
1837
+ for machine learning?
1838
+
1839
+ 29:51.040 --> 29:53.040
1840
+ So let me disagree a little bit with you.
1841
+
1842
+ 29:53.040 --> 29:56.040
1843
+ I think we never perhaps called out in my work
1844
+
1845
+ 29:56.040 --> 29:59.040
1846
+ called out learning, but even this very simple idea
1847
+
1848
+ 29:59.040 --> 30:04.040
1849
+ of being able to fly through a constrained space.
1850
+
1851
+ 30:04.040 --> 30:07.040
1852
+ The first time you try it, you'll invariably...
1853
+
1854
+ 30:07.040 --> 30:10.040
1855
+ You might get it wrong if the task is challenging.
1856
+
1857
+ 30:10.040 --> 30:14.040
1858
+ And the reason is to get it perfectly right,
1859
+
1860
+ 30:14.040 --> 30:17.040
1861
+ you have to model everything in the environment.
1862
+
1863
+ 30:17.040 --> 30:22.040
1864
+ And flying is notoriously hard to model.
1865
+
1866
+ 30:22.040 --> 30:28.040
1867
+ There are aerodynamic effects that we constantly discover,
1868
+
1869
+ 30:28.040 --> 30:31.040
1870
+ even just before I was talking to you,
1871
+
1872
+ 30:31.040 --> 30:37.040
1873
+ I was talking to a student about how blades flap when they fly.
1874
+
1875
+ 30:37.040 --> 30:43.040
1876
+ And that ends up changing how a rotorcraft
1877
+
1878
+ 30:43.040 --> 30:46.040
1879
+ is accelerated in the angular direction.
1880
+
1881
+ 30:46.040 --> 30:48.040
1882
+ Does it use like microflaps or something?
1883
+
1884
+ 30:48.040 --> 30:49.040
1885
+ It's not microflaps.
1886
+
1887
+ 30:49.040 --> 30:52.040
1888
+ We assume that each blade is rigid,
1889
+
1890
+ 30:52.040 --> 30:54.040
1891
+ but actually it flaps a little bit.
1892
+
1893
+ 30:54.040 --> 30:55.040
1894
+ It bends.
1895
+
1896
+ 30:55.040 --> 30:56.040
1897
+ Interesting, yeah.
1898
+
1899
+ 30:56.040 --> 30:58.040
1900
+ And so the models rely on the fact,
1901
+
1902
+ 30:58.040 --> 31:01.040
1903
+ on the assumption that they're actually rigid.
1904
+
1905
+ 31:01.040 --> 31:02.040
1906
+ But that's not true.
1907
+
1908
+ 31:02.040 --> 31:04.040
1909
+ If you're flying really quickly,
1910
+
1911
+ 31:04.040 --> 31:07.040
1912
+ these effects become significant.
1913
+
1914
+ 31:07.040 --> 31:09.040
1915
+ If you're flying close to the ground,
1916
+
1917
+ 31:09.040 --> 31:12.040
1918
+ you get pushed off by the ground.
1919
+
1920
+ 31:12.040 --> 31:15.040
1921
+ Something which every pilot knows when he tries to land
1922
+
1923
+ 31:15.040 --> 31:19.040
1924
+ or she tries to land, this is called a ground effect.
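For reference, one classical approximation for the ground effect mentioned here is the Cheeseman and Bennett result for an idealized rotor of radius R hovering at height z above the ground; treat it as a rough model, not a statement about any particular vehicle:

```latex
\frac{T_{\mathrm{IGE}}}{T_{\mathrm{OGE}}}
  \;\approx\; \frac{1}{1 - \left(\frac{R}{4z}\right)^{2}},
\qquad \text{valid roughly for } z/R \ge 0.25 .
% Example: at z = R the ratio is 1/(1 - 1/16), about 1.07, i.e. roughly 7% more
% thrust at the same power, which is the push the vehicle feels near the ground.
```

The ceiling and wall effects described next are analogous interactions with nearby surfaces, and none of them are captured by rigid-blade, free-air models.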
1925
+
1926
+ 31:19.040 --> 31:21.040
1927
+ Something very few pilots think about
1928
+
1929
+ 31:21.040 --> 31:23.040
1930
+ is what happens when you go close to a ceiling,
1931
+
1932
+ 31:23.040 --> 31:25.040
1933
+ or you get sucked into a ceiling.
1934
+
1935
+ 31:25.040 --> 31:29.040
1936
+ There are very few aircraft that fly close to any kind of ceiling.
1937
+
1938
+ 31:29.040 --> 31:33.040
1939
+ Likewise, when you go close to a wall,
1940
+
1941
+ 31:33.040 --> 31:36.040
1942
+ there are these wall effects.
1943
+
1944
+ 31:36.040 --> 31:39.040
1945
+ And if you've gone on a train and you pass another train
1946
+
1947
+ 31:39.040 --> 31:41.040
1948
+ that's traveling the opposite direction,
1949
+
1950
+ 31:41.040 --> 31:43.040
1951
+ you can feel the buffeting.
1952
+
1953
+ 31:43.040 --> 31:46.040
1954
+ And so these kinds of microclimates
1955
+
1956
+ 31:46.040 --> 31:48.040
1957
+ affect our UAVs significantly.
1958
+
1959
+ 31:48.040 --> 31:51.040
1960
+ And they're impossible to model, essentially.
1961
+
1962
+ 31:51.040 --> 31:53.040
1963
+ I wouldn't say they're impossible to model,
1964
+
1965
+ 31:53.040 --> 31:55.040
1966
+ but the level of sophistication you would need
1967
+
1968
+ 31:55.040 --> 32:00.040
1969
+ in the model and the software would be tremendous.
1970
+
1971
+ 32:00.040 --> 32:03.040
1972
+ Plus, to get everything right would be awfully tedious.
1973
+
1974
+ 32:03.040 --> 32:05.040
1975
+ So the way we do this is over time,
1976
+
1977
+ 32:05.040 --> 32:10.040
1978
+ we figure out how to adapt to these conditions.
1979
+
1980
+ 32:10.040 --> 32:13.040
1981
+ So early on, we use the form of learning
1982
+
1983
+ 32:13.040 --> 32:15.040
1984
+ that we call iterative learning.
1985
+
1986
+ 32:15.040 --> 32:18.040
1987
+ So this idea, if you want to perform a task,
1988
+
1989
+ 32:18.040 --> 32:23.040
1990
+ there are a few things that you need to change
1991
+
1992
+ 32:23.040 --> 32:25.040
1993
+ and iterate over a few parameters
1994
+
1995
+ 32:25.040 --> 32:29.040
1996
+ that over time you can figure out.
1997
+
1998
+ 32:29.040 --> 32:34.040
1999
+ So I could call it policy gradient reinforcement learning,
2000
+
2001
+ 32:34.040 --> 32:36.040
2002
+ but actually it was just iterative learning.
2003
+
2004
+ 32:36.040 --> 32:38.040
2005
+ And so this was there way back.
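The "iterate over a few parameters" idea can be sketched as follows. This is a toy example by the editor: the quadratic stand-in for a real flight trial and all of the constants are made up, but the loop structure (fly the maneuver, score the error, nudge the parameters against a gradient estimate, repeat) is the point.

```python
import numpy as np

def flight_error(params):
    """Stand-in for one real trial: returns a scalar tracking error for a maneuver.
    In practice this number would come from flying the robot, not from a formula."""
    target = np.array([1.2, -0.4, 0.8])          # unknown "best" parameters
    return float(np.sum((params - target) ** 2))

def iterative_tuning(params, trials=40, step=0.3, eps=0.05):
    """Finite-difference parameter iteration, a crude policy-gradient flavor."""
    for _ in range(trials):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            bump = np.zeros_like(params)
            bump[i] = eps
            grad[i] = (flight_error(params + bump) - flight_error(params - bump)) / (2 * eps)
        params = params - step * grad
    return params

params = iterative_tuning(np.zeros(3))
print(params, flight_error(params))   # converges near the target on this toy problem
```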
2006
+
2007
+ 32:38.040 --> 32:40.040
2008
+ I think what's interesting is,
2009
+
2010
+ 32:40.040 --> 32:43.040
2011
+ if you look at autonomous vehicles today,
2012
+
2013
+ 32:43.040 --> 32:46.040
2014
+ learning occurs, could occur in two pieces.
2015
+
2016
+ 32:46.040 --> 32:48.040
2017
+ One is perception, understanding the world.
2018
+
2019
+ 32:48.040 --> 32:51.040
2020
+ Second is action, taking actions.
2021
+
2022
+ 32:51.040 --> 32:54.040
2023
+ Everything that I've seen that is successful
2024
+
2025
+ 32:54.040 --> 32:56.040
2026
+ is on the perception side of things.
2027
+
2028
+ 32:56.040 --> 32:59.040
2029
+ So in computer vision, we've made amazing strides
2030
+
2031
+ 32:59.040 --> 33:00.040
2032
+ in the last 10 years.
2033
+
2034
+ 33:00.040 --> 33:03.040
2035
+ So recognizing objects, actually detecting objects,
2036
+
2037
+ 33:03.040 --> 33:08.040
2038
+ classifying them and tagging them in some sense,
2039
+
2040
+ 33:08.040 --> 33:11.040
2041
+ annotating them, this is all done through machine learning.
2042
+
2043
+ 33:11.040 --> 33:13.040
2044
+ On the action side, on the other hand,
2045
+
2046
+ 33:13.040 --> 33:17.040
2047
+ I don't know of any examples where there are fielded systems
2048
+
2049
+ 33:17.040 --> 33:21.040
2050
+ where we actually learn the right behavior,
2051
+
2052
+ 33:21.040 --> 33:23.040
2053
+ outside of a single demonstration that is successful
2054
+
2055
+ 33:23.040 --> 33:25.040
2056
+ in the laboratory. This is the Holy Grail.
2057
+
2058
+ 33:25.040 --> 33:27.040
2059
+ Can you do end to end learning?
2060
+
2061
+ 33:27.040 --> 33:31.040
2062
+ Can you go from pixels to motor currents?
2063
+
2064
+ 33:31.040 --> 33:33.040
2065
+ This is really, really hard.
2066
+
2067
+ 33:33.040 --> 33:36.040
2068
+ And I think if you go forward,
2069
+
2070
+ 33:36.040 --> 33:38.040
2071
+ the right way to think about these things
2072
+
2073
+ 33:38.040 --> 33:43.040
2074
+ is data driven approaches, learning based approaches,
2075
+
2076
+ 33:43.040 --> 33:46.040
2077
+ in concert with model based approaches,
2078
+
2079
+ 33:46.040 --> 33:48.040
2080
+ which is the traditional way of doing things.
2081
+
2082
+ 33:48.040 --> 33:50.040
2083
+ So I think there's a piece,
2084
+
2085
+ 33:50.040 --> 33:52.040
2086
+ there's a role for each of these methodologies.
2087
+
2088
+ 33:52.040 --> 33:55.040
2089
+ So what do you think, just jumping out on topics,
2090
+
2091
+ 33:55.040 --> 33:57.040
2092
+ since you mentioned autonomous vehicles,
2093
+
2094
+ 33:57.040 --> 33:59.040
2095
+ what do you think are the limits on the perception side?
2096
+
2097
+ 33:59.040 --> 34:02.040
2098
+ So I've talked to Elon Musk,
2099
+
2100
+ 34:02.040 --> 34:04.040
2101
+ and there on the perception side,
2102
+
2103
+ 34:04.040 --> 34:07.040
2104
+ they're using primarily computer vision
2105
+
2106
+ 34:07.040 --> 34:09.040
2107
+ to perceive the environment.
2108
+
2109
+ 34:09.040 --> 34:13.040
2110
+ In your work with, because you work with the real world a lot,
2111
+
2112
+ 34:13.040 --> 34:15.040
2113
+ and the physical world,
2114
+
2115
+ 34:15.040 --> 34:17.040
2116
+ what are the limits of computer vision?
2117
+
2118
+ 34:17.040 --> 34:20.040
2119
+ Do you think you can solve autonomous vehicles,
2120
+
2121
+ 34:20.040 --> 34:22.040
2122
+ focusing on the perception side,
2123
+
2124
+ 34:22.040 --> 34:25.040
2125
+ focusing on vision alone and machine learning?
2126
+
2127
+ 34:25.040 --> 34:29.040
2128
+ So we also have a spin-off company, Exyn Technologies,
2129
+
2130
+ 34:29.040 --> 34:32.040
2131
+ that works underground in mines.
2132
+
2133
+ 34:32.040 --> 34:35.040
2134
+ So you go into mines, they're dark.
2135
+
2136
+ 34:35.040 --> 34:37.040
2137
+ They're dirty.
2138
+
2139
+ 34:37.040 --> 34:39.040
2140
+ You fly in a dirty area,
2141
+
2142
+ 34:39.040 --> 34:42.040
2143
+ there's stuff you kick up by the propellers,
2144
+
2145
+ 34:42.040 --> 34:44.040
2146
+ the downwash kicks up dust.
2147
+
2148
+ 34:44.040 --> 34:48.040
2149
+ I challenge you to get a computer vision algorithm to work there.
2150
+
2151
+ 34:48.040 --> 34:53.040
2152
+ So we use LIDARS in that setting.
2153
+
2154
+ 34:53.040 --> 34:57.040
2155
+ Indoors, and even outdoors when we fly through fields,
2156
+
2157
+ 34:57.040 --> 34:59.040
2158
+ I think there's a lot of potential
2159
+
2160
+ 34:59.040 --> 35:03.040
2161
+ for just solving the problem using computer vision alone.
2162
+
2163
+ 35:03.040 --> 35:05.040
2164
+ But I think the bigger question is,
2165
+
2166
+ 35:05.040 --> 35:08.040
2167
+ can you actually solve,
2168
+
2169
+ 35:08.040 --> 35:11.040
2170
+ or can you actually identify all the corner cases
2171
+
2172
+ 35:11.040 --> 35:14.040
2173
+ using a single sensing modality
2174
+
2175
+ 35:14.040 --> 35:16.040
2176
+ and using learning alone?
2177
+
2178
+ 35:16.040 --> 35:18.040
2179
+ What's your intuition there?
2180
+
2181
+ 35:18.040 --> 35:20.040
2182
+ So look, if you have a corner case
2183
+
2184
+ 35:20.040 --> 35:22.040
2185
+ and your algorithm doesn't work,
2186
+
2187
+ 35:22.040 --> 35:25.040
2188
+ your instinct is to go get data about the corner case
2189
+
2190
+ 35:25.040 --> 35:29.040
2191
+ and patch it up, learn how to deal with that corner case.
2192
+
2193
+ 35:29.040 --> 35:32.040
2194
+ But at some point,
2195
+
2196
+ 35:32.040 --> 35:36.040
2197
+ this is going to saturate, this approach is not viable.
2198
+
2199
+ 35:36.040 --> 35:39.040
2200
+ So today, computer vision algorithms
2201
+
2202
+ 35:39.040 --> 35:43.040
2203
+ can detect objects 90% of the time,
2204
+
2205
+ 35:43.040 --> 35:45.040
2206
+ classify them 90% of the time.
2207
+
2208
+ 35:45.040 --> 35:49.040
2209
+ Cats on the internet probably can do 95%, I don't know.
2210
+
2211
+ 35:49.040 --> 35:54.040
2212
+ But to get from 90% to 99%, you need a lot more data.
2213
+
2214
+ 35:54.040 --> 35:56.040
2215
+ And then I tell you, well, that's not enough
2216
+
2217
+ 35:56.040 --> 35:58.040
2218
+ because I have a safety critical application
2219
+
2220
+ 35:58.040 --> 36:01.040
2221
+ that want to go from 99% to 99.9%,
2222
+
2223
+ 36:01.040 --> 36:03.040
2224
+ well, that's even more data.
2225
+
2226
+ 36:03.040 --> 36:07.040
2227
+ So I think if you look at
2228
+
2229
+ 36:07.040 --> 36:11.040
2230
+ wanting accuracy on the x axis
2231
+
2232
+ 36:11.040 --> 36:15.040
2233
+ and look at the amount of data on the y axis,
2234
+
2235
+ 36:15.040 --> 36:18.040
2236
+ I believe that curve is an exponential curve.
2237
+
2238
+ 36:18.040 --> 36:21.040
2239
+ Wow, okay, it's even hard if it's linear.
2240
+
2241
+ 36:21.040 --> 36:24.040
2242
+ It's hard if it's linear, totally, but I think it's exponential.
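A back-of-the-envelope way to see why that curve would blow up (the editor's illustration, with an assumed exponent rather than measured numbers): if error falls off as a power law in the amount of data, then the data needed to reach a target error grows explosively as the target shrinks.

```latex
\varepsilon \approx c\,N^{-\alpha}
\quad\Longrightarrow\quad
N \approx \left(\frac{c}{\varepsilon}\right)^{1/\alpha}
% With an assumed alpha = 0.5: cutting the error from 10% to 1% multiplies the
% required data N by 10^{1/0.5} = 100, and the next factor of 10 in accuracy
% costs another factor of 100, so each additional "nine" is dramatically pricier.
```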
2243
+
2244
+ 36:24.040 --> 36:26.040
2245
+ And the other thing you have to think about
2246
+
2247
+ 36:26.040 --> 36:31.040
2248
+ is that this process is a very, very power hungry process
2249
+
2250
+ 36:31.040 --> 36:34.040
2251
+ to run data farms or servers.
2252
+
2253
+ 36:34.040 --> 36:36.040
2254
+ Power, do you mean literally power?
2255
+
2256
+ 36:36.040 --> 36:38.040
2257
+ Literally power, literally power.
2258
+
2259
+ 36:38.040 --> 36:43.040
2260
+ So in 2014, five years ago, and I don't have more recent data,
2261
+
2262
+ 36:43.040 --> 36:50.040
2263
+ 2% of US electricity consumption was from data farms.
2264
+
2265
+ 36:50.040 --> 36:54.040
2266
+ So we think about this as an information science
2267
+
2268
+ 36:54.040 --> 36:56.040
2269
+ and information processing problem.
2270
+
2271
+ 36:56.040 --> 36:59.040
2272
+ Actually, it is an energy processing problem.
2273
+
2274
+ 36:59.040 --> 37:02.040
2275
+ And so unless we've figured out better ways of doing this,
2276
+
2277
+ 37:02.040 --> 37:04.040
2278
+ I don't think this is viable.
2279
+
2280
+ 37:04.040 --> 37:08.040
2281
+ So talking about driving, which is a safety critical application
2282
+
2283
+ 37:08.040 --> 37:11.040
2284
+ and some aspect of the flight is safety critical,
2285
+
2286
+ 37:11.040 --> 37:14.040
2287
+ maybe philosophical question, maybe an engineering one.
2288
+
2289
+ 37:14.040 --> 37:16.040
2290
+ What problem do you think is harder to solve?
2291
+
2292
+ 37:16.040 --> 37:19.040
2293
+ Autonomous driving or autonomous flight?
2294
+
2295
+ 37:19.040 --> 37:21.040
2296
+ That's a really interesting question.
2297
+
2298
+ 37:21.040 --> 37:26.040
2299
+ I think autonomous flight has several advantages
2300
+
2301
+ 37:26.040 --> 37:30.040
2302
+ that autonomous driving doesn't have.
2303
+
2304
+ 37:30.040 --> 37:33.040
2305
+ So look, if I want to go from point A to point B,
2306
+
2307
+ 37:33.040 --> 37:35.040
2308
+ I have a very, very safe trajectory.
2309
+
2310
+ 37:35.040 --> 37:38.040
2311
+ Go vertically up to a maximum altitude,
2312
+
2313
+ 37:38.040 --> 37:41.040
2314
+ fly horizontally to just about the destination
2315
+
2316
+ 37:41.040 --> 37:43.040
2317
+ and then come down vertically.
2318
+
2319
+ 37:43.040 --> 37:46.040
2320
+ This is preprogrammed.
2321
+
2322
+ 37:46.040 --> 37:49.040
2323
+ The equivalent of that is very hard to find
2324
+
2325
+ 37:49.040 --> 37:53.040
2326
+ in a self driving car world because you're on the ground,
2327
+
2328
+ 37:53.040 --> 37:55.040
2329
+ you're in a two dimensional surface,
2330
+
2331
+ 37:55.040 --> 37:58.040
2332
+ and the trajectories on the two dimensional surface
2333
+
2334
+ 37:58.040 --> 38:01.040
2335
+ are more likely to encounter obstacles.
2336
+
2337
+ 38:01.040 --> 38:03.040
2338
+ I mean this in an intuitive sense,
2339
+
2340
+ 38:03.040 --> 38:05.040
2341
+ but mathematically true, that's...
2342
+
2343
+ 38:05.040 --> 38:08.040
2344
+ Mathematically as well, that's true.
2345
+
2346
+ 38:08.040 --> 38:11.040
2347
+ There's the other option in the 2D space of platooning
2348
+
2349
+ 38:11.040 --> 38:13.040
2350
+ or because there's so many obstacles,
2351
+
2352
+ 38:13.040 --> 38:15.040
2353
+ you can connect with those obstacles
2354
+
2355
+ 38:15.040 --> 38:16.040
2356
+ and all these kinds of problems.
2357
+
2358
+ 38:16.040 --> 38:18.040
2359
+ But those exist in the three dimensional space as well.
2360
+
2361
+ 38:18.040 --> 38:19.040
2362
+ So they do.
2363
+
2364
+ 38:19.040 --> 38:23.040
2365
+ So the question also implies how difficult are obstacles
2366
+
2367
+ 38:23.040 --> 38:25.040
2368
+ in the three dimensional space in flight?
2369
+
2370
+ 38:25.040 --> 38:27.040
2371
+ So that's the downside.
2372
+
2373
+ 38:27.040 --> 38:29.040
2374
+ I think in three dimensional space,
2375
+
2376
+ 38:29.040 --> 38:31.040
2377
+ you're modeling three dimensional world,
2378
+
2379
+ 38:31.040 --> 38:33.040
2380
+ not just because you want to avoid it,
2381
+
2382
+ 38:33.040 --> 38:35.040
2383
+ but you want to reason about it
2384
+
2385
+ 38:35.040 --> 38:37.040
2386
+ and you want to work in that three dimensional environment.
2387
+
2388
+ 38:37.040 --> 38:39.040
2389
+ And that's significantly harder.
2390
+
2391
+ 38:39.040 --> 38:41.040
2392
+ So that's one disadvantage.
2393
+
2394
+ 38:41.040 --> 38:43.040
2395
+ I think the second disadvantage is of course,
2396
+
2397
+ 38:43.040 --> 38:45.040
2398
+ anytime you fly, you have to put up
2399
+
2400
+ 38:45.040 --> 38:49.040
2401
+ with the peculiarities of aerodynamics
2402
+
2403
+ 38:49.040 --> 38:51.040
2404
+ and their complicated environments.
2405
+
2406
+ 38:51.040 --> 38:52.040
2407
+ How do you negotiate that?
2408
+
2409
+ 38:52.040 --> 38:54.040
2410
+ So that's always a problem.
2411
+
2412
+ 38:54.040 --> 38:57.040
2413
+ Do you see a time in the future where there is...
2414
+
2415
+ 38:57.040 --> 39:00.040
2416
+ You mentioned there's agriculture applications.
2417
+
2418
+ 39:00.040 --> 39:03.040
2419
+ So there's a lot of applications of flying robots.
2420
+
2421
+ 39:03.040 --> 39:07.040
2422
+ But do you see a time in the future where there is tens of thousands
2423
+
2424
+ 39:07.040 --> 39:10.040
2425
+ or maybe hundreds of thousands of delivery drones
2426
+
2427
+ 39:10.040 --> 39:14.040
2428
+ that fill the sky, a delivery of flying robots?
2429
+
2430
+ 39:14.040 --> 39:18.040
2431
+ I think there's a lot of potential for the last mile delivery.
2432
+
2433
+ 39:18.040 --> 39:21.040
2434
+ And so in crowded cities,
2435
+
2436
+ 39:21.040 --> 39:24.040
2437
+ I don't know if you go to a place like Hong Kong,
2438
+
2439
+ 39:24.040 --> 39:27.040
2440
+ just crossing the river can take half an hour.
2441
+
2442
+ 39:27.040 --> 39:32.040
2443
+ And while a drone can just do it in five minutes at most.
2444
+
2445
+ 39:32.040 --> 39:38.040
2446
+ I think you look at delivery of supplies to remote villages.
2447
+
2448
+ 39:38.040 --> 39:41.040
2449
+ I work with a nonprofit called WeRobotics.
2450
+
2451
+ 39:41.040 --> 39:43.040
2452
+ So they work in the Peruvian Amazon,
2453
+
2454
+ 39:43.040 --> 39:47.040
2455
+ where the only highways are rivers.
2456
+
2457
+ 39:47.040 --> 39:49.040
2458
+ And to get from point A to point B
2459
+
2460
+ 39:49.040 --> 39:52.040
2461
+ may take five hours.
2462
+
2463
+ 39:52.040 --> 39:56.040
2464
+ While with a drone, you can get there in 30 minutes.
2465
+
2466
+ 39:56.040 --> 39:59.040
2467
+ So just delivering drugs,
2468
+
2469
+ 39:59.040 --> 40:04.040
2470
+ retrieving samples for testing vaccines.
2471
+
2472
+ 40:04.040 --> 40:06.040
2473
+ I think there's huge potential here.
2474
+
2475
+ 40:06.040 --> 40:09.040
2476
+ So I think the challenges are not technological.
2477
+
2478
+ 40:09.040 --> 40:12.040
2479
+ The challenge is economical.
2480
+
2481
+ 40:12.040 --> 40:16.040
2482
+ The one thing I'll tell you that nobody thinks about
2483
+
2484
+ 40:16.040 --> 40:21.040
2485
+ is the fact that we've not made huge strides in battery technology.
2486
+
2487
+ 40:21.040 --> 40:22.040
2488
+ Yes, it's true.
2489
+
2490
+ 40:22.040 --> 40:24.040
2491
+ Batteries are becoming less expensive
2492
+
2493
+ 40:24.040 --> 40:27.040
2494
+ because we have these mega factories that are coming up.
2495
+
2496
+ 40:27.040 --> 40:29.040
2497
+ But they're all based on lithium based technologies.
2498
+
2499
+ 40:29.040 --> 40:34.040
2500
+ And if you look at the energy density and the power density,
2501
+
2502
+ 40:34.040 --> 40:39.040
2503
+ those are two fundamentally limiting numbers.
2504
+
2505
+ 40:39.040 --> 40:41.040
2506
+ So power density is important because for a UAV
2507
+
2508
+ 40:41.040 --> 40:43.040
2509
+ to take off vertically into the air,
2510
+
2511
+ 40:43.040 --> 40:47.040
2512
+ which most drones do, they don't have a runway,
2513
+
2514
+ 40:47.040 --> 40:52.040
2515
+ you consume roughly 200 watts per kilo at the small size.
2516
+
2517
+ 40:52.040 --> 40:54.040
2518
+ That's a lot.
2519
+
2520
+ 40:54.040 --> 40:58.040
2521
+ In contrast, the human brain consumes less than 80 watts,
2522
+
2523
+ 40:58.040 --> 41:00.040
2524
+ the whole of the human brain.
2525
+
2526
+ 41:00.040 --> 41:04.040
2527
+ So just imagine just lifting yourself into the air
2528
+
2529
+ 41:04.040 --> 41:08.040
2530
+ is like two or three light bulbs, which makes no sense to me.
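The hover-power figure quoted above translates into flight time very directly. The numbers below are assumed, representative values chosen by the editor for illustration, not measurements:

```python
# Rough hover endurance for a small multirotor (all values assumed for illustration).
mass_kg = 1.5            # vehicle + payload + battery
hover_w_per_kg = 200.0   # the ~200 W/kg figure quoted above
battery_wh = 80.0        # e.g. a ~3.6 Ah pack at ~22 V, order of magnitude only
usable_fraction = 0.8    # don't run lithium cells to empty

hover_power_w = mass_kg * hover_w_per_kg                        # 300 W just to stay aloft
endurance_min = 60.0 * battery_wh * usable_fraction / hover_power_w
print(f"hover power ~{hover_power_w:.0f} W, endurance ~{endurance_min:.0f} min")
```

At roughly 200 W per kilogram, every extra kilogram of payload or battery costs another couple of hundred watts, which is the vicious cycle behind the range and endurance limits discussed next.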
2531
+
2532
+ 41:08.040 --> 41:12.040
2533
+ Yeah, so you're going to have to at scale solve the energy problem
2534
+
2535
+ 41:12.040 --> 41:19.040
2536
+ then charging the batteries, storing the energy and so on.
2537
+
2538
+ 41:19.040 --> 41:21.040
2539
+ And then the storage is the second problem.
2540
+
2541
+ 41:21.040 --> 41:23.040
2542
+ But storage limits the range.
2543
+
2544
+ 41:23.040 --> 41:30.040
2545
+ But you have to remember that you have to burn a lot of it
2546
+
2547
+ 41:30.040 --> 41:32.040
2548
+ for a given time.
2549
+
2550
+ 41:32.040 --> 41:33.040
2551
+ So the burning is another problem.
2552
+
2553
+ 41:33.040 --> 41:35.040
2554
+ Which is a power question.
2555
+
2556
+ 41:35.040 --> 41:36.040
2557
+ Yes.
2558
+
2559
+ 41:36.040 --> 41:39.040
2560
+ And do you think just your intuition,
2561
+
2562
+ 41:39.040 --> 41:45.040
2563
+ there are breakthroughs in batteries on the horizon?
2564
+
2565
+ 41:45.040 --> 41:47.040
2566
+ How hard is that problem?
2567
+
2568
+ 41:47.040 --> 41:52.040
2569
+ Look, there are a lot of companies that are promising flying cars,
2570
+
2571
+ 41:52.040 --> 42:00.040
2572
+ that are autonomous, and that are clean.
2573
+
2574
+ 42:00.040 --> 42:02.040
2575
+ I think they're over promising.
2576
+
2577
+ 42:02.040 --> 42:05.040
2578
+ The autonomy piece is doable.
2579
+
2580
+ 42:05.040 --> 42:08.040
2581
+ The clean piece, I don't think so.
2582
+
2583
+ 42:08.040 --> 42:12.040
2584
+ There's another company that I work with called Jatatra.
2585
+
2586
+ 42:12.040 --> 42:16.040
2587
+ They make small jet engines.
2588
+
2589
+ 42:16.040 --> 42:20.040
2590
+ And they can get up to 50 miles an hour very easily and lift 50 kilos.
2591
+
2592
+ 42:20.040 --> 42:22.040
2593
+ But they're jet engines.
2594
+
2595
+ 42:22.040 --> 42:24.040
2596
+ They're efficient.
2597
+
2598
+ 42:24.040 --> 42:26.040
2599
+ They're a little louder than electric vehicles.
2600
+
2601
+ 42:26.040 --> 42:29.040
2602
+ But they can build flying cars.
2603
+
2604
+ 42:29.040 --> 42:33.040
2605
+ So your sense is that there's a lot of pieces that have come together.
2606
+
2607
+ 42:33.040 --> 42:39.040
2608
+ So on this crazy question, if you look at companies like Kitty Hawk,
2609
+
2610
+ 42:39.040 --> 42:45.040
2611
+ working on electric, so the clean, talking as the bashing through.
2612
+
2613
+ 42:45.040 --> 42:52.040
2614
+ It's a crazy dream, but you work with flight a lot.
2615
+
2616
+ 42:52.040 --> 42:58.040
2617
+ You've mentioned before that manned flights or carrying a human body
2618
+
2619
+ 42:58.040 --> 43:01.040
2620
+ is very difficult to do.
2621
+
2622
+ 43:01.040 --> 43:04.040
2623
+ So how crazy is flying cars?
2624
+
2625
+ 43:04.040 --> 43:11.040
2626
+ Do you think there will be a day when we have vertical takeoff and landing vehicles
2627
+
2628
+ 43:11.040 --> 43:17.040
2629
+ that are sufficiently affordable that we're going to see a huge amount of them?
2630
+
2631
+ 43:17.040 --> 43:21.040
2632
+ And they would look like something like we dream of when we think about flying cars.
2633
+
2634
+ 43:21.040 --> 43:23.040
2635
+ Yeah, like the Jetsons.
2636
+
2637
+ 43:23.040 --> 43:26.040
2638
+ So look, there are a lot of smart people working on this.
2639
+
2640
+ 43:26.040 --> 43:32.040
2641
+ And you never say something is not possible when you're people like Sebastian Thrun working on it.
2642
+
2643
+ 43:32.040 --> 43:35.040
2644
+ So I totally think it's viable.
2645
+
2646
+ 43:35.040 --> 43:38.040
2647
+ I question, again, the electric piece.
2648
+
2649
+ 43:38.040 --> 43:40.040
2650
+ The electric piece, yeah.
2651
+
2652
+ 43:40.040 --> 43:42.040
2653
+ For short distances, you can do it.
2654
+
2655
+ 43:42.040 --> 43:46.040
2656
+ And there's no reason to suggest that these all just have to be rotor crafts.
2657
+
2658
+ 43:46.040 --> 43:50.040
2659
+ You take off vertically, but then you morph into a forward flight.
2660
+
2661
+ 43:50.040 --> 43:52.040
2662
+ I think there are a lot of interesting designs.
2663
+
2664
+ 43:52.040 --> 43:56.040
2665
+ The question to me is, are these economically viable?
2666
+
2667
+ 43:56.040 --> 44:02.040
2668
+ And if you agree to do this with fossil fuels, it instantly immediately becomes viable.
2669
+
2670
+ 44:02.040 --> 44:04.040
2671
+ That's a real challenge.
2672
+
2673
+ 44:04.040 --> 44:09.040
2674
+ Do you think it's possible for robots and humans to collaborate successfully on tasks?
2675
+
2676
+ 44:09.040 --> 44:18.040
2677
+ So a lot of robotics folks that I talk to and work with, I mean, humans just add a giant mess to the picture.
2678
+
2679
+ 44:18.040 --> 44:22.040
2680
+ So it's best to remove them from consideration when solving specific tasks.
2681
+
2682
+ 44:22.040 --> 44:24.040
2683
+ It's very difficult to model.
2684
+
2685
+ 44:24.040 --> 44:26.040
2686
+ There's just a source of uncertainty.
2687
+
2688
+ 44:26.040 --> 44:36.040
2689
+ In your work with these agile flying robots, do you think there's a role for collaboration with humans?
2690
+
2691
+ 44:36.040 --> 44:43.040
2692
+ Is it best to model tasks in a way that doesn't have a human in the picture?
2693
+
2694
+ 44:43.040 --> 44:48.040
2695
+ I don't think we should ever think about robots without human in the picture.
2696
+
2697
+ 44:48.040 --> 44:54.040
2698
+ Ultimately, robots are there because we want them to solve problems for humans.
2699
+
2700
+ 44:54.040 --> 44:58.040
2701
+ But there's no general solution to this problem.
2702
+
2703
+ 44:58.040 --> 45:02.040
2704
+ I think if you look at human interaction and how humans interact with robots,
2705
+
2706
+ 45:02.040 --> 45:06.040
2707
+ you know, we think of these in sort of three different ways.
2708
+
2709
+ 45:06.040 --> 45:09.040
2710
+ One is the human commanding the robot.
2711
+
2712
+ 45:09.040 --> 45:13.040
2713
+ The second is the human collaborating with the robot.
2714
+
2715
+ 45:13.040 --> 45:19.040
2716
+ So for example, we work on how a robot can actually pick up things with a human and carry things.
2717
+
2718
+ 45:19.040 --> 45:21.040
2719
+ That's like true collaboration.
2720
+
2721
+ 45:21.040 --> 45:26.040
2722
+ And third, we think about humans as bystanders, self driving cars.
2723
+
2724
+ 45:26.040 --> 45:33.040
2725
+ What's the human's role and how do self driving cars acknowledge the presence of humans?
2726
+
2727
+ 45:33.040 --> 45:36.040
2728
+ So I think all of these things are different scenarios.
2729
+
2730
+ 45:36.040 --> 45:39.040
2731
+ It depends on what kind of humans, what kind of tasks.
2732
+
2733
+ 45:39.040 --> 45:45.040
2734
+ And I think it's very difficult to say that there's a general theory that we all have for this.
2735
+
2736
+ 45:45.040 --> 45:52.040
2737
+ But at the same time, it's also silly to say that we should think about robots independent of humans.
2738
+
2739
+ 45:52.040 --> 45:59.040
2740
+ So to me, human robot interaction is almost a mandatory aspect of everything we do.
2741
+
2742
+ 45:59.040 --> 46:00.040
2743
+ Yes.
2744
+
2745
+ 46:00.040 --> 46:05.040
2746
+ But, to which degree, so, your thoughts, if we jump to autonomous vehicles, for example,
2747
+
2748
+ 46:05.040 --> 46:10.040
2749
+ there's a big debate between what's called level two and level four.
2750
+
2751
+ 46:10.040 --> 46:13.040
2752
+ So semi autonomous and autonomous vehicles.
2753
+
2754
+ 46:13.040 --> 46:19.040
2755
+ And sort of the Tesla approach currently at least has a lot of collaboration between human and machine.
2756
+
2757
+ 46:19.040 --> 46:24.040
2758
+ So the human is supposed to actively supervise the operation of the robot.
2759
+
2760
+ 46:24.040 --> 46:33.040
2761
+ Part of the safety definition of how safe a robot is in that case is how effective is the human in monitoring it.
2762
+
2763
+ 46:33.040 --> 46:43.040
2764
+ Do you think that's ultimately not a good approach in sort of having a human in the picture,
2765
+
2766
+ 46:43.040 --> 46:51.040
2767
+ not as a bystander or part of the infrastructure, but really as part of what's required to make the system safe?
2768
+
2769
+ 46:51.040 --> 46:53.040
2770
+ This is harder than it sounds.
2771
+
2772
+ 46:53.040 --> 47:01.040
2773
+ I think, you know, if you, I mean, I'm sure you've driven before in highways and so on,
2774
+
2775
+ 47:01.040 --> 47:10.040
2776
+ it's really very hard to relinquish controls to a machine and then take over when needed.
2777
+
2778
+ 47:10.040 --> 47:18.040
2779
+ So I think Tesla's approach is interesting because it allows you to periodically establish some kind of contact with the car.
2780
+
2781
+ 47:18.040 --> 47:24.040
2782
+ Toyota, on the other hand, is thinking about shared autonomy or collaborative autonomy as a paradigm.
2783
+
2784
+ 47:24.040 --> 47:31.040
2785
+ If I may argue, these are very, very simple ways of human robot collaboration because the task is pretty boring.
2786
+
2787
+ 47:31.040 --> 47:34.040
2788
+ You sit in a vehicle, you go from point A to point B.
2789
+
2790
+ 47:34.040 --> 47:42.040
2791
+ I think the more interesting thing to me is, for example, search and rescue, I've got a human first responder, robot first responders.
2792
+
2793
+ 47:42.040 --> 47:44.040
2794
+ I got to do something.
2795
+
2796
+ 47:44.040 --> 47:45.040
2797
+ It's important.
2798
+
2799
+ 47:45.040 --> 47:47.040
2800
+ I have to do it in two minutes.
2801
+
2802
+ 47:47.040 --> 47:48.040
2803
+ The building is burning.
2804
+
2805
+ 47:48.040 --> 47:50.040
2806
+ There's been an explosion.
2807
+
2808
+ 47:50.040 --> 47:51.040
2809
+ It's collapsed.
2810
+
2811
+ 47:51.040 --> 47:52.040
2812
+ How do I do it?
2813
+
2814
+ 47:52.040 --> 47:58.040
2815
+ I think to me, those are the interesting things where it's very, very unstructured and what's the role of the human?
2816
+
2817
+ 47:58.040 --> 47:59.040
2818
+ What's the role of the robot?
2819
+
2820
+ 47:59.040 --> 48:02.040
2821
+ Clearly, there's lots of interesting challenges.
2822
+
2823
+ 48:02.040 --> 48:05.040
2824
+ As a field, I think we're going to make a lot of progress in this area.
2825
+
2826
+ 48:05.040 --> 48:07.040
2827
+ Yeah, it's an exciting form of collaboration.
2828
+
2829
+ 48:07.040 --> 48:08.040
2830
+ You're right.
2831
+
2832
+ 48:08.040 --> 48:15.040
2833
+ In the autonomous driving, the main enemy is just boredom of the human as opposed to the rescue operations.
2834
+
2835
+ 48:15.040 --> 48:23.040
2836
+ It's literally life and death and the collaboration enables the effective completion of the mission.
2837
+
2838
+ 48:23.040 --> 48:24.040
2839
+ So it's exciting.
2840
+
2841
+ 48:24.040 --> 48:27.040
2842
+ Well, in some sense, we're also doing this.
2843
+
2844
+ 48:27.040 --> 48:37.040
2845
+ You think about the human driving a car and almost invariably the human is trying to estimate the state of the car, the state of the environment, and so on.
2846
+
2847
+ 48:37.040 --> 48:40.040
2848
+ But what if the car were to estimate the state of the human?
2849
+
2850
+ 48:40.040 --> 48:47.040
2851
+ So for example, I'm sure you have a smartphone and the smartphone tries to figure out what you're doing and send you reminders.
2852
+
2853
+ 48:47.040 --> 48:53.040
2854
+ And oftentimes telling you to drive to a certain place, although you have no intention of going there, because it thinks that that's where you should be.
2855
+
2856
+ 48:53.040 --> 48:59.040
2857
+ Because of some Gmail calendar entry or something like that.
2858
+
2859
+ 48:59.040 --> 49:02.040
2860
+ And it's trying to constantly figure out who you are, what you're doing.
2861
+
2862
+ 49:02.040 --> 49:06.040
2863
+ If a car were to do that, maybe that would make the driver safer.
2864
+
2865
+ 49:06.040 --> 49:14.040
2866
+ Because the car is trying to figure out, is the driver paying attention, looking at his or her eyes, looking at saccadic movements.
2867
+
2868
+ 49:14.040 --> 49:16.040
2869
+ So I think the potential is there.
2870
+
2871
+ 49:16.040 --> 49:21.040
2872
+ But from the reverse side, it's not robot modeling, but it's human modeling.
2873
+
2874
+ 49:21.040 --> 49:23.040
2875
+ It's more in the human, right?
2876
+
2877
+ 49:23.040 --> 49:30.040
2878
+ And I think the robots can do a very good job of modeling humans if you really think about the framework that you have.
2879
+
2880
+ 49:30.040 --> 49:39.040
2881
+ A human sitting in a cockpit surrounded by sensors, all staring at him, in addition to be staring outside, but also staring at him.
2882
+
2883
+ 49:39.040 --> 49:41.040
2884
+ I think there's a real synergy there.
2885
+
2886
+ 49:41.040 --> 49:48.040
2887
+ Yeah, I love that problem because it's the new 21st century form of psychology actually, AI enabled psychology.
2888
+
2889
+ 49:48.040 --> 49:54.040
2890
+ A lot of people have sci fi inspired fears of walking robots like those from Boston Dynamics.
2891
+
2892
+ 49:54.040 --> 49:59.040
2893
+ If you just look at shows on Netflix and so on, or flying robots like those you work with.
2894
+
2895
+ 49:59.040 --> 50:03.040
2896
+ How would you, how do you think about those fears?
2897
+
2898
+ 50:03.040 --> 50:05.040
2899
+ How would you alleviate those fears?
2900
+
2901
+ 50:05.040 --> 50:09.040
2902
+ Do you have inklings, echoes of those same concerns?
2903
+
2904
+ 50:09.040 --> 50:23.040
2905
+ Any time we develop a technology meaning to have positive impact in the world, there's always a worry that somebody could subvert those technologies and use it in an adversarial setting.
2906
+
2907
+ 50:23.040 --> 50:25.040
2908
+ And robotics is no exception, right?
2909
+
2910
+ 50:25.040 --> 50:29.040
2911
+ So I think it's very easy to weaponize robots.
2912
+
2913
+ 50:29.040 --> 50:31.040
2914
+ I think we talk about swarms.
2915
+
2916
+ 50:31.040 --> 50:38.040
2917
+ One thing I worry a lot about is, for us to get swarms to work and do something reliably, it's really hard.
2918
+
2919
+ 50:38.040 --> 50:44.040
2920
+ But suppose I have this challenge of trying to destroy something.
2921
+
2922
+ 50:44.040 --> 50:49.040
2923
+ And I have a swarm of robots where only one out of the swarm needs to get to its destination.
2924
+
2925
+ 50:49.040 --> 50:53.040
2926
+ So that suddenly becomes a lot more doable.
2927
+
2928
+ 50:53.040 --> 51:00.040
2929
+ And so I worry about this general idea of using autonomy with lots and lots of agents.
2930
+
2931
+ 51:00.040 --> 51:04.040
2932
+ I mean, having said that, look, a lot of this technology is not very mature.
2933
+
2934
+ 51:04.040 --> 51:12.040
2935
+ My favorite saying is that if somebody had to develop this technology, wouldn't you rather the good guys do it?
2936
+
2937
+ 51:12.040 --> 51:21.040
2938
+ So the good guys have a good understanding of the technology so they can figure out how this technology is being used in a bad way or could be used in a bad way and try to defend against it?
2939
+
2940
+ 51:21.040 --> 51:23.040
2941
+ So we think a lot about that.
2942
+
2943
+ 51:23.040 --> 51:28.040
2944
+ So we're doing research on how to defend against swarms, for example.
2945
+
2946
+ 51:28.040 --> 51:29.040
2947
+ That's interesting.
2948
+
2949
+ 51:29.040 --> 51:36.040
2950
+ There is, in fact, a report by the National Academies on counter UAS technologies.
2951
+
2952
+ 51:36.040 --> 51:38.040
2953
+ This is a real threat.
2954
+
2955
+ 51:38.040 --> 51:47.040
2956
+ But we're also thinking about how to defend against this and knowing how swarms work, knowing how autonomy works is, I think, very important.
2957
+
2958
+ 51:47.040 --> 51:49.040
2959
+ So it's not just politicians?
2960
+
2961
+ 51:49.040 --> 51:51.040
2962
+ You think engineers have a role in this discussion?
2963
+
2964
+ 51:51.040 --> 51:52.040
2965
+ Absolutely.
2966
+
2967
+ 51:52.040 --> 51:59.040
2968
+ I think the days where politicians can be agnostic to technology are gone.
2969
+
2970
+ 51:59.040 --> 52:05.040
2971
+ I think every politician needs to be literate in technology.
2972
+
2973
+ 52:05.040 --> 52:09.040
2974
+ And I often say technology is the new liberal art.
2975
+
2976
+ 52:09.040 --> 52:18.040
2977
+ Understanding how technology will change your life, I think, is important and every human being needs to understand that.
2978
+
2979
+ 52:18.040 --> 52:22.040
2980
+ And maybe we can elect some engineers to office as well on the other side.
2981
+
2982
+ 52:22.040 --> 52:25.040
2983
+ What are the biggest open problems in robotics, in your view?
2984
+
2985
+ 52:25.040 --> 52:27.040
2986
+ You said we're in the early days in some sense.
2987
+
2988
+ 52:27.040 --> 52:30.040
2989
+ What are the problems we would like to solve in robotics?
2990
+
2991
+ 52:30.040 --> 52:32.040
2992
+ I think there are lots of problems, right?
2993
+
2994
+ 52:32.040 --> 52:36.040
2995
+ But I would phrase it in the following way.
2996
+
2997
+ 52:36.040 --> 52:46.040
2998
+ If you look at the robots we're building, they're still very much tailored towards doing specific tasks in specific settings.
2999
+
3000
+ 52:46.040 --> 52:59.040
3001
+ I think the question of how do you get them to operate in much broader settings where things can change in unstructured environments is up in the air.
3002
+
3003
+ 52:59.040 --> 53:02.040
3004
+ So think of the self driving cars.
3005
+
3006
+ 53:02.040 --> 53:05.040
3007
+ Today, we can build a self driving car in a parking lot.
3008
+
3009
+ 53:05.040 --> 53:09.040
3010
+ We can do level five autonomy in a parking lot.
3011
+
3012
+ 53:09.040 --> 53:17.040
3013
+ But can you do a level five autonomy in the streets of Napoli in Italy or Mumbai in India?
3014
+
3015
+ 53:17.040 --> 53:18.040
3016
+ No.
3017
+
3018
+ 53:18.040 --> 53:27.040
3019
+ So in some sense, when we think about robotics, we have to think about where they're functioning, what kind of environment, what kind of a task.
3020
+
3021
+ 53:27.040 --> 53:32.040
3022
+ We have no understanding of how to put both those things together.
3023
+
3024
+ 53:32.040 --> 53:36.040
3025
+ So we're in the very early days of applying it to the physical world.
3026
+
3027
+ 53:36.040 --> 53:39.040
3028
+ And I was just in Naples, actually.
3029
+
3030
+ 53:39.040 --> 53:46.040
3031
+ And there's levels of difficulty and complexity depending on which area you're applying it to.
3032
+
3033
+ 53:46.040 --> 53:47.040
3034
+ I think so.
3035
+
3036
+ 53:47.040 --> 53:51.040
3037
+ And we don't have a systematic way of understanding that.
3038
+
3039
+ 53:51.040 --> 54:00.040
3040
+ Everybody says just because a computer can now beat a human at any board game, we suddenly know something about intelligence.
3041
+
3042
+ 54:00.040 --> 54:01.040
3043
+ That's not true.
3044
+
3045
+ 54:01.040 --> 54:04.040
3046
+ A computer board game is very, very structured.
3047
+
3048
+ 54:04.040 --> 54:11.040
3049
+ It is the equivalent of working in a Henry Ford factory where things, parts come, you assemble, move on.
3050
+
3051
+ 54:11.040 --> 54:14.040
3052
+ It's a very, very, very structured setting.
3053
+
3054
+ 54:14.040 --> 54:15.040
3055
+ That's the easiest thing.
3056
+
3057
+ 54:15.040 --> 54:18.040
3058
+ And we know how to do that.
3059
+
3060
+ 54:18.040 --> 54:23.040
3061
+ So you've done a lot of incredible work at the UPenn, University of Pennsylvania, GRASP Lab.
3062
+
3063
+ 54:23.040 --> 54:26.040
3064
+ You're now Dean of Engineering at UPenn.
3065
+
3066
+ 54:26.040 --> 54:34.040
3067
+ What advice do you have for a new bright eyed undergrad interested in robotics or AI or engineering?
3068
+
3069
+ 54:34.040 --> 54:37.040
3070
+ Well, I think there's really three things.
3071
+
3072
+ 54:37.040 --> 54:45.040
3073
+ One is you have to get used to the idea that the world will not be the same in five years or four years whenever you graduate, right?
3074
+
3075
+ 54:45.040 --> 54:46.040
3076
+ Which is really hard to do.
3077
+
3078
+ 54:46.040 --> 54:53.040
3079
+ So this thing about predicting the future, every one of us needs to be trying to predict the future always.
3080
+
3081
+ 54:53.040 --> 55:01.040
3082
+ Not because you'll be any good at it, but by thinking about it, I think you sharpen your senses and you become smarter.
3083
+
3084
+ 55:01.040 --> 55:02.040
3085
+ So that's number one.
3086
+
3087
+ 55:02.040 --> 55:09.040
3088
+ Number two, it's a corollary of the first piece, which is you really don't know what's going to be important.
3089
+
3090
+ 55:09.040 --> 55:15.040
3091
+ So this idea that I'm going to specialize in something which will allow me to go in a particular direction.
3092
+
3093
+ 55:15.040 --> 55:22.040
3094
+ It may be interesting, but it's important also to have this breadth so you have this jumping off point.
3095
+
3096
+ 55:22.040 --> 55:25.040
3097
+ I think the third thing, and this is where I think Penn excels.
3098
+
3099
+ 55:25.040 --> 55:30.040
3100
+ I mean, we teach engineering, but it's always in the context of the liberal arts.
3101
+
3102
+ 55:30.040 --> 55:32.040
3103
+ It's always in the context of society.
3104
+
3105
+ 55:32.040 --> 55:35.040
3106
+ As engineers, we cannot afford to lose sight of that.
3107
+
3108
+ 55:35.040 --> 55:37.040
3109
+ So I think that's important.
3110
+
3111
+ 55:37.040 --> 55:43.040
3112
+ But I think one thing that people underestimate when they do robotics is the importance of mathematical foundations,
3113
+
3114
+ 55:43.040 --> 55:47.040
3115
+ the importance of representations.
3116
+
3117
+ 55:47.040 --> 55:56.040
3118
+ Not everything can just be solved by looking for ROS packages on the Internet or to find a deep neural network that works.
3119
+
3120
+ 55:56.040 --> 56:06.040
3121
+ I think the representation question is key, even to machine learning, where if you ever hope to achieve or get to explainable AI,
3122
+
3123
+ 56:06.040 --> 56:09.040
3124
+ somehow there need to be representations that you can understand.
3125
+
3126
+ 56:09.040 --> 56:16.040
3127
+ So if you want to do robotics, you should also do mathematics, and you said liberal arts, a little literature.
3128
+
3129
+ 56:16.040 --> 56:19.040
3130
+ If you want to build a robot, you should be reading Dostoyevsky.
3131
+
3132
+ 56:19.040 --> 56:20.040
3133
+ I agree with that.
3134
+
3135
+ 56:20.040 --> 56:21.040
3136
+ Very good.
3137
+
3138
+ 56:21.040 --> 56:24.040
3139
+ So Vijay, thank you so much for talking today. It was an honor.
3140
+
3141
+ 56:24.040 --> 56:47.040
3142
+ Thank you. It was just a very exciting conversation. Thank you.
3143
+
vtt/episode_038_large.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_038_small.vtt ADDED
The diff for this file is too large to render. See raw diff
 
vtt/episode_039_small.vtt ADDED
@@ -0,0 +1,1994 @@
1
+ WEBVTT
2
+
3
+ 00:00.000 --> 00:02.800
4
+ The following is a conversation with Colin Angle.
5
+
6
+ 00:02.800 --> 00:05.840
7
+ He's the CEO and cofounder of iRobot,
8
+
9
+ 00:05.840 --> 00:09.680
10
+ a robotics company that for 29 years has been creating robots
11
+
12
+ 00:09.680 --> 00:12.600
13
+ that operate successfully in the real world.
14
+
15
+ 00:12.600 --> 00:15.600
16
+ Not as a demo or on a scale of dozens,
17
+
18
+ 00:15.600 --> 00:18.880
19
+ but on a scale of thousands and millions.
20
+
21
+ 00:18.880 --> 00:24.320
22
+ As of this year, iRobot has sold more than 25 million robots
23
+
24
+ 00:24.320 --> 00:28.200
25
+ to consumers, including the Roomba vacuum cleaning robot,
26
+
27
+ 00:28.200 --> 00:30.000
28
+ the Braava floor mopping robot,
29
+
30
+ 00:30.000 --> 00:34.000
31
+ and soon the Terra lawn mowing robot.
32
+
33
+ 00:34.000 --> 00:37.680
34
+ 25 million robots successfully operating autonomously
35
+
36
+ 00:37.680 --> 00:39.680
37
+ in real people's homes.
38
+
39
+ 00:39.680 --> 00:42.120
40
+ To me, it's an incredible accomplishment
41
+
42
+ 00:42.120 --> 00:45.120
43
+ of science, engineering, logistics,
44
+
45
+ 00:45.120 --> 00:48.800
46
+ and all kinds of general entrepreneurial innovation.
47
+
48
+ 00:48.800 --> 00:51.360
49
+ Most robotics companies fail.
50
+
51
+ 00:51.360 --> 00:56.880
52
+ iRobot has survived and succeeded for 29 years.
53
+
54
+ 00:56.880 --> 01:00.160
55
+ I spent all day at iRobot, including a long tour
56
+
57
+ 01:00.160 --> 01:03.520
58
+ and conversation with Colin about the history of iRobot,
59
+
60
+ 01:03.520 --> 01:06.720
61
+ and then sat down for this podcast conversation
62
+
63
+ 01:06.720 --> 01:08.560
64
+ that would have been much longer
65
+
66
+ 01:08.560 --> 01:10.760
67
+ if I didn't spend all day learning about
68
+
69
+ 01:10.760 --> 01:13.960
70
+ and playing with the various robots in the company's history.
71
+
72
+ 01:13.960 --> 01:17.480
73
+ I'll release the video of the tour separately.
74
+
75
+ 01:17.480 --> 01:20.720
76
+ Colin, iRobot, its founding team,
77
+
78
+ 01:20.720 --> 01:23.200
79
+ its current team, and its mission
80
+
81
+ 01:23.200 --> 01:26.200
82
+ has been and continues to be an inspiration to me
83
+
84
+ 01:26.200 --> 01:28.880
85
+ and thousands of engineers who are working hard
86
+
87
+ 01:28.880 --> 01:33.000
88
+ to create AI systems that help real people.
89
+
90
+ 01:33.000 --> 01:35.640
91
+ This is the Artificial Intelligence Podcast.
92
+
93
+ 01:35.640 --> 01:38.000
94
+ If you enjoy it, subscribe on YouTube,
95
+
96
+ 01:38.000 --> 01:41.240
97
+ give it five stars on iTunes, support it on Patreon,
98
+
99
+ 01:41.240 --> 01:43.280
100
+ or simply connect with me on Twitter
101
+
102
+ 01:43.280 --> 01:47.120
103
+ at Lex Fridman, spelled F R I D M A N.
104
+
105
+ 01:47.120 --> 01:51.080
106
+ And now, here's my conversation with Colin Angle.
107
+
108
+ 01:52.120 --> 01:55.120
109
+ In his 1942 short story, Runaround,
110
+
111
+ 01:55.120 --> 01:58.520
112
+ from his I, Robot collection, Asimov,
113
+
114
+ 01:59.400 --> 02:02.840
115
+ proposed the three laws of robotics in order,
116
+
117
+ 02:02.840 --> 02:06.800
118
+ don't harm humans, obey orders, protect yourself.
119
+
120
+ 02:06.800 --> 02:07.640
121
+ So two questions.
122
+
123
+ 02:07.640 --> 02:11.640
124
+ First, does the Roomba follow these three laws?
125
+
126
+ 02:11.640 --> 02:14.760
127
+ And also, more seriously,
128
+
129
+ 02:14.760 --> 02:17.120
130
+ what role do you hope to see robots take
131
+
132
+ 02:17.120 --> 02:20.280
133
+ in modern society and in the future world?
134
+
135
+ 02:20.280 --> 02:25.280
136
+ So the three laws are very thought provoking
137
+
138
+ 02:25.720 --> 02:30.720
139
+ and require such a profound understanding
140
+
141
+ 02:31.360 --> 02:36.280
142
+ of the world a robot lives in,
143
+
144
+ 02:36.280 --> 02:38.360
145
+ the ramifications of its action
146
+
147
+ 02:38.360 --> 02:40.040
148
+ and its own sense of self,
149
+
150
+ 02:40.040 --> 02:45.040
151
+ that it's not a relevant bar,
152
+
153
+ 02:46.640 --> 02:48.360
154
+ at least it won't be a relevant bar
155
+
156
+ 02:48.360 --> 02:50.160
157
+ for decades to come.
158
+
159
+ 02:50.160 --> 02:54.560
160
+ And so, if Roomba follows the three laws,
161
+
162
+ 02:54.560 --> 02:56.840
163
+ and I believe it does,
164
+
165
+ 02:58.040 --> 03:00.920
166
+ it is designed to help humans, not hurt them,
167
+
168
+ 03:00.920 --> 03:03.120
169
+ it's designed to be inherently safe,
170
+
171
+ 03:03.120 --> 03:05.960
172
+ and we design it to last a long time.
173
+
174
+ 03:07.200 --> 03:11.600
175
+ It's not through any AI or intent on the robot's part.
176
+
177
+ 03:11.600 --> 03:14.960
178
+ It's because following the three laws
179
+
180
+ 03:14.960 --> 03:18.200
181
+ is aligned with being a good robot product.
182
+
183
+ 03:19.600 --> 03:23.120
184
+ So I guess it does,
185
+
186
+ 03:23.120 --> 03:27.240
187
+ but not by explicit design.
188
+
189
+ 03:27.240 --> 03:28.800
190
+ So then the bigger picture,
191
+
192
+ 03:28.800 --> 03:31.560
193
+ what role do you hope to see robotics,
194
+
195
+ 03:31.560 --> 03:36.560
196
+ robots take in what's currently mostly a world of humans?
197
+
198
+ 03:37.360 --> 03:42.360
199
+ We need robots to help us continue
200
+
201
+ 03:42.360 --> 03:44.960
202
+ to improve our standard of living.
203
+
204
+ 03:46.160 --> 03:51.160
205
+ We need robots because the average age of humanity
206
+
207
+ 03:52.840 --> 03:55.040
208
+ is increasing very quickly,
209
+
210
+ 03:55.040 --> 03:59.760
211
+ and simply the number of people young enough
212
+
213
+ 03:59.760 --> 04:02.480
214
+ and spry enough to care for the elder
215
+
216
+ 04:03.880 --> 04:07.880
217
+ growing demographic is inadequate.
218
+
219
+ 04:08.800 --> 04:11.440
220
+ And so what is the role of robots?
221
+
222
+ 04:11.440 --> 04:15.000
223
+ Today, the role is to make our lives a little easier,
224
+
225
+ 04:15.000 --> 04:17.400
226
+ a little cleaner, maybe a little healthier.
227
+
228
+ 04:18.520 --> 04:22.160
229
+ But in time, robots are gonna be the difference
230
+
231
+ 04:22.160 --> 04:25.720
232
+ between real gut wrenching declines
233
+
234
+ 04:25.720 --> 04:28.360
235
+ in our ability to live independently
236
+
237
+ 04:28.360 --> 04:30.280
238
+ and maintain our standard of living,
239
+
240
+ 04:30.280 --> 04:34.920
241
+ and a future that is the bright one
242
+
243
+ 04:34.920 --> 04:37.520
244
+ where we have more control of our lives,
245
+
246
+ 04:37.520 --> 04:40.560
247
+ can spend more of our time focused
248
+
249
+ 04:40.560 --> 04:43.280
250
+ on activities we choose.
251
+
252
+ 04:44.680 --> 04:48.840
253
+ And I'm so honored and excited to be
254
+
255
+ 04:48.840 --> 04:50.520
256
+ playing a role in that journey.
257
+
258
+ 04:50.520 --> 04:51.800
259
+ So you've given me a tour,
260
+
261
+ 04:51.800 --> 04:54.280
262
+ you showed me some of the long histories now,
263
+
264
+ 04:54.280 --> 04:57.280
265
+ 29 years that iRobot has been at it,
266
+
267
+ 04:57.280 --> 04:59.280
268
+ creating some incredible robots.
269
+
270
+ 04:59.280 --> 05:01.280
271
+ You showed me PackBot,
272
+
273
+ 05:01.280 --> 05:03.200
274
+ you showed me a bunch of other stuff
275
+
276
+ 05:03.200 --> 05:04.600
277
+ that led up to Roomba,
278
+
279
+ 05:04.600 --> 05:08.760
280
+ that led to Braava and Terra.
281
+
282
+ 05:08.760 --> 05:13.760
283
+ So let's skip that incredible history
284
+
285
+ 05:14.080 --> 05:15.080
286
+ in the interest of time,
287
+
288
+ 05:15.080 --> 05:16.160
289
+ because we already talked about it,
290
+
291
+ 05:16.160 --> 05:18.080
292
+ I'll show this incredible footage.
293
+
294
+ 05:18.080 --> 05:22.680
295
+ You mentioned elderly and robotics and society.
296
+
297
+ 05:22.680 --> 05:26.320
298
+ I think the home is a fascinating place for robots to be.
299
+
300
+ 05:26.320 --> 05:29.840
301
+ So where do you see robots in the home?
302
+
303
+ 05:29.840 --> 05:31.680
304
+ Currently, I would say, once again,
305
+
306
+ 05:31.680 --> 05:34.560
307
+ probably most homes in the world don't have a robot.
308
+
309
+ 05:34.560 --> 05:36.200
310
+ So how do you see that changing?
311
+
312
+ 05:36.200 --> 05:39.880
313
+ Where do you think is the big initial value add
314
+
315
+ 05:39.880 --> 05:41.960
316
+ that robots can do?
317
+
318
+ 05:41.960 --> 05:46.960
319
+ So iRobot has sort of over the years narrowed in on the home,
320
+
321
+ 05:48.160 --> 05:53.160
322
+ the consumer's home as the place where we want to innovate
323
+
324
+ 05:53.160 --> 05:58.160
325
+ and deliver tools that will help a home be
326
+
327
+ 06:00.840 --> 06:04.280
328
+ a more automatically maintained place,
329
+
330
+ 06:04.280 --> 06:06.840
331
+ a healthier place, a safer place
332
+
333
+ 06:06.840 --> 06:11.520
334
+ and perhaps even a more efficient place to be.
335
+
336
+ 06:11.520 --> 06:16.520
337
+ And today vacuum we mop, soon we'll be mowing your lawn,
338
+
339
+ 06:17.000 --> 06:21.520
340
+ but where things are going is,
341
+
342
+ 06:22.760 --> 06:27.120
343
+ when do we get to the point where the home,
344
+
345
+ 06:27.120 --> 06:29.160
346
+ not just the robots that live in your home,
347
+
348
+ 06:29.160 --> 06:32.200
349
+ but the home itself becomes part of a system
350
+
351
+ 06:32.200 --> 06:36.000
352
+ that maintains itself and plays an active role
353
+
354
+ 06:36.000 --> 06:40.800
355
+ in caring for and helping the people live in that home.
356
+
357
+ 06:40.800 --> 06:43.240
358
+ And I see everything that we're doing
359
+
360
+ 06:43.240 --> 06:46.200
361
+ as steps along the path toward that future.
362
+
363
+ 06:46.200 --> 06:47.760
364
+ So what are the steps?
365
+
366
+ 06:47.760 --> 06:51.760
367
+ So if we can summarize some of the history of Roomba,
368
+
369
+ 06:53.280 --> 06:55.560
370
+ you've mentioned, and maybe you can elaborate on it,
371
+
372
+ 06:55.560 --> 06:57.280
373
+ but you mentioned that the early days
374
+
375
+ 06:57.280 --> 07:02.280
376
+ were really taking a robot from something that works
377
+
378
+ 07:02.360 --> 07:04.920
379
+ either in the lab or something that works in the field
380
+
381
+ 07:04.920 --> 07:09.920
382
+ that helps soldiers do the difficult work they do
383
+
384
+ 07:10.240 --> 07:12.680
385
+ to actually be in the hands of consumers
386
+
387
+ 07:12.680 --> 07:15.680
388
+ and tens of thousands, hundreds of thousands of robots
389
+
390
+ 07:15.680 --> 07:18.520
391
+ that don't break down over how much people love them
392
+
393
+ 07:18.520 --> 07:21.480
394
+ over months of very extensive use.
395
+
396
+ 07:21.480 --> 07:22.880
397
+ So that was the big first step.
398
+
399
+ 07:22.880 --> 07:26.040
400
+ And then the second big step was the ability
401
+
402
+ 07:26.040 --> 07:29.920
403
+ to sense the environment, to build a map, to localize,
404
+
405
+ 07:29.920 --> 07:32.600
406
+ to be able to build a picture of the home
407
+
408
+ 07:32.600 --> 07:34.640
409
+ that the human can then attach labels to
410
+
411
+ 07:34.640 --> 07:38.440
412
+ in terms of giving some semantic knowledge
413
+
414
+ 07:38.440 --> 07:40.920
415
+ to the robot about its environment.
416
+
417
+ 07:40.920 --> 07:44.880
418
+ Okay, so that's like a huge, two big, huge steps.
419
+
420
+ 07:46.280 --> 07:47.560
421
+ Maybe you can comment on them,
422
+
423
+ 07:47.560 --> 07:51.080
424
+ but also what is the next step
425
+
426
+ 07:51.080 --> 07:54.760
427
+ of making a robot part of the home?
428
+
429
+ 07:54.760 --> 07:55.600
430
+ Sure.
431
+
432
+ 07:55.600 --> 08:00.600
433
+ So the goal is to make a home that takes care of itself,
434
+
435
+ 08:01.320 --> 08:03.720
436
+ takes care of the people in the home
437
+
438
+ 08:03.720 --> 08:07.880
439
+ and gives a user an experience of just living their life
440
+
441
+ 08:07.880 --> 08:10.880
442
+ and the home is somehow doing the right thing,
443
+
444
+ 08:10.880 --> 08:14.160
445
+ turning on and off lights when you leave,
446
+
447
+ 08:14.160 --> 08:17.280
448
+ cleaning up the environment.
449
+
450
+ 08:17.280 --> 08:22.280
451
+ And we went from robots that were right in the lab,
452
+
453
+ 08:22.280 --> 08:26.760
454
+ but were both too expensive and not sufficiently capable
455
+
456
+ 08:26.760 --> 08:30.480
457
+ to ever do an acceptable job of anything
458
+
459
+ 08:30.480 --> 08:34.080
460
+ other than being a toy or a curio in your home
461
+
462
+ 08:34.080 --> 08:39.080
463
+ to something that was both affordable
464
+
465
+ 08:38.880 --> 08:42.440
466
+ and sufficiently effective to drive,
467
+
468
+ 08:42.440 --> 08:45.360
469
+ be above threshold and drive purchase intent.
470
+
471
+ 08:47.520 --> 08:50.000
472
+ Now we've disrupted the intent of the work
473
+
474
+ 08:50.000 --> 08:54.400
475
+ and now we've disrupted the entire vacuuming industry.
476
+
477
+ 08:55.480 --> 08:59.840
478
+ The number one selling vacuums, for example, in the US
479
+
480
+ 08:59.840 --> 09:02.480
481
+ are Roombas, so not robot vacuums,
482
+
483
+ 09:02.480 --> 09:05.160
484
+ but vacuums and that's really crazy and weird.
485
+
486
+ 09:05.160 --> 09:08.080
487
+ We need to pause that, I mean, that's incredible.
488
+
489
+ 09:08.080 --> 09:10.520
490
+ That's incredible that a robot
491
+
492
+ 09:10.520 --> 09:15.520
493
+ is the number one selling thing that does something.
494
+
495
+ 09:15.560 --> 09:16.400
496
+ Yep.
497
+
498
+ 09:16.400 --> 09:18.240
499
+ Something as essential as vacuuming.
500
+
501
+ 09:18.240 --> 09:20.120
502
+ So we're... Congratulations.
503
+
504
+ 09:20.120 --> 09:20.960
505
+ Thank you.
506
+
507
+ 09:20.960 --> 09:22.440
508
+ It's still kind of fun to say,
509
+
510
+ 09:22.440 --> 09:26.600
511
+ but just because this was a crazy idea
512
+
513
+ 09:26.600 --> 09:30.920
514
+ that just started in a room here,
515
+
516
+ 09:30.920 --> 09:33.800
517
+ we're like, do you think we can do this?
518
+
519
+ 09:35.000 --> 09:36.160
520
+ Hey, let's give it a try.
521
+
522
+ 09:38.000 --> 09:40.440
523
+ But now the robots are starting
524
+
525
+ 09:40.440 --> 09:42.840
526
+ to understand their environment.
527
+
528
+ 09:42.840 --> 09:45.280
529
+ And if you think about the next step,
530
+
531
+ 09:45.280 --> 09:47.880
532
+ there's two dimensions.
533
+
534
+ 09:48.720 --> 09:53.040
535
+ I've been working so hard since the beginning of iRobot
536
+
537
+ 09:53.040 --> 09:55.080
538
+ to make robots are autonomous,
539
+
540
+ 09:55.080 --> 09:59.120
541
+ that they're smart enough and understand their task enough
542
+
543
+ 09:59.120 --> 10:02.480
544
+ that they can just go do it without human involvement.
545
+
546
+ 10:04.160 --> 10:07.440
547
+ Now what I'm really excited and working on
548
+
549
+ 10:07.440 --> 10:09.520
550
+ is how do I make them less autonomous?
551
+
552
+ 10:09.520 --> 10:14.520
553
+ Meaning that the robot is supposed to be your partner,
554
+
555
+ 10:15.080 --> 10:20.080
556
+ not this automaton that just goes and does what a robot does.
557
+
558
+ 10:20.240 --> 10:22.440
559
+ And so that if you tell it,
560
+
561
+ 10:22.440 --> 10:27.160
562
+ hey, I just dropped some flour by the fridge in the kitchen.
563
+
564
+ 10:27.160 --> 10:29.000
565
+ Can you deal with it?
566
+
567
+ 10:29.000 --> 10:31.840
568
+ Wouldn't be awesome if the right thing just happened
569
+
570
+ 10:31.840 --> 10:35.240
571
+ based on that utterance.
572
+
573
+ 10:35.240 --> 10:37.960
574
+ And to some extent that's less autonomous
575
+
576
+ 10:37.960 --> 10:40.120
577
+ because it's actually listening to you,
578
+
579
+ 10:40.120 --> 10:44.400
580
+ understanding the context and intent of the sentence,
581
+
582
+ 10:44.400 --> 10:49.400
583
+ mapping it against its understanding of the home it lives in
584
+
585
+ 10:50.040 --> 10:52.680
586
+ and knowing what to do.
587
+
588
+ 10:52.680 --> 10:56.360
589
+ And so that's an area of research.
590
+
591
+ 10:56.360 --> 10:59.360
592
+ It's an area where we're starting to roll out features.
593
+
594
+ 10:59.360 --> 11:02.880
595
+ You can now tell your robot to clean up the kitchen
596
+
597
+ 11:02.880 --> 11:05.880
598
+ and it knows what the kitchen is and can do that.
599
+
600
+ 11:05.880 --> 11:09.360
601
+ And that's sort of 1.0 of where we're going.
602
+
603
+ 11:10.440 --> 11:12.480
604
+ The other cool thing is that
605
+
606
+ 11:12.480 --> 11:14.640
607
+ we're starting to know where stuff is.
608
+
609
+ 11:14.640 --> 11:16.040
610
+ And why is that important?
611
+
612
+ 11:16.040 --> 11:21.040
613
+ Well, robots are supposed to have arms, right?
614
+
615
+ 11:21.520 --> 11:24.200
616
+ Data had an arm, Rosie had an arm,
617
+
618
+ 11:24.200 --> 11:25.240
619
+ Robbie the robot had an arm.
620
+
621
+ 11:25.240 --> 11:26.680
622
+ I mean, robots are, you know,
623
+
624
+ 11:26.680 --> 11:29.560
625
+ they are physical things that move around in an environment
626
+
627
+ 11:29.560 --> 11:31.280
628
+ they're supposed to like do work.
629
+
630
+ 11:31.280 --> 11:34.120
631
+ And if you think about it,
632
+
633
+ 11:34.120 --> 11:35.440
634
+ if a robot doesn't know anything,
635
+
636
+ 11:35.440 --> 11:38.760
637
+ where anything is, why should it have an arm?
638
+
639
+ 11:38.760 --> 11:43.760
640
+ But with this new dawn of home understanding
641
+
642
+ 11:44.320 --> 11:47.680
643
+ that we're starting to go enjoy,
644
+
645
+ 11:47.680 --> 11:49.320
646
+ I know where the kitchen is.
647
+
648
+ 11:49.320 --> 11:52.040
649
+ I might in the future know where the refrigerator is.
650
+
651
+ 11:52.040 --> 11:55.280
652
+ I might, if I had an arm, be able to find the handle,
653
+
654
+ 11:55.280 --> 11:58.440
655
+ open it and even get myself a beer.
656
+
657
+ 11:58.440 --> 12:01.920
658
+ Obviously that's one of the true dreams of robotics
659
+
660
+ 12:01.920 --> 12:03.520
661
+ is to have robots bringing us a beer
662
+
663
+ 12:03.520 --> 12:05.280
664
+ while we watch television.
665
+
666
+ 12:05.280 --> 12:10.280
667
+ But I think that that new category of tasks
668
+
669
+ 12:11.000 --> 12:14.240
670
+ where physical manipulation, robot arms
671
+
672
+ 12:14.240 --> 12:19.240
673
+ is just a potpourri of new opportunity and excitement.
674
+
675
+ 12:20.200 --> 12:23.800
676
+ And you see humans as a crucial part of that.
677
+
678
+ 12:23.800 --> 12:26.280
679
+ So you kind of mentioned that.
680
+
681
+ 12:26.280 --> 12:28.960
682
+ And I personally find that a really compelling idea.
683
+
684
+ 12:28.960 --> 12:33.960
685
+ I think full autonomy can only take us so far,
686
+
687
+ 12:33.960 --> 12:35.360
688
+ especially in the home.
689
+
690
+ 12:35.360 --> 12:38.920
691
+ So you see humans as helping the robot understand
692
+
693
+ 12:38.920 --> 12:42.600
694
+ or give deeper meaning to the spatial information.
695
+
696
+ 12:43.600 --> 12:45.680
697
+ Right, it's a partnership.
698
+
699
+ 12:46.800 --> 12:51.800
700
+ The robot is supposed to operate according to descriptors
701
+
702
+ 12:52.800 --> 12:55.620
703
+ that you would use to describe your own home.
704
+
705
+ 12:57.040 --> 13:02.040
706
+ The robot is supposed to in lieu of better direction
707
+
708
+ 13:02.040 --> 13:03.840
709
+ kind of go about its routine,
710
+
711
+ 13:03.840 --> 13:07.520
712
+ which ought to be basically right
713
+
714
+ 13:07.520 --> 13:12.200
715
+ and lead to a home maintained
716
+
717
+ 13:12.200 --> 13:14.840
718
+ in a way that it's learned you like,
719
+
720
+ 13:14.840 --> 13:19.840
721
+ but also be perpetually ready to take direction
722
+
723
+ 13:21.440 --> 13:26.440
724
+ that would activate a different set of behaviors or actions
725
+
726
+ 13:26.880 --> 13:28.880
727
+ to meet a current need
728
+
729
+ 13:28.880 --> 13:32.320
730
+ to the extent it could actually perform that task.
731
+
732
+ 13:32.320 --> 13:33.560
733
+ So I gotta ask you,
734
+
735
+ 13:33.560 --> 13:36.960
736
+ I think this is a fundamental and a fascinating question
737
+
738
+ 13:36.960 --> 13:39.760
739
+ because iRobot has been a successful company
740
+
741
+ 13:39.760 --> 13:42.320
742
+ and a rare successful robotics company.
743
+
744
+ 13:42.320 --> 13:46.720
745
+ So Anki, Jibo, Mayfield Robotics with the robot Kuri,
746
+
747
+ 13:46.720 --> 13:49.160
748
+ CyPhy Works, Rethink Robotics.
749
+
750
+ 13:49.160 --> 13:51.320
751
+ These were robotics companies that were founded
752
+
753
+ 13:51.320 --> 13:52.920
754
+ and run by brilliant people.
755
+
756
+ 13:54.000 --> 13:58.760
757
+ But all very unfortunately, at least for us roboticists,
758
+
759
+ 13:58.760 --> 14:02.120
760
+ and all went out of business recently.
761
+
762
+ 14:02.120 --> 14:05.160
763
+ So why do you think they didn't last longer?
764
+
765
+ 14:05.160 --> 14:07.000
766
+ Why do you think it is so hard
767
+
768
+ 14:07.000 --> 14:10.680
769
+ to keep a robotics company alive?
770
+
771
+ 14:10.680 --> 14:14.160
772
+ You know, I say this only partially in jest
773
+
774
+ 14:14.160 --> 14:16.760
775
+ that back in the day before Roomba,
776
+
777
+ 14:17.840 --> 14:22.840
778
+ you know, I was a high tech entrepreneur building robots,
779
+
780
+ 14:23.920 --> 14:26.440
781
+ but it wasn't until I became a vacuum cleaner salesman
782
+
783
+ 14:26.440 --> 14:29.420
784
+ that we had any success.
785
+
786
+ 14:29.420 --> 14:34.240
787
+ So I mean, the point is technology alone
788
+
789
+ 14:34.240 --> 14:37.680
790
+ doesn't equal a successful business.
791
+
792
+ 14:37.680 --> 14:42.680
793
+ We need to go and find the compelling need
794
+
795
+ 14:43.600 --> 14:45.920
796
+ where the robot that we're creating
797
+
798
+ 14:47.640 --> 14:52.640
799
+ can deliver clearly more value
800
+
801
+ 14:52.800 --> 14:55.440
802
+ to the end user than it costs.
803
+
804
+ 14:55.440 --> 14:59.040
805
+ And this is not a marginal thing
806
+
807
+ 14:59.040 --> 15:01.800
808
+ where you're looking at the skin, like, it's close.
809
+
810
+ 15:01.800 --> 15:04.380
811
+ Maybe we can hold our breath and make it work.
812
+
813
+ 15:04.380 --> 15:09.380
814
+ It's clearly more value than the cost of the robot
815
+
816
+ 15:11.560 --> 15:13.920
817
+ to bring, you know, in the store.
818
+
819
+ 15:13.920 --> 15:17.360
820
+ And I think that the challenge has been finding
821
+
822
+ 15:17.360 --> 15:22.360
823
+ those businesses where that's true
824
+
825
+ 15:22.360 --> 15:26.400
826
+ that's true in a sustainable fashion.
827
+
828
+ 15:28.240 --> 15:33.240
829
+ You know, when you get into entertainment style things,
830
+
831
+ 15:34.600 --> 15:38.240
832
+ you could be the cat's meow one year,
833
+
834
+ 15:38.240 --> 15:42.440
835
+ but 85% of toys, regardless of their merit,
836
+
837
+ 15:43.400 --> 15:45.640
838
+ fail to make it to their second season.
839
+
840
+ 15:45.640 --> 15:47.720
841
+ It's just super hard to do so.
842
+
843
+ 15:47.720 --> 15:52.720
844
+ And so that that's just a tough business.
845
+
846
+ 15:53.800 --> 15:58.800
847
+ And there's been a lot of experimentation around
848
+
849
+ 15:59.200 --> 16:02.640
850
+ what is the right type of social companion?
851
+
852
+ 16:02.640 --> 16:05.840
853
+ What is the right robot in the home
854
+
855
+ 16:05.840 --> 16:10.840
856
+ that is doing something other than tasks people do every week
857
+
858
+ 16:14.560 --> 16:16.400
859
+ that they'd rather not do?
860
+
861
+ 16:16.400 --> 16:20.880
862
+ And I'm not sure we've got it all figured out right.
863
+
864
+ 16:20.880 --> 16:22.960
865
+ And so that you get brilliant roboticists
866
+
867
+ 16:22.960 --> 16:25.680
868
+ with super interesting robots
869
+
870
+ 16:25.680 --> 16:29.800
871
+ that ultimately don't quite have
872
+
873
+ 16:29.800 --> 16:34.040
874
+ that magical user experience and thus the,
875
+
876
+ 16:36.480 --> 16:40.560
877
+ that value benefit equation remains ambiguous.
878
+
879
+ 16:40.560 --> 16:43.960
880
+ So you as somebody who dreams of robots,
881
+
882
+ 16:43.960 --> 16:45.720
883
+ you know, changing the world,
884
+
885
+ 16:45.720 --> 16:46.880
886
+ what's your estimate?
887
+
888
+ 16:47.840 --> 16:52.840
889
+ Why, how big is the space of applications
890
+
891
+ 16:53.200 --> 16:55.800
892
+ that fit the criteria that you just described
893
+
894
+ 16:55.800 --> 16:58.080
895
+ where you can really demonstrate
896
+
897
+ 16:58.080 --> 17:00.520
898
+ an obvious significant value
899
+
900
+ 17:00.520 --> 17:04.620
901
+ over the alternative non robotic solution?
902
+
903
+ 17:05.720 --> 17:08.680
904
+ Well, I think that we're just about none of the way
905
+
906
+ 17:10.000 --> 17:13.360
907
+ to achieving the potential of robotics at home.
908
+
909
+ 17:13.360 --> 17:18.360
910
+ But we have to do it in a really eyes wide open,
911
+
912
+ 17:20.400 --> 17:21.920
913
+ honest fashion.
914
+
915
+ 17:21.920 --> 17:25.400
916
+ And so another way to put that is the potential is infinite
917
+
918
+ 17:25.400 --> 17:27.040
919
+ because we did take a few steps
920
+
921
+ 17:27.040 --> 17:29.640
922
+ but you're saying those steps are just very initial steps.
923
+
924
+ 17:29.640 --> 17:32.560
925
+ So the Roomba is a hugely successful product
926
+
927
+ 17:32.560 --> 17:34.440
928
+ but you're saying that's just the very, very beginning.
929
+
930
+ 17:34.440 --> 17:36.520
931
+ That's just the very, very beginning.
932
+
933
+ 17:36.520 --> 17:37.960
934
+ It's the foot in the door.
935
+
936
+ 17:37.960 --> 17:40.840
937
+ And, you know, I think I was lucky
938
+
939
+ 17:40.840 --> 17:45.120
940
+ that in the early days of robotics,
941
+
942
+ 17:45.120 --> 17:48.360
943
+ people would ask me, when are you gonna clean my floor?
944
+
945
+ 17:48.360 --> 17:52.240
946
+ It was something that I grew up saying,
947
+
948
+ 17:53.560 --> 17:54.800
949
+ I got all these really good ideas
950
+
951
+ 17:54.800 --> 17:58.080
952
+ but everyone seems to want their floor clean.
953
+
954
+ 17:58.080 --> 18:02.480
955
+ And so maybe we should do that.
956
+
957
+ 18:02.480 --> 18:03.320
958
+ Yeah, your good ideas.
959
+
960
+ 18:03.320 --> 18:05.840
961
+ Earn the right to do the next thing after that.
962
+
963
+ 18:05.840 --> 18:10.160
964
+ So the good ideas have to match with the desire of the people
965
+
966
+ 18:10.160 --> 18:13.280
967
+ and then the actual cost has to like the business,
968
+
969
+ 18:13.280 --> 18:16.640
970
+ the financial aspect has to all match together.
971
+
972
+ 18:16.640 --> 18:20.160
973
+ Yeah, during our partnership
974
+
975
+ 18:20.160 --> 18:22.200
976
+ back a number of years ago with Johnson Wax,
977
+
978
+ 18:22.200 --> 18:27.200
979
+ they would explain to me that they would go into homes
980
+
981
+ 18:29.480 --> 18:32.520
982
+ and just watch how people lived
983
+
984
+ 18:32.520 --> 18:35.560
985
+ and try to figure out what were they doing
986
+
987
+ 18:35.560 --> 18:39.960
988
+ that they really didn't really like to do
989
+
990
+ 18:39.960 --> 18:42.440
991
+ but they had to do it frequently enough
992
+
993
+ 18:42.440 --> 18:47.440
994
+ that it was top of mind and understood as a burden.
995
+
996
+ 18:51.720 --> 18:53.000
997
+ Hey, let's make a product
998
+
999
+ 18:54.480 --> 18:58.480
1000
+ or come up with a solution to make that pain point
1001
+
1002
+ 18:58.480 --> 19:02.040
1003
+ less challenging.
1004
+
1005
+ 19:02.040 --> 19:07.040
1006
+ And sometimes we do certain burdens so often as a society
1007
+
1008
+ 19:07.400 --> 19:09.400
1009
+ that we actually don't even realize,
1010
+
1011
+ 19:09.400 --> 19:10.680
1012
+ like it's actually hard to see
1013
+
1014
+ 19:10.680 --> 19:13.200
1015
+ that that burden is something that could be removed.
1016
+
1017
+ 19:13.200 --> 19:15.760
1018
+ So it does require just going into the home
1019
+
1020
+ 19:15.760 --> 19:19.560
1021
+ and staring at, wait, how do I actually live life?
1022
+
1023
+ 19:19.560 --> 19:21.080
1024
+ What are the pain points?
1025
+
1026
+ 19:21.080 --> 19:26.080
1027
+ Yeah, and it getting those insights is a lot harder
1028
+
1029
+ 19:26.400 --> 19:29.360
1030
+ than it would seem it should be in retrospect.
1031
+
1032
+ 19:29.360 --> 19:33.120
1033
+ So how hard on that point,
1034
+
1035
+ 19:33.120 --> 19:37.480
1036
+ I mean, one of the big challenges of robotics
1037
+
1038
+ 19:37.480 --> 19:40.240
1039
+ is driving the cost to something,
1040
+
1041
+ 19:40.240 --> 19:42.240
1042
+ driving the cost down to something
1043
+
1044
+ 19:42.240 --> 19:45.680
1045
+ that consumers, people would afford.
1046
+
1047
+ 19:45.680 --> 19:48.880
1048
+ So people would be less likely to buy a Roomba
1049
+
1050
+ 19:48.880 --> 19:52.280
1051
+ if it costs $500,000, right?
1052
+
1053
+ 19:52.280 --> 19:55.840
1054
+ Which is probably sort of what a Roomba would cost
1055
+
1056
+ 19:55.840 --> 19:58.040
1057
+ several decades ago.
1058
+
1059
+ 19:58.040 --> 20:02.200
1060
+ So how do you drive, which I mentioned is very difficult.
1061
+
1062
+ 20:02.200 --> 20:04.200
1063
+ How do you drive the cost of a Roomba
1064
+
1065
+ 20:04.200 --> 20:07.920
1066
+ or a robot down such that people would want to buy it?
1067
+
1068
+ 20:07.920 --> 20:09.720
1069
+ When I started building robots,
1070
+
1071
+ 20:09.720 --> 20:12.240
1072
+ the cost of the robot had a lot to do
1073
+
1074
+ 20:12.240 --> 20:15.480
1075
+ with the amount of time it took to build it.
1076
+
1077
+ 20:15.480 --> 20:18.400
1078
+ And so that we would build our robots out of aluminum,
1079
+
1080
+ 20:18.400 --> 20:21.160
1081
+ I would go spend my time in the machine shop
1082
+
1083
+ 20:21.160 --> 20:22.840
1084
+ on the milling machine,
1085
+
1086
+ 20:23.840 --> 20:28.000
1087
+ cutting out the parts and so forth.
1088
+
1089
+ 20:28.000 --> 20:29.720
1090
+ And then when we got into the toy industry,
1091
+
1092
+ 20:29.720 --> 20:34.520
1093
+ I realized that if we were building at scale,
1094
+
1095
+ 20:34.520 --> 20:36.000
1096
+ I could determine the cost of the Roomba
1097
+
1098
+ 20:36.000 --> 20:38.920
1099
+ instead of adding up all the hours to mill out the parts,
1100
+
1101
+ 20:38.920 --> 20:40.360
1102
+ but by weighing it.
1103
+
1104
+ 20:42.080 --> 20:44.200
1105
+ And that's liberating.
1106
+
1107
+ 20:44.200 --> 20:49.200
1108
+ You can say, wow, the world has just changed
1109
+
1110
+ 20:49.560 --> 20:53.160
1111
+ as I think about construction in a different way.
1112
+
1113
+ 20:53.160 --> 20:56.880
1114
+ The 3D CAD tools that are available to us today,
1115
+
1116
+ 20:56.880 --> 21:01.680
1117
+ the operating at scale where I can do tooling
1118
+
1119
+ 21:01.680 --> 21:06.680
1120
+ and injection mold, an arbitrarily complicated part,
1121
+
1122
+ 21:07.080 --> 21:09.800
1123
+ and the cost is going to be basically
1124
+
1125
+ 21:09.800 --> 21:12.560
1126
+ the weight of the plastic in that part
1127
+
1128
+ 21:13.920 --> 21:16.360
1129
+ is incredibly exciting and liberating
1130
+
1131
+ 21:16.360 --> 21:18.560
1132
+ and opens up all sorts of opportunities.
1133
+
1134
+ 21:18.560 --> 21:23.560
1135
+ And for the sensing part of it, where we are today
1136
+
1137
+ 21:23.560 --> 21:28.560
1138
+ is instead of trying to build skin,
1139
+
1140
+ 21:29.280 --> 21:31.440
1141
+ which is like really hard for a long time.
1142
+
1143
+ 21:31.440 --> 21:36.440
1144
+ I spent creating strategies and ideas
1145
+
1146
+ 21:38.240 --> 21:42.720
1147
+ around how could we duplicate the skin on the human body
1148
+
1149
+ 21:42.720 --> 21:45.360
1150
+ because it's such an amazing sensor.
1151
+
1152
+ 21:47.880 --> 21:49.600
1153
+ Instead of going down that path,
1154
+
1155
+ 21:49.600 --> 21:53.000
1156
+ why don't we focus on vision?
1157
+
1158
+ 21:54.000 --> 21:59.000
1159
+ And how many of the problems that face a robot
1160
+
1161
+ 22:00.080 --> 22:04.880
1162
+ trying to do real work could be solved
1163
+
1164
+ 22:04.880 --> 22:08.480
1165
+ with a cheap camera and a big ass computer?
1166
+
1167
+ 22:09.480 --> 22:12.440
1168
+ And Moore's Law continues to work.
1169
+
1170
+ 22:12.440 --> 22:16.520
1171
+ The cell phone industry, the mobile industry
1172
+
1173
+ 22:16.520 --> 22:18.800
1174
+ is giving us better and better tools
1175
+
1176
+ 22:18.800 --> 22:21.120
1177
+ that can run on these embedded computers.
1178
+
1179
+ 22:21.120 --> 22:26.120
1180
+ And I think we passed an important moment,
1181
+
1182
+ 22:26.600 --> 22:31.600
1183
+ maybe two years ago, where you could put
1184
+
1185
+ 22:32.760 --> 22:37.520
1186
+ machine vision capable processors on robots
1187
+
1188
+ 22:37.520 --> 22:39.640
1189
+ at consumer price points.
1190
+
1191
+ 22:39.640 --> 22:43.040
1192
+ And I was waiting for it to happen.
1193
+
1194
+ 22:43.040 --> 22:46.600
1195
+ We avoided putting lasers on our robots
1196
+
1197
+ 22:46.600 --> 22:51.600
1198
+ to do navigation and instead spent years researching
1199
+
1200
+ 22:51.840 --> 22:54.640
1201
+ how to do vision based navigation
1202
+
1203
+ 22:54.640 --> 22:58.440
1204
+ because you could just see it
1205
+
1206
+ 22:58.440 --> 23:01.640
1207
+ where these technology trends were going
1208
+
1209
+ 23:01.640 --> 23:05.880
1210
+ and between injection molded plastic
1211
+
1212
+ 23:05.880 --> 23:08.040
1213
+ and a camera with a computer
1214
+
1215
+ 23:08.040 --> 23:10.840
1216
+ capable of running machine learning
1217
+
1218
+ 23:10.840 --> 23:12.560
1219
+ and visual object recognition,
1220
+
1221
+ 23:12.560 --> 23:15.560
1222
+ I could build an incredibly affordable,
1223
+
1224
+ 23:15.560 --> 23:18.760
1225
+ incredibly capable robot
1226
+
1227
+ 23:18.760 --> 23:21.240
1228
+ and that's gonna be the future.
1229
+
1230
+ 23:21.240 --> 23:23.440
1231
+ So you know, on that point with a small tangent
1232
+
1233
+ 23:23.440 --> 23:25.000
1234
+ but I think an important one,
1235
+
1236
+ 23:25.000 --> 23:27.600
1237
+ another industry in which I would say
1238
+
1239
+ 23:27.600 --> 23:31.880
1240
+ the only other industry in which there is automation
1241
+
1242
+ 23:31.880 --> 23:34.840
1243
+ actually touching people's lives today
1244
+
1245
+ 23:34.840 --> 23:36.520
1246
+ is autonomous vehicles.
1247
+
1248
+ 23:37.640 --> 23:42.360
1249
+ What the vision is described of using computer vision
1250
+
1251
+ 23:42.360 --> 23:44.520
1252
+ and using cheap camera sensors,
1253
+
1254
+ 23:44.520 --> 23:48.320
1255
+ there's a debate on that of LiDAR versus computer vision
1256
+
1257
+ 23:48.320 --> 23:53.320
1258
+ and sort of the Elon Musk famously said
1259
+
1260
+ 23:53.320 --> 23:58.320
1261
+ that LiDAR is a crutch that really in the long term,
1262
+
1263
+ 23:58.440 --> 24:00.880
1264
+ camera only is the right solution
1265
+
1266
+ 24:00.880 --> 24:03.520
1267
+ which echoes some of the ideas you're expressing.
1268
+
1269
+ 24:03.520 --> 24:05.120
1270
+ Of course, the domain
1271
+
1272
+ 24:05.120 --> 24:07.720
1273
+ in terms of its safety criticality is different.
1274
+
1275
+ 24:07.720 --> 24:10.720
1276
+ But what do you think about that approach
1277
+
1278
+ 24:10.720 --> 24:13.480
1279
+ in the autonomous vehicle space?
1280
+
1281
+ 24:13.480 --> 24:15.200
1282
+ And in general, do you see a connection
1283
+
1284
+ 24:15.200 --> 24:18.560
1285
+ between the incredible real world challenges
1286
+
1287
+ 24:18.560 --> 24:20.800
1288
+ you have to solve in the home with Roomba
1289
+
1290
+ 24:20.800 --> 24:23.000
1291
+ and I saw a demonstration of some of them,
1292
+
1293
+ 24:23.000 --> 24:27.920
1294
+ corner cases literally and autonomous vehicles.
1295
+
1296
+ 24:27.920 --> 24:31.720
1297
+ So there's absolutely a tremendous overlap
1298
+
1299
+ 24:31.720 --> 24:35.520
1300
+ between both the problems, you know,
1301
+
1302
+ 24:35.520 --> 24:38.680
1303
+ a robot vacuum and autonomous vehicle are trying to solve
1304
+
1305
+ 24:38.680 --> 24:41.880
1306
+ and the tools and the types of sensors
1307
+
1308
+ 24:41.880 --> 24:46.720
1309
+ that are being applied in the pursuit of the solutions.
1310
+
1311
+ 24:48.040 --> 24:53.040
1312
+ In my world, my environment is actually much harder
1313
+
1314
+ 24:54.680 --> 24:57.320
1315
+ than the environment an automobile travels in.
1316
+
1317
+ 24:57.320 --> 25:02.320
1318
+ We don't have roads, we have T-shirts, we have steps,
1319
+
1320
+ 25:02.720 --> 25:07.120
1321
+ we have a near infinite number of patterns and colors
1322
+
1323
+ 25:07.120 --> 25:10.200
1324
+ and surface textures on the floor.
1325
+
1326
+ 25:10.200 --> 25:12.560
1327
+ Especially from a visual perspective.
1328
+
1329
+ 25:12.560 --> 25:14.760
1330
+ Yeah, visually it's really tough.
1331
+
1332
+ 25:14.760 --> 25:18.880
1333
+ It's infinitely variable.
1334
+
1335
+ 25:18.880 --> 25:22.560
1336
+ On the other hand, safety is way easier on the inside.
1337
+
1338
+ 25:22.560 --> 25:26.440
1339
+ My robots, they're not very heavy,
1340
+
1341
+ 25:26.440 --> 25:28.280
1342
+ they're not very fast.
1343
+
1344
+ 25:28.280 --> 25:31.480
1345
+ If they bump into your foot, you think it's funny.
1346
+
1347
+ 25:32.720 --> 25:36.920
1348
+ And, you know, and autonomous vehicles
1349
+
1350
+ 25:36.920 --> 25:39.400
1351
+ kind of have the inverse problem.
1352
+
1353
+ 25:39.400 --> 25:44.400
1354
+ And so that for me saying vision is the future,
1355
+
1356
+ 25:45.920 --> 25:47.800
1357
+ I can say that without reservation.
1358
+
1359
+ 25:49.480 --> 25:54.480
1360
+ For autonomous vehicles, I think I believe what Elon's saying
1361
+
1362
+ 25:55.360 --> 25:59.040
1363
+ about the future is ultimately gonna be vision.
1364
+
1365
+ 25:59.040 --> 26:01.000
1366
+ Maybe if we put a cheap LiDAR on there
1367
+
1368
+ 26:01.000 --> 26:03.280
1369
+ as a backup sensor, it might not be the worst idea
1370
+
1371
+ 26:03.280 --> 26:04.120
1372
+ in the world.
1373
+
1374
+ 26:04.120 --> 26:04.960
1375
+ So the stakes are much higher.
1376
+
1377
+ 26:04.960 --> 26:05.800
1378
+ The stakes are much higher.
1379
+
1380
+ 26:05.800 --> 26:08.200
1381
+ You have to be much more careful thinking through
1382
+
1383
+ 26:08.200 --> 26:10.720
1384
+ how far away that future is.
1385
+
1386
+ 26:10.720 --> 26:14.240
1387
+ Right, but I think that the primary
1388
+
1389
+ 26:16.080 --> 26:19.320
1390
+ environmental understanding sensor
1391
+
1392
+ 26:19.320 --> 26:21.760
1393
+ is going to be a visual system.
1394
+
1395
+ 26:21.760 --> 26:23.000
1396
+ Visual system.
1397
+
1398
+ 26:23.000 --> 26:25.560
1399
+ So on that point, well, let me ask,
1400
+
1401
+ 26:25.560 --> 26:29.440
1402
+ do you hope there's an iRobot robot in every home
1403
+
1404
+ 26:29.440 --> 26:30.920
1405
+ in the world one day?
1406
+
1407
+ 26:31.880 --> 26:34.880
1408
+ I expect there to be at least one iRobot robot
1409
+
1410
+ 26:34.880 --> 26:36.440
1411
+ in every home.
1412
+
1413
+ 26:36.440 --> 26:41.160
1414
+ You know, we've sold 25 million robots.
1415
+
1416
+ 26:41.160 --> 26:44.600
1417
+ So we're in about 10% of US homes,
1418
+
1419
+ 26:44.600 --> 26:45.760
1420
+ which is a great start.
1421
+
1422
+ 26:47.120 --> 26:51.080
1423
+ But I think that when we think about the numbers
1424
+
1425
+ 26:51.080 --> 26:55.840
1426
+ of things that robots can do, you know,
1427
+
1428
+ 26:55.840 --> 26:58.560
1429
+ today I can vacuum your floor, mop your floor,
1430
+
1431
+ 26:58.560 --> 27:01.240
1432
+ cut your lawn or soon we'll be able to cut your lawn.
1433
+
1434
+ 27:01.240 --> 27:06.240
1435
+ But there are more things that we could do in the home.
1436
+
1437
+ 27:06.720 --> 27:11.520
1438
+ And I hope that we continue using the techniques
1439
+
1440
+ 27:11.520 --> 27:14.480
1441
+ I described around exploiting computer vision
1442
+
1443
+ 27:14.480 --> 27:18.680
1444
+ and low cost manufacturing, so that we'll be able
1445
+
1446
+ 27:18.680 --> 27:22.680
1447
+ to create these solutions at affordable price points.
1448
+
1449
+ 27:22.680 --> 27:25.600
1450
+ So let me ask, on that point of a robot in every home,
1451
+
1452
+ 27:25.600 --> 27:26.880
1453
+ that's my dream as well.
1454
+
1455
+ 27:26.880 --> 27:28.400
1456
+ I'd love to see that.
1457
+
1458
+ 27:28.400 --> 27:31.880
1459
+ I think the possibilities there are indeed
1460
+
1461
+ 27:31.880 --> 27:34.560
1462
+ infinite positive possibilities.
1463
+
1464
+ 27:34.560 --> 27:39.560
1465
+ But in our current culture, no thanks to science fiction
1466
+
1467
+ 27:39.800 --> 27:44.720
1468
+ and so on, there's a serious kind of hesitation
1469
+
1470
+ 27:44.720 --> 27:47.120
1471
+ and anxiety, concern about robots
1472
+
1473
+ 27:47.120 --> 27:50.080
1474
+ and also a concern about privacy.
1475
+
1476
+ 27:51.480 --> 27:54.040
1477
+ And it's a fascinating question to me
1478
+
1479
+ 27:54.040 --> 27:59.040
1480
+ why that concern, amongst a certain group of people,
1481
+
1482
+ 27:59.600 --> 28:02.880
1483
+ is as intense as it is.
1484
+
1485
+ 28:02.880 --> 28:04.280
1486
+ So you have to think about it
1487
+
1488
+ 28:04.280 --> 28:05.520
1489
+ because it's a serious concern,
1490
+
1491
+ 28:05.520 --> 28:08.040
1492
+ but I wonder how you address it best.
1493
+
1494
+ 28:08.040 --> 28:09.840
1495
+ So from a perspective of a vision sensor,
1496
+
1497
+ 28:09.840 --> 28:14.200
1498
+ so robots that move about the home and sense the world,
1499
+
1500
+ 28:14.200 --> 28:19.200
1501
+ how do you alleviate people's privacy concerns?
1502
+
1503
+ 28:19.720 --> 28:22.880
1504
+ How do you make sure that they can trust iRobot
1505
+
1506
+ 28:22.880 --> 28:25.360
1507
+ and the robots that they share their home with?
1508
+
1509
+ 28:26.640 --> 28:28.160
1510
+ I think that's a great question.
1511
+
1512
+ 28:28.160 --> 28:33.160
1513
+ And we've really leaned way forward on this
1514
+
1515
+ 28:33.760 --> 28:38.760
1516
+ because given our vision as to the role the company
1517
+
1518
+ 28:38.880 --> 28:40.720
1519
+ intends to play in the home,
1520
+
1521
+ 28:43.000 --> 28:45.480
1522
+ really for us, make or break is,
1523
+
1524
+ 28:45.480 --> 28:50.480
1525
+ can our approach be trusted to protect the data
1526
+
1527
+ 28:50.480 --> 28:53.560
1528
+ and the privacy of the people who have our robots?
1529
+
1530
+ 28:53.560 --> 28:56.920
1531
+ And so we've gone out publicly
1532
+
1533
+ 28:56.920 --> 29:00.440
1534
+ with a privacy manifesto stating we'll never sell your data.
1535
+
1536
+ 29:00.440 --> 29:05.440
1537
+ We've adopted GDPR, not just where GDPR is required,
1538
+
1539
+ 29:05.520 --> 29:10.520
1540
+ but globally, we have ensured that images
1541
+
1542
+ 29:16.640 --> 29:18.080
1543
+ don't leave the robot.
1544
+
1545
+ 29:18.080 --> 29:22.120
1546
+ So processing data from the visual sensors
1547
+
1548
+ 29:22.120 --> 29:23.680
1549
+ happens locally on the robot
1550
+
1551
+ 29:23.680 --> 29:28.680
1552
+ and only semantic knowledge of the home
1553
+
1554
+ 29:29.520 --> 29:32.720
1555
+ with the consumer's consent is sent up.
1556
+
1557
+ 29:32.720 --> 29:34.480
1558
+ We show you what we know
1559
+
1560
+ 29:34.480 --> 29:39.480
1561
+ and are trying to go use data as an enabler
1562
+
1563
+ 29:41.560 --> 29:44.120
1564
+ for the performance of the robots
1565
+
1566
+ 29:44.120 --> 29:49.120
1567
+ with the informed consent and understanding
1568
+
1569
+ 29:50.040 --> 29:52.440
1570
+ of the people who own those robots.
1571
+
1572
+ 29:52.440 --> 29:56.880
1573
+ And we take it very seriously.
1574
+
1575
+ 29:56.880 --> 30:01.880
1576
+ And ultimately, we think that by showing a customer
1577
+
1578
+ 30:01.880 --> 30:06.880
1579
+ that if you let us build a semantic map of your home
1580
+
1581
+ 30:07.360 --> 30:09.000
1582
+ and know where the rooms are,
1583
+
1584
+ 30:09.000 --> 30:11.760
1585
+ well, then you can say clean the kitchen.
1586
+
1587
+ 30:11.760 --> 30:13.720
1588
+ If you don't want the robot to do that,
1589
+
1590
+ 30:13.720 --> 30:14.560
1591
+ don't make the map,
1592
+
1593
+ 30:14.560 --> 30:17.000
1594
+ it'll do its best job cleaning your home,
1595
+
1596
+ 30:17.000 --> 30:18.640
1597
+ but it won't be able to do that.
1598
+
1599
+ 30:18.640 --> 30:20.280
1600
+ And if you ever want us to forget
1601
+
1602
+ 30:20.280 --> 30:22.080
1603
+ that we know that it's your kitchen,
1604
+
1605
+ 30:22.080 --> 30:26.680
1606
+ you can have confidence that we will do that for you.
1607
+
1608
+ 30:26.680 --> 30:31.680
1609
+ So we're trying to go and be a sort of a data 2.0
1610
+
1611
+ 30:34.520 --> 30:37.680
1612
+ perspective company where we treat the data
1613
+
1614
+ 30:37.680 --> 30:40.800
1615
+ that the robots have of the consumer's home
1616
+
1617
+ 30:40.800 --> 30:43.200
1618
+ as if it were the consumer's data
1619
+
1620
+ 30:43.200 --> 30:47.360
1621
+ and that they have rights to it.
1622
+
1623
+ 30:47.360 --> 30:50.960
1624
+ So we think by being the good guys on this front,
1625
+
1626
+ 30:50.960 --> 30:53.840
1627
+ we can build the trust and thus be entrusted
1628
+
1629
+ 30:55.120 --> 31:00.120
1630
+ to enable robots to do more things that are thoughtful.
1631
+
1632
+ 31:00.200 --> 31:04.520
1633
+ You think people's worries will diminish over time?
1634
+
1635
+ 31:04.520 --> 31:06.840
1636
+ As a society, broadly speaking,
1637
+
1638
+ 31:06.840 --> 31:09.320
1639
+ do you think you can win over trust,
1640
+
1641
+ 31:09.320 --> 31:10.640
1642
+ not just for the company,
1643
+
1644
+ 31:10.640 --> 31:14.880
1645
+ but just the comfort people have with AI in their home
1646
+
1647
+ 31:14.880 --> 31:17.040
1648
+ enriching their lives in some way?
1649
+
1650
+ 31:17.040 --> 31:19.560
1651
+ I think we're at an interesting place today
1652
+
1653
+ 31:19.560 --> 31:22.400
1654
+ where it's less about winning them over
1655
+
1656
+ 31:22.400 --> 31:26.240
1657
+ and more about finding a way to talk about privacy
1658
+
1659
+ 31:26.240 --> 31:28.840
1660
+ in a way that more people can understand.
1661
+
1662
+ 31:28.840 --> 31:30.920
1663
+ I would tell you that today,
1664
+
1665
+ 31:30.920 --> 31:33.320
1666
+ when there's a privacy breach,
1667
+
1668
+ 31:33.320 --> 31:37.040
1669
+ people get very upset and then go to the store
1670
+
1671
+ 31:37.040 --> 31:38.320
1672
+ and buy the cheapest thing,
1673
+
1674
+ 31:38.320 --> 31:41.000
1675
+ paying no attention to whether or not the products
1676
+
1677
+ 31:41.000 --> 31:44.640
1678
+ that they're buying honor privacy standards or not.
1679
+
1680
+ 31:44.640 --> 31:48.600
1681
+ In fact, if I put on the package of my Roomba,
1682
+
1683
+ 31:50.080 --> 31:53.640
1684
+ the privacy commitments that we have,
1685
+
1686
+ 31:53.640 --> 31:58.640
1687
+ I would sell less than I would if I did nothing at all
1688
+
1689
+ 31:58.720 --> 32:00.400
1690
+ and that needs to change.
1691
+
1692
+ 32:00.400 --> 32:02.880
1693
+ So it's not a question about earning trust.
1694
+
1695
+ 32:02.880 --> 32:05.000
1696
+ I think that's necessary but not sufficient.
1697
+
1698
+ 32:05.000 --> 32:08.440
1699
+ We need to figure out how to have a comfortable set
1700
+
1701
+ 32:08.440 --> 32:13.440
1702
+ of what is the grade A meat standard applied to privacy
1703
+
1704
+ 32:14.200 --> 32:18.400
1705
+ that customers can trust and understand
1706
+
1707
+ 32:18.400 --> 32:20.480
1708
+ and then use in the buying decisions.
1709
+
1710
+ 32:23.040 --> 32:25.520
1711
+ That will reward companies for good behavior
1712
+
1713
+ 32:25.520 --> 32:29.880
1714
+ and that will ultimately be how this moves forward.
1715
+
1716
+ 32:29.880 --> 32:32.680
1717
+ And maybe be part of the conversation
1718
+
1719
+ 32:32.680 --> 32:34.800
1720
+ between regular people about what it means,
1721
+
1722
+ 32:34.800 --> 32:36.280
1723
+ what privacy means.
1724
+
1725
+ 32:36.280 --> 32:38.400
1726
+ If you have some standards, you can say,
1727
+
1728
+ 32:38.400 --> 32:41.080
1729
+ you can start talking about who's following them,
1730
+
1731
+ 32:41.080 --> 32:42.680
1732
+ who's not, have more.
1733
+
1734
+ 32:42.680 --> 32:45.440
1735
+ Because most people are actually quite clueless
1736
+
1737
+ 32:45.440 --> 32:47.320
1738
+ about all aspects of artificial intelligence
1739
+
1740
+ 32:47.320 --> 32:48.400
1741
+ or data collection and so on.
1742
+
1743
+ 32:48.400 --> 32:49.920
1744
+ It would be nice to change that
1745
+
1746
+ 32:49.920 --> 32:52.760
1747
+ for people to understand the good that AI can do
1748
+
1749
+ 32:52.760 --> 32:56.520
1750
+ and it's not some system that's trying to steal
1751
+
1752
+ 32:56.520 --> 32:58.760
1753
+ all the most sensitive data.
1754
+
1755
+ 32:58.760 --> 33:02.640
1756
+ Do you think, do you dream of a Roomba
1757
+
1758
+ 33:02.640 --> 33:05.240
1759
+ with human level intelligence one day?
1760
+
1761
+ 33:05.240 --> 33:10.240
1762
+ So you've mentioned a very successful localization
1763
+
1764
+ 33:10.520 --> 33:11.880
1765
+ and mapping of the environment,
1766
+
1767
+ 33:11.880 --> 33:14.360
1768
+ being able to do some basic communication
1769
+
1770
+ 33:14.360 --> 33:16.560
1771
+ to say go clean the kitchen.
1772
+
1773
+ 33:16.560 --> 33:21.360
1774
+ Do you see in your maybe more bored moments,
1775
+
1776
+ 33:22.880 --> 33:24.840
1777
+ once you get the beer,
1778
+
1779
+ 33:24.840 --> 33:27.000
1780
+ just sit back with that beer
1781
+
1782
+ 33:27.000 --> 33:30.800
1783
+ and have a chat on a Friday night with the Roomba
1784
+
1785
+ 33:30.800 --> 33:32.840
1786
+ about how your day went.
1787
+
1788
+ 33:34.120 --> 33:37.560
1789
+ So to your latter question, absolutely.
1790
+
1791
+ 33:38.640 --> 33:40.720
1792
+ To your former question as to whether Roomba
1793
+
1794
+ 33:40.720 --> 33:43.680
1795
+ can have human level intelligence, not in my lifetime.
1796
+
1797
+ 33:45.200 --> 33:48.440
1798
+ You can have, you can have a great conversation,
1799
+
1800
+ 33:49.680 --> 33:51.960
1801
+ a meaningful conversation with a robot
1802
+
1803
+ 33:53.920 --> 33:55.400
1804
+ without it having anything
1805
+
1806
+ 33:55.400 --> 33:57.760
1807
+ that resembles human level intelligence.
1808
+
1809
+ 33:57.760 --> 34:02.760
1810
+ And I think that as long as you realize
1811
+
1812
+ 34:02.800 --> 34:05.280
1813
+ that conversation is not about the robot
1814
+
1815
+ 34:06.360 --> 34:08.600
1816
+ and making the robot feel good.
1817
+
1818
+ 34:08.600 --> 34:13.600
1819
+ That conversation is about you learning interesting things
1820
+
1821
+ 34:14.880 --> 34:18.400
1822
+ that make you feel like the conversation
1823
+
1824
+ 34:18.400 --> 34:21.240
1825
+ that you had with the robot is
1826
+
1827
+ 34:23.800 --> 34:27.280
1828
+ a pretty awesome way of learning something.
1829
+
1830
+ 34:27.280 --> 34:30.800
1831
+ And it could be about what kind of day your pet had.
1832
+
1833
+ 34:30.800 --> 34:35.040
1834
+ It could be about, how can I make my home
1835
+
1836
+ 34:35.040 --> 34:36.160
1837
+ more energy efficient?
1838
+
1839
+ 34:36.160 --> 34:39.600
1840
+ It could be about, if I'm thinking about
1841
+
1842
+ 34:39.600 --> 34:41.720
1843
+ climbing Mount Everest, what should I know?
1844
+
1845
+ 34:44.320 --> 34:46.480
1846
+ And that's a very doable thing.
1847
+
1848
+ 34:48.600 --> 34:51.480
1849
+ But if I think that that conversation
1850
+
1851
+ 34:51.480 --> 34:53.640
1852
+ I'm gonna have with the robot is,
1853
+
1854
+ 34:53.640 --> 34:56.760
1855
+ I'm gonna be rewarded by making the robot happy.
1856
+
1857
+ 34:56.760 --> 34:58.720
1858
+ But I could have just put a button on the robot
1859
+
1860
+ 34:58.720 --> 35:00.280
1861
+ that you could push and the robot would smile
1862
+
1863
+ 35:00.280 --> 35:02.120
1864
+ and that sort of thing.
1865
+
1866
+ 35:02.120 --> 35:04.160
1867
+ So I think you need to think about the question
1868
+
1869
+ 35:04.160 --> 35:06.680
1870
+ in the right way.
1871
+
1872
+ 35:06.680 --> 35:11.520
1873
+ And robots can be awesomely effective
1874
+
1875
+ 35:11.520 --> 35:14.400
1876
+ at helping people feel less isolated,
1877
+
1878
+ 35:14.400 --> 35:17.520
1879
+ learn more about the home that they live in
1880
+
1881
+ 35:17.520 --> 35:21.920
1882
+ and fill some of those lonely gaps
1883
+
1884
+ 35:21.920 --> 35:23.760
1885
+ where we wish we were engaged in
1886
+
1887
+ 35:23.760 --> 35:25.640
1888
+ learning cool stuff about our world.
1889
+
1890
+ 35:25.640 --> 35:30.640
1891
+ Last question, if you could hang out for a day
1892
+
1893
+ 35:30.640 --> 35:34.640
1894
+ with a robot from science fiction, movies, books
1895
+
1896
+ 35:34.640 --> 35:39.640
1897
+ and safely pick, safely pick its brain for that day,
1898
+
1899
+ 35:39.640 --> 35:41.640
1900
+ who would you pick?
1901
+
1902
+ 35:41.640 --> 35:42.640
1903
+ Data.
1904
+
1905
+ 35:42.640 --> 35:43.640
1906
+ Data.
1907
+
1908
+ 35:43.640 --> 35:44.640
1909
+ From Star Trek.
1910
+
1911
+ 35:44.640 --> 35:48.640
1912
+ I think that Data is really smart.
1913
+
1914
+ 35:48.640 --> 35:52.640
1915
+ Data's been through a lot trying to go and save the galaxy
1916
+
1917
+ 35:52.640 --> 35:57.640
1918
+ and I'm really interested actually in emotion and robotics.
1919
+
1920
+ 35:58.640 --> 36:00.640
1921
+ And I think he'd have a lot to say about that
1922
+
1923
+ 36:00.640 --> 36:05.640
1924
+ because I believe actually that emotion plays
1925
+
1926
+ 36:05.640 --> 36:10.640
1927
+ an incredibly useful role in doing reasonable things
1928
+
1929
+ 36:11.640 --> 36:14.640
1930
+ in situations where we have imperfect understanding
1931
+
1932
+ 36:14.640 --> 36:15.640
1933
+ of what's going on.
1934
+
1935
+ 36:15.640 --> 36:18.640
1936
+ In social situations when there's imperfect information.
1937
+
1938
+ 36:18.640 --> 36:23.640
1939
+ In social situations, also in competitive
1940
+
1941
+ 36:23.640 --> 36:25.640
1942
+ or dangerous situations,
1943
+
1944
+ 36:26.640 --> 36:30.640
1945
+ that we have emotion for a reason.
1946
+
1947
+ 36:30.640 --> 36:35.640
1948
+ And so, ultimately, my theory is that as robots
1949
+
1950
+ 36:35.640 --> 36:36.640
1951
+ get smarter and smarter,
1952
+
1953
+ 36:36.640 --> 36:38.640
1954
+ they're actually going to get more emotional
1955
+
1956
+ 36:38.640 --> 36:46.640
1957
+ because you can't actually survive on pure logic.
1958
+
1959
+ 36:46.640 --> 36:51.640
1960
+ Because only a very tiny fraction of the situations
1961
+
1962
+ 36:51.640 --> 36:55.640
1963
+ we find ourselves in can be resolved reasonably with logic.
1964
+
1965
+ 36:55.640 --> 36:57.640
1966
+ And so I think Data would have a lot to say about that
1967
+
1968
+ 36:57.640 --> 36:59.640
1969
+ and so I could find out whether he agrees.
1970
+
1971
+ 36:59.640 --> 37:02.640
1972
+ What, if you could ask Data one question
1973
+
1974
+ 37:02.640 --> 37:05.640
1975
+ and you would get a deep, honest answer to,
1976
+
1977
+ 37:05.640 --> 37:06.640
1978
+ what would you ask?
1979
+
1980
+ 37:06.640 --> 37:08.640
1981
+ What's Captain Picard really like?
1982
+
1983
+ 37:08.640 --> 37:12.640
1984
+ Okay, I think that's the perfect way to end the call
1985
+
1986
+ 37:12.640 --> 37:14.640
1987
+ and thank you so much for talking today.
1988
+
1989
+ 37:14.640 --> 37:16.640
1990
+ I really appreciate it.
1991
+
1992
+ 37:16.640 --> 37:45.640
1993
+ My pleasure.
1994
+
vtt/episode_040_large.vtt ADDED
The diff for this file is too large to render. See raw diff