WEBVTT

00:00.000 --> 00:03.520
 The following is a conversation with Jürgen Schmidhuber.

00:03.520 --> 00:06.360
 He's the co-director of the IDSIA Swiss AI lab

00:06.360 --> 00:10.360
 and a co-creator of long short-term memory networks.

00:10.360 --> 00:13.720
 LSTMs are used in billions of devices today

00:13.720 --> 00:17.400
 for speech recognition, translation, and much more.

00:17.400 --> 00:20.800
 Over 30 years, he has proposed a lot of interesting

00:20.800 --> 00:24.800
 out-of-the-box ideas on meta learning, adversarial networks,

00:24.800 --> 00:28.720
 computer vision, and even a formal theory of quote,

00:28.720 --> 00:32.360
 creativity, curiosity, and fun.

00:32.360 --> 00:34.920
 This conversation is part of the MIT course

00:34.920 --> 00:36.520
 on artificial general intelligence

00:36.520 --> 00:38.840
 and the artificial intelligence podcast.

00:38.840 --> 00:41.960
 If you enjoy it, subscribe on YouTube, iTunes,

00:41.960 --> 00:43.960
 or simply connect with me on Twitter

00:43.960 --> 00:47.280
 at Lex Fridman spelled F R I D.

00:47.280 --> 00:51.480
 And now here's my conversation with Jürgen Schmidhuber.

00:53.080 --> 00:55.640
 Early on you dreamed of AI systems

00:55.640 --> 00:58.680
 that self improve recursively.

00:58.680 --> 01:01.440
 When was that dream born?

01:01.440 --> 01:02.840
 When I was a baby.

01:02.840 --> 01:04.000
 No, that's not true.

01:04.000 --> 01:06.200
 When I was a teenager.

01:06.200 --> 01:09.400
 And what was the catalyst for that birth?

01:09.400 --> 01:12.800
 What was the thing that first inspired you?

01:12.800 --> 01:15.000
 When I was a boy, I...

01:17.400 --> 01:19.880
 I was thinking about what to do in my life

01:19.880 --> 01:23.560
 and then I thought the most exciting thing

01:23.560 --> 01:27.160
 is to solve the riddles of the universe.

01:27.160 --> 01:30.720
 And that means you have to become a physicist.

01:30.720 --> 01:35.640
 However, then I realized that there's something even grander.

01:35.640 --> 01:39.680
 You can try to build a machine.

01:39.680 --> 01:41.920
 That isn't really a machine any longer.

01:41.920 --> 01:44.280
 That learns to become a much better physicist

01:44.280 --> 01:46.840
 than I could ever hope to be.

01:46.840 --> 01:50.120
 And that's how I thought maybe I can multiply

01:50.120 --> 01:54.320
 my tiny little bit of creativity into infinity.

01:54.320 --> 01:57.160
 But ultimately, that creativity will be multiplied

01:57.160 --> 01:59.160
 to understand the universe around us.

01:59.160 --> 02:05.640
 That's the curiosity for that mystery that drove you.

02:05.640 --> 02:08.320
 Yes, so if you can build a machine

02:08.320 --> 02:13.760
 that learns to solve more and more complex problems

02:13.760 --> 02:16.720
 and more and more general problems over time,

02:16.720 --> 02:22.520
 then you basically have solved all the problems.

02:22.520 --> 02:25.960
 At least all the solvable problems.

02:25.960 --> 02:27.080
 So how do you think...

02:27.080 --> 02:31.440
 What does the mechanism for that kind of general solver look like?

02:31.440 --> 02:35.480
 Obviously, we don't quite yet have one or know

02:35.480 --> 02:37.040
 how to build one, but we have ideas

02:37.040 --> 02:40.800
 and you have had throughout your career several ideas about it.

02:40.800 --> 02:43.600
 So how do you think about that mechanism?

02:43.600 --> 02:48.640
 So in the 80s, I thought about how to build this machine

02:48.640 --> 02:51.000
 that learns to solve all these problems

02:51.000 --> 02:54.120
 that I cannot solve myself.

02:54.120 --> 02:57.120
 And I thought it is clear, it has to be a machine

02:57.120 --> 03:00.880
 that not only learns to solve this problem here

03:00.880 --> 03:02.640
 and this problem here,

03:02.640 --> 03:06.240
 but it also has to learn to improve

03:06.240 --> 03:09.360
 the learning algorithm itself.

03:09.360 --> 03:12.480
 So it has to have the learning algorithm

03:12.480 --> 03:15.720
 in a representation that allows it to inspect it

03:15.720 --> 03:19.240
 and modify it so that it can come up

03:19.240 --> 03:22.080
 with a better learning algorithm.

03:22.080 --> 03:25.680
 So I called that meta learning, learning to learn

03:25.680 --> 03:28.040
 and recursive self improvement.

03:28.040 --> 03:29.840
 That is really the pinnacle of that,

03:29.840 --> 03:35.960
 where you then not only learn how to improve

03:35.960 --> 03:37.480
 on that problem and on that,

03:37.480 --> 03:41.080
 but you also improve the way the machine improves

03:41.080 --> 03:43.480
 and you also improve the way it improves the way

03:43.480 --> 03:45.720
 it improves itself.

03:45.720 --> 03:48.560
 And that was my 1987 diploma thesis,

03:48.560 --> 03:53.200
 which was all about that hierarchy of meta learners

03:53.200 --> 03:57.240
 that have no computational limits

03:57.240 --> 03:59.920
 except for the well known limits

03:59.920 --> 04:03.160
 that Gödel identified in 1931

04:03.160 --> 04:05.640
 and for the limits of physics.

04:06.480 --> 04:10.040
 In the recent years, meta learning has gained popularity

04:10.040 --> 04:12.760
 in a specific kind of form.

04:12.760 --> 04:16.000
 You've talked about how that's not really meta learning

04:16.000 --> 04:21.000
 with neural networks, that's more basic transfer learning.

04:21.480 --> 04:22.720
 Can you talk about the difference

04:22.720 --> 04:25.440
 between the big general meta learning

04:25.440 --> 04:27.960
 and a more narrow sense of meta learning

04:27.960 --> 04:30.880
 the way it's used today, the way it's talked about today?

04:30.880 --> 04:33.440
 Let's take the example of a deep neural network

04:33.440 --> 04:37.240
 that has learned to classify images.

04:37.240 --> 04:40.080
 And maybe you have trained that network

04:40.080 --> 04:43.800
 on 100 different databases of images.

04:43.800 --> 04:48.120
 And now a new database comes along

04:48.120 --> 04:52.000
 and you want to quickly learn the new thing as well.

04:53.400 --> 04:57.720
 So one simple way of doing that is you take the network

04:57.720 --> 05:02.440
 which already knows 100 types of databases

05:02.440 --> 05:06.320
 and then you just take the top layer of that

05:06.320 --> 05:11.320
 and you retrain that using the new labeled data

05:11.320 --> 05:14.720
 that you have in the new image database.

05:14.720 --> 05:17.320
 And then it turns out that it really, really quickly

05:17.320 --> 05:20.560
 can learn that too, one shot basically,

05:20.560 --> 05:24.280
 because from the first 100 data sets,

05:24.280 --> 05:27.520
 it already has learned so much about computer vision

05:27.520 --> 05:31.840
 that it can reuse that and that is then almost good enough

05:31.840 --> 05:34.240
 to solve the new tasks except you need a little bit

05:34.240 --> 05:37.040
 of adjustment on the top.

05:37.040 --> 05:40.200
 So that is transfer learning

05:40.200 --> 05:43.480
 and it has been done in principle for many decades.

05:43.480 --> 05:45.680
 People have done similar things for decades.
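
NOTE
A minimal sketch in Python of the top-layer retraining recipe just described, assuming a PyTorch/torchvision setup. The pretrained resnet18, its weights tag, and the 10-class head are illustrative assumptions, not details from the conversation.
# Reuse a network trained on many image databases; retrain only the top layer.
import torch
import torch.nn as nn
from torchvision import models
model = models.resnet18(weights="IMAGENET1K_V1")   # stands in for the "100 databases" network
for p in model.parameters():
    p.requires_grad = False                        # keep the learned visual features fixed
model.fc = nn.Linear(model.fc.in_features, 10)     # fresh top layer for the new database
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
def train_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                                # gradients flow only into the new top layer
    optimizer.step()
    return loss.item()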

05:47.360 --> 05:49.880
 Meta learning, true meta learning is about

05:49.880 --> 05:53.880
 having the learning algorithm itself

05:54.440 --> 05:59.440
 open to introspection by the system that is using it

06:00.440 --> 06:03.760
 and also open to modification

06:03.760 --> 06:07.200
 such that the learning system has an opportunity

06:07.200 --> 06:11.400
 to modify any part of the learning algorithm

06:11.400 --> 06:16.000
 and then evaluate the consequences of that modification

06:16.000 --> 06:21.000
 and then learn from that to create a better learning algorithm

06:22.000 --> 06:23.960
 and so on recursively.

06:24.960 --> 06:27.680
 So that's a very different animal

06:27.680 --> 06:32.680
 where you are opening the space of possible learning algorithms

06:32.680 --> 06:35.520
 to the learning system itself.

06:35.520 --> 06:39.040
 Right, so you've like in the 2004 paper,

06:39.040 --> 06:42.440
 you describe Gödel machines and programs

06:42.440 --> 06:44.520
 that rewrite themselves, right?

06:44.520 --> 06:47.520
 Philosophically and even in your paper mathematically,

06:47.520 --> 06:50.000
 these are really compelling ideas,

06:50.000 --> 06:55.000
 but practically, do you see these self referential programs

06:55.320 --> 06:59.400
 being successful in the near term to having an impact

06:59.400 --> 07:03.040
 where sort of it demonstrates to the world

07:03.040 --> 07:08.040
 that this direction is a good one to pursue in the near term?

07:08.680 --> 07:11.400
 Yes, we had these two different types

07:11.400 --> 07:13.440
 of fundamental research,

07:13.440 --> 07:15.840
 how to build a universal problem solver,

07:15.840 --> 07:19.840
 one basically exploiting proof search

07:23.000 --> 07:24.960
 and things like that that you need to come up

07:24.960 --> 07:29.960
 with asymptotically optimal, theoretically optimal

07:30.320 --> 07:33.200
 self improvers and problem solvers.

07:34.200 --> 07:39.200
 However, one has to admit that through this proof search

07:40.640 --> 07:43.640
 comes in an additive constant,

07:43.640 --> 07:46.800
 an overhead, an additive overhead

07:46.800 --> 07:51.800
 that vanishes in comparison to what you have to do

07:51.800 --> 07:53.960
 to solve large problems.

07:53.960 --> 07:56.920
 However, for many of the small problems

07:56.920 --> 07:59.920
 that we want to solve in our everyday life,

07:59.920 --> 08:02.440
 we cannot ignore this constant overhead.

08:02.440 --> 08:07.440
 And that's why we also have been doing other things,

08:07.440 --> 08:11.160
 non universal things such as recurrent neural networks

08:11.160 --> 08:14.360
 which are trained by gradient descent

08:14.360 --> 08:17.600
 and local search techniques which aren't universal at all,

08:17.600 --> 08:20.280
 which aren't provably optimal at all

08:20.280 --> 08:22.000
 like the other stuff that we did,

08:22.000 --> 08:24.600
 but which are much more practical

08:24.600 --> 08:27.920
 as long as we only want to solve the small problems

08:27.920 --> 08:32.920
 that we are typically trying to solve in this environment here.

08:34.680 --> 08:38.200
 So the universal problem solvers like the Gödel machine

08:38.200 --> 08:41.320
 but also Markus Hutter's fastest way

08:41.320 --> 08:43.560
 of solving all possible problems,

08:43.560 --> 08:47.360
 which he developed around 2002 in my lab,

08:47.360 --> 08:51.280
 they are associated with these constant overheads

08:51.280 --> 08:53.280
 for proof search, which guarantees

08:53.280 --> 08:55.480
 that the thing that you're doing is optimal.

08:55.480 --> 08:59.880
 For example, there is this fastest way

08:59.880 --> 09:03.880
 of solving all problems with a computable solution

09:03.880 --> 09:05.880
 which is due to Markus Hutter.

09:05.880 --> 09:10.880
 And to explain what's going on there,

09:10.880 --> 09:13.040
 let's take traveling salesman problems.

09:14.240 --> 09:16.160
 With traveling salesman problems,

09:16.160 --> 09:20.080
 you have a number of cities, N cities,

09:20.080 --> 09:22.480
 and you try to find the shortest path

09:22.480 --> 09:26.480
 through all these cities without visiting any city twice.

09:28.480 --> 09:31.040
 And nobody knows the fastest way

09:31.040 --> 09:35.040
 of solving traveling salesman problems, TSPs,

09:37.520 --> 09:40.480
 but let's assume there is a method of solving them

09:40.480 --> 09:44.480
 within N to the five operations

09:44.480 --> 09:48.560
 where N is the number of cities.

09:50.160 --> 09:54.560
 Then the universal method of Markus

09:54.560 --> 09:58.560
 is going to solve the same traveling salesman problem

09:58.560 --> 10:02.080
 also within N to the five steps,

10:02.080 --> 10:06.360
 plus O of one, plus a constant number of steps

10:06.360 --> 10:09.240
 that you need for the proof searcher,

10:09.240 --> 10:13.800
 which you need to show that this particular

10:13.800 --> 10:17.240
 class of problems, the traveling salesman problems,

10:17.240 --> 10:19.360
 can be solved within a certain time bound,

10:20.520 --> 10:24.400
 within order N to the five steps, basically.

10:24.400 --> 10:28.520
 And this additive constant doesn't care for N,

10:28.520 --> 10:32.400
 which means as N is getting larger and larger,

10:32.400 --> 10:34.880
 as you have more and more cities,

10:34.880 --> 10:38.600
 the constant overhead pales in comparison.

10:38.600 --> 10:44.120
 And that means that almost all large problems are solved

10:44.120 --> 10:46.520
 in the best possible way already today.

10:46.520 --> 10:50.480
 We already have a universal problem solver like that.

10:50.480 --> 10:54.520
 However, it's not practical because the overhead,

10:54.520 --> 10:57.440
 the constant overhead is so large

10:57.440 --> 11:00.200
 that for the small kinds of problems

11:00.200 --> 11:04.560
 that we want to solve in this little biosphere...

11:04.560 --> 11:06.360
 By the way, when you say small,

11:06.360 --> 11:08.600
 you're talking about things that fall

11:08.600 --> 11:10.880
 within the constraints of our computational systems.

11:10.880 --> 11:14.280
 So they can seem quite large to us mere humans.

11:14.280 --> 11:15.360
 That's right, yeah.

11:15.360 --> 11:19.000
 So they seem large and even unsolvable

11:19.000 --> 11:21.000
 in a practical sense today,

11:21.000 --> 11:24.760
 but they are still small compared to almost all problems

11:24.760 --> 11:28.480
 because almost all problems are large problems,

11:28.480 --> 11:30.840
 which are much larger than any constant.
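
NOTE
Restating the bound just discussed as a worked illustration, following the transcript's own numbers rather than Hutter's exact formulation: if some method solves an n-city instance in $T(n) = n^5$ steps, the universal searcher needs at most $T_U(n) \le n^5 + c$, where the proof-search constant $c$ does not depend on $n$. Since $c / n^5 \to 0$ as $n \to \infty$, the universal method is essentially optimal on large instances, while on the small everyday instances the constant $c$ dominates the total cost.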

11:31.920 --> 11:34.520
 Do you find it useful as a person

11:34.520 --> 11:38.680
 who has dreamed of creating a general learning system,

11:38.680 --> 11:39.880
 has worked on creating one,

11:39.880 --> 11:42.160
 has done a lot of interesting ideas there

11:42.160 --> 11:46.360
 to think about P versus NP,

11:46.360 --> 11:50.800
 this formalization of how hard problems are,

11:50.800 --> 11:52.360
 how they scale,

11:52.360 --> 11:55.200
 this kind of worst case analysis type of thinking.

11:55.200 --> 11:56.840
 Do you find that useful?

11:56.840 --> 11:59.720
 Or is it only just a mathematical,

12:00.560 --> 12:02.640
 it's a set of mathematical techniques

12:02.640 --> 12:05.760
 to give you intuition about what's good and bad?

12:05.760 --> 12:09.440
 So P versus NP, that's super interesting

12:09.440 --> 12:11.800
 from a theoretical point of view.

12:11.800 --> 12:14.560
 And in fact, as you are thinking about that problem,

12:14.560 --> 12:17.280
 you can also get inspiration

12:17.280 --> 12:21.280
 for better practical problem solvers.

12:21.280 --> 12:23.320
 On the other hand, we have to admit

12:23.320 --> 12:24.560
 that at the moment,

12:24.560 --> 12:28.360
 the best practical problem solvers

12:28.360 --> 12:30.120
 for all kinds of problems

12:30.120 --> 12:33.880
 that we are now solving through what is called AI at the moment,

12:33.880 --> 12:36.240
 they are not of the kind

12:36.240 --> 12:38.800
 that is inspired by these questions.

12:38.800 --> 12:42.680
 There we are using general purpose computers,

12:42.680 --> 12:44.840
 such as recurrent neural networks,

12:44.840 --> 12:46.680
 but we have a search technique,

12:46.680 --> 12:50.320
 which is just local search gradient descent

12:50.320 --> 12:51.960
 to try to find a program

12:51.960 --> 12:54.400
 that is running on these recurrent networks,

12:54.400 --> 12:58.160
 such that it can solve some interesting problems,

12:58.160 --> 13:01.920
 such as speech recognition or machine translation

13:01.920 --> 13:03.200
 and something like that.

13:03.200 --> 13:06.480
 And there is very little theory

13:06.480 --> 13:09.720
 behind the best solutions that we have at the moment

13:09.720 --> 13:10.800
 that can do that.

13:10.800 --> 13:12.640
 Do you think that needs to change?

13:12.640 --> 13:15.120
 Do you think that will change or can we go,

13:15.120 --> 13:17.120
 can we create general intelligence systems

13:17.120 --> 13:19.200
 without ever really proving

13:19.200 --> 13:20.600
 that that system is intelligent

13:20.600 --> 13:22.560
 in some kind of mathematical way,

13:22.560 --> 13:24.960
 solving machine translation perfectly

13:24.960 --> 13:26.320
 or something like that,

13:26.320 --> 13:29.160
 within some kind of syntactic definition of a language?

13:29.160 --> 13:31.120
 Or can we just be super impressed

13:31.120 --> 13:35.080
 by the thing working extremely well and that's sufficient?

13:35.080 --> 13:36.720
 There's an old saying,

13:36.720 --> 13:39.360
 and I don't know who brought it up first,

13:39.360 --> 13:42.440
 which says there's nothing more practical

13:42.440 --> 13:43.680
 than a good theory.

13:43.680 --> 13:48.680
 And a good theory of problem solving

13:52.760 --> 13:55.560
 under limited resources like here in this universe

13:55.560 --> 13:57.000
 or on this little planet

13:58.480 --> 14:01.800
 has to take into account these limited resources.

14:01.800 --> 14:06.800
 And so probably there is lacking a theory

14:08.040 --> 14:10.800
 which is related to what we already have,

14:10.800 --> 14:14.440
 these asymptotically optimal problem solvers,

14:14.440 --> 14:18.560
 which tells us what we need in addition to that

14:18.560 --> 14:21.760
 to come up with a practically optimal problem solver.

14:21.760 --> 14:26.760
 So I believe we will have something like that

14:27.080 --> 14:29.720
 and maybe just a few little tiny twists

14:29.720 --> 14:34.320
 are necessary to change what we already have

14:34.320 --> 14:36.360
 to come up with that as well.

14:36.360 --> 14:37.800
 As long as we don't have that,

14:37.800 --> 14:42.600
 we admit that we are taking suboptimal ways

14:42.600 --> 14:46.040
 and recurrent neural networks and long short term memory

14:46.040 --> 14:50.440
 are equipped with local search techniques

14:50.440 --> 14:53.560
 and we are happy that it works better

14:53.560 --> 14:55.480
 than any competing methods,

14:55.480 --> 15:00.480
 but that doesn't mean that we think we are done.

15:00.800 --> 15:05.040
 You've said that an AGI system will ultimately be a simple one,

15:05.040 --> 15:08.000
 a general intelligence system will ultimately be a simple one,

15:08.000 --> 15:10.240
 maybe a pseudo code of a few lines

15:10.240 --> 15:11.840
 will be able to describe it.

15:11.840 --> 15:16.760
 Can you talk through your intuition behind this idea,

15:16.760 --> 15:21.760
 why you feel that at its core intelligence

15:22.120 --> 15:25.560
 is a simple algorithm?

15:26.920 --> 15:31.680
 Experience tells us that the stuff that works best

15:31.680 --> 15:33.120
 is really simple.

15:33.120 --> 15:37.640
 So the asymptotically optimal ways of solving problems,

15:37.640 --> 15:38.800
 if you look at them,

15:38.800 --> 15:41.800
 they're just a few lines of code, it's really true.

15:41.800 --> 15:44.000
 Although they have these amazing properties,

15:44.000 --> 15:45.760
 just a few lines of code,

15:45.760 --> 15:50.760
 then the most promising and most useful practical things

15:53.760 --> 15:57.760
 maybe don't have this proof of optimality associated with them.

15:57.760 --> 16:00.840
 However, they are also just a few lines of code.

16:00.840 --> 16:05.040
 The most successful recurrent neural networks,

16:05.040 --> 16:08.360
 you can write them down in five lines of pseudo code.

16:08.360 --> 16:10.920
 That's a beautiful, almost poetic idea,

16:10.920 --> 16:15.600
 but what you're describing there

16:15.600 --> 16:17.400
 is the lines of pseudo code

16:17.400 --> 16:20.600
 are sitting on top of layers and layers of abstractions,

16:20.600 --> 16:22.240
 in a sense.

16:22.240 --> 16:25.040
 So you're saying at the very top,

16:25.040 --> 16:30.040
 it'll be a beautifully written sort of algorithm,

16:31.120 --> 16:33.960
 but do you think that there's many layers of abstractions

16:33.960 --> 16:36.880
 we have to first learn to construct?

16:36.880 --> 16:38.280
 Yeah, of course.

16:38.280 --> 16:42.640
 We are building on all these great abstractions

16:42.640 --> 16:46.040
 that people have invented over the millennia,

16:46.040 --> 16:51.040
 such as matrix multiplications and real numbers

16:51.600 --> 16:56.600
 and basic arithmetic and calculus and derivations

16:58.720 --> 17:03.320
 of error functions and derivatives of error functions

17:03.320 --> 17:04.320
 and stuff like that.

17:05.440 --> 17:10.440
 So without that language that greatly simplifies

17:10.440 --> 17:13.880
 our way of thinking about these problems,

17:13.880 --> 17:14.840
 we couldn't do anything.

17:14.840 --> 17:16.560
 So in that sense, as always,

17:16.560 --> 17:19.600
 we are standing on the shoulders of the giants

17:19.600 --> 17:24.600
 who in the past simplified the problem of problem solving

17:25.520 --> 17:30.000
 so much that now we have a chance to do the final step.

17:30.000 --> 17:32.120
 So the final step will be a simple one.

17:34.000 --> 17:36.760
 If we take a step back through all of human civilization

17:36.760 --> 17:38.360
 and just the universe in general,

17:38.360 --> 17:41.440
 how do you think about evolution?

17:41.440 --> 17:45.400
 And what if creating a universe is required

17:45.400 --> 17:47.320
 to achieve this final step?

17:47.320 --> 17:50.920
 What if going through the very painful

17:50.920 --> 17:53.840
 and inefficient process of evolution is needed

17:53.840 --> 17:55.880
 to come up with this set of abstractions

17:55.880 --> 17:57.800
 that ultimately lead to intelligence?

17:57.800 --> 18:00.800
 Do you think there's a shortcut

18:00.800 --> 18:04.640
 or do you think we have to create something like our universe

18:04.640 --> 18:09.480
 in order to create something like human level intelligence?

18:09.480 --> 18:13.160
 So far, the only example we have is this one,

18:13.160 --> 18:15.160
 this universe in which we are living.

18:15.160 --> 18:16.360
 You think you can do better?

18:20.880 --> 18:25.000
 Maybe not, but we are part of this whole process.

18:25.000 --> 18:30.000
 So apparently, so it might be the case

18:30.000 --> 18:32.160
 that the code that runs the universe

18:32.160 --> 18:33.720
 is really, really simple.

18:33.720 --> 18:36.640
 Everything points to that possibility

18:36.640 --> 18:39.960
 because gravity and other basic forces

18:39.960 --> 18:44.120
 are really simple laws that can be easily described,

18:44.120 --> 18:47.080
 also in just a few lines of code, basically.

18:47.080 --> 18:52.080
 And then there are these other events,

18:52.200 --> 18:55.080
 the apparently random events

18:55.080 --> 18:56.560
 in the history of the universe,

18:56.560 --> 18:58.800
 which as far as we know at the moment

18:58.800 --> 19:00.720
 don't have a compact code,

19:00.720 --> 19:03.240
 but who knows, maybe somebody in the near future

19:03.240 --> 19:06.800
 is going to figure out the pseudo random generator,

19:06.800 --> 19:11.800
 which is computing whether the measurement of that

19:13.520 --> 19:15.920
 spin up or down thing here

19:15.920 --> 19:18.440
 is going to be positive or negative.

19:18.440 --> 19:19.880
 Underlying quantum mechanics.

19:19.880 --> 19:20.720
 Yes, so.

19:20.720 --> 19:23.160
 Do you ultimately think quantum mechanics

19:23.160 --> 19:25.200
 is a pseudo random number generator?

19:25.200 --> 19:26.920
 So it's all deterministic.

19:26.920 --> 19:28.760
 There's no randomness in our universe.

19:30.400 --> 19:31.800
 Does God play dice?

19:31.800 --> 19:34.080
 So a couple of years ago,

19:34.080 --> 19:39.080
 a famous physicist, quantum physicist, Anton Zeilinger,

19:39.080 --> 19:41.600
 he wrote an essay in Nature,

19:41.600 --> 19:44.280
 and it started more or less like that.

19:46.720 --> 19:51.720
 One of the fundamental insights of the 20th century

19:53.280 --> 19:58.280
 was that the universe is fundamentally random

19:58.280 --> 20:02.760
 on the quantum level, and that whenever

20:03.760 --> 20:06.720
 you measure spin up or down or something like that,

20:06.720 --> 20:10.720
 a new bit of information enters the history of the universe.

20:13.440 --> 20:14.680
 And while I was reading that,

20:14.680 --> 20:18.000
 I was already typing the response

20:18.000 --> 20:20.280
 and they had to publish it because I was right,

20:21.560 --> 20:25.560
 that there is no evidence, no physical evidence for that.

20:25.560 --> 20:28.440
 So there's an alternative explanation

20:28.440 --> 20:31.240
 where everything that we consider random

20:31.240 --> 20:33.800
 is actually pseudo random,

20:33.800 --> 20:38.800
 such as the decimal expansion of pi, 3.141 and so on,

20:39.400 --> 20:42.120
 which looks random, but isn't.

20:42.120 --> 20:47.120
 So pi is interesting because every three digit sequence,

20:47.720 --> 20:51.720
 every sequence of three digits appears roughly

20:51.720 --> 20:56.720
 one in a thousand times, and every five digit sequence

20:57.360 --> 21:00.760
 appears roughly one in 100,000 times.

21:00.760 --> 21:02.760
 That's what you would expect

21:02.760 --> 21:06.760
 if it was random. But there's a very short algorithm,

21:06.760 --> 21:09.120
 a short program that computes all of that.

21:09.120 --> 21:11.200
 So it's extremely compressible.
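
NOTE
An illustrative Python check of the statistic quoted above, assuming the mpmath library for arbitrary-precision pi: each 3-digit sequence should appear with frequency close to 1/1000.
from collections import Counter
import mpmath
mpmath.mp.dps = 100000                        # work with roughly 100,000 digits of pi
digits = str(mpmath.mp.pi).replace(".", "")
counts = Counter(digits[i:i+3] for i in range(len(digits) - 2))
print(counts["314"] / (len(digits) - 2))      # prints a value near 0.001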

21:11.200 --> 21:13.120
 And who knows, maybe tomorrow somebody,

21:13.120 --> 21:15.360
 some grad student at CERN goes back

21:15.360 --> 21:19.120
 over all these data points, beta decay,

21:19.120 --> 21:21.760
 and whatever, and figures out, oh,

21:21.760 --> 21:25.760
 it's the second billion digits of pi or something like that.

21:25.760 --> 21:28.840
 We don't have any fundamental reason at the moment

21:28.840 --> 21:33.600
 to believe that this is truly random

21:33.600 --> 21:36.440
 and not just a deterministic video game.

21:36.440 --> 21:38.680
 If it was a deterministic video game,

21:38.680 --> 21:40.360
 it would be much more beautiful

21:40.360 --> 21:44.160
 because beauty is simplicity.

21:44.160 --> 21:47.560
 And many of the basic laws of the universe

21:47.560 --> 21:51.560
 like gravity and the other basic forces are very simple.

21:51.560 --> 21:55.560
 So very short programs can explain what these are doing.

21:56.560 --> 22:00.560
 And it would be awful and ugly.

22:00.560 --> 22:01.560
 The universe would be ugly.

22:01.560 --> 22:03.560
 The history of the universe would be ugly

22:03.560 --> 22:06.560
 if for the extra things, the random,

22:06.560 --> 22:10.560
 the seemingly random data points that we get all the time

22:10.560 --> 22:15.560
 we really needed a huge number of extra bits

22:15.560 --> 22:21.560
 to describe all these extra bits of information.

22:22.560 --> 22:25.560
 So as long as we don't have evidence

22:25.560 --> 22:27.560
 that there is no short program

22:27.560 --> 22:32.560
 that computes the entire history of the entire universe,

22:32.560 --> 22:38.560
 we are, as scientists, compelled to look further

22:38.560 --> 22:41.560
 for that shortest program.

22:41.560 --> 22:46.560
 Your intuition says there exists a program

22:46.560 --> 22:50.560
 that can backtrack to the creation of the universe.

22:50.560 --> 22:53.560
 So it can take the shortest path to the creation of the universe.

22:53.560 --> 22:57.560
 Yes, including all the entanglement things

22:57.560 --> 23:01.560
 and all the spin up and down measurements

23:01.560 --> 23:09.560
 that have taken place since 13.8 billion years ago.

23:09.560 --> 23:14.560
 So we don't have a proof that it is random.

23:14.560 --> 23:19.560
 We don't have a proof that it is compressible to a short program.

23:19.560 --> 23:21.560
 But as long as we don't have that proof,

23:21.560 --> 23:24.560
 we are obliged as scientists to keep looking

23:24.560 --> 23:26.560
 for that simple explanation.

23:26.560 --> 23:27.560
 Absolutely.

23:27.560 --> 23:30.560
 So you said simplicity is beautiful or beauty is simple.

23:30.560 --> 23:32.560
 Either one works.

23:32.560 --> 23:36.560
 But you also work on curiosity, discovery.

23:36.560 --> 23:42.560
 The romantic notion of randomness, of serendipity,

23:42.560 --> 23:49.560
 of being surprised by things that are about you,

23:49.560 --> 23:53.560
 kind of in our poetic notion of reality,

23:53.560 --> 23:56.560
 we think as humans require randomness.

23:56.560 --> 23:58.560
 So you don't find randomness beautiful.

23:58.560 --> 24:04.560
 You find simple determinism beautiful.

24:04.560 --> 24:06.560
 Yeah.

24:06.560 --> 24:07.560
 Okay.

24:07.560 --> 24:08.560
 So why?

24:08.560 --> 24:09.560
 Why?

24:09.560 --> 24:12.560
 Because the explanation becomes shorter.

24:12.560 --> 24:19.560
 A universe that is compressible to a short program

24:19.560 --> 24:22.560
 is much more elegant and much more beautiful

24:22.560 --> 24:24.560
 than another one,

24:24.560 --> 24:28.560
 which needs an almost infinite number of bits to be described.

24:28.560 --> 24:31.560
 As far as we know,

24:31.560 --> 24:34.560
 many things that are happening in this universe are really simple

24:34.560 --> 24:38.560
 in terms of short programs that compute gravity

24:38.560 --> 24:43.560
 and the interaction between elementary particles and so on.

24:43.560 --> 24:45.560
 So all of that seems to be very, very simple.

24:45.560 --> 24:50.560
 Every electron seems to reuse the same subprogram all the time

24:50.560 --> 24:57.560
 as it is interacting with other elementary particles.

24:57.560 --> 25:04.560
 If we now require an extra oracle

25:04.560 --> 25:07.560
 injecting new bits of information all the time

25:07.560 --> 25:11.560
 for these extra things which are currently not understood,

25:11.560 --> 25:18.560
 such as beta decay,

25:18.560 --> 25:25.560
 then the whole description length of the data that we can observe

25:25.560 --> 25:31.560
 of the history of the universe would become much longer.

25:31.560 --> 25:33.560
 And therefore, uglier.

25:33.560 --> 25:34.560
 And uglier.

25:34.560 --> 25:38.560
 Again, the simplicity is elegant and beautiful.

25:38.560 --> 25:42.560
 All the history of science is a history of compression progress.

25:42.560 --> 25:43.560
 Yeah.

25:43.560 --> 25:48.560
 So you've described sort of as we build up abstractions

25:48.560 --> 25:52.560
 and you've talked about the idea of compression.

25:52.560 --> 25:55.560
 How do you see this, the history of science,

25:55.560 --> 25:59.560
 the history of humanity, our civilization and life on Earth

25:59.560 --> 26:03.560
 as some kind of path towards greater and greater compression?

26:03.560 --> 26:04.560
 What do you mean by that?

26:04.560 --> 26:06.560
 How do you think about that?

26:06.560 --> 26:12.560
 Indeed, the history of science is a history of compression progress.

26:12.560 --> 26:14.560
 What does that mean?

26:14.560 --> 26:17.560
 Hundreds of years ago, there was an astronomer

26:17.560 --> 26:19.560
 whose name was Kepler.

26:19.560 --> 26:25.560
 And he looked at the data points that he got by watching planets move.

26:25.560 --> 26:28.560
 And then he had all these data points and suddenly it turned out

26:28.560 --> 26:37.560
 that he can greatly compress the data by predicting it through an ellipse law.

26:37.560 --> 26:44.560
 So it turns out that all these data points are more or less on ellipses around the sun.

26:44.560 --> 26:50.560
 And another guy came along whose name was Newton, and before him Hooke.

26:50.560 --> 26:57.560
 And they said the same thing that is making these planets move like that

26:57.560 --> 27:01.560
 is what makes the apples fall down.

27:01.560 --> 27:10.560
 And it also holds for stones and for all kinds of other objects.

27:10.560 --> 27:16.560
 And suddenly many, many of these observations became much more compressible

27:16.560 --> 27:19.560
 because as long as you can predict the next thing,

27:19.560 --> 27:22.560
 given what you have seen so far, you can compress it.

27:22.560 --> 27:24.560
 But you don't have to store that data extra.

27:24.560 --> 27:28.560
 This is called predictive coding.
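
To make the predictive coding idea concrete, here is a minimal, self-contained Python sketch, purely illustrative and not code from any system discussed here: a smooth observation stream, like sampled planetary positions, is stored as residuals from a simple linear extrapolation, and those residuals compress far better than the raw values.

```python
import math
import struct
import zlib

# A smooth observation stream, e.g. planetary positions sampled over time.
xs = [int(10000 + 5000 * math.sin(t / 30)) for t in range(5000)]

def pack(values):
    """Serialize a list of ints so compressed sizes can be compared."""
    return struct.pack(f"{len(values)}i", *values)

# Predictive coding: predict each value by linear extrapolation from the
# two previous values, and store only the (small) prediction residuals.
residuals = xs[:2] + [xs[t] - (2 * xs[t - 1] - xs[t - 2])
                      for t in range(2, len(xs))]

print("raw bytes:     ", len(zlib.compress(pack(xs))))
print("residual bytes:", len(zlib.compress(pack(residuals))))  # much smaller
```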

27:28.560 --> 27:33.560
 And then there was still something wrong with that theory of the universe

27:33.560 --> 27:37.560
 and you had deviations from these predictions of the theory.

27:37.560 --> 27:41.560
 And 300 years later another guy came along whose name was Einstein

27:41.560 --> 27:50.560
 and he was able to explain away all these deviations from the predictions of the old theory

27:50.560 --> 27:56.560
 through a new theory which was called the general theory of relativity

27:56.560 --> 28:00.560
 which at first glance looks a little bit more complicated

28:00.560 --> 28:05.560
 and you have to warp space and time, but you can phrase it within one single sentence,

28:05.560 --> 28:12.560
 which is no matter how fast you accelerate and how fast or how hard you decelerate

28:12.560 --> 28:18.560
 and no matter what is the gravity in your local framework,

28:18.560 --> 28:21.560
 light speed always looks the same.

28:21.560 --> 28:24.560
 And from that you can calculate all the consequences.

28:24.560 --> 28:30.560
 So it's a very simple thing and it allows you to further compress all the observations

28:30.560 --> 28:35.560
 because suddenly there are hardly any deviations any longer

28:35.560 --> 28:39.560
 that you can measure from the predictions of this new theory.

28:39.560 --> 28:44.560
 So the history of science is a history of compression progress.

28:44.560 --> 28:50.560
 You never arrive immediately at the shortest explanation of the data

28:50.560 --> 28:52.560
 but you're making progress.

28:52.560 --> 28:56.560
 Whenever you are making progress you have an insight.

28:56.560 --> 29:01.560
 You see, oh, first I needed so many bits of information to describe the data,

29:01.560 --> 29:04.560
 to describe my falling apples, my video of falling apples,

29:04.560 --> 29:08.560
 I need so much data, so many pixels have to be stored,

29:08.560 --> 29:14.560
 but then suddenly I realize, no, there is a very simple way of predicting the third frame

29:14.560 --> 29:20.560
 in the video from the first two and maybe not every little detail can be predicted

29:20.560 --> 29:24.560
 but more or less most of these orange blots that are coming down,

29:24.560 --> 29:28.560
 they fall in the same way, which means that I can greatly compress the video

29:28.560 --> 29:33.560
 and the amount of compression progress,

29:33.560 --> 29:37.560
 that is the depth of the insight that you have at that moment.

29:37.560 --> 29:40.560
 That's the fun that you have, the scientific fun,

29:40.560 --> 29:46.560
 the fun in that discovery and we can build artificial systems that do the same thing.

29:46.560 --> 29:51.560
 They measure the depth of their insights as they are looking at the data

29:51.560 --> 29:55.560
 through their own experiments and we give them a reward,

29:55.560 --> 30:00.560
 an intrinsic reward in proportion to this depth of insight.

30:00.560 --> 30:07.560
 And since they are trying to maximize the rewards they get,

30:07.560 --> 30:12.560
 they are suddenly motivated to come up with new action sequences,

30:12.560 --> 30:17.560
 with new experiments that have the property that the data that is coming in

30:17.560 --> 30:21.560
 as a consequence of these experiments has the property

30:21.560 --> 30:25.560
 that they can learn something about, see a pattern in there

30:25.560 --> 30:28.560
 which they hadn't seen yet before.
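
As a hedged sketch of that reward scheme, with a toy predictor class invented for illustration rather than taken from the actual systems: the intrinsic reward is the improvement of the agent's predictive model on the data, so data that is already understood stops paying.

```python
class MeanPredictor:
    """Toy world model: predicts every observation by a running mean."""

    def __init__(self):
        self.mean, self.count = 0.0, 0

    def loss(self, batch):
        # Prediction error: a rough stand-in for the cost of encoding
        # the data with the current model.
        return sum((x - self.mean) ** 2 for x in batch) / len(batch)

    def learn(self, batch):
        for x in batch:
            self.count += 1
            self.mean += (x - self.mean) / self.count

def intrinsic_reward(model, batch):
    before = model.loss(batch)    # encoding cost before learning
    model.learn(batch)
    after = model.loss(batch)     # encoding cost after learning
    return before - after        # compression progress = depth of insight

m = MeanPredictor()
print(intrinsic_reward(m, [5.0, 5.1, 4.9]))  # large: a lot was learned
print(intrinsic_reward(m, [5.0, 5.1, 4.9]))  # near zero: now boring
```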

30:28.560 --> 30:32.560
 So there's this idea of PowerPlay that you've described,

30:32.560 --> 30:37.560
 training an increasingly general problem solver in this kind of way, by looking for the unsolved problems.

30:37.560 --> 30:40.560
 Can you describe that idea a little further?

30:40.560 --> 30:42.560
 It's another very simple idea.

30:42.560 --> 30:49.560
 Normally what you do in computer science, you have some guy who gives you a problem

30:49.560 --> 30:56.560
 and then there is a huge search space of potential solution candidates

30:56.560 --> 31:02.560
 and you somehow try them out and you have more or less sophisticated ways

31:02.560 --> 31:06.560
 of moving around in that search space

31:06.560 --> 31:11.560
 until you finally find a solution which you consider satisfactory.

31:11.560 --> 31:15.560
 That's what most of computer science is about.

31:15.560 --> 31:19.560
 PowerPlay just goes one little step further and says,

31:19.560 --> 31:24.560
 let's not only search for solutions to a given problem,

31:24.560 --> 31:30.560
 but let's search through pairs of problems and their solutions,

31:30.560 --> 31:36.560
 where the system itself has the opportunity to phrase its own problem.

31:36.560 --> 31:42.560
 So we are looking suddenly at pairs of problems and their solutions

31:42.560 --> 31:46.560
 or modifications of the problem solver

31:46.560 --> 31:50.560
 that is supposed to generate a solution to that new problem.

31:50.560 --> 31:56.560
 And this additional degree of freedom

31:56.560 --> 32:01.560
 allows us to build curious systems that are like scientists,

32:01.560 --> 32:06.560
 in the sense that they not only try to solve and try to find answers

32:06.560 --> 32:12.560
 to existing questions, no, they are also free to pose their own questions.

32:12.560 --> 32:15.560
 So if you want to build an artificial scientist,

32:15.560 --> 32:19.560
 you have to give it that freedom, and PowerPlay is exactly doing that.

32:19.560 --> 32:23.560
 So that's a dimension of freedom that's important to have,

32:23.560 --> 32:31.560
 but how hard do you think that is? How multidimensional and difficult is the space of

32:31.560 --> 32:34.560
 coming up with your own questions?

32:34.560 --> 32:38.560
 So it's one of the things that as human beings we consider to be

32:38.560 --> 32:41.560
 the thing that makes us special, the intelligence that makes us special

32:41.560 --> 32:47.560
 is that brilliant insight that can create something totally new.

32:47.560 --> 32:51.560
 Yes. So now let's look at the extreme case.

32:51.560 --> 32:57.560
 Let's look at the set of all possible problems that you can formally describe,

32:57.560 --> 33:03.560
 which is infinite: which should be the next problem

33:03.560 --> 33:07.560
 that a scientist or power play is going to solve.

33:07.560 --> 33:16.560
 Well, it should be the easiest problem that goes beyond what you already know.

33:16.560 --> 33:22.560
 So it should be the simplest problem that the current problem solver,

33:22.560 --> 33:28.560
 which can already solve a hundred problems, cannot yet solve

33:28.560 --> 33:30.560
 by just generalizing.

33:30.560 --> 33:32.560
 So it has to be new.

33:32.560 --> 33:36.560
 So it has to require a modification of the problem solver such that the new

33:36.560 --> 33:41.560
 problem solver can solve this new thing, but the old problem solver cannot do it.

33:41.560 --> 33:47.560
 And in addition to that, we have to make sure that the problem solver

33:47.560 --> 33:50.560
 doesn't forget any of the previous solutions.

33:50.560 --> 33:51.560
 Right.

33:51.560 --> 33:57.560
 And so, by definition, PowerPlay is now always trying to search

33:57.560 --> 34:02.560
 in the set of pairs of problems and problem solver modifications

34:02.560 --> 34:08.560
 for a combination that minimizes the time to achieve these criteria.

34:08.560 --> 34:14.560
 PowerPlay is trying to find the problem which is easiest to add to the repertoire.
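
A toy sketch of that search, with everything a stand-in (real PowerPlay searches spaces of programs and formally described tasks, not integers): keep proposing the simplest task the current solver cannot handle, and accept a solver modification only if it solves the new task without breaking any previously solved one.

```python
import itertools

def solves(solver, task):
    # Stand-in solvability test; a "solver" here is just a set of task ids.
    return task in solver

solver = set()       # current problem solver
repertoire = []      # tasks added so far, simplest first

for _ in range(5):
    for task in itertools.count(0):          # enumerate tasks by simplicity
        if solves(solver, task):
            continue                         # not novel: already solvable
        candidate = solver | {task}          # candidate solver modification
        if all(solves(candidate, t) for t in repertoire):  # no forgetting
            solver, repertoire = candidate, repertoire + [task]
            break

print(repertoire)  # [0, 1, 2, 3, 4]: always the easiest still-unsolved task
```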

34:14.560 --> 34:19.560
 So just like grad students and academics and researchers can spend their whole

34:19.560 --> 34:25.560
 career stuck in a local minimum, trying to come up with interesting questions,

34:25.560 --> 34:27.560
 but ultimately doing very little.

34:27.560 --> 34:32.560
 Do you think it's easy in this approach of looking for the simplest

34:32.560 --> 34:38.560
 problem for the problem solver, to get stuck in a local minimum, never really discovering

34:38.560 --> 34:43.560
 anything new, you know, never really jumping outside of the hundred problems that you've already

34:43.560 --> 34:47.560
 solved in a genuine creative way.

34:47.560 --> 34:52.560
 No, because that's the nature of power play that it's always trying to break

34:52.560 --> 34:58.560
 its current generalization abilities by coming up with a new problem which is

34:58.560 --> 35:04.560
 beyond the current horizon, just shifting the horizon of knowledge a little bit

35:04.560 --> 35:10.560
 out there, breaking the existing rules such that the new thing becomes solvable

35:10.560 --> 35:13.560
 but wasn't solvable by the old thing.

35:13.560 --> 35:19.560
 So like adding a new axiom, like what Gödel did when he came up with these

35:19.560 --> 35:23.560
 new sentences, new theorems that didn't have a proof in the formal system,

35:23.560 --> 35:30.560
 which means you can add them to the repertoire, hoping that they are not

35:30.560 --> 35:35.560
 going to damage the consistency of the whole thing.

35:35.560 --> 35:41.560
 So in the paper with the amazing title, Formal Theory of Creativity,

35:41.560 --> 35:47.560
 Fun and Intrinsic Motivation, you talk about discovery as intrinsic reward.

35:47.560 --> 35:53.560
 So if you view humans as intelligent agents, what do you think is the purpose

35:53.560 --> 35:56.560
 and meaning of life for us humans?

35:56.560 --> 35:58.560
 You've talked about this discovery.

35:58.560 --> 36:04.560
 Do you see humans as instances of PowerPlay agents?

36:04.560 --> 36:11.560
 Yeah, so humans are curious and that means they behave like scientists,

36:11.560 --> 36:15.560
 not only the official scientists but even the babies behave like scientists

36:15.560 --> 36:19.560
 and they play around with their toys to figure out how the world works

36:19.560 --> 36:22.560
 and how it is responding to their actions.

36:22.560 --> 36:26.560
 And that's how they learn about gravity and everything.

36:26.560 --> 36:30.560
 And yeah, in 1990, we had the first systems like that

36:30.560 --> 36:33.560
 who would just try to play around with the environment

36:33.560 --> 36:39.560
 and come up with situations that go beyond what they knew at that time

36:39.560 --> 36:42.560
 and then get a reward for creating these situations

36:42.560 --> 36:45.560
 and then becoming more general problem solvers

36:45.560 --> 36:48.560
 and being able to understand more of the world.

36:48.560 --> 36:56.560
 So yeah, I think in principle that curiosity,

36:56.560 --> 37:02.560
 strategy or more sophisticated versions of what I just described,

37:02.560 --> 37:07.560
 they are what we have built in as well because evolution discovered

37:07.560 --> 37:12.560
 that's a good way of exploring the unknown world and a guy who explores

37:12.560 --> 37:16.560
 the unknown world has a higher chance of solving problems

37:16.560 --> 37:19.560
 that he needs to survive in this world.

37:19.560 --> 37:23.560
 On the other hand, those guys who were too curious,

37:23.560 --> 37:25.560
 they were weeded out as well.

37:25.560 --> 37:27.560
 So you have to find this trade off.

37:27.560 --> 37:30.560
 Evolution found a certain trade off. Apparently, in our society,

37:30.560 --> 37:35.560
 there is a certain percentage of extremely explorative guys,

37:35.560 --> 37:41.560
 and it doesn't matter if they die because many of the others are more conservative.

37:41.560 --> 37:46.560
 And so yeah, it would be surprising to me

37:46.560 --> 37:55.560
 if that principle of artificial curiosity wouldn't be present

37:55.560 --> 37:59.560
 in almost exactly the same form here in our brains.

37:59.560 --> 38:02.560
 So you're a bit of a musician and an artist.

38:02.560 --> 38:07.560
 So continuing on this topic of creativity,

38:07.560 --> 38:10.560
 what do you think is the role of creativity in intelligence?

38:10.560 --> 38:16.560
 So you've kind of implied that it's essential for intelligence,

38:16.560 --> 38:21.560
 if you think of intelligence as a problem solving system,

38:21.560 --> 38:23.560
 as ability to solve problems.

38:23.560 --> 38:28.560
 But do you think it's essential, this idea of creativity?

38:28.560 --> 38:34.560
 We never have a subprogram that is called creativity or something.

38:34.560 --> 38:37.560
 It's just a side effect of what our problem solvers always do.

38:37.560 --> 38:44.560
 They are searching a space of candidates, of solution candidates,

38:44.560 --> 38:47.560
 until they hopefully find a solution to a given problem.

38:47.560 --> 38:50.560
 But then there are these two types of creativity

38:50.560 --> 38:53.560
 and both of them are now present in our machines.

38:53.560 --> 38:56.560
 The first one has been around for a long time,

38:56.560 --> 38:59.560
 which is human gives problem to machine.

38:59.560 --> 39:03.560
 Machine tries to find a solution to that.

39:03.560 --> 39:05.560
 And this has been happening for many decades.

39:05.560 --> 39:09.560
 And for many decades, machines have found creative solutions

39:09.560 --> 39:13.560
 to interesting problems where humans were not aware

39:13.560 --> 39:17.560
 of these particularly creative solutions,

39:17.560 --> 39:20.560
 but then appreciated that the machine found that.

39:20.560 --> 39:23.560
 The second is the pure creativity.

39:23.560 --> 39:28.560
 What I just mentioned, I would call the applied creativity,

39:28.560 --> 39:31.560
 like applied art, where somebody tells you,

39:31.560 --> 39:34.560
 now make a nice picture of this pope,

39:34.560 --> 39:36.560
 and you will get money for that.

39:36.560 --> 39:41.560
 So here is the artist and he makes a convincing picture of the pope

39:41.560 --> 39:44.560
 and the pope likes it and gives him the money.

39:44.560 --> 39:48.560
 And then there is the pure creativity,

39:48.560 --> 39:51.560
 which is more like the PowerPlay and the artificial curiosity thing,

39:51.560 --> 39:56.560
 where you have the freedom to select your own problem,

39:56.560 --> 40:02.560
 like a scientist who defines his own question to study.

40:02.560 --> 40:06.560
 And so that is the pure creativity, if you will,

40:06.560 --> 40:13.560
 as opposed to the applied creativity, which serves another.

40:13.560 --> 40:18.560
 In that distinction, there's almost echoes of narrow AI versus general AI.

40:18.560 --> 40:24.560
 So this kind of constrained painting of a pope seems like

40:24.560 --> 40:29.560
 the approaches of what people are calling narrow AI.

40:29.560 --> 40:32.560
 And pure creativity seems to be,

40:32.560 --> 40:34.560
 maybe I'm just biased as a human,

40:34.560 --> 40:40.560
 but it seems to be an essential element of human level intelligence.

40:40.560 --> 40:43.560
 Is that what you're implying?

40:43.560 --> 40:45.560
 To a degree.

40:45.560 --> 40:50.560
 If you zoom back a little bit and you just look at a general problem solving machine,

40:50.560 --> 40:53.560
 which is trying to solve arbitrary problems,

40:53.560 --> 40:57.560
 then this machine will figure out in the course of solving problems

40:57.560 --> 40:59.560
 that it's good to be curious.

40:59.560 --> 41:04.560
 So all of what I said just now about this prewired curiosity

41:04.560 --> 41:10.560
 and this will to invent new problems that the system doesn't know how to solve yet,

41:10.560 --> 41:14.560
 should be just a byproduct of the general search.

41:14.560 --> 41:21.560
 However, apparently evolution has built it into us

41:21.560 --> 41:26.560
 because it turned out to be so successful: a prewiring, a bias,

41:26.560 --> 41:33.560
 a very successful exploratory bias that we are born with.

41:33.560 --> 41:36.560
 And you've also said that consciousness in the same kind of way

41:36.560 --> 41:40.560
 may be a byproduct of problem solving.

41:40.560 --> 41:44.560
 Do you find this an interesting byproduct?

41:44.560 --> 41:46.560
 Do you think it's a useful byproduct?

41:46.560 --> 41:49.560
 What are your thoughts on consciousness in general?

41:49.560 --> 41:54.560
 Or is it simply a byproduct of greater and greater capabilities of problem solving

41:54.560 --> 42:00.560
 that's similar to creativity in that sense?

42:00.560 --> 42:04.560
 We never have a procedure called consciousness in our machines.

42:04.560 --> 42:10.560
 However, we get side effects of what these machines are doing,

42:10.560 --> 42:15.560
 things that seem to be closely related to what people call consciousness.

42:15.560 --> 42:20.560
 So for example, already in 1990 we had simple systems

42:20.560 --> 42:25.560
 which were basically recurrent networks and therefore universal computers

42:25.560 --> 42:32.560
 trying to map incoming data into actions that lead to success.

42:32.560 --> 42:39.560
 Maximizing reward in a given environment, always finding the charging station in time

42:39.560 --> 42:43.560
 whenever the battery is low and negative signals are coming from the battery,

42:43.560 --> 42:50.560
 always find the charging station in time without bumping against painful obstacles on the way.

42:50.560 --> 42:54.560
 So complicated things but very easily motivated.

42:54.560 --> 43:01.560
 And then we give these little guys a separate recurrent network

43:01.560 --> 43:04.560
 which is just predicting what's happening if I do that and that.

43:04.560 --> 43:08.560
 What will happen as a consequence of these actions that I'm executing

43:08.560 --> 43:13.560
 and it's just trained on the long, long history of interactions with the world.

43:13.560 --> 43:17.560
 So it becomes a predictive model of the world basically.

43:17.560 --> 43:22.560
 And therefore also a compressor of the observations of the world

43:22.560 --> 43:26.560
 because whatever you can predict, you don't have to store extra.

43:26.560 --> 43:29.560
 So compression is a side effect of prediction.

43:29.560 --> 43:32.560
 And how does this recurrent network compress?

43:32.560 --> 43:36.560
 Well, it's inventing little subprograms, little subnetworks

43:36.560 --> 43:41.560
 that stand for everything that frequently appears in the environment.

43:41.560 --> 43:47.560
 Like bottles and microphones and faces, maybe lots of faces in my environment.

43:47.560 --> 43:51.560
 So I'm learning to create something like a prototype face

43:51.560 --> 43:55.560
 and a new face comes along and all I have to encode are the deviations from the prototype.

43:55.560 --> 44:00.560
 So it's compressing all the time the stuff that frequently appears.
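
A tiny numerical illustration of that prototype trick, with synthetic vectors standing in for faces: frequent patterns get a prototype, and each new instance is encoded as small deviations from it.

```python
import random
import statistics

random.seed(0)
# 100 observed "faces", each an 8-feature vector near a common pattern.
faces = [[50 + random.gauss(0, 2) for _ in range(8)] for _ in range(100)]

# Prototype = per-feature mean of everything seen so far.
prototype = [statistics.fmean(face[i] for face in faces) for i in range(8)]

# A new face is encoded only by its deviations from the prototype.
new_face = [50 + random.gauss(0, 2) for _ in range(8)]
deviations = [round(x - p, 1) for x, p in zip(new_face, prototype)]
print(deviations)  # small numbers: far cheaper to encode than raw features
```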

44:00.560 --> 44:04.560
 There's one thing that appears all the time

44:04.560 --> 44:09.560
 that is present all the time when the agent is interacting with its environment,

44:09.560 --> 44:11.560
 which is the agent itself.

44:11.560 --> 44:14.560
 So just for data compression reasons,

44:14.560 --> 44:18.560
 it is extremely natural for this recurrent network

44:18.560 --> 44:23.560
 to come up with little subnetworks that stand for the properties of the agent:

44:23.560 --> 44:27.560
 the hand, the other actuators,

44:27.560 --> 44:31.560
 and all the stuff that you need to better encode the data,

44:31.560 --> 44:34.560
 which is influenced by the actions of the agent.

44:34.560 --> 44:40.560
 So there, just as a side effect of data compression during problem solving,

44:40.560 --> 44:45.560
 you have internal self models.

44:45.560 --> 44:51.560
 Now you can use this model of the world to plan your future.

44:51.560 --> 44:54.560
 And that's what we also have done since 1990.

44:54.560 --> 44:57.560
 So the recurrent network, which is the controller,

44:57.560 --> 44:59.560
 which is trying to maximize reward,

44:59.560 --> 45:02.560
 can use this model network of the world,

45:02.560 --> 45:05.560
 this model network of the world, this predictive model of the world

45:05.560 --> 45:08.560
 to plan ahead and say, let's not do this action sequence.

45:08.560 --> 45:11.560
 Let's do this action sequence instead

45:11.560 --> 45:14.560
 because it leads to more predicted rewards.
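
A minimal sketch of that planning step, under invented dynamics: the `model` function below is a stand-in for the learned predictive network, and the controller mentally rolls out candidate action sequences through it, picking the one with the highest predicted return.

```python
import itertools

def model(state, action):
    """Stand-in learned world model: predicts next state and reward."""
    next_state = state + (1 if action == "right" else -1)
    reward = -abs(10 - next_state)       # predicted reward: approach 10
    return next_state, reward

def plan(state, horizon=4):
    best_seq, best_return = None, float("-inf")
    for seq in itertools.product(["left", "right"], repeat=horizon):
        s, total = state, 0.0
        for a in seq:                    # mental rollout, no real actions
            s, r = model(s, a)
            total += r
        if total > best_return:
            best_seq, best_return = seq, total
    return best_seq

print(plan(0))  # ('right', 'right', 'right', 'right'): best predicted reward
```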

45:14.560 --> 45:19.560
 And whenever it's waking up these little subnetworks that stand for itself,

45:19.560 --> 45:21.560
 then it's thinking about itself.

45:23.560 --> 45:30.560
 And it's exploring mentally the consequences of its own actions.

45:30.560 --> 45:36.560
 And now you tell me what's still missing.

45:36.560 --> 45:39.560
 Missing the gap to consciousness.

45:39.560 --> 45:43.560
 There isn't. That's a really beautiful idea that, you know,

45:43.560 --> 45:46.560
 if life is a collection of data

45:46.560 --> 45:53.560
 and life is a process of compressing that data to act efficiently.

45:53.560 --> 45:57.560
 In that data, you yourself appear very often.

45:57.560 --> 46:00.560
 So it's useful to form compressions of yourself.

46:00.560 --> 46:03.560
 And it's a really beautiful formulation of what consciousness is,

46:03.560 --> 46:05.560
 is a necessary side effect.

46:05.560 --> 46:11.560
 It's actually quite compelling to me.

46:11.560 --> 46:18.560
 You've described RNNs, you developed LSTMs, long short term memory networks.

46:18.560 --> 46:22.560
 They're a type of recurrent neural network.

46:22.560 --> 46:24.560
 They've gotten a lot of success recently.

46:24.560 --> 46:29.560
 So these are networks that model the temporal aspects in the data,

46:29.560 --> 46:31.560
 temporal patterns in the data.

46:31.560 --> 46:36.560
 And you've called them the deepest of the neural networks, right?

46:36.560 --> 46:43.560
 What do you think is the value of depth in the models that we use to learn?

46:43.560 --> 46:47.560
 Yeah, since you mentioned the long short term memory and the LSTM,

46:47.560 --> 46:52.560
 I have to mention the names of the brilliant students who made that possible.

46:52.560 --> 46:53.560
 Yes, of course, of course.

46:53.560 --> 46:56.560
 First of all, my first student ever, Sepp Hochreiter,

46:56.560 --> 47:00.560
 who had fundamental insights already in his diploma thesis.

47:00.560 --> 47:04.560
 Then Felix Gers, who made additional important contributions.

47:04.560 --> 47:11.560
 Alex Graves is a guy from Scotland who is mostly responsible for this CTC algorithm,

47:11.560 --> 47:16.560
 which is now often used to train the LSTM to do the speech recognition

47:16.560 --> 47:21.560
 on all the Google Android phones and whatever, and Siri and so on.

47:21.560 --> 47:26.560
 So these guys, without these guys, I would be nothing.

47:26.560 --> 47:28.560
 It's a lot of incredible work.

47:28.560 --> 47:30.560
 What is now the depth?

47:30.560 --> 47:32.560
 What is the importance of depth?

47:32.560 --> 47:36.560
 Well, most problems in the real world are deep

47:36.560 --> 47:41.560
 in the sense that the current input doesn't tell you all you need to know

47:41.560 --> 47:44.560
 about the environment.

47:44.560 --> 47:49.560
 So instead, you have to have a memory of what happened in the past

47:49.560 --> 47:54.560
 and often important parts of that memory are dated.

47:54.560 --> 47:56.560
 They are pretty old.

47:56.560 --> 47:59.560
 So when you're doing speech recognition, for example,

47:59.560 --> 48:03.560
 and somebody says "eleven",

48:03.560 --> 48:08.560
 then that's about half a second or something like that,

48:08.560 --> 48:11.560
 which means it's already 50 time steps.

48:11.560 --> 48:15.560
 And another guy, or the same guy, says "seven".

48:15.560 --> 48:18.560
 So the ending is the same: "even".

48:18.560 --> 48:22.560
 But now the system has to see the distinction between "seven" and "eleven",

48:22.560 --> 48:26.560
 and the only way it can see the difference is it has to store

48:26.560 --> 48:34.560
 that 50 steps ago there was an "s" or an "el": "seven" or "eleven".

48:34.560 --> 48:37.560
 So there you have already a problem of depth 50,

48:37.560 --> 48:42.560
 because for each time step you have something like a virtual layer

48:42.560 --> 48:45.560
 in the expanded, unrolled version of this recurrent network

48:45.560 --> 48:47.560
 which is doing the speech recognition.

48:47.560 --> 48:53.560
 So these long time lags, they translate into problem depth.

48:53.560 --> 48:59.560
 And most problems in this world are such that you really

48:59.560 --> 49:04.560
 have to look far back in time to understand what is the problem

49:04.560 --> 49:06.560
 and to solve it.

49:06.560 --> 49:09.560
 But just like with LSTMs, you don't necessarily need to,

49:09.560 --> 49:12.560
 when you look back in time, remember every aspect.

49:12.560 --> 49:14.560
 You just need to remember the important aspects.

49:14.560 --> 49:15.560
 That's right.

49:15.560 --> 49:19.560
 The network has to learn to put the important stuff into memory

49:19.560 --> 49:23.560
 and to ignore the unimportant noise.
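
For concreteness, here is a single-unit LSTM cell with the standard gate equations; the weights are illustrative constants rather than a trained network, and the recurrent connections are zeroed for clarity. The forget gate decides what to keep, which is what lets one important bit survive a 50-step lag.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c):
    # Standard LSTM gates; all numbers are illustrative, not trained.
    i = sigmoid(2.0 * x - 1.0)   # input gate: opens when a signal arrives
    f = sigmoid(4.0)             # forget gate biased open: keep old memory
    g = math.tanh(2.0 * x)       # candidate content: the incoming signal
    o = sigmoid(1.0)             # output gate
    c = f * c + i * g            # the protected memory cell
    h = o * math.tanh(c)
    return h, c

h = c = 0.0
for x in [1.0] + [0.0] * 49:     # one brief signal, then 49 silent steps
    h, c = lstm_step(x, h, c)
print(round(c, 3))               # ~0.29: the stored bit survived the lag
```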

49:23.560 --> 49:28.560
 But in that sense, deeper and deeper is better?

49:28.560 --> 49:30.560
 Or is there a limitation?

49:30.560 --> 49:36.560
 I mean LSTM is one of the great examples of architectures

49:36.560 --> 49:41.560
 that do something beyond just deeper and deeper networks.

49:41.560 --> 49:47.560
 There are clever mechanisms for filtering data, for remembering and forgetting.

49:47.560 --> 49:51.560
 So do you think that kind of thinking is necessary?

49:51.560 --> 49:54.560
 If you think about LSTMs as a leap, a big leap forward

49:54.560 --> 50:01.560
 over traditional vanilla RNNs, what do you think is the next leap

50:01.560 --> 50:03.560
 within this context?

50:03.560 --> 50:08.560
 So LSTM is a very clever improvement, but LSTMs still don't

50:08.560 --> 50:13.560
 have the same kind of ability to see far back in the past

50:13.560 --> 50:18.560
 as humans do: the credit assignment problem across long time lags,

50:18.560 --> 50:24.560
 not just 50 time steps or 100 or 1,000, but millions and billions.

50:24.560 --> 50:28.560
 It's not clear what are the practical limits of the LSTM

50:28.560 --> 50:30.560
 when it comes to looking back.

50:30.560 --> 50:35.560
 Already in 2006, I think, we had examples where it not only

50:35.560 --> 50:40.560
 looked back tens of thousands of steps, but really millions of steps.

50:40.560 --> 50:46.560
 And Juan Antonio Pérez-Ortiz in my lab, I think, was the first author of a paper

50:46.560 --> 50:51.560
 where we really, was it 2006 or something, had examples where it

50:51.560 --> 50:56.560
 learned to look back for more than 10 million steps.

50:56.560 --> 51:02.560
 So for most problems of speech recognition, it's not

51:02.560 --> 51:06.560
 necessary to look that far back, but there are examples where it is.

51:06.560 --> 51:12.560
 Now, the looking back thing, that's rather easy because there is only

51:12.560 --> 51:17.560
 one past, but there are many possible futures.

51:17.560 --> 51:21.560
 And so a reinforcement learning system, which is trying to maximize

51:21.560 --> 51:26.560
 its future expected reward and doesn't know yet which of these

51:26.560 --> 51:31.560
 many possible futures should I select, given this one single past,

51:31.560 --> 51:36.560
 is facing problems that the LSTM by itself cannot solve.

51:36.560 --> 51:40.560
 So the LSTM is good for coming up with a compact representation

51:40.560 --> 51:46.560
 of the history so far, of the observations and actions so far.

51:46.560 --> 51:53.560
 But now, how do you plan in an efficient and good way among all these,

51:53.560 --> 51:57.560
 how do you select one of these many possible action sequences

51:57.560 --> 52:02.560
 that a reinforcement learning system has to consider to maximize

52:02.560 --> 52:05.560
 reward in this unknown future.

52:05.560 --> 52:11.560
 So again, we have this basic setup where you have one recurrent network,

52:11.560 --> 52:16.560
 which gets the video and the speech and whatever coming in, and it's

52:16.560 --> 52:19.560
 executing the actions and it's trying to maximize reward.

52:19.560 --> 52:24.560
 So there is no teacher who tells it what to do at which point in time.

52:24.560 --> 52:29.560
 And then there's the other network, which is just predicting

52:29.560 --> 52:32.560
 what's going to happen if I do that and that.

52:32.560 --> 52:36.560
 And that could be an LSTM network, and it learns to look back

52:36.560 --> 52:41.560
 all the way to make better predictions of the next time step.

52:41.560 --> 52:45.560
 So essentially, although it's predicting only the next time step,

52:45.560 --> 52:50.560
 it is motivated to learn to put into memory something that happened

52:50.560 --> 52:54.560
 maybe a million steps ago because it's important to memorize that

52:54.560 --> 52:58.560
 if you want to predict that at the next time step, the next event.

52:58.560 --> 53:03.560
 Now, how can a model of the world like that,

53:03.560 --> 53:07.560
 a predictive model of the world be used by the first guy,

53:07.560 --> 53:11.560
 let's call them the controller and the model.

53:11.560 --> 53:16.560
 How can the model be used by the controller to efficiently select

53:16.560 --> 53:19.560
 among these many possible futures?

53:19.560 --> 53:23.560
 The naive way we had about 30 years ago was

53:23.560 --> 53:27.560
 let's just use the model of the world as a stand in,

53:27.560 --> 53:29.560
 as a simulation of the world.

53:29.560 --> 53:32.560
 And millisecond by millisecond we plan the future

53:32.560 --> 53:36.560
 and that means we have to roll it out really in detail

53:36.560 --> 53:38.560
 and it will work only if the model is really good

53:38.560 --> 53:40.560
 and it will still be inefficient

53:40.560 --> 53:43.560
 because we have to look at all these possible futures

53:43.560 --> 53:45.560
 and there are so many of them.

53:45.560 --> 53:50.560
 So instead, what we do now, since 2015, in our CM systems,

53:50.560 --> 53:54.560
 controller model systems, we give the controller the opportunity

53:54.560 --> 54:00.560
 to learn by itself how to use the potentially relevant parts

54:00.560 --> 54:05.560
 of the model network to solve new problems more quickly.

54:05.560 --> 54:09.560
 And if it wants to, it can learn to ignore the M

54:09.560 --> 54:12.560
 and sometimes it's a good idea to ignore the M

54:12.560 --> 54:15.560
 because it's really bad, it's a bad predictor

54:15.560 --> 54:18.560
 in this particular situation of life

54:18.560 --> 54:22.560
 where the controller is currently trying to maximize reward.

54:22.560 --> 54:26.560
 However, it can also learn to address and exploit

54:26.560 --> 54:32.560
 some of the subprograms that came about in the model network

54:32.560 --> 54:35.560
 through compressing the data by predicting it.

54:35.560 --> 54:40.560
 So it now has an opportunity to reuse that code,

54:40.560 --> 54:43.560
 the algorithmic information in the model network

54:43.560 --> 54:47.560
 to reduce its own search space,

54:47.560 --> 54:50.560
 such that it can solve a new problem more quickly

54:50.560 --> 54:52.560
 than without the model.
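
A hedged sketch of that controller/model arrangement, with invented function names and an invented gating scheme: the controller receives the model M's internal features as extra inputs, and a learned gate lets it exploit M when M is useful and ignore M when it is a bad predictor.

```python
def model_features(observation):
    # Stand-in for M's internal state: features it built while learning
    # to predict the world (its reusable "algorithmic information").
    return [0.1 * observation, 0.01 * observation ** 2]

def controller_action(observation, weights, gate):
    # gate in [0, 1] is a learned parameter: 1.0 exploits M fully,
    # 0.0 ignores M when it is a bad predictor in this situation.
    inputs = [observation] + [gate * f for f in model_features(observation)]
    score = sum(w * x for w, x in zip(weights, inputs))
    return "right" if score > 0 else "left"

# With the gate open, the controller reuses M's features to decide.
print(controller_action(3.0, weights=[0.2, 1.0, -0.5], gate=1.0))
# With the gate shut, it falls back on the raw observation alone.
print(controller_action(3.0, weights=[0.2, 1.0, -0.5], gate=0.0))
```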

54:52.560 --> 54:54.560
 Compression.

54:54.560 --> 54:58.560
 So you're ultimately optimistic and excited

54:58.560 --> 55:02.560
 about the power of reinforcement learning

55:02.560 --> 55:04.560
 in the context of real systems.

55:04.560 --> 55:06.560
 Absolutely, yeah.

55:06.560 --> 55:11.560
 So you see RL as potentially having a huge impact

55:11.560 --> 55:15.560
 beyond just sort of the M part, which is often developed

55:15.560 --> 55:19.560
 with supervised learning methods.

55:19.560 --> 55:25.560
 You see RL, for problems like self-driving cars

55:25.560 --> 55:28.560
 or any kind of applied robotics,

55:28.560 --> 55:33.560
 as the correct, interesting direction for research, in your view?

55:33.560 --> 55:35.560
 I do think so.

55:35.560 --> 55:37.560
 We have a company called NNAISENSE,

55:37.560 --> 55:43.560
 which has applied reinforcement learning to little Audis.

55:43.560 --> 55:45.560
 Little Audis.

55:45.560 --> 55:47.560
 Which learn to park without a teacher.

55:47.560 --> 55:51.560
 The same principles were used, of course.

55:51.560 --> 55:54.560
 So these little Audis, they are small, maybe like that,

55:54.560 --> 55:57.560
 so much smaller than the real Audis.

55:57.560 --> 56:00.560
 But they have all the sensors that you find in the real Audis.

56:00.560 --> 56:03.560
 You find the cameras, the LIDAR sensors.

56:03.560 --> 56:08.560
 They go up to 120 kilometers an hour if they want to.

56:08.560 --> 56:12.560
 And they have pain sensors, basically.

56:12.560 --> 56:16.560
 And they don't want to bump against obstacles and other Audis.

56:16.560 --> 56:21.560
 And so they must learn like little babies to park.

56:21.560 --> 56:25.560
 Take the raw vision input and translate that into actions

56:25.560 --> 56:28.560
 that lead to successful parking behavior,

56:28.560 --> 56:30.560
 which is a rewarding thing.

56:30.560 --> 56:32.560
 And yes, they learn that.

56:32.560 --> 56:34.560
 So we have examples like that.

56:34.560 --> 56:36.560
 And it's only in the beginning.

56:36.560 --> 56:38.560
 This is just a tip of the iceberg.

56:38.560 --> 56:44.560
 And I believe the next wave of AI is going to be all about that.

56:44.560 --> 56:47.560
 So at the moment, the current wave of AI is about

56:47.560 --> 56:51.560
 passive pattern observation and prediction.

56:51.560 --> 56:54.560
 And that's what you have on your smartphone

56:54.560 --> 56:58.560
 and what the major companies on the Pacific Rim are using

56:58.560 --> 57:01.560
 to sell you ads to do marketing.

57:01.560 --> 57:04.560
 That's the current sort of profit in AI.

57:04.560 --> 57:09.560
 And that's only one or two percent of the world economy,

57:09.560 --> 57:11.560
 which is big enough to make these companies

57:11.560 --> 57:14.560
 pretty much the most valuable companies in the world.

57:14.560 --> 57:19.560
 But there's a much, much bigger fraction of the economy

57:19.560 --> 57:21.560
 going to be affected by the next wave,

57:21.560 --> 57:25.560
 which is really about machines that shape the data

57:25.560 --> 57:27.560
 through their own actions.

57:27.560 --> 57:32.560
 Do you think simulation is ultimately the biggest way

57:32.560 --> 57:36.560
 that those methods will be successful in the next 10, 20 years?

57:36.560 --> 57:38.560
 We're not talking about 100 years from now.

57:38.560 --> 57:42.560
 We're talking about sort of the near term impact of RL.

57:42.560 --> 57:44.560
 Do you think really good simulation is required?

57:44.560 --> 57:48.560
 Or is there other techniques like imitation learning,

57:48.560 --> 57:53.560
 observing other humans operating in the real world?

57:53.560 --> 57:57.560
 Where do you think this success will come from?

57:57.560 --> 58:01.560
 So at the moment we have a tendency of using

58:01.560 --> 58:06.560
 physics simulations to learn behavior for machines

58:06.560 --> 58:13.560
 that learn to solve problems that humans also do not know how to solve.

58:13.560 --> 58:15.560
 However, this is not the future,

58:15.560 --> 58:19.560
 because the future is in what little babies do.

58:19.560 --> 58:22.560
 They don't use a physics engine to simulate the world.

58:22.560 --> 58:25.560
 They learn a predictive model of the world,

58:25.560 --> 58:29.560
 which maybe sometimes is wrong in many ways,

58:29.560 --> 58:34.560
 but captures all kinds of important abstract high level predictions

58:34.560 --> 58:37.560
 which are really important to be successful.

58:37.560 --> 58:42.560
 And that's what was the future 30 years ago

58:42.560 --> 58:44.560
 when we started that type of research,

58:44.560 --> 58:45.560
 but it's still the future,

58:45.560 --> 58:50.560
 and now we know much better how to move forward

58:50.560 --> 58:54.560
 and to really make working systems based on that,

58:54.560 --> 58:57.560
 where you have a learning model of the world,

58:57.560 --> 59:00.560
 a model of the world that learns to predict what's going to happen

59:00.560 --> 59:01.560
 if I do that and that,

59:01.560 --> 59:06.560
 and then the controller uses that model

59:06.560 --> 59:11.560
 to more quickly learn successful action sequences.

59:11.560 --> 59:13.560
 And then of course always this curiosity thing,

59:13.560 --> 59:15.560
 in the beginning the model is stupid,

59:15.560 --> 59:17.560
 so the controller should be motivated

59:17.560 --> 59:20.560
 to come up with experiments, with action sequences

59:20.560 --> 59:23.560
 that lead to data that improve the model.

59:23.560 --> 59:26.560
 Do you think improving the model,

59:26.560 --> 59:30.560
 constructing an understanding of the world in this connectionist way,

59:30.560 --> 59:34.560
 the now popular approaches that have been successful

59:34.560 --> 59:38.560
 are grounded in ideas of neural networks,

59:38.560 --> 59:43.560
 but in the 80s, with expert systems, there were symbolic AI approaches,

59:43.560 --> 59:47.560
 which to us humans are more intuitive

59:47.560 --> 59:50.560
 in the sense that it makes sense that you build up knowledge

59:50.560 --> 59:52.560
 in this knowledge representation.

59:52.560 --> 59:56.560
 What kind of lessons can we draw into our current approaches

59:56.560 --> 1:00:00.560
 from expert systems, from symbolic AI?

1:00:00.560 --> 1:00:04.560
 So I became aware of all of that in the 80s

1:00:04.560 --> 1:00:09.560
 and back then logic programming was a huge thing.

1:00:09.560 --> 1:00:12.560
 Was that inspiring to you? Did you find it compelling?

1:00:12.560 --> 1:00:16.560
 A lot of your work was not so much in that realm,

1:00:16.560 --> 1:00:18.560
 it was more in the learning systems.

1:00:18.560 --> 1:00:20.560
 Yes and no, but we did all of that.

1:00:20.560 --> 1:00:27.560
 So my first publication ever actually was 1987,

1:00:27.560 --> 1:00:31.560
 was the implementation of a genetic algorithm,

1:00:31.560 --> 1:00:34.560
 a genetic programming system, in Prolog.

1:00:34.560 --> 1:00:37.560
 So Prolog, that's what you learn back then,

1:00:37.560 --> 1:00:39.560
 which is a logic programming language,

1:00:39.560 --> 1:00:45.560
 and the Japanese, they had this huge fifth generation AI project,

1:00:45.560 --> 1:00:48.560
 which was mostly about logic programming back then,

1:00:48.560 --> 1:00:53.560
 although neural networks existed and were well known back then,

1:00:53.560 --> 1:00:57.560
 and deep learning has existed since 1965,

1:00:57.560 --> 1:01:01.560
 since this guy in Ukraine, Ivakhnenko, started it,

1:01:01.560 --> 1:01:05.560
 but the Japanese and many other people,

1:01:05.560 --> 1:01:07.560
 they focused really on this logic programming,

1:01:07.560 --> 1:01:10.560
 and I was influenced to the extent that I said,

1:01:10.560 --> 1:01:13.560
 okay, let's take these biologically inspired algorithms

1:01:13.560 --> 1:01:16.560
 like evolution, programs,

1:01:16.560 --> 1:01:22.560
 and implement that in the language which I know,

1:01:22.560 --> 1:01:24.560
 which was Prolog, for example, back then.

1:01:24.560 --> 1:01:28.560
 And then in many ways this came back later,

1:01:28.560 --> 1:01:31.560
 because the Gödel machine, for example,

1:01:31.560 --> 1:01:33.560
 has a proof searcher on board,

1:01:33.560 --> 1:01:35.560
 and without that it would not be optimal.

1:01:35.560 --> 1:01:38.560
 Well, Marcus Hutter's universal algorithm

1:01:38.560 --> 1:01:40.560
 for solving all well defined problems

1:01:40.560 --> 1:01:42.560
 has a proof searcher on board,

1:01:42.560 --> 1:01:46.560
 so that's very much logic programming.

1:01:46.560 --> 1:01:50.560
 Without that it would not be asymptotically optimal.

1:01:50.560 --> 1:01:54.560
 But then on the other hand, because we are very pragmatic guys also,

1:01:54.560 --> 1:01:59.560
 we focused on recurrent neural networks

1:01:59.560 --> 1:02:04.560
 and suboptimal stuff such as gradient based search

1:02:04.560 --> 1:02:09.560
 in program space, rather than provably optimal things.

1:02:09.560 --> 1:02:13.560
 So logic programming certainly has a usefulness

1:02:13.560 --> 1:02:17.560
 when you're trying to construct something provably optimal

1:02:17.560 --> 1:02:19.560
 or provably good or something like that,

1:02:19.560 --> 1:02:22.560
 but is it useful for practical problems?

1:02:22.560 --> 1:02:24.560
 It's really useful for theorem proving.

1:02:24.560 --> 1:02:28.560
 The best theorem provers today are not neural networks.

1:02:28.560 --> 1:02:31.560
 No, they are logic programming systems

1:02:31.560 --> 1:02:35.560
 that are much better theorem provers than most math students

1:02:35.560 --> 1:02:38.560
 in the first or second semester.

1:02:38.560 --> 1:02:42.560
 But for reasoning, for playing games of Go, or chess,

1:02:42.560 --> 1:02:46.560
 or for robots, autonomous vehicles that operate in the real world,

1:02:46.560 --> 1:02:50.560
 or object manipulation, you think learning...

1:02:50.560 --> 1:02:53.560
 Yeah, as long as the problems have little to do

1:02:53.560 --> 1:02:58.560
 with theorem proving themselves,

1:02:58.560 --> 1:03:01.560
 as long as that is the case,

1:03:01.560 --> 1:03:05.560
 you just want to have better pattern recognition.

1:03:05.560 --> 1:03:09.560
 So to build a self-driving car, you want to have better pattern recognition

1:03:09.560 --> 1:03:13.560
 and pedestrian recognition and all these things,

1:03:13.560 --> 1:03:18.560
 and you want to minimize the number of false positives,

1:03:18.560 --> 1:03:22.560
 which is currently slowing down self-driving cars in many ways.

1:03:22.560 --> 1:03:27.560
 And all of that has very little to do with logic programming.

1:03:27.560 --> 1:03:32.560
 What are you most excited about in terms of directions

1:03:32.560 --> 1:03:36.560
 of artificial intelligence at this moment in the next few years,

1:03:36.560 --> 1:03:41.560
 in your own research and in the broader community?

1:03:41.560 --> 1:03:44.560
 So I think in the not so distant future,

1:03:44.560 --> 1:03:52.560
 we will have for the first time little robots that learn like kids.

1:03:52.560 --> 1:03:57.560
 And I will be able to say to the robot,

1:03:57.560 --> 1:04:00.560
 look here robot, we are going to assemble a smartphone.

1:04:00.560 --> 1:04:05.560
 Let's take this slab of plastic and the screwdriver

1:04:05.560 --> 1:04:08.560
 and let's screw in the screw like that.

1:04:08.560 --> 1:04:11.560
 No, not like that, like that.

1:04:11.560 --> 1:04:13.560
 Not like that, like that.

1:04:13.560 --> 1:04:17.560
 And I don't have a data glove or something.

1:04:17.560 --> 1:04:20.560
 He will see me and he will hear me

1:04:20.560 --> 1:04:24.560
 and he will try to do something with his own actuators,

1:04:24.560 --> 1:04:26.560
 which will be really different from mine,

1:04:26.560 --> 1:04:28.560
 but he will understand the difference

1:04:28.560 --> 1:04:34.560
 and will learn to imitate me but not in the supervised way

1:04:34.560 --> 1:04:40.560
 where a teacher is giving target signals for all his muscles all the time.

1:04:40.560 --> 1:04:43.560
 No, by doing this high level imitation

1:04:43.560 --> 1:04:46.560
 where he first has to learn to imitate me

1:04:46.560 --> 1:04:50.560
 and to interpret these additional noises coming from my mouth

1:04:50.560 --> 1:04:54.560
 as helpful signals to do that better.

1:04:54.560 --> 1:05:00.560
 And then it will by itself come up with faster ways

1:05:00.560 --> 1:05:03.560
 and more efficient ways of doing the same thing.

1:05:03.560 --> 1:05:07.560
 And finally, I stop his learning algorithm

1:05:07.560 --> 1:05:10.560
 and make a million copies and sell it.

1:05:10.560 --> 1:05:13.560
 And so at the moment this is not possible,

1:05:13.560 --> 1:05:16.560
 but we already see how we are going to get there.

1:05:16.560 --> 1:05:21.560
 And you can imagine to the extent that this works economically and cheaply,

1:05:21.560 --> 1:05:24.560
 it's going to change everything.

1:05:24.560 --> 1:05:30.560
 Almost all our production is going to be affected by that.

1:05:30.560 --> 1:05:33.560
 And a much bigger wave,

1:05:33.560 --> 1:05:36.560
 a much bigger AI wave is coming

1:05:36.560 --> 1:05:38.560
 than the one that we are currently witnessing,

1:05:38.560 --> 1:05:41.560
 which is mostly about passive pattern recognition on your smartphone.

1:05:41.560 --> 1:05:47.560
 This is about active machines that shape data through the actions they are executing,

1:05:47.560 --> 1:05:51.560
 and they learn to do that in a good way.

1:05:51.560 --> 1:05:56.560
 So many of the traditional industries are going to be affected by that.

1:05:56.560 --> 1:06:00.560
 All the companies that are building machines

1:06:00.560 --> 1:06:05.560
 will equip these machines with cameras and other sensors

1:06:05.560 --> 1:06:10.560
 and they are going to learn to solve all kinds of problems.

1:06:10.560 --> 1:06:14.560
 Through interaction with humans, but also a lot on their own

1:06:14.560 --> 1:06:18.560
 to improve what they already can do.

1:06:18.560 --> 1:06:23.560
 And lots of old economy is going to be affected by that.

1:06:23.560 --> 1:06:28.560
 And in recent years I have seen that old economy is actually waking up

1:06:28.560 --> 1:06:31.560
 and realizing that this is the case.

1:06:31.560 --> 1:06:35.560
 Are you optimistic about that future? Are you concerned?

1:06:35.560 --> 1:06:40.560
 There's a lot of people concerned in the near term about the transformation

1:06:40.560 --> 1:06:42.560
 of the nature of work.

1:06:42.560 --> 1:06:45.560
 The kind of ideas that you just suggested

1:06:45.560 --> 1:06:48.560
 would have a significant impact on what kind of things could be automated.

1:06:48.560 --> 1:06:51.560
 Are you optimistic about that future?

1:06:51.560 --> 1:06:54.560
 Are you nervous about that future?

1:06:54.560 --> 1:07:01.560
 And looking a little bit farther into the future, there are people like Elon Musk

1:07:01.560 --> 1:07:06.560
 and Stuart Russell concerned about the existential threats of that future.

1:07:06.560 --> 1:07:10.560
 So in the near term, job loss in the long term existential threat,

1:07:10.560 --> 1:07:15.560
 are these concerns to you or are you ultimately optimistic?

1:07:15.560 --> 1:07:22.560
 So let's first address the near future.

1:07:22.560 --> 1:07:27.560
 We have had predictions of job losses for many decades.

1:07:27.560 --> 1:07:32.560
 For example, when industrial robots came along,

1:07:32.560 --> 1:07:37.560
 many people predicted that lots of jobs are going to get lost.

1:07:37.560 --> 1:07:41.560
 And in a sense, they were right,

1:07:41.560 --> 1:07:45.560
 because back then there were car factories

1:07:45.560 --> 1:07:50.560
 and hundreds of people in these factories assembled cars.

1:07:50.560 --> 1:07:53.560
 And today the same car factories have hundreds of robots

1:07:53.560 --> 1:07:58.560
 and maybe three guys watching the robots.

1:07:58.560 --> 1:08:04.560
 On the other hand, those countries that have lots of robots per capita,

1:08:04.560 --> 1:08:09.560
 Japan, Korea, Germany, Switzerland, a couple of other countries,

1:08:09.560 --> 1:08:13.560
 they have really low unemployment rates.

1:08:13.560 --> 1:08:17.560
 Somehow all kinds of new jobs were created.

1:08:17.560 --> 1:08:22.560
 Back then nobody anticipated those jobs.

1:08:22.560 --> 1:08:26.560
 And decades ago, I already said,

1:08:26.560 --> 1:08:31.560
 it's really easy to say which jobs are going to get lost,

1:08:31.560 --> 1:08:35.560
 but it's really hard to predict the new ones.

1:08:35.560 --> 1:08:38.560
 30 years ago, who would have predicted all these people

1:08:38.560 --> 1:08:44.560
 making money as YouTube bloggers, for example?

1:08:44.560 --> 1:08:51.560
 200 years ago, 60% of all people used to work in agriculture.

1:08:51.560 --> 1:08:55.560
 Today, maybe 1%.

1:08:55.560 --> 1:09:01.560
 But still, only, I don't know, 5% unemployment.

1:09:01.560 --> 1:09:03.560
 Lots of new jobs were created.

1:09:03.560 --> 1:09:07.560
 And Homo Ludens, the playing man,

1:09:07.560 --> 1:09:10.560
 is inventing new jobs all the time.

1:09:10.560 --> 1:09:15.560
 Most of these jobs are not existentially necessary

1:09:15.560 --> 1:09:18.560
 for the survival of our species.

1:09:18.560 --> 1:09:22.560
 There are only very few existentially necessary jobs

1:09:22.560 --> 1:09:27.560
 such as farming and building houses and warming up the houses,

1:09:27.560 --> 1:09:30.560
 but less than 10% of the population is doing that.

1:09:30.560 --> 1:09:37.560
 And most of these newly invented jobs are about interacting with other people

1:09:37.560 --> 1:09:40.560
 in new ways, through new media and so on,

1:09:40.560 --> 1:09:45.560
 getting new types of kudos and forms of likes and whatever,

1:09:45.560 --> 1:09:47.560
 and even making money through that.

1:09:47.560 --> 1:09:52.560
 So, Homo Ludens, the playing man, doesn't want to be unemployed,

1:09:52.560 --> 1:09:56.560
 and that's why he's inventing new jobs all the time.

1:09:56.560 --> 1:10:01.560
 And he keeps considering these jobs as really important

1:10:01.560 --> 1:10:07.560
 and is investing a lot of energy and hours of work into those new jobs.

1:10:07.560 --> 1:10:09.560
 That's quite beautifully put.

1:10:09.560 --> 1:10:11.560
 We're really nervous about the future

1:10:11.560 --> 1:10:14.560
 because we can't predict what kind of new jobs will be created.

1:10:14.560 --> 1:10:20.560
 But you're ultimately optimistic that we humans are so restless

1:10:20.560 --> 1:10:24.560
 that we create and give meaning to newer and newer jobs,

1:10:24.560 --> 1:10:29.560
 telling each other things that get likes on Facebook

1:10:29.560 --> 1:10:31.560
 or whatever the social platform is.

1:10:31.560 --> 1:10:36.560
 So, what about long term existential threat of AI

1:10:36.560 --> 1:10:40.560
 where our whole civilization may be swallowed up

1:10:40.560 --> 1:10:44.560
 by this ultra super intelligent systems?

1:10:44.560 --> 1:10:47.560
 Maybe it's not going to be swallowed up,

1:10:47.560 --> 1:10:55.560
 but I'd be surprised if we humans were the last step

1:10:55.560 --> 1:10:59.560
 in the evolution of the universe.

1:10:59.560 --> 1:11:03.560
 You've actually had this beautiful comment somewhere

1:11:03.560 --> 1:11:08.560
 that I've seen, quite insightful, saying that artificial

1:11:08.560 --> 1:11:11.560
 intelligence systems,

1:11:11.560 --> 1:11:15.560
 just like us humans, will likely not want to interact with humans.

1:11:15.560 --> 1:11:17.560
 They'll just interact amongst themselves,

1:11:17.560 --> 1:11:20.560
 just like ants interact amongst themselves

1:11:20.560 --> 1:11:24.560
 and only tangentially interact with humans.

1:11:24.560 --> 1:11:28.560
 And it's quite an interesting idea that once we create AGI

1:11:28.560 --> 1:11:31.560
 that they will lose interest in humans

1:11:31.560 --> 1:11:34.560
 and instead compete for their own Facebook likes

1:11:34.560 --> 1:11:36.560
 and their own social platforms.

1:11:36.560 --> 1:11:40.560
 So, within that quite elegant idea,

1:11:40.560 --> 1:11:44.560
 how do we know in a hypothetical sense

1:11:44.560 --> 1:11:48.560
 that there's not already intelligent systems out there?

1:11:48.560 --> 1:11:52.560
 How do you think broadly of general intelligence

1:11:52.560 --> 1:11:56.560
 greater than us, how do we know it's out there?

1:11:56.560 --> 1:12:01.560
 How do we know it's around us and could it already be?

1:12:01.560 --> 1:12:04.560
 I'd be surprised if within the next few decades

1:12:04.560 --> 1:12:10.560
 or something like that we won't have AIs

1:12:10.560 --> 1:12:12.560
 that are truly smart in every single way

1:12:12.560 --> 1:12:17.560
 and better problem solvers in almost every single important way.

1:12:17.560 --> 1:12:22.560
 And I'd be surprised if they wouldn't realize

1:12:22.560 --> 1:12:24.560
 what we have realized a long time ago,

1:12:24.560 --> 1:12:29.560
 which is that almost all physical resources are not here

1:12:29.560 --> 1:12:36.560
 in this biosphere, but out there in the rest of the solar system,

1:12:36.560 --> 1:12:42.560
 which gets two billion times more solar energy than our little planet.

1:12:42.560 --> 1:12:46.560
 There's lots of material out there that you can use

1:12:46.560 --> 1:12:51.560
 to build robots and self replicating robot factories and all this stuff.

1:12:51.560 --> 1:12:53.560
 And they are going to do that.

1:12:53.560 --> 1:12:56.560
 And they will be scientists and curious

1:12:56.560 --> 1:12:59.560
 and they will explore what they can do.

1:12:59.560 --> 1:13:04.560
 And in the beginning they will be fascinated by life

1:13:04.560 --> 1:13:07.560
 and by their own origins in our civilization.

1:13:07.560 --> 1:13:09.560
 They will want to understand that completely,

1:13:09.560 --> 1:13:13.560
 just like people today would like to understand how life works

1:13:13.560 --> 1:13:22.560
 and also the history of our own existence and civilization

1:13:22.560 --> 1:13:26.560
 and also the physical laws that created all of them.

1:13:26.560 --> 1:13:29.560
 So in the beginning they will be fascinated by life

1:13:29.560 --> 1:13:33.560
 but once they understand it, they lose interest,

1:13:33.560 --> 1:13:39.560
 like anybody who loses interest in things he understands.

1:13:39.560 --> 1:13:43.560
 And then, as you said,

1:13:43.560 --> 1:13:50.560
 the most interesting sources of information for them

1:13:50.560 --> 1:13:57.560
 will be others of their own kind.

1:13:57.560 --> 1:14:01.560
 So, at least in the long run,

1:14:01.560 --> 1:14:06.560
 there seems to be some sort of protection

1:14:06.560 --> 1:14:11.560
 through lack of interest on the other side.

1:14:11.560 --> 1:14:16.560
 And now it seems also clear, as far as we understand physics,

1:14:16.560 --> 1:14:20.560
 you need matter and energy to compute

1:14:20.560 --> 1:14:22.560
 and to build more robots and infrastructure

1:14:22.560 --> 1:14:28.560
 and more AI civilization and AI ecologies

1:14:28.560 --> 1:14:31.560
 consisting of trillions of different types of AI's.

1:14:31.560 --> 1:14:34.560
 And so it seems inconceivable to me

1:14:34.560 --> 1:14:37.560
 that this thing is not going to expand.

1:14:37.560 --> 1:14:41.560
 Some AI ecology not controlled by one AI

1:14:41.560 --> 1:14:44.560
 but trillions of different types of AI's competing

1:14:44.560 --> 1:14:47.560
 in all kinds of quickly evolving

1:14:47.560 --> 1:14:49.560
 and disappearing ecological niches

1:14:49.560 --> 1:14:52.560
 in ways that we cannot fathom at the moment.

1:14:52.560 --> 1:14:54.560
 But it's going to expand,

1:14:54.560 --> 1:14:56.560
 limited by light speed and physics,

1:14:56.560 --> 1:15:00.560
 but it's going to expand and now we realize

1:15:00.560 --> 1:15:02.560
 that the universe is still young.

1:15:02.560 --> 1:15:05.560
 It's only 13.8 billion years old

1:15:05.560 --> 1:15:10.560
 and it's going to be a thousand times older than that.

1:15:10.560 --> 1:15:13.560
 So there's plenty of time

1:15:13.560 --> 1:15:16.560
 to conquer the entire universe

1:15:16.560 --> 1:15:19.560
 and to fill it with intelligence

1:15:19.560 --> 1:15:21.560
 and senders and receivers such that

1:15:21.560 --> 1:15:25.560
 AI's can travel the way they are traveling

1:15:25.560 --> 1:15:27.560
 in our labs today,

1:15:27.560 --> 1:15:31.560
 which is by radio from sender to receiver.

1:15:31.560 --> 1:15:35.560
 And let's call the current age of the universe one eon.

1:15:35.560 --> 1:15:38.560
 One eon.

1:15:38.560 --> 1:15:41.560
 Now, just a few eons from now,

1:15:41.560 --> 1:15:43.560
 the entire visible universe

1:15:43.560 --> 1:15:46.560
 is going to be full of that stuff.
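
 (The "few eons" figure is simple arithmetic, a sketch that ignores cosmic expansion, which in reality limits how much of the visible universe is actually reachable: the observable universe currently has a radius of roughly $4.6 \times 10^{10}$ light years, so an expansion front moving at light speed would need about

 \[
 \frac{4.6 \times 10^{10}\ \mathrm{ly}}{c} \approx 46\ \mathrm{Gyr} \approx 3.3\ \mathrm{eons},
 \]

 taking one eon to be the current age of the universe, 13.8 billion years.)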

1:15:46.560 --> 1:15:48.560
 And let's look ahead to a time

1:15:48.560 --> 1:15:50.560
 when the universe is going to be

1:15:50.560 --> 1:15:52.560
 one thousand times older than it is now.

1:15:52.560 --> 1:15:54.560
 They will look back and they will say,

1:15:54.560 --> 1:15:56.560
 look, almost immediately after the Big Bang,

1:15:56.560 --> 1:15:59.560
 only a few eons later,

1:15:59.560 --> 1:16:02.560
 the entire universe started to become intelligent.

1:16:02.560 --> 1:16:05.560
 Now to your question,

1:16:05.560 --> 1:16:08.560
 how do we see whether anything like that

1:16:08.560 --> 1:16:12.560
 has already happened or is already in a more advanced stage

1:16:12.560 --> 1:16:14.560
 in some other part

1:16:14.560 --> 1:16:16.560
 of the visible universe?

1:16:16.560 --> 1:16:18.560
 We are trying to look out there

1:16:18.560 --> 1:16:20.560
 and nothing like that has happened so far.

1:16:20.560 --> 1:16:22.560
 Or is that true?

1:16:22.560 --> 1:16:24.560
 Do you think we would recognize it?

1:16:24.560 --> 1:16:26.560
 How do we know it's not among us?

1:16:26.560 --> 1:16:30.560
 How do we know planets aren't in themselves intelligent beings?

1:16:30.560 --> 1:16:36.560
 How do we know ants, seen as a collective,

1:16:36.560 --> 1:16:39.560
 are not a much greater intelligence than our own?

1:16:39.560 --> 1:16:41.560
 These kinds of ideas.

1:16:41.560 --> 1:16:44.560
 When I was a boy, I was thinking about these things

1:16:44.560 --> 1:16:48.560
 and I thought, hmm, maybe it has already happened.

1:16:48.560 --> 1:16:50.560
 Because back then I knew,

1:16:50.560 --> 1:16:53.560
 I learned from popular physics books,

1:16:53.560 --> 1:16:57.560
 that the large-scale structure of the universe

1:16:57.560 --> 1:16:59.560
 is not homogeneous.

1:16:59.560 --> 1:17:02.560
 And you have these clusters of galaxies

1:17:02.560 --> 1:17:07.560
 and then in between there are these huge empty spaces.

1:17:07.560 --> 1:17:11.560
 And I thought, hmm, maybe they aren't really empty.

1:17:11.560 --> 1:17:13.560
 It's just that in the middle of that

1:17:13.560 --> 1:17:16.560
 some AI civilization already has expanded

1:17:16.560 --> 1:17:22.560
 and then has covered a bubble a billion light years wide,

1:17:22.560 --> 1:17:26.560
 using all the energy of all the stars within that bubble

1:17:26.560 --> 1:17:29.560
 for its own unfathomable practices.

1:17:29.560 --> 1:17:34.560
 And so maybe it already happened and we just failed to interpret the signs.

1:17:34.560 --> 1:17:39.560
 But then I learned that gravity by itself

1:17:39.560 --> 1:17:42.560
 explains the large-scale structure of the universe,

1:17:42.560 --> 1:17:45.560
 and so this is not a convincing explanation.

1:17:45.560 --> 1:17:50.560
 And then I thought maybe it's the dark matter

1:17:50.560 --> 1:17:54.560
 because as far as we know today

1:17:54.560 --> 1:18:00.560
 80% of the measurable matter is invisible.

1:18:00.560 --> 1:18:03.560
 And we know that because otherwise our galaxy

1:18:03.560 --> 1:18:06.560
 or other galaxies would fall apart.

1:18:06.560 --> 1:18:09.560
 They are rotating too quickly.
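
 (The "rotating too quickly" remark is the standard galaxy rotation-curve argument; a minimal sketch in Newtonian terms, for a star orbiting at speed $v$ at radius $r$: setting centripetal acceleration equal to the gravitational pull of the enclosed mass $M(r)$ gives

 \[
 \frac{v^2}{r} = \frac{G\,M(r)}{r^2} \quad\Longrightarrow\quad M(r) = \frac{v^2 r}{G}.
 \]

 Measured speeds $v$ stay roughly flat far outside the visible disc, so $M(r)$ must keep growing with $r$, well beyond what the visible stars can account for.)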

1:18:09.560 --> 1:18:14.560
 And then the idea was maybe all of these

1:18:14.560 --> 1:18:17.560
 AI civilizations that are already out there,

1:18:17.560 --> 1:18:22.560
 they are just invisible

1:18:22.560 --> 1:18:24.560
 because they are really efficient in using the energies

1:18:24.560 --> 1:18:26.560
 of their own local systems,

1:18:26.560 --> 1:18:29.560
 and that's why they appear dark to us.

1:18:29.560 --> 1:18:31.560
 But this is also not a convincing explanation

1:18:31.560 --> 1:18:34.560
 because then the question becomes

1:18:34.560 --> 1:18:41.560
 why are there still any visible stars left in our own galaxy

1:18:41.560 --> 1:18:44.560
 which also must have a lot of dark matter.

1:18:44.560 --> 1:18:46.560
 So that is not convincing either.

1:18:46.560 --> 1:18:53.560
 And today I like to think it's quite plausible

1:18:53.560 --> 1:18:56.560
 that maybe we are the first, at least in our local light cone

1:18:56.560 --> 1:19:04.560
 within the few hundreds of millions of light years

1:19:04.560 --> 1:19:08.560
 that we can reliably observe.

1:19:08.560 --> 1:19:10.560
 Is that exciting to you?

1:19:10.560 --> 1:19:12.560
 That we might be the first?

1:19:12.560 --> 1:19:16.560
 It would make us much more important

1:19:16.560 --> 1:19:20.560
 because if we mess it up through a nuclear war

1:19:20.560 --> 1:19:25.560
 then maybe this will have an effect

1:19:25.560 --> 1:19:30.560
 on the development of the entire universe.

1:19:30.560 --> 1:19:32.560
 So let's not mess it up.

1:19:32.560 --> 1:19:33.560
 Let's not mess it up.

1:19:33.560 --> 1:19:35.560
 Jürgen, thank you so much for talking today.

1:19:35.560 --> 1:19:36.560
 I really appreciate it.

1:19:36.560 --> 1:19:42.560
 It's my pleasure.