WEBVTT

00:00.000 --> 00:05.440
 The following is a conversation with Leslie Kaelbling. She is a roboticist and professor at

00:05.440 --> 00:12.080
 MIT. She is recognized for her work in reinforcement learning, planning, robot navigation, and several

00:12.080 --> 00:18.560
 other topics in AI. She won the IJCAI Computers and Thought Award and was the editor-in-chief

00:18.560 --> 00:24.320
 of the prestigious Journal of Machine Learning Research. This conversation is part of the

00:24.320 --> 00:30.400
 artificial intelligence podcast at MIT and beyond. If you enjoy it, subscribe on YouTube,

00:30.400 --> 00:37.760
 iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now,

00:37.760 --> 00:45.360
 here's my conversation with Leslie Kaelbling. What made me get excited about AI, I can say

00:45.360 --> 00:49.920
 that, is I read Gödel, Escher, Bach when I was in high school. That was pretty formative for me

00:49.920 --> 00:59.280
 because it exposed the interestingness of primitives and combination and how you can

00:59.280 --> 01:06.000
 make complex things out of simple parts and ideas of AI and what kinds of programs might

01:06.000 --> 01:12.800
 generate intelligent behavior. So you first fell in love with AI reasoning logic versus robots?

01:12.800 --> 01:18.240
 Yeah, the robots came because my first job, so I finished an undergraduate degree in philosophy

01:18.240 --> 01:24.160
 at Stanford and was about to finish a master's in computer science and I got hired at SRI

01:25.440 --> 01:30.960
 in their AI lab and they were building a robot. It was a kind of a follow-on to Shakey,

01:30.960 --> 01:35.840
 but all the Shakey people were not there anymore. And so my job was to try to get this robot to

01:35.840 --> 01:41.200
 do stuff and that's really kind of what got me interested in robots. So maybe taking a small

01:41.200 --> 01:46.160
 step back to your bachelor's at Stanford in philosophy, then a master's and PhD in computer science,

01:46.160 --> 01:51.440
 but the bachelor's in philosophy. So what was that journey like? What elements of philosophy

01:52.320 --> 01:55.200
 do you think you bring to your work in computer science?

01:55.200 --> 02:00.080
 So it's surprisingly relevant. So part of the reason that I didn't do a computer science

02:00.080 --> 02:04.560
 undergraduate degree was that there wasn't one at Stanford at the time, but there's a part of

02:04.560 --> 02:09.200
 philosophy, and in fact Stanford has a special sub-major, now called Symbolic Systems,

02:09.200 --> 02:15.520
 which is logic, model theory, formal semantics of natural language. And so that's actually

02:15.520 --> 02:19.680
 a perfect preparation for work in AI and computer science.

02:19.680 --> 02:23.760
 That's kind of interesting. So if you were interested in artificial intelligence,

02:26.000 --> 02:32.560
 what kind of majors were people even thinking about taking? Was it neuroscience? So besides

02:32.560 --> 02:37.120
 philosophy, what were you supposed to do if you were fascinated by the idea of creating

02:37.120 --> 02:41.440
 intelligence? There weren't enough people who did that for that even to be a conversation.

02:41.440 --> 02:49.920
 I mean, I think probably philosophy. I mean, it's interesting in my graduating class of

02:49.920 --> 02:57.120
 undergraduate philosophers, probably maybe slightly less than half went on in computer

02:57.120 --> 03:02.240
 science, slightly less than half went on in law, and like one or two went on in philosophy.

03:03.360 --> 03:07.920
 So it was a common kind of connection. Do you think AI researchers have a role to

03:07.920 --> 03:12.480
 be part-time philosophers, or should they stick to the solid science and engineering

03:12.480 --> 03:17.200
 without sort of taking the philosophizing tangents? I mean, you work with robots,

03:17.200 --> 03:22.960
 you think about what it takes to create intelligent beings. Aren't you the perfect person to think

03:22.960 --> 03:27.440
 about the big picture philosophy at all? The parts of philosophy that are closest to AI,

03:27.440 --> 03:30.400
 I think, or at least the closest to AI that I think about are stuff like

03:30.400 --> 03:38.400
 belief and knowledge and denotation and that kind of stuff. It's quite formal, and it's

03:38.400 --> 03:44.000
 like just one step away from the kinds of computer science work that we do kind of routinely.

03:45.680 --> 03:53.040
 I think that there are important questions still about what you can do with a machine and what

03:53.040 --> 03:57.680
 you can't and so on. Although at least my personal view is that I'm completely a materialist,

03:57.680 --> 04:01.920
 and I don't think that there's any reason why we can't make a robot be

04:02.800 --> 04:06.800
 behaviorally indistinguishable from a human. And the question of whether it's

04:08.480 --> 04:13.600
 distinguishable internally, whether it's a zombie or not in philosophy terms, I actually don't,

04:14.720 --> 04:16.960
 I don't know, and I don't know if I care too much about that.

04:16.960 --> 04:22.080
 Right, but there are philosophical notions there, mathematical and philosophical,

04:22.080 --> 04:27.520
 because we don't know so much about how difficult that is. How difficult is the perception problem?

04:27.520 --> 04:32.640
 How difficult is the planning problem? How difficult is it to operate in this world successfully?

04:32.640 --> 04:37.920
 Because our robots are not currently as successful as human beings in many tasks.

04:37.920 --> 04:44.320
 The question about the gap between current robots and human beings borders a little bit

04:44.320 --> 04:52.400
 on philosophy. The expanse of knowledge that's required to operate in this world and the ability

04:52.400 --> 04:57.280
 to form common sense knowledge, the ability to reason about uncertainty, much of the work

04:57.280 --> 05:05.040
 you've been doing, there are open questions there that, I don't know, require a certain

05:06.320 --> 05:09.840
 big picture view. To me, that doesn't seem like a philosophical gap at all.

05:10.640 --> 05:14.240
 To me, there is a big technical gap. There's a huge technical gap,

05:15.040 --> 05:19.360
 but I don't see any reason why it's more than a technical gap.

05:19.360 --> 05:28.400
 Perfect. When you mentioned AI, you mentioned SRI, and maybe can you describe to me when you

05:28.400 --> 05:37.680
 first fell in love with robotics, with robots, or were inspired. So you mentioned Flakey, or Shakey and Flakey,

05:38.400 --> 05:42.720
 and what was the robot that first captured your imagination of what's possible?

05:42.720 --> 05:47.920
 Right. The first robot I worked with was Flakey. Shakey was a robot that the SRI people had built,

05:47.920 --> 05:53.360
 but by the time, I think when I arrived, it was sitting in a corner of somebody's office

05:53.360 --> 06:00.640
 dripping hydraulic fluid into a pan, but it's iconic. Really, everybody should read the Shakey

06:00.640 --> 06:07.840
 Tech Report because it has so many good ideas in it. They invented A* search and symbolic

06:07.840 --> 06:15.520
 planning and learning macro operators. They had low level kind of configuration space planning for

06:15.520 --> 06:20.160
 the robot. They had vision. It had the basic ideas of a ton of things.

06:20.160 --> 06:27.920
 Can you take a step back? Shakey was a mobile robot, but it could push objects,

06:27.920 --> 06:31.680
 and so it would move things around. With which actuator?

06:31.680 --> 06:40.080
 With itself, with its base. They had painted the baseboards black,

06:40.080 --> 06:48.320
 so it used vision to localize itself in a map. It detected objects. It could detect objects that

06:48.320 --> 06:54.800
 were surprising to it. It would plan and replan based on what it saw. It reasoned about whether

06:54.800 --> 07:02.240
 to look and take pictures. It really had the basics of so many of the things that we think about now.

07:03.280 --> 07:05.360
 How did it represent the space around it?

07:05.360 --> 07:09.680
 It had representations at a bunch of different levels of abstraction,

07:09.680 --> 07:13.920
 so it had, I think, a kind of an occupancy grid of some sort at the lowest level.

07:14.880 --> 07:20.000
 At the high level, it was abstract, symbolic kind of rooms and connectivity.

07:20.000 --> 07:22.160
 So where does Flakey come in?

07:22.160 --> 07:29.600
 Yeah, okay. I showed up at SRI and we were building a brand new robot. As I said, none of the people

07:29.600 --> 07:34.240
 from the previous project were there or involved anymore, so we were starting from scratch.

07:34.240 --> 07:43.920
 My advisor was Stan Rosenschein. He ended up being my thesis advisor. He was motivated by this idea

07:43.920 --> 07:52.400
 of situated computation or situated automata. The idea was that the tools of logical reasoning were

07:52.400 --> 08:01.200
 important, but possibly only for the engineers or designers to use in the analysis of a system,

08:01.200 --> 08:05.600
 but not necessarily to be manipulated in the head of the system itself.

08:06.400 --> 08:09.920
 So I might use logic to prove a theorem about the behavior of my robot,

08:10.480 --> 08:14.400
 even if the robot's not using logic in its head to prove theorems. So that was kind of the

08:14.400 --> 08:22.160
 distinction. And so the idea was to kind of use those principles to make a robot do stuff.

08:22.800 --> 08:28.960
 But a lot of the basic things we had to kind of learn for ourselves, because I had zero

08:28.960 --> 08:32.160
 background in robotics. I didn't know anything about control. I didn't know anything about

08:32.160 --> 08:36.640
 sensors. So we reinvented a lot of wheels on the way to getting that robot to do stuff.

08:36.640 --> 08:39.120
 Do you think that was an advantage or hindrance?

08:39.120 --> 08:45.600
 Oh, no. I'm big in favor of wheel reinvention, actually. I mean, I think you learned a lot

08:45.600 --> 08:51.920
 by doing it. It's important though to eventually have the pointers so that you can see what's

08:51.920 --> 08:58.080
 really going on. But I think you can appreciate much better the good solutions once you've

08:58.080 --> 09:00.400
 messed around a little bit on your own and found a bad one.

09:00.400 --> 09:04.880
 Yeah, I think you mentioned reinventing reinforcement learning and referring to

09:04.880 --> 09:10.960
 rewards as pleasures, which I think is a nice name for it.

09:12.800 --> 09:18.960
 It's more fun, almost. Do you think you could tell the history of AI, machine learning,

09:18.960 --> 09:23.520
 reinforcement learning, how you think about it from the 50s to now?

09:23.520 --> 09:29.440
 One thing is that it oscillates. So things become fashionable and then they go out and

09:29.440 --> 09:34.480
 then something else becomes cool and then it goes out and so on. So there's some interesting

09:34.480 --> 09:41.600
 sociological process that actually drives a lot of what's going on. Early days was cybernetics and

09:41.600 --> 09:48.320
 control and the idea of homeostasis; people made these robots that could,

09:48.320 --> 09:54.400
 I don't know, try to plug into the wall when they needed power and then come loose and roll

09:54.400 --> 10:00.960
 around and do stuff. And then I think over time, they thought, well, that was inspiring, but people

10:00.960 --> 10:04.880
 said, no, no, no, we want to get maybe closer to what feels like real intelligence or human

10:04.880 --> 10:15.040
 intelligence. And then maybe the expert systems people tried to do that, but maybe a little

10:15.040 --> 10:21.760
 too superficially. So we get this surface understanding of what intelligence is like,

10:21.760 --> 10:25.840
 because I understand how a steel mill works and I can try to explain it to you and you can write

10:25.840 --> 10:31.520
 it down in logic and then we can make a computer infer that. And then that didn't work out.

10:32.400 --> 10:37.520
 But what's interesting, I think, is when a thing starts to not be working very well,

10:38.720 --> 10:44.480
 it's not only that we change methods, we change problems. So it's not like we have better ways

10:44.480 --> 10:48.160
 of doing the problem the expert systems people were trying to do. We have no ways of

10:48.160 --> 10:56.800
 trying to do that problem. Oh, yeah, no, I think maybe a few. But we kind of give up on that problem

10:56.800 --> 11:01.520
 and we switch to a different problem. And we work that for a while and we make progress.

11:01.520 --> 11:04.960
 As a broad community. As a community. And there's a lot of people who would argue,

11:04.960 --> 11:09.760
 you don't give up on the problem. It's just that the number of people working on it decreases.

11:09.760 --> 11:13.920
 You almost kind of like put it on the shelf. So we'll come back to this 20 years later.

11:13.920 --> 11:19.360
 Yeah, I think that's right. Or you might decide that it's malformed. Like you might say,

11:21.600 --> 11:26.800
 it's wrong to just try to make something that does superficial symbolic reasoning behave like a

11:26.800 --> 11:34.000
 doctor. You can't do that until you've had the sensory motor experience of being a doctor or

11:34.000 --> 11:38.560
 something. So there's arguments that say that that problem was not well formed. Or it could be

11:38.560 --> 11:44.160
 that it is well formed, but we just weren't approaching it well. So you mentioned that your

11:44.160 --> 11:49.120
 favorite part of logic and symbolic systems is that they give short names for large sets.

11:49.840 --> 11:56.320
 So there is some use to this. They use symbolic reasoning. So looking at expert systems

11:56.960 --> 12:01.760
 and symbolic computing, what do you think are the roadblocks that were hit in the 80s and 90s?

12:02.640 --> 12:08.320
 Okay, so right. So the fact that I'm not a fan of expert systems doesn't mean that I'm not a fan

12:08.320 --> 12:16.640
 of some kind of symbolic reasoning. So let's see roadblocks. Well, the main roadblock, I think,

12:16.640 --> 12:25.040
 was the idea that humans could articulate their knowledge effectively into some kind of

12:25.040 --> 12:30.560
 logical statements. So it's not just the cost, the effort, but really just the capability of

12:30.560 --> 12:37.120
 doing it. Right. Because we're all experts in vision, but totally don't have introspective access

12:37.120 --> 12:45.440
 into how we do that. Right. And it's true that, I mean, I think the idea was, well, of course,

12:45.440 --> 12:49.040
 even people then would know, of course, I wouldn't ask you to please write down the rules that you

12:49.040 --> 12:54.000
 use for recognizing a water bottle. That's crazy. And everyone understood that. But we might ask

12:54.000 --> 13:00.800
 you to please write down the rules you use for deciding, I don't know what tie to put on or

13:00.800 --> 13:08.240
 or how to set up a microphone or something like that. But even those things, I think people maybe,

13:08.880 --> 13:12.720
 I think what they found, I'm not sure about this, but I think what they found was that the

13:12.720 --> 13:19.120
 so called experts could give explanations that sort of post hoc explanations for how and why

13:19.120 --> 13:27.680
 they did things, but they weren't necessarily very good. And then they depended on maybe some

13:27.680 --> 13:33.280
 kinds of perceptual things, which again, they couldn't really define very well. So I think,

13:33.280 --> 13:38.800
 I think fundamentally, I think that the underlying problem with that was the assumption that people

13:38.800 --> 13:45.280
 could articulate how and why they make their decisions. Right. So it's almost encoding the

13:45.280 --> 13:51.440
 knowledge, converting it from the expert to something that a machine can understand and reason with.

13:51.440 --> 13:58.880
 No, no, no, not even just encoding, but getting it out of you. Not writing it. I mean,

13:58.880 --> 14:03.680
 yes, hard also to write it down for the computer. But I don't think that people can

14:04.240 --> 14:10.080
 produce it. You can tell me a story about why you do stuff. But I'm not so sure that's the why.

14:11.440 --> 14:16.960
 Great. So there are still on the hierarchical planning side,

14:16.960 --> 14:25.120
 places where symbolic reasoning is very useful, as you've talked about. So

14:27.840 --> 14:34.400
 where's the gap? Yeah, okay, good. So saying that humans can't provide a

14:34.400 --> 14:40.560
 description of their reasoning processes. That's okay, fine. But that doesn't mean that it's not

14:40.560 --> 14:44.880
 good to do reasoning of various styles inside a computer. Those are just two orthogonal points.

14:44.880 --> 14:50.560
 So then the question is, what kind of reasoning should you do inside a computer?

14:50.560 --> 14:55.680
 Right. And the answer is, I think you need to do all different kinds of reasoning inside

14:55.680 --> 15:01.680
 a computer, depending on what kinds of problems you face. I guess the question is, what kind of

15:01.680 --> 15:12.880
 things can you encode symbolically so you can reason about them? I think the idea about, and

15:12.880 --> 15:18.080
 even "symbolic," I don't even like that terminology because I don't know what it means technically

15:18.080 --> 15:24.240
 and formally. I do believe in abstractions. So abstractions are critical, right? You cannot

15:24.240 --> 15:30.240
 reason at completely fine grain about everything in your life, right? You can't make a plan at the

15:30.240 --> 15:37.680
 level of images and torques for getting a PhD. So you have to reduce the size of the state space

15:37.680 --> 15:43.040
 and you have to reduce the horizon if you're going to reason about getting a PhD or even buying

15:43.040 --> 15:50.080
 the ingredients to make dinner. And so how can you reduce the spaces and the horizon of the

15:50.080 --> 15:53.200
 reasoning you have to do? And the answer is abstraction, spatial abstraction, temporal

15:53.200 --> 15:58.080
 abstraction. I think abstraction along the lines of goals is also interesting, like you might

15:58.800 --> 16:03.840
 or well, abstraction and decomposition. Goals is maybe more of a decomposition thing.

16:03.840 --> 16:08.880
 So I think that's where these kinds of, if you want to call it symbolic or discrete

16:08.880 --> 16:15.440
 models come in. You talk about a room of your house instead of your pose. You talk about

16:16.800 --> 16:22.560
 doing something during the afternoon instead of at 2:54. And you do that because it makes

16:22.560 --> 16:30.000
 your reasoning problem easier and also because you don't have enough information

16:30.000 --> 16:37.120
 to reason in high fidelity about the pose of your elbow at 2:35 this afternoon anyway.

16:37.120 --> 16:39.440
 Right. When you're trying to get a PhD.

16:39.440 --> 16:41.600
 Right. Or when you're doing anything really.

16:41.600 --> 16:44.400
 Yeah, okay. Except for at that moment. At that moment,

16:44.400 --> 16:48.160
 you do have to reason about the pose of your elbow, maybe. But then maybe you do that in some

16:48.160 --> 16:55.680
 continuous joint space kind of model. And so again, my biggest point about all of this is that

16:55.680 --> 17:01.440
 there should be, the dogma is not the thing, right? It shouldn't be that I am in favor

17:01.440 --> 17:06.320
 or against symbolic reasoning and you're in favor or against neural networks. It should be that just

17:07.600 --> 17:12.240
 computer science tells us what the right answer to all these questions is if we were smart enough

17:12.240 --> 17:16.960
 to figure it out. Yeah. When you try to actually solve the problem with computers, the right answer

17:16.960 --> 17:22.880
 comes out. You mentioned abstractions. I mean, neural networks form abstractions or rather,

17:22.880 --> 17:30.320
 there's automated ways to form abstractions and there's expert driven ways to form abstractions

17:30.320 --> 17:35.920
 and expert human-driven ways. And humans just seem to be way better at forming abstractions

17:35.920 --> 17:44.080
 currently on certain problems. So when you're referring to 2:45 versus the afternoon,

17:44.960 --> 17:49.920
 how do we construct that taxonomy? Is there any room for automated construction of such

17:49.920 --> 17:55.200
 abstractions? Oh, I think eventually, yeah. I mean, I think when we get to be better

17:56.160 --> 18:02.240
 and machine learning engineers, we'll build algorithms that build awesome abstractions.

18:02.240 --> 18:06.720
 That are useful in this kind of way that you're describing. Yeah. So let's then step from

18:07.840 --> 18:14.400
 the abstraction discussion and let's talk about POMDPs,

18:14.400 --> 18:21.440
 Partially Observable Markov Decision Processes. So uncertainty. So first, what are Markov Decision

18:21.440 --> 18:27.520
 Processes? Maybe how much of our world can be modeled as

18:27.520 --> 18:32.080
 MDPs? When you wake up in the morning and you're making breakfast, how do you think

18:32.080 --> 18:38.080
 of yourself as an MDP? So how do you think about MDPs and how they relate to our world?

18:38.080 --> 18:43.040
 Well, so there's a stance question, right? So a stance is a position that I take with

18:43.040 --> 18:52.160
 respect to a problem. So I as a researcher or a person who designed systems can decide to make

18:52.160 --> 18:58.960
 a model of the world around me in some terms. So I take this messy world and I say, I'm going to

18:58.960 --> 19:04.640
 treat it as if it were a problem of this formal kind, and then I can apply solution concepts

19:04.640 --> 19:09.120
 or algorithms or whatever to solve that formal thing, right? So of course, the world is not

19:09.120 --> 19:14.080
 anything. It's not an MDP or a POMDP. I don't know what it is, but I can model aspects of it

19:14.080 --> 19:19.280
 in some way or some other way. And when I model some aspect of it in a certain way, that gives me

19:19.280 --> 19:25.600
 some set of algorithms I can use. You can model the world in all kinds of ways. Some

19:26.400 --> 19:32.880
 are more accepting of uncertainty, more easily modeling uncertainty of the world. Some really

19:32.880 --> 19:40.720
 force the world to be deterministic. And so certainly MDPs model the uncertainty of the world.

19:40.720 --> 19:47.200
 Yes. Model some uncertainty. They model not present state uncertainty, but they model uncertainty

19:47.200 --> 19:53.840
 in the way the future will unfold. Right. So what are Markov decision processes?

19:53.840 --> 19:57.680
 So Markov decision process is a model. It's a kind of a model that you can make that says,

19:57.680 --> 20:05.600
 I know completely the current state of my system. And what it means to be a state is that I have

20:05.600 --> 20:10.720
 all the information right now that will let me make predictions about the future as well as I

20:10.720 --> 20:14.640
 can. So that remembering anything about my history wouldn't make my predictions any better.

20:18.720 --> 20:23.680
 But then it also says that then I can take some actions that might change the state of the world

20:23.680 --> 20:28.800
 and that I don't have a deterministic model of those changes. I have a probabilistic model

20:28.800 --> 20:35.600
 of how the world might change. It's a useful model for some kinds of systems. I mean, it's

20:35.600 --> 20:43.280
 certainly not a good model for most problems. I think because for most problems, you don't

20:43.280 --> 20:49.680
 actually know the state. For most problems, it's partially observed. So that's now a different

20:49.680 --> 20:56.480
 problem class. So okay, that's where the problem depies, the partially observed Markov decision

20:56.480 --> 21:03.600
 process step in. So how do they address the fact that you can't observe most the incomplete

21:03.600 --> 21:09.360
 information about most of the world around you? Right. So now the idea is we still kind of postulate

21:09.360 --> 21:14.080
 that there exists a state. We think that there is some information about the world out there

21:14.640 --> 21:18.800
 such that if we knew that we could make good predictions, but we don't know the state.

21:18.800 --> 21:23.840
 And so then we have to think about how, but we do get observations. Maybe I get images or I hear

21:23.840 --> 21:29.520
 things or I feel things and those might be local or noisy. And so therefore they don't tell me

21:29.520 --> 21:35.440
 everything about what's going on. And then I have to reason about given the history of actions

21:35.440 --> 21:40.000
 I've taken and observations I've gotten, what do I think is going on in the world? And then

21:40.000 --> 21:43.920
 given my own kind of uncertainty about what's going on in the world, I can decide what actions to

21:43.920 --> 21:51.120
 take. And how difficult is this problem of planning under uncertainty, in your view and your

21:51.120 --> 21:57.840
 long experience of modeling the world, trying to deal with this uncertainty,

21:57.840 --> 22:04.240
 especially in real-world systems? Optimal planning for even discrete POMDPs can be

22:04.240 --> 22:12.000
 undecidable depending on how you set it up. And so lots of people say I don't use POMDPs

22:12.000 --> 22:17.600
 because they are intractable. And I think that that's a kind of a very funny thing to say because

22:18.880 --> 22:23.120
 the problem you have to solve is the problem you have to solve. So if the problem you have to

22:23.120 --> 22:28.160
 solve is intractable, that's what makes us AI people, right? So we solve, we understand that

22:28.160 --> 22:34.320
 the problem we're solving is wildly intractable that we will never be able to solve it optimally,

22:34.320 --> 22:41.360
 at least I don't. Yeah, right. So later we can come back to an idea about bounded optimality

22:41.360 --> 22:44.960
 and something. But anyway, we can't come up with optimal solutions to these problems.

22:45.520 --> 22:51.200
 So we have to make approximations. Approximations in modeling approximations in solution algorithms

22:51.200 --> 22:58.160
 and so on. And so I don't have a problem with saying, yeah, my problem actually is a POMDP in

22:58.160 --> 23:02.880
 continuous space with continuous observations. And it's so computationally complex. I can't

23:02.880 --> 23:10.320
 even think about its, you know, big-O whatever. But that doesn't prevent me; it helps me,

23:10.320 --> 23:17.360
 gives me some clarity to think about it that way. And to then take steps to make approximation

23:17.360 --> 23:22.080
 after approximation to get down to something that's like computable in some reasonable time.
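
To make the objects in this discussion concrete, here is a minimal sketch in Python of the kind of model being described: a tiny MDP with a probabilistic transition model, solved by value iteration as one possible solution algorithm. The states, actions, rewards, and numbers are all invented for illustration, not taken from the conversation.

# A tiny, hand-made MDP: states, actions, probabilistic transitions, rewards.
STATES = ["kitchen", "hall", "office"]
ACTIONS = ["stay", "move"]

# T[s][a] = list of (next_state, probability): the probabilistic transition model.
T = {
    "kitchen": {"stay": [("kitchen", 1.0)],
                "move": [("hall", 0.8), ("kitchen", 0.2)]},
    "hall":    {"stay": [("hall", 1.0)],
                "move": [("office", 0.7), ("hall", 0.3)]},
    "office":  {"stay": [("office", 1.0)],
                "move": [("office", 1.0)]},
}

# R[s] = immediate reward for being in state s; "office" plays the role of a goal.
R = {"kitchen": 0.0, "hall": 0.0, "office": 1.0}

def value_iteration(gamma=0.9, iters=100):
    """Compute state values under the best action choices for this tiny MDP."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: R[s] + gamma * max(sum(p * V[s2] for s2, p in T[s][a])
                                   for a in ACTIONS)
             for s in STATES}
    return V

print(value_iteration())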

23:22.080 --> 23:27.920
 When you think about optimality, you know, the community broadly has shifted on that, I think,

23:27.920 --> 23:35.600
 a little bit in how much they value the idea of optimality of chasing an optimal solution.

23:35.600 --> 23:42.240
 How have your views of chasing an optimal solution changed over the years working with robots?

23:42.240 --> 23:49.920
 That's interesting. I think we have a little bit of a methodological crisis, actually,

23:49.920 --> 23:54.000
 from the theoretical side. I mean, I do think that theory is important and that right now we're not

23:54.000 --> 24:00.640
 doing much of it. So there's lots of empirical hacking around and training this and doing that

24:00.640 --> 24:05.440
 and reporting numbers. But is it good? Is it bad? We don't know. It's very hard to say things.

24:08.240 --> 24:15.920
 And if you look at like computer science theory, so people talked for a while,

24:15.920 --> 24:21.280
 everyone was about solving problems optimally or completely. And then there were interesting

24:21.280 --> 24:27.520
 relaxations. So people look at, oh, can I, are there regret bounds? Or can I do some kind of,

24:27.520 --> 24:33.280
 you know, approximation? Can I prove something that I can approximately solve this problem or

24:33.280 --> 24:38.160
 that I get closer to the solution as I spend more time and so on? What's interesting, I think,

24:38.160 --> 24:47.680
 is that we don't have good approximate solution concepts for very difficult problems. Right?

24:47.680 --> 24:52.640
 I like to, you know, I like to say that I'm interested in doing a very bad job of very big

24:52.640 --> 25:02.960
 problems. Right. So very bad job, very big problems. I like to do that. But I wish I could say

25:02.960 --> 25:09.600
 something. I wish I had a, I don't know, some kind of a formal solution concept

25:10.320 --> 25:16.640
 that I could use to say, oh, this algorithm actually, it gives me something. Like, I know

25:16.640 --> 25:21.760
 what I'm going to get. I can do something other than just run it and get out. So that notion

25:21.760 --> 25:28.640
 is still somehow deeply compelling to you. The notion that you can say, you can drop a

25:28.640 --> 25:33.440
 thing on the table that says: you can expect this, this algorithm will give me some good results.

25:33.440 --> 25:38.960
 I hope there's, I hope science will, I mean, there's engineering and there's science,

25:38.960 --> 25:44.720
 I think that they're not exactly the same. And I think right now we're making huge engineering

25:45.600 --> 25:49.840
 like leaps and bounds. So the engineering is running away ahead of the science, which is cool.

25:49.840 --> 25:54.800
 And often how it goes, right? So we're making things and nobody knows how and why they work,

25:54.800 --> 26:03.200
 roughly. But we need to turn that into science. There's some form. It's, yeah,

26:03.200 --> 26:07.200
 there's some room for formalizing. We need to know what the principles are. Why does this work?

26:07.200 --> 26:12.480
 Why does that not work? I mean, for a while people built bridges by trying, but now we can often

26:12.480 --> 26:17.520
 predict whether it's going to work or not without building it. Can we do that for learning systems

26:17.520 --> 26:23.600
 or for robots? See, your hope is from a materialistic perspective that intelligence,

26:23.600 --> 26:28.000
 artificial intelligence systems, robots are kind of just fancier bridges.

26:29.200 --> 26:33.600
 Belief space. What's the difference between belief space and state space? So we mentioned

26:33.600 --> 26:42.000
 MDPs, POMDPs, you're reasoning about, you sense the world, there's a state. What's this belief

26:42.000 --> 26:49.040
 space idea? Yeah. Okay, that sounds good. It sounds good. So belief space, that is, instead of

26:49.040 --> 26:54.880
 thinking about what's the state of the world and trying to control that as a robot, I think about

26:55.760 --> 27:01.120
 what is the space of beliefs that I could have about the world? What's, if I think of a belief

27:01.120 --> 27:06.640
 as a probability distribution over the ways the world could be, a belief state is a distribution,

27:06.640 --> 27:13.040
 and then my control problem, if I'm reasoning about how to move through a world I'm uncertain about,

27:14.160 --> 27:18.880
 my control problem is actually the problem of controlling my beliefs. So I think about taking

27:18.880 --> 27:23.120
 actions, not just what effect they'll have on the world outside, but what effect they'll have on my

27:23.120 --> 27:29.920
 own understanding of the world outside. And so that might compel me to ask a question or look

27:29.920 --> 27:35.280
 somewhere to gather information, which may not really change the world state, but it changes

27:35.280 --> 27:43.440
 my own belief about the world. That's a powerful way to empower the agent to reason about the

27:43.440 --> 27:47.840
 world, to explore the world. What kind of problems does it allow you to solve to

27:49.040 --> 27:54.560
 consider belief space versus just state space? Well, any problem that requires deliberate

27:54.560 --> 28:02.800
 information gathering. So in some problems, like chess, there's no uncertainty, or maybe

28:02.800 --> 28:06.320
 there's uncertainty about the opponent. There's no uncertainty about the state.

28:08.400 --> 28:14.000
 And some problems, there's uncertainty, but you gather information as you go. You might say,

28:14.000 --> 28:18.240
 oh, I'm driving my autonomous car down the road, and it doesn't know perfectly where it is, but

28:18.240 --> 28:23.280
 the LiDARs are all going all the time. So I don't have to think about whether to gather information.
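
A concrete illustration of the belief-state idea being described: the agent never observes the true state, only noisy observations, and it maintains a probability distribution (the belief) that a Bayes filter updates after each action and observation. The two-state "is there a car behind me" world and all of the probabilities below are invented for illustration.

STATES = ["lane_clear", "car_behind"]

# P(next_state | state, action): the world drifts a little on its own here,
# independently of the (ignored) action.
def transition(state, action):
    if state == "lane_clear":
        return {"lane_clear": 0.9, "car_behind": 0.1}
    return {"lane_clear": 0.2, "car_behind": 0.8}

# P(observation | state): looking is informative but noisy.
def obs_likelihood(obs, state):
    if obs == "saw_car":
        return 0.8 if state == "car_behind" else 0.1
    return 0.2 if state == "car_behind" else 0.9

def belief_update(belief, action, obs):
    """One Bayes-filter step: predict with the transition model,
    weight by the observation likelihood, then normalize."""
    predicted = {s2: sum(belief[s] * transition(s, action)[s2] for s in STATES)
                 for s2 in STATES}
    unnormalized = {s: obs_likelihood(obs, s) * predicted[s] for s in STATES}
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

belief = {"lane_clear": 0.5, "car_behind": 0.5}
belief = belief_update(belief, "keep_driving", "saw_car")
print(belief)  # probability mass shifts toward "car_behind"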

28:24.160 --> 28:28.800
 But if you're a human driving down the road, you sometimes look over your shoulder to see what's

28:28.800 --> 28:36.320
 going on behind you in the lane. And you have to decide whether you should do that now. And you

28:36.320 --> 28:40.400
 have to trade off the fact that you're not seeing in front of you, and you're looking behind you,

28:40.400 --> 28:45.440
 and how valuable is that information, and so on. And so to make choices about information

28:45.440 --> 28:56.080
 gathering, you have to reason in belief space. Also to just take into account your own uncertainty

28:56.080 --> 29:03.280
 before trying to do things. So you might say, if I understand where I'm standing relative to the

29:03.280 --> 29:08.880
 door jamb, pretty accurately, then it's okay for me to go through the door. But if I'm really not

29:08.880 --> 29:14.240
 sure where the door is, then it might be better to not do that right now. The degree of your

29:14.240 --> 29:18.800
 uncertainty about the world is actually part of the thing you're trying to optimize in forming the

29:18.800 --> 29:26.560
 plan, right? So this idea of a long horizon of planning for a PhD or just even how to get out

29:26.560 --> 29:32.720
 of the house or how to make breakfast, you show this presentation of the WTF, where's the fork

29:33.360 --> 29:42.000
 of a robot looking at a sink. And can you describe how we plan in this world with this idea of hierarchical

29:42.000 --> 29:52.000
 planning we've mentioned? Yeah, how can a robot hope to plan about something with such a long

29:52.000 --> 29:58.400
 horizon where the goal is quite far away? People since probably reasoning began have thought about

29:58.400 --> 30:02.560
 hierarchical reasoning, the temporal hierarchy in particular. Well, there's spatial hierarchy,

30:02.560 --> 30:06.240
 but let's talk about temporal hierarchy. So you might say, oh, I have this long

30:06.240 --> 30:13.680
 execution I have to do, but I can divide it into some segments abstractly, right? So maybe

30:14.400 --> 30:19.360
 I have to get out of the house, I have to get in the car, I have to drive, and so on. And so

30:20.800 --> 30:25.920
 you can plan if you can build abstractions. So we started out by talking about abstractions,

30:25.920 --> 30:30.080
 and we're back to that now. If you can build abstractions in your state space,

30:30.080 --> 30:37.760
 and abstractions, sort of temporal abstractions, then you can make plans at a high level. And you

30:37.760 --> 30:42.320
 can say, I'm going to go to town, and then I'll have to get gas, and I can go here, and I can do

30:42.320 --> 30:47.360
 this other thing. And you can reason about the dependencies and constraints among these actions,

30:47.920 --> 30:55.600
 again, without thinking about the complete details. What we do in our hierarchical planning work is

30:55.600 --> 31:00.960
 then say, all right, I make a plan at a high level of abstraction. I have to have some

31:00.960 --> 31:06.640
 reason to think that it's feasible without working it out in complete detail. And that's

31:06.640 --> 31:10.800
 actually the interesting step. I always like to talk about walking through an airport, like

31:12.160 --> 31:16.720
 you can plan to go to New York and arrive at the airport, and then find yourself in an office

31:16.720 --> 31:21.520
 building later. You can't even tell me in advance what your plan is for walking through the airport,

31:21.520 --> 31:26.320
 partly because you're too lazy to think about it maybe, but partly also because you just don't

31:26.320 --> 31:30.960
 have the information. You don't know what gate you're landing in or what people are going to be

31:30.960 --> 31:37.040
 in front of you or anything. So there's no point in planning in detail. But you have to have,

31:38.000 --> 31:43.760
 you have to make a leap of faith that you can figure it out once you get there. And it's really

31:43.760 --> 31:52.000
 interesting to me how you arrive at that. How do you, so you have learned over your lifetime to be

31:52.000 --> 31:56.800
 able to make some kinds of predictions about how hard it is to achieve some kinds of sub goals.

31:57.440 --> 32:01.440
 And that's critical. Like you would never plan to fly somewhere if you couldn't,

32:02.000 --> 32:05.200
 didn't have a model of how hard it was to do some of the intermediate steps.

32:05.200 --> 32:09.440
 So one of the things we're thinking about now is how do you do this kind of very aggressive

32:09.440 --> 32:16.400
 generalization to situations that you haven't been in and so on to predict how long will it

32:16.400 --> 32:20.400
 take to walk through the Kuala Lumpur airport? Like you could give me an estimate and it wouldn't

32:20.400 --> 32:26.800
 be crazy. And you have to have an estimate of that in order to make plans that involve

32:26.800 --> 32:30.080
 walking through the Kuala Lumpur airport, even if you don't need to know it in detail.
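
A minimal sketch in Python of the hierarchical idea just described: plan over abstract steps using rough cost estimates and a leap of faith about their feasibility, and defer the detailed planning of each step until you are actually there. The steps and the estimates are invented for illustration.

# (name, precondition, effect, estimated cost in minutes). The estimates are the
# abstract model: you believe each step is feasible without working out details.
ABSTRACT_ACTIONS = [
    ("leave_house",          "at_home",    "outside",    10),
    ("drive_to_airport",     "outside",    "at_airport", 40),
    ("fly_to_destination",   "at_airport", "at_arrival", 240),
    ("walk_through_airport", "at_arrival", "at_taxi",    20),   # estimate only
    ("taxi_to_office",       "at_taxi",    "at_office",  45),
]

def abstract_plan(start, goal):
    """Chain abstract actions whose preconditions match, accumulating estimates.
    Detailed planning of each step is deferred to execution time."""
    state, plan, total = start, [], 0
    for name, precondition, effect, cost in ABSTRACT_ACTIONS:
        if state == precondition:
            plan.append(name)
            total += cost
            state = effect
        if state == goal:
            return plan, total
    return None, None

plan, minutes = abstract_plan("at_home", "at_office")
print(plan, minutes)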

32:31.040 --> 32:35.520
 So I'm really interested in these kinds of abstract models and how do we acquire them.

32:35.520 --> 32:39.760
 But once we have them, we can use them to do hierarchical reasoning, which I think is very

32:39.760 --> 32:46.400
 important. Yeah, there's this notion of goal regression and preimage backchaining.

32:46.400 --> 32:53.760
 This idea of starting at the goal and just forming these big clouds of states. I mean,

32:54.560 --> 33:01.840
 it's almost like, with going to the airport, you know, once you show up to the airport,

33:01.840 --> 33:08.560
 you're like a few steps away from the goal. So thinking of it this way is kind of interesting.
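
A minimal sketch of the preimage backchaining idea being described here: start from the goal and repeatedly add the states from which some action reaches the current cloud, until the start state is covered. The tiny deterministic domain is invented for illustration.

# successor[state][action] = next state, in a toy deterministic world.
SUCCESSOR = {
    "home":    {"drive": "airport"},
    "airport": {"fly": "destination_airport"},
    "destination_airport": {"taxi": "office"},
    "office":  {},
}

def preimage(goal_set):
    """States from which a single action lands inside the goal set."""
    return {s for s, acts in SUCCESSOR.items()
            if any(nxt in goal_set for nxt in acts.values())}

def backchain(start, goal):
    cloud, steps = {goal}, 0
    while start not in cloud and steps < len(SUCCESSOR):
        cloud = cloud | preimage(cloud)   # the cloud of states grows backward
        steps += 1
    return cloud, steps

print(backchain("home", "office"))  # the final cloud and how many steps back it took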

33:08.560 --> 33:15.600
 I don't know if you have further comments on that of starting at the goal. Yeah, I mean,

33:15.600 --> 33:22.400
 it's interesting that Herb Simon back in the early days of AI talked a lot about

33:22.400 --> 33:26.960
 means ends reasoning and reasoning back from the goal. There's a kind of an intuition that people

33:26.960 --> 33:34.960
 have that the state space is big, the number of actions you could take is really big.

33:35.760 --> 33:39.440
 So if you say, here I sit and I want to search forward from where I am, what are all the things

33:39.440 --> 33:45.040
 I could do? That's just overwhelming. If you say, if you can reason at this other level and say,

33:45.040 --> 33:49.520
 here's what I'm hoping to achieve, what can I do to make that true that somehow the

33:49.520 --> 33:54.000
 branching is smaller? Now, what's interesting is that like in the AI planning community,

33:54.000 --> 33:59.120
 that hasn't worked out in the class of problems that they look at and the methods that they tend

33:59.120 --> 34:04.400
 to use, it hasn't turned out that it's better to go backward. It's still kind of my intuition

34:04.400 --> 34:10.000
 that it is, but I can't prove that to you right now. Right. I share your intuition, at least for us

34:10.720 --> 34:19.920
 mere humans. Speaking of which, maybe now we take a little step into that

34:19.920 --> 34:27.280
 philosophy circle. When you think about human life, you give those examples

34:27.280 --> 34:32.400
 often, how hard do you think it is to formulate human life as a planning problem or aspects of

34:32.400 --> 34:37.600
 human life? So when you look at robots, you're often trying to think about object manipulation,

34:38.640 --> 34:46.240
 tasks about moving a thing. When you take a slight step outside the room, let the robot

34:46.240 --> 34:54.480
 leave and go get lunch or maybe try to pursue more fuzzy goals. How hard do you think that

34:54.480 --> 35:00.720
 problem is? If you were to try, to maybe put it another way, to formulate human life as a planning

35:00.720 --> 35:05.680
 problem. Well, that would be a mistake. I mean, it's not all a planning problem, right? I think

35:05.680 --> 35:11.920
 it's really, really important that we understand that you have to put together pieces and parts

35:11.920 --> 35:18.640
 that have different styles of reasoning and representation and learning. I think it seems

35:18.640 --> 35:25.680
 probably clear to anybody that it can't all be this or all be that. Brains aren't all like this

35:25.680 --> 35:30.160
 or all like that, right? They have different pieces and parts and substructure and so on.

35:30.160 --> 35:34.400
 So I don't think that there's any good reason to think that there's going to be like one true

35:34.400 --> 35:39.600
 algorithmic thing that's going to do the whole job. Just a bunch of pieces together,

35:39.600 --> 35:48.160
 designed to solve a bunch of specific problems. Or maybe styles of problems. I mean,

35:48.160 --> 35:52.880
 there's probably some reasoning that needs to go on in image space. I think, again,

35:55.840 --> 35:59.440
 there's this model base versus model free idea, right? So in reinforcement learning,

35:59.440 --> 36:06.000
 people talk about, oh, should I learn? I could learn a policy just straight up a way of behaving.

36:06.000 --> 36:11.360
 I could learn, as is popular, a value function. That's some kind of weird intermediate ground.

36:13.360 --> 36:17.440
 Or I could learn a transition model, which tells me something about the dynamics of the world.

36:18.320 --> 36:22.560
 If I take a, imagine that I learn a transition model and I couple it with a planner and I

36:22.560 --> 36:29.520
 draw a box around that, I have a policy again. It's just stored a different way, right?
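
A minimal sketch of that point: a lookup-table policy and a learned model wrapped in a planner present the same interface, state in, action out; they just trade storage space for computation time. The toy domain and names below are invented for illustration.

# An overt policy: a table, cheap to query but potentially huge to store.
POLICY_TABLE = {"kitchen": "move", "hall": "move", "office": "stay"}

def table_policy(state):
    return POLICY_TABLE[state]

# A compact model of the dynamics (deterministic here, for brevity)...
def model(state, action):
    if action == "move":
        return {"kitchen": "hall", "hall": "office", "office": "office"}[state]
    return state

# ...plus a trivial one-step-lookahead planner. Draw a box around the pair and,
# from the outside, it is a policy again: it maps a state to an action.
def planner_policy(state, goal="office", actions=("move", "stay")):
    for action in actions:
        if model(state, action) == goal:
            return action
    return "move"

print(table_policy("hall"), planner_policy("hall"))  # same answer, different storage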

36:30.800 --> 36:34.560
 But it's just as much of a policy as the other policy. It's just I've made, I think,

36:34.560 --> 36:41.920
 the way I see it is it's a time space trade off in computation, right? A more overt policy

36:41.920 --> 36:47.680
 representation. Maybe it takes more space, but maybe I can compute quickly what action I should

36:47.680 --> 36:52.880
 take. On the other hand, maybe a very compact model of the world dynamics plus a planner

36:53.680 --> 36:58.240
 lets me compute what action to take, just more slowly. There's no, I mean, I don't think,

36:58.240 --> 37:04.240
 there's no argument to be had. It's just like a question of what form of computation is best

37:04.240 --> 37:12.720
 for us. For the various subproblems. Right. So, like learning to do algebra manipulations

37:12.720 --> 37:17.280
 for some reason is, I mean, that's probably going to want naturally a sort of a different

37:17.280 --> 37:22.640
 representation than riding a unicycle. The time constraints on the unicycle are serious.

37:22.640 --> 37:28.640
 The space is maybe smaller. I don't know. But so it could be the more human side of

37:28.640 --> 37:36.240
 falling in love, having a relationship, that might be another style. I have no idea how to model

37:36.240 --> 37:43.280
 that. Yeah, let's first solve the algebra and the object manipulation. What do you think

37:44.160 --> 37:50.480
 is harder, perception or planning? Perception, understanding, that's why.

37:51.920 --> 37:55.440
 So what do you think is so hard about perception, about understanding the world around you?

37:55.440 --> 38:03.520
 Well, I mean, I think the big question is representational. Hugely, the question is

38:03.520 --> 38:12.560
 representation. So perception has made great strides lately, right? And we can classify images and we

38:12.560 --> 38:17.760
 can play certain kinds of games and predict how to steer the car and all this sort of stuff.

38:17.760 --> 38:28.160
 I don't think we have a very good idea of what perception should deliver, right? So if you

38:28.160 --> 38:32.000
 if you believe in modularity, okay, there's a very strong view which says

38:34.640 --> 38:40.400
 we shouldn't build in any modularity, we should make a giant gigantic neural network,

38:40.400 --> 38:44.000
 train it end to end to do the thing. And that's the best way forward.

38:44.000 --> 38:51.280
 And it's hard to argue with that except on a sample complexity basis, right? So you might say,

38:51.280 --> 38:55.120
 oh, well, if I want to do end-to-end reinforcement learning on this gigantic neural network,

38:55.120 --> 39:00.800
 it's going to take a lot of data and a lot of like broken robots and stuff. So

39:02.640 --> 39:10.000
 then the only answer is to say, okay, we have to build something in, build in some structure

39:10.000 --> 39:14.080
 or some bias. We know from the theory of machine learning that the only way to cut down the sample

39:14.080 --> 39:20.320
 complexity is to somehow cut down the hypothesis space, and you can do that by

39:20.320 --> 39:24.880
 building in bias. There's all kinds of reasons to think that nature built bias into humans.

39:27.520 --> 39:32.800
 Convolution is a bias, right? It's a very strong bias and it's a very critical bias.
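
As a rough, hypothetical illustration of how strong that bias is (the image size and channel counts below are made up), compare the free parameters of a fully connected layer over an image with those of a 3x3 convolution producing the same number of output channels:

```python
# Hypothetical sizes: a 224x224 RGB image mapped to 64 feature channels.
H, W, C_in, C_out, k = 224, 224, 3, 64, 3

# A fully connected layer from every input pixel to every output unit:
fc_params = (H * W * C_in) * (H * W * C_out)        # about 4.8e11 weights

# A 3x3 convolution with shared weights and local connectivity (ignoring biases):
conv_params = k * k * C_in * C_out                  # 1,728 weights

print(f"fully connected: {fc_params:,}")
print(f"3x3 convolution: {conv_params:,}")
```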

39:32.800 --> 39:39.520
 So my view is that we should look for more things that are like convolution, but that address other

39:39.520 --> 39:43.440
 aspects of reasoning, right? So convolution helps us a lot with a certain kind of spatial

39:43.440 --> 39:51.520
 reasoning that's quite close to the imaging. I think there are other ideas like that,

39:52.400 --> 39:58.240
 maybe some amount of forward search, maybe some notions of abstraction, maybe the notion that

39:58.240 --> 40:02.560
 objects exist, actually, I think that's pretty important. And a lot of people won't give you

40:02.560 --> 40:06.720
 that to start with, right? So almost like a convolution in the

40:08.640 --> 40:13.600
 semantic object space, or some kind of ideas in there. That's right.

40:13.600 --> 40:17.680
 And, like, graph convolutions are an idea that's related to

40:17.680 --> 40:26.240
 relational representations. And so, I've come far afield from perception,

40:26.240 --> 40:33.200
 but I think the thing that's going to take perception to the next step is

40:33.200 --> 40:38.000
 actually understanding better what it should produce, right? So what are we going to do with

40:38.000 --> 40:41.920
 the output of it, right? It's fine when what we're going to do with the output is steer,

40:41.920 --> 40:48.880
 it's less clear when we're just trying to make one integrated intelligent agent,

40:48.880 --> 40:53.520
 what should the output of perception be? We have no idea. And how should that hook up to the other

40:53.520 --> 41:00.240
 stuff? We don't know. So I think the pressing question is, what kinds of structure can we

41:00.240 --> 41:05.520
 build in that are like the moral equivalent of convolution that will make a really awesome

41:05.520 --> 41:10.240
 superstructure that then learning can kind of progress on efficiently?

41:10.240 --> 41:14.080
 I agree, a very compelling description of where we actually stand with perception. So,

41:15.280 --> 41:19.120
 you're teaching a course on embodied intelligence. What do you think it takes to

41:19.120 --> 41:24.800
 build a robot with human-level intelligence? I don't know. If we knew, we would do it.

41:27.680 --> 41:34.240
 If you were to, I mean, okay, so do you think a robot needs to have self-awareness,

41:36.000 --> 41:44.160
 consciousness, fear of mortality? Or is it, is it simpler than that? Or is consciousness a simple

41:44.160 --> 41:51.680
 thing? Do you think about these notions? I don't think much about consciousness. Even most philosophers

41:51.680 --> 41:56.560
 who care about it will give you that you could have robots that are zombies, right, that behave

41:56.560 --> 42:01.360
 like humans but are not conscious. And I, at this moment, would be happy enough for that. So I'm not

42:01.360 --> 42:06.240
 really worried one way or the other. So on the technical side, you're not thinking about the use of self-

42:06.240 --> 42:13.760
 awareness? Well, okay, but then what does self-awareness mean? I mean, that you need to have

42:13.760 --> 42:18.800
 some part of the system that can observe other parts of the system and tell whether they're

42:18.800 --> 42:24.560
 working well or not. That seems critical. So does that count as, I mean, does that count as

42:24.560 --> 42:30.560
 self-awareness or not? Well, it depends on whether you think that there's somebody at home who can

42:30.560 --> 42:35.600
 articulate whether they're self-aware. But clearly, if I have, you know, some piece of code

42:35.600 --> 42:41.120
 that's counting how many times this procedure gets executed, that's a kind of self-awareness,

42:41.120 --> 42:44.560
 right? So there's a big spectrum. It's clear you have to have some of it.
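
A tiny sketch of that minimal end of the spectrum, with a made-up skill name and wrapper: a procedure is wrapped so that the rest of the system can observe how often it has been executed.

```python
import functools

def monitored(fn):
    """Wrap fn so another part of the system can observe how often it runs."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@monitored
def grasp_object(pose):
    # placeholder for a real robot skill
    return f"grasping at {pose}"

grasp_object((0.1, 0.2))
grasp_object((0.3, 0.4))
print(grasp_object.calls)  # 2 -- a minimal kind of self-monitoring
```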

42:44.560 --> 42:48.800
 Right. You know, we're quite far away on many dimensions, but what is the direction of research

42:49.600 --> 42:54.720
 that's most compelling to you for, you know, trying to achieve human-level intelligence

42:54.720 --> 43:00.880
 in our robots? Well, to me, I guess the thing that seems most compelling to me at the moment is this

43:00.880 --> 43:10.320
 question of what to build in and what to learn. I think we're missing a bunch of

43:10.320 --> 43:17.120
 ideas. And, you know, don't you dare ask me how many years it's going

43:17.120 --> 43:22.320
 to be until that happens, because I won't even participate in the conversation. Because I think

43:22.320 --> 43:26.240
 we're missing ideas and I don't know how long it's going to take to find them. So I won't ask you

43:26.240 --> 43:34.160
 how many years, but maybe I'll ask you when you will be sufficiently impressed that we've

43:34.160 --> 43:41.280
 achieved it. So what's a good test of intelligence? Do you like the Turing test in natural language?

43:41.280 --> 43:47.520
 In the robotic space, is there something where you would sit back and think, oh, that's pretty

43:47.520 --> 43:52.800
 impressive as a test, as a benchmark? Do you think about these kinds of problems?

43:52.800 --> 43:58.480
 No, I resist. I mean, I think all the time that we spend arguing about those kinds of things could

43:58.480 --> 44:04.800
 be better spent just making the robots work better. So you don't value competition? So, I mean,

44:04.800 --> 44:11.280
 there's the nature of benchmarks and data sets, or Turing test challenges, where

44:11.280 --> 44:15.440
 everybody kind of gets together and tries to build a better robot because they want to outcompete

44:15.440 --> 44:21.360
 each other, like the DARPA challenge with the autonomous vehicles. Do you see the value of that?

44:23.600 --> 44:27.440
 Or can it get in the way? I think it can get in the way. I mean, many people find it

44:27.440 --> 44:34.880
 motivating, and so that's good. I find it anti-motivating personally. But I think you get an

44:34.880 --> 44:41.200
 interesting cycle where for a contest, a bunch of smart people get super motivated and they hack

44:41.200 --> 44:47.120
 their brains out. And much of what gets done is just hacks, but sometimes really cool ideas emerge.

44:47.120 --> 44:53.360
 And then that gives us something to chew on after that. So it's not a thing for me, but I don't

44:53.360 --> 44:58.480
 I don't regret that other people do it. Yeah, it's like you said, with everything else that

44:58.480 --> 45:03.840
 makes us good. So jumping topics a little bit, you started the Journal of Machine Learning Research

45:04.560 --> 45:09.920
 and served as its editor-in-chief. How did the publication come about?

45:11.680 --> 45:17.040
 And what do you think about the current publishing model in machine learning and

45:17.040 --> 45:23.040
 artificial intelligence? Okay, good. So it came about because there was a journal called Machine

45:23.040 --> 45:29.840
 Learning, which still exists, which was owned by Kluwer. And I was on the editorial

45:29.840 --> 45:33.520
 board and we used to have these meetings annually where we would complain to Kluwer that

45:33.520 --> 45:37.280
 it was too expensive for the libraries and that people couldn't publish. And we would really

45:37.280 --> 45:41.920
 like to have some kind of relief on those fronts. And they would always sympathize,

45:41.920 --> 45:49.120
 but not do anything. So we just decided to make a new journal. And there was the Journal of AI

45:49.120 --> 45:54.880
 Research, which was on the same model and had been in existence for maybe five years or so,

45:54.880 --> 46:01.920
 and it was going pretty well. So we just made a new journal. It wasn't, I mean,

46:03.600 --> 46:07.600
 I don't know, I guess it was work, but it wasn't that hard. So basically the editorial board,

46:07.600 --> 46:17.440
 probably 75% of the editorial board of Machine Learning resigned, and we founded the new journal.

46:17.440 --> 46:25.280
 But it was sort of more open? Yeah, right. So it's completely open. It's open access.

46:25.280 --> 46:31.600
 Actually, I had a postdoc, George Konidaris, who wanted to call these journals "free-for-alls."

46:33.520 --> 46:37.600
 Because, I mean, it both has no page charges and has no

46:40.080 --> 46:45.520
 access restrictions. And so, I mean, there were

46:45.520 --> 46:50.240
 people who were mad about the existence of this journal, who thought it was a fraud or

46:50.240 --> 46:55.200
 something. It would be impossible, they said, to run a journal like this. Basically,

46:55.200 --> 46:59.200
 I mean, for a long time, I didn't even have a bank account. I paid for the

46:59.840 --> 47:06.640
 lawyer to incorporate and for the IP address. And it just cost a couple hundred dollars a year

47:06.640 --> 47:12.880
 to run. It's a little bit more now, but not that much more. But that's because I think computer

47:12.880 --> 47:19.920
 scientists are competent and autonomous in a way that many scientists in other fields aren't.

47:19.920 --> 47:23.920
 I mean, at doing these kinds of things. We already typeset our own papers,

47:23.920 --> 47:28.000
 we all have students and people who can hack a website together in the afternoon.

47:28.000 --> 47:32.960
 So the infrastructure for us was like, not a problem, but for other people in other fields,

47:32.960 --> 47:38.960
 it's a harder thing to do. Yeah. And this kind of open-access journal is, nevertheless,

47:38.960 --> 47:45.840
 one of the most prestigious journals. So prestige can be achieved

47:45.840 --> 47:49.920
 without any of the paper. Paper is not required for prestige, it turns out. Yeah.

47:50.640 --> 47:56.960
 So on the review process side: actually, a long time ago, I don't remember when, I reviewed a paper

47:56.960 --> 48:01.360
 where you were also a reviewer and I remember reading your review and being influenced by it.

48:01.360 --> 48:06.480
 It was really well written. It influenced how I write future reviews. You disagreed with me,

48:06.480 --> 48:15.280
 actually. And it made my review much better. But nevertheless, the review process

48:16.880 --> 48:23.600
 has its flaws. And what do you think works well? How can it be improved?

48:23.600 --> 48:27.600
 So actually, when I started JMLR, I wanted to do something completely different.

48:28.720 --> 48:34.800
 And I didn't because it felt like we needed a traditional journal of record and so we just

48:34.800 --> 48:40.800
 made JMLR almost like a normal journal, except for the open-access parts of it, basically.

48:43.200 --> 48:47.600
 Increasingly, of course, publication is not even a sensible word. You can publish something by

48:47.600 --> 48:54.400
 putting it on arXiv, so I can publish everything tomorrow. So making stuff public,

48:54.400 --> 49:04.800
 there's no barrier. We still need curation and evaluation. I don't have time to read all of

49:04.800 --> 49:20.480
 arXiv. And you could argue that some kind of social thumbs-upping of articles suffices, right? You

49:20.480 --> 49:25.440
 might say, oh, heck with this, we don't need journals at all. We'll put everything on arXiv

49:25.440 --> 49:30.400
 and people will upvote and downvote the articles and then your CV will say, oh, man, he got a lot

49:30.400 --> 49:44.000
 of upvotes. So that's good. But I think there's still value in careful reading and commentary of

49:44.000 --> 49:48.480
 things. And it's hard to tell when people are upvoting and downvoting or arguing about your

49:48.480 --> 49:55.440
 paper on Twitter and Reddit, whether they know what they're talking about. So then I have the

49:55.440 --> 50:01.360
 second-order problem of trying to decide whose opinions I should value and such. So I don't

50:01.360 --> 50:06.240
 know. If I had infinite time, which I don't, and I'm not going to do this because I really want to

50:06.240 --> 50:11.920
 make robots work, but if I felt inclined to do something more in a publication direction,

50:12.880 --> 50:16.160
 I would do this other thing, which I thought about doing the first time, which is to get

50:16.160 --> 50:22.480
 together some set of people whose opinions I value and who are pretty articulate. And I guess we

50:22.480 --> 50:27.520
 would be public, although we could be private, I'm not sure. And we would review papers. We wouldn't

50:27.520 --> 50:31.600
 publish them and you wouldn't submit them. We would just find papers and we would write reviews

50:32.720 --> 50:39.120
 and we would make those reviews public. And, you know, so we're "Leslie's friends" who

50:39.120 --> 50:45.200
 review papers, and maybe eventually, if our opinion was sufficiently valued, like the opinion

50:45.200 --> 50:50.800
 of JMLR is valued, then you'd say on your CV that Leslie's friends gave my paper a five-star rating

50:50.800 --> 50:58.800
 and that would be just as good as saying I got it accepted into this journal. So I think we

50:58.800 --> 51:04.800
 should have good public commentary and organize it in some way, but I don't really know how to

51:04.800 --> 51:09.120
 do it. These are interesting times. The way you describe it actually is really interesting. I mean,

51:09.120 --> 51:15.040
 we do it for movies on IMDb.com. There are expert critics who come in and write reviews, but there are

51:15.040 --> 51:20.960
 also regular, non-critic humans who write reviews, and they're separated. I like OpenReview.

51:22.240 --> 51:31.600
 The ICLR process, I think, is interesting. It's a step in the right direction, but it's still

51:31.600 --> 51:39.840
 not as compelling as reviewing movies or video games. I mean, it might almost be

51:39.840 --> 51:44.400
 silly to say, at least from my perspective, but it boils down to the user interface: how fun and

51:44.400 --> 51:50.400
 easy it is to actually perform the reviews, how efficient, how much you as a reviewer get

51:51.200 --> 51:56.560
 street cred for being a good reviewer. Those human elements come into play.

51:57.200 --> 52:03.600
 No, it's a big investment to do a good review of a paper and the flood of papers is out of control.

52:05.280 --> 52:08.960
 There aren't 3,000 new, I don't know how many new movies there are in a year, I don't know,

52:08.960 --> 52:15.440
 but it's probably going to be fewer than the number of machine learning papers there are in a year now.

52:19.840 --> 52:22.320
 Right, so I'm like an old person, so of course I'm going to say,

52:23.520 --> 52:30.240
 things are moving too fast, I'm a stick in the mud. So I can say that, but my particular flavor

52:30.240 --> 52:38.240
 of that is, I think the horizon for researchers has gotten very short, that students want to

52:38.240 --> 52:46.000
 publish a lot of papers, and it's exciting, and there's value in that, and you get patted on the

52:46.000 --> 52:58.320
 head for it and so on. And some of that is fine, but I'm worried that we're driving out people who

52:58.320 --> 53:05.280
 would spend two years thinking about something. Back in my day, when we worked on our theses,

53:05.280 --> 53:10.560
 we did not publish papers, you did your thesis for years, you picked a hard problem and then you

53:10.560 --> 53:16.320
 worked and chewed on it and did stuff and wasted time, for a long time. And roughly

53:16.320 --> 53:22.800
 when it was done, you would write papers. And so I don't know how to, and I don't think that

53:22.800 --> 53:26.720
 everybody has to work in that mode, but I think there are some problems that are hard enough

53:27.680 --> 53:31.680
 that it's important to have a longer research horizon and I'm worried that

53:31.680 --> 53:39.600
 we don't incentivize that at all at this point. In the current structure. So, continuing on this theme,

53:41.440 --> 53:47.280
 what are your hopes and fears about the future of AI? So AI has

53:47.280 --> 53:53.440
 gone through a few winters, ups and downs. Do you see another winter of AI coming?

53:53.440 --> 54:02.480
 Or are you more hopeful about making robots work, as you said? I think the cycles are inevitable,

54:03.040 --> 54:10.080
 but I think each time we get higher, right? I mean, it's like climbing some kind of

54:10.080 --> 54:19.600
 landscape with a noisy optimizer. So it's clear that the deep learning stuff has

54:19.600 --> 54:25.760
 made deep and important improvements. And so the high-water mark is now higher. There's no question.

54:25.760 --> 54:34.400
 But of course, I think people are overselling and eventually investors, I guess, and other people

54:34.400 --> 54:40.640
 look around and say, well, you're not quite delivering on this grand claim and that wild

54:40.640 --> 54:47.680
 hypothesis. It's like, probably it's going to crash somewhat, and then it's okay. I mean,

54:47.680 --> 54:54.000
 it's okay. I mean, I can't imagine that there's some awesome monotonic improvement

54:54.000 --> 55:01.760
 from here to human-level AI. So, you know, I have to ask this question, and I can probably anticipate

55:01.760 --> 55:09.120
 the answers. But do you have a worry, short-term or long-term, about the existential

55:09.120 --> 55:18.880
 threats of AI? And maybe, short term, less existential, but more about robots taking away jobs?

55:20.480 --> 55:28.000
 Well, actually, let me talk a little bit about utility. Actually, I had an interesting conversation

55:28.000 --> 55:32.480
 with some military ethicists who wanted to talk to me about autonomous weapons.

55:32.480 --> 55:39.360
 And they were interesting, smart, well-educated guys who didn't know too much about AI or

55:39.360 --> 55:43.600
 machine learning. And the first question they asked me was, has your robot ever done something you

55:43.600 --> 55:49.120
 didn't expect? And I burst out laughing, because anybody who's ever worked with a robot

55:49.120 --> 55:54.720
 knows that they don't do much. And what I realized was that their model of how we program

55:54.720 --> 55:59.440
 a robot was completely wrong. Their model of how we program a robot was

55:59.440 --> 56:05.600
 like Lego Mindstorms: oh, go forward a meter, turn left, take a picture,

56:05.600 --> 56:11.120
 do this, do that. And so if you have that model of programming, then it's true, it's kind of weird

56:11.120 --> 56:16.240
 that your robot would do something that you didn't anticipate. But the fact is, and actually,

56:16.240 --> 56:21.840
 so now this is my new educational mission: if I have to talk to non-experts, I try to teach them

56:22.720 --> 56:28.080
 the idea that we operate at least one or maybe many levels of abstraction

56:28.080 --> 56:33.280
 above that. And we say, oh, here's a hypothesis class; maybe it's a space of plans, or maybe it's a

56:33.280 --> 56:38.400
 space of classifiers, or whatever. But there's some set of answers and an objective function. And

56:38.400 --> 56:44.800
 then we work on some optimization method that tries to optimize a solution in that class.

56:46.080 --> 56:50.560
 And we don't know what solution is going to come out. Right. So I think it's important to

56:50.560 --> 56:55.520
 communicate that. So I mean, of course, probably people who listen to this, they know that lesson.
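
A schematic sketch of that level of abstraction, with invented names and a toy hypothesis class of threshold rules: the engineer supplies a space of answers and an objective, a generic optimizer searches, and the particular solution that comes out is not something anyone wrote by hand.

```python
import random

def optimize(sample_hypothesis, objective, iterations=10_000, seed=0):
    """Generic search: we specify the hypothesis space and the objective,
    not the solution that eventually comes out."""
    rng = random.Random(seed)
    best_h, best_score = None, float("-inf")
    for _ in range(iterations):
        h = sample_hypothesis(rng)
        score = objective(h)
        if score > best_score:
            best_h, best_score = h, score
    return best_h

# Hypothesis class: threshold rules over one feature; objective: accuracy on toy data.
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
sample = lambda rng: rng.uniform(0.0, 1.0)                       # a candidate threshold
accuracy = lambda t: sum((x > t) == bool(y) for x, y in data) / len(data)

print(optimize(sample, accuracy))   # some threshold near 0.5 that nobody picked by hand
```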

56:55.520 --> 56:59.600
 But I think it's really critical to communicate that lesson. And then lots of people are now

56:59.600 --> 57:05.600
 talking about, you know, the value alignment problem. So you want to be sure, as robots or

57:06.480 --> 57:11.280
 software systems get more competent, that their objectives are aligned with your objectives,

57:11.280 --> 57:17.680
 or that our objectives are compatible in some way, or we have a good way of mediating when they have

57:17.680 --> 57:22.240
 different objectives. And so I think it is important to start thinking in those terms, like,

57:22.240 --> 57:28.480
 you don't have to be freaked out by the robot apocalypse to accept that it's important to think

57:28.480 --> 57:33.760
 about objective functions and value alignment. And everyone who's done

57:33.760 --> 57:38.160
 optimization knows that you have to be careful what you wish for, because, you know, sometimes you get

57:38.160 --> 57:45.280
 the optimal solution and you realize, man, that objective was wrong. So, pragmatically,

57:45.280 --> 57:51.360
 in the shortest term, it seems to me that those are really interesting and critical questions.
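
A toy, invented illustration of "be careful what you wish for": the objective actually written down counts only reported failures, so its optimum is the degenerate do-nothing controller rather than the behavior that was intended.

```python
# Hypothetical candidate controllers and their (made-up) statistics.
candidates = {
    "careful_controller":  {"tasks_done": 90,  "failures": 3},
    "reckless_controller": {"tasks_done": 100, "failures": 30},
    "do_nothing":          {"tasks_done": 0,   "failures": 0},
}

def misspecified_objective(stats):
    # What we wrote down: just penalize failures.
    return -stats["failures"]

def intended_objective(stats):
    # What we meant: get work done, with failures as a cost.
    return stats["tasks_done"] - 10 * stats["failures"]

best_by_written = max(candidates, key=lambda c: misspecified_objective(candidates[c]))
best_by_intent  = max(candidates, key=lambda c: intended_objective(candidates[c]))
print(best_by_written)  # 'do_nothing'  -- the optimum of the objective we asked for
print(best_by_intent)   # 'careful_controller' -- the behavior we actually wanted
```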

57:51.360 --> 57:55.680
 And the idea that we're going to go from being people who engineer algorithms to being people

57:55.680 --> 58:00.800
 who engineer objective functions, I think that's, that's definitely going to happen. And that's

58:00.800 --> 58:03.360
 going to change our thinking and methodology and stuff.

58:03.360 --> 58:07.520
 We're going to, well, you started at Stanford in philosophy, which you could say became science,

58:07.520 --> 58:13.840
 and maybe we will go back to philosophy. Well, I mean, they're mixed together, because,

58:13.840 --> 58:18.240
 as we also know, as machine learning people, right? When you design, in fact, this is the

58:18.240 --> 58:23.360
 lecture I gave in class today, when you design an objective function, you have to wear both hats.

58:23.360 --> 58:28.320
 There's the hat that says, what do I want? And there's the hat that says, but I know what my

58:28.320 --> 58:34.240
 optimizer can do to some degree. And I have to take that into account. So it's, it's always a

58:34.240 --> 58:40.480
 trade-off, and we have to kind of be mindful of that. The part about taking people's jobs,

58:40.480 --> 58:47.360
 I understand that that's important, but I don't understand sociology or economics or people

58:47.360 --> 58:51.840
 very well. So I don't know how to think about that. So that's, yeah, so there might be a

58:51.840 --> 58:56.640
 sociological aspect there, an economic aspect, that's very difficult to think about. Okay.

58:56.640 --> 59:00.000
 I mean, I think other people should be thinking about it, but I'm just, that's not my strength.

59:00.000 --> 59:04.320
 So what do you think is the most exciting area of research in the short term,

59:04.320 --> 59:08.560
 for the community and for yourself? Well, so, I mean, there's this story I've been

59:08.560 --> 59:16.480
 telling about how to engineer intelligent robots. So that's what we want to do. We all kind of want

59:16.480 --> 59:20.960
 to do, well, I mean, some set of us want to do this. And the question is, what's the most effective

59:20.960 --> 59:25.840
 strategy? And we've tried; there's a bunch of different things you could do. At the extremes,

59:25.840 --> 59:32.000
 right? One super extreme is we do introspection and we write a program. Okay, that has not worked

59:32.000 --> 59:37.360
 out very well. Another extreme is we take a giant bunch of neural goo and we try to train it up to

59:37.360 --> 59:43.040
 do something. I don't think that's going to work either. So the question is, what's the middle

59:43.040 --> 59:49.840
 ground? And again, this isn't a theological question or anything like that. It's just,

59:49.840 --> 59:57.040
 just, what's the best way to make this work out? And I think it's clear,

59:57.040 --> 1:00:01.840
 to me, it's a combination of learning and not learning.

1:00:02.400 --> 1:00:05.920
 And what should that combination be? And what's the stuff we build in? So to me,

1:00:05.920 --> 1:00:10.080
 that's the most compelling question. And when you say engineer robots, you mean

1:00:10.080 --> 1:00:15.600
 engineering systems that work in the real world? That's the emphasis.

1:00:17.600 --> 1:00:23.200
 Last question: which robot, or robots, from science fiction is your favorite?

1:00:24.480 --> 1:00:32.960
 So you can go with Star Wars and R2-D2, or you can go with something more modern, maybe HAL.

1:00:32.960 --> 1:00:37.040
 No, sir, I don't think I have a favorite robot from science fiction.

1:00:37.040 --> 1:00:45.520
 This is back to, you like to make robots work in the real world here, not in fiction.

1:00:45.520 --> 1:00:50.000
 I mean, I love the process. And I care more about the process.

1:00:50.000 --> 1:00:51.040
 The engineering process.

1:00:51.600 --> 1:00:55.760
 Yeah. I mean, I do research because it's fun, not because I care about what we produce.

1:00:57.520 --> 1:01:01.920
 Well, that's, that's a beautiful note, actually. And Leslie, thank you so much for talking today.

1:01:01.920 --> 1:01:07.920
 Sure, it's been fun.