WEBVTT
00:00.000 --> 00:05.040
The following is a conversation from Stuart Russell. He's a professor of computer science at UC
00:05.040 --> 00:11.360
Berkeley and a coauthor of a book that introduced me and millions of other people to the amazing world
00:11.360 --> 00:18.320
of AI called Artificial Intelligence: A Modern Approach. So it was an honor for me to have this
00:18.320 --> 00:24.480
conversation as part of the MIT course on artificial general intelligence and the Artificial Intelligence
00:24.480 --> 00:30.800
podcast. If you enjoy it, please subscribe on YouTube, iTunes or your podcast provider of choice
00:31.360 --> 00:37.600
or simply connect with me on Twitter at Lex Fridman spelled F R I D. And now here's my
00:37.600 --> 00:46.160
conversation with Stuart Russell. So you've mentioned that in 1975, in high school, you created
00:46.160 --> 00:54.160
one of your first AI programs that played chess. Were you ever able to build a program that
00:54.160 --> 01:02.080
beat you at chess or another board game? So my program never beat me at chess.
01:03.520 --> 01:10.480
I actually wrote the program at Imperial College. So I used to take the bus every Wednesday with a
01:10.480 --> 01:17.200
box of cards this big and shove them into the card reader and they gave us eight seconds of CPU time.
01:17.200 --> 01:24.720
It took about five seconds to read the cards in and compile the code. So we had three seconds of
01:24.720 --> 01:30.960
CPU time, which was enough to make one move, you know, with a not very deep search. And then we
01:30.960 --> 01:34.960
would print that move out and then we'd have to go to the back of the queue and wait to feed the
01:34.960 --> 01:40.480
cards in again. How deep was the search? Well, are we talking about two moves? So no, I think we've
01:40.480 --> 01:48.000
got a depth eight search, you know, with alpha beta. And we had some tricks of
01:48.000 --> 01:54.480
our own about move ordering and some pruning of the tree. And you were still able to beat that
01:54.480 --> 02:00.960
program? Yeah, yeah, I was a reasonable chess player in my youth. I did an Othello program
02:01.680 --> 02:05.920
and a backgammon program. So when I got to Berkeley, I worked a lot on
02:05.920 --> 02:12.560
what we call meta reasoning, which really means reasoning about reasoning. And in the case of
02:13.200 --> 02:18.320
a game playing program, you need to reason about what parts of the search tree you're actually
02:18.320 --> 02:23.440
going to explore, because the search tree is enormous, you know, bigger than the number of
02:23.440 --> 02:30.960
atoms in the universe. And the way programs succeed and the way humans succeed is by only
02:30.960 --> 02:36.160
looking at a small fraction of the search tree. And if you look at the right fraction, you play
02:36.160 --> 02:41.360
really well. If you look at the wrong fraction, if you waste your time thinking about things that
02:41.360 --> 02:45.840
are never going to happen, the moves that no one's ever going to make, then you're going to lose,
02:45.840 --> 02:53.760
because you won't be able to figure out the right decision. So that question of how machines can
02:53.760 --> 02:59.760
manage their own computation, how they decide what to think about is the meta reasoning question.
02:59.760 --> 03:05.920
We developed some methods for doing that. And very simply, a machine should think about
03:06.640 --> 03:11.920
whatever thoughts are going to improve its decision quality. We were able to show that
03:12.640 --> 03:18.240
both for Othello, which is a standard two player game, and for backgammon, which includes
03:19.040 --> 03:24.000
dice rolls, so it's a two player game with uncertainty. For both of those cases, we could
03:24.000 --> 03:30.480
come up with algorithms that were actually much more efficient than the standard alpha beta search,
03:31.120 --> 03:36.560
which chess programs at the time were using. And those programs could beat me.
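For a concrete picture of the baseline being compared against here, the sketch below shows a fixed-depth alpha-beta search with simple move ordering. It is only an illustration, not the 1970s program or the Berkeley meta-reasoning algorithms; the Game interface (is_terminal, evaluate, legal_moves, apply) is hypothetical.

```python
# A minimal fixed-depth alpha-beta search in negamax form, for illustration only.
# The Game interface used here (is_terminal, evaluate, legal_moves, apply) is
# hypothetical; evaluate() scores a state from the point of view of the side to move.

def alpha_beta(game, state, depth, alpha=float("-inf"), beta=float("inf")):
    """Negamax value of `state`, searched `depth` plies deep with alpha-beta pruning."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    # Heuristic move ordering: look first at replies the static evaluator thinks
    # are worst for the opponent; good ordering makes the cutoffs far more effective.
    moves = sorted(game.legal_moves(state),
                   key=lambda m: game.evaluate(game.apply(state, m)))
    best = float("-inf")
    for move in moves:
        value = -alpha_beta(game, game.apply(state, move), depth - 1, -beta, -alpha)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:      # the opponent would never allow this line: prune it
            break
    return best

def choose_move(game, state, depth=8):
    """Pick the move with the best alpha-beta value at the given search depth."""
    return max(game.legal_moves(state),
               key=lambda m: -alpha_beta(game, game.apply(state, m), depth - 1))
```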
03:38.080 --> 03:44.720
And I think you can see the same basic ideas in AlphaGo and AlphaZero today.
03:44.720 --> 03:52.560
The way they explore the tree is using a form of meta reasoning to select what to think about
03:52.560 --> 03:57.840
based on how useful it is to think about it. Are there any insights you can describe,
03:57.840 --> 04:03.040
without Greek symbols, of how we select which paths to go down?
04:04.240 --> 04:10.560
There's really two kinds of learning going on. So as you say, AlphaGo learns
04:10.560 --> 04:17.680
to evaluate board position. So it can look at a go board. And it actually has probably a super
04:17.680 --> 04:25.760
human ability to instantly tell how promising that situation is. To me, the amazing thing about
04:25.760 --> 04:34.560
AlphaGo is not that it can beat the world champion with its hands tied behind its back, but the fact that
04:34.560 --> 04:41.360
if you stop it from searching altogether, so you say, okay, you're not allowed to do
04:41.360 --> 04:46.480
any thinking ahead. You can just consider each of your legal moves and then look at the
04:47.120 --> 04:53.280
resulting situation and evaluate it. So what we call a depth one search. So just the immediate
04:53.280 --> 04:57.920
outcome of your moves and decide if that's good or bad. That version of AlphaGo
04:57.920 --> 05:05.200
can still play at a professional level. And human professionals are sitting there for
05:05.200 --> 05:13.440
five, 10 minutes deciding what to do. And AlphaGo in less than a second can instantly intuit what
05:13.440 --> 05:18.880
is the right move to make based on its ability to evaluate positions. And that is remarkable
05:19.760 --> 05:26.080
because we don't have that level of intuition about go. We actually have to think about the
05:26.080 --> 05:35.200
situation. So anyway, that capability that AlphaGo has is one big part of why it beats humans.
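That depth one search is easy to write down; here is a sketch, where value_fn stands in for a learned position evaluator such as AlphaGo's value network, and the Game interface (legal_moves, apply) is again hypothetical.

```python
# Depth-one search: evaluate the position reached by each legal move, no lookahead.
# `value_fn` stands in for a learned evaluator (e.g. a value network); the Game
# interface (legal_moves, apply) is hypothetical.

def depth_one_move(game, state, value_fn):
    best_move, best_value = None, float("-inf")
    for move in game.legal_moves(state):
        value = value_fn(game.apply(state, move))   # how promising is the resulting position?
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```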
05:35.840 --> 05:44.560
The other big part is that it's able to look ahead 40, 50, 60 moves into the future. And
05:46.880 --> 05:51.200
if it was considering all possibilities, 40 or 50 or 60 moves into the future,
05:51.200 --> 06:02.240
that would be 10 to the 200 possibilities. So way more than atoms in the universe and so on.
06:02.240 --> 06:09.680
So it's very, very selective about what it looks at. So let me try to give you an intuition about
06:10.880 --> 06:15.360
how you decide what to think about. It's a combination of two things. One is
06:15.360 --> 06:22.000
how promising it is. So if you're already convinced that a move is terrible,
06:22.560 --> 06:26.560
there's no point spending a lot more time convincing yourself that it's terrible.
06:27.520 --> 06:33.600
Because it's probably not going to change your mind. So the real reason you think is because
06:33.600 --> 06:39.920
there's some possibility of changing your mind about what to do. And it's that changing of your mind
06:39.920 --> 06:46.000
that would result then in a better final action in the real world. So that's the purpose of thinking
06:46.800 --> 06:53.520
is to improve the final action in the real world. And so if you think about a move that is guaranteed
06:53.520 --> 06:58.000
to be terrible, you can convince yourself it's terrible, you're still not going to change your
06:58.000 --> 07:04.320
mind. But on the other hand, suppose you had a choice between two moves, one of them you've
07:04.320 --> 07:10.400
already figured out is guaranteed to be a draw, let's say. And then the other one looks a little
07:10.400 --> 07:14.000
bit worse. Like it looks fairly likely that if you make that move, you're going to lose.
07:14.640 --> 07:20.720
But there's still some uncertainty about the value of that move. There's still some possibility
07:20.720 --> 07:25.920
that it will turn out to be a win. Then it's worth thinking about that. So even though it's
07:25.920 --> 07:31.280
less promising on average than the other move, which is guaranteed to be a draw, there's still
07:31.280 --> 07:36.160
some purpose in thinking about it because there's a chance that you'll change your mind and discover
07:36.160 --> 07:42.080
that in fact it's a better move. So it's a combination of how good the move appears to be
07:42.080 --> 07:48.000
and how much uncertainty there is about its value. The more uncertainty, the more it's worth thinking
07:48.000 --> 07:52.800
about because there's a higher upside if you want to think of it that way. And of course in the
07:52.800 --> 07:59.920
beginning, especially in the AlphaGo Zero formulation, it's everything is shrouded in
07:59.920 --> 08:06.240
uncertainty. So you're really swimming in a sea of uncertainty. So it benefits you to
08:07.520 --> 08:11.120
I mean, you're actually following the same process as you described, but because you're so uncertain
08:11.120 --> 08:15.280
about everything, you basically have to try a lot of different directions.
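As a toy illustration of that selection rule, the sketch below gives each candidate move an estimated value and an uncertainty, and spends thinking time only on moves whose optimistic value could still overtake the current favourite. This is just the intuition in code, not AlphaGo's actual tree search; all the names and numbers are made up.

```python
# Toy sketch of "think about the moves whose uncertain value could still change
# the decision." This conveys the intuition only; it is not AlphaGo's algorithm.

def pick_move_to_think_about(moves):
    """`moves` is a list of (name, mean_value, std_dev) estimates."""
    best_mean = max(mean for _, mean, _ in moves)

    def upside(move):
        _, mean, std = move
        return mean + 2.0 * std       # optimistic estimate: how good it could still be

    # Only moves whose optimistic value beats the current favourite are worth
    # more thought; among those, prefer the one with the largest upside.
    candidates = [m for m in moves if upside(m) > best_mean]
    return max(candidates, key=upside, default=None)

# Example from the conversation: a guaranteed draw versus a move that looks a bit
# worse but whose value is still uncertain enough that it might turn out to be a win.
moves = [("guaranteed draw", 0.0, 0.0), ("uncertain sacrifice", -0.3, 0.4)]
print(pick_move_to_think_about(moves))   # -> the uncertain move gets the thinking time
```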
08:15.280 --> 08:22.400
Yeah. So the early parts of the search tree are fairly bushy, in that it will look at a lot
08:22.400 --> 08:27.840
of different possibilities, but fairly quickly, the degree of certainty about some of the moves increases.
08:27.840 --> 08:31.760
I mean, if a move is really terrible, you'll pretty quickly find out, right? You'll lose
08:31.760 --> 08:37.200
half your pieces or half your territory. And then you'll say, okay, this is not worth thinking
08:37.200 --> 08:45.280
about anymore. And then so further down, the tree becomes very long and narrow. And you're following
08:45.280 --> 08:54.800
various lines of play, 10, 20, 30, 40, 50 moves into the future. And that again is something
08:54.800 --> 09:01.920
that human beings have a very hard time doing mainly because they just lack the short term memory.
09:02.480 --> 09:09.440
You just can't remember a sequence of moves that's 50 moves long. And you can't imagine
09:09.440 --> 09:15.520
the board correctly for that many moves into the future. Of course, the top players,
09:16.400 --> 09:19.280
I'm much more familiar with chess, but the top players probably have,
09:19.280 --> 09:26.480
they have echoes of the same kind of intuition instinct that in a moment's time, AlphaGo applies
09:27.280 --> 09:31.760
when they see a board. I mean, they've seen those patterns, human beings have seen those
09:31.760 --> 09:37.680
patterns before at the top, at the grandmaster level. It seems that there are some
09:40.000 --> 09:45.920
similarities, or maybe it's our imagination that creates a vision of those similarities, but it
09:45.920 --> 09:53.120
feels like this kind of pattern recognition that the AlphaGo approaches are using is similar to
09:53.120 --> 10:00.560
what human beings at the top level are using. I think there's some truth to that.
10:01.520 --> 10:08.960
But not entirely. Yeah, I mean, I think the extent to which a human grandmaster can reliably
10:10.080 --> 10:13.680
instantly recognize the right move and instantly recognize the value of a position.
10:13.680 --> 10:19.120
I think that's a little bit overrated. But if you sacrifice a queen, for example,
10:19.120 --> 10:23.840
I mean, there's these beautiful games of chess with Bobby Fischer or somebody, where
10:24.640 --> 10:32.720
they're seemingly making a bad move. And I'm not sure there's a perfect degree of calculation
10:32.720 --> 10:37.440
involved where they've calculated all the possible things that happen. But there's an
10:37.440 --> 10:45.440
instinct there, right, that somehow adds up to. Yeah, so I think what happens is you get a sense
10:45.440 --> 10:51.680
that there's some possibility in the position, even if you make a weird looking move, that it
10:51.680 --> 11:05.120
opens up some lines of calculation that otherwise would be definitely bad. And it's that intuition
11:05.120 --> 11:13.920
that there's something here in this position that might yield a win down the line. And then
11:13.920 --> 11:20.640
you follow that. Right. And in some sense, when a chess player is following a line in his or her
11:20.640 --> 11:27.120
mind, they're mentally simulating what the other person is going to do, what the opponent
11:27.120 --> 11:33.680
is going to do. And they can do that as long as the moves are kind of forced, right, as long as
11:33.680 --> 11:39.440
there's what we call a forcing variation, where the opponent doesn't really have much choice how to
11:39.440 --> 11:45.200
respond. And then you see if you can force them into a situation where you win. We see plenty
11:45.200 --> 11:53.520
of mistakes, even in grandmaster games, where they just miss some simple three, four, five move
11:54.560 --> 12:00.400
combination that wasn't particularly apparent in the position, but was still there.
12:00.400 --> 12:07.360
That's the thing that makes us human. Yeah. So when you mentioned that in Othello, those games
12:07.360 --> 12:13.760
were, after some meta reasoning improvements and research, able to beat you. How did that make
12:13.760 --> 12:21.280
you feel? Part of the meta reasoning capability that it had was based on learning,
12:23.280 --> 12:28.160
and you could sit down the next day and you could just feel that it had got a lot smarter.
12:28.160 --> 12:33.280
You know, and all of a sudden, you really felt like you're sort of pressed against
12:34.480 --> 12:40.800
the wall because it was much more aggressive and was totally unforgiving of any
12:40.800 --> 12:47.760
minor mistake that you might make. And actually, it seemed to understand the game better than I did.
12:47.760 --> 12:55.520
And Garry Kasparov has this quote where during his match against Deep Blue, he said he suddenly
12:55.520 --> 13:01.680
felt that there was a new kind of intelligence across the board. Do you think that's a scary or
13:01.680 --> 13:10.240
an exciting possibility for Kasparov and for yourself in the context of chess purely sort of
13:10.240 --> 13:16.720
in this, like, that feeling, whatever that is? I think it's definitely an exciting feeling.
13:17.600 --> 13:23.680
You know, this is what made me work on AI in the first place was as soon as I really understood
13:23.680 --> 13:30.080
what a computer was, I wanted to make it smart. You know, I started out with the first program I
13:30.080 --> 13:38.640
wrote was for the Sinclair Programmable Calculator. And I think you could write a 21 step algorithm.
13:38.640 --> 13:44.160
That was the biggest program you could write, something like that, and do little arithmetic
13:44.160 --> 13:49.440
calculations. So I think I implemented Newton's method for square roots and a few other things
13:49.440 --> 13:56.640
like that. But then, you know, I thought, okay, if I just had more space, I could make this thing
13:56.640 --> 14:10.560
intelligent. And so I started thinking about AI. And I think the thing that's scary is not the
14:10.560 --> 14:19.520
chess program, because you know, chess programs, they're not in the taking over the world business.
14:19.520 --> 14:29.440
But if you extrapolate, you know, there are things about chess that don't resemble the real
14:29.440 --> 14:37.600
world, right? We know the rules of chess. The chess board is completely visible
14:37.600 --> 14:43.280
to the program, where, of course, the real world is not. Most of the real world is not visible from
14:43.280 --> 14:52.400
wherever you're sitting, so to speak. And to overcome those kinds of problems, you need
14:52.400 --> 14:58.240
qualitatively different algorithms. Another thing about the real world is that, you know, we
14:58.240 --> 15:07.520
regularly plan ahead on timescales involving billions or trillions of steps. Now,
15:07.520 --> 15:13.760
we don't plan those in detail. But, you know, when you choose to do a PhD at Berkeley,
15:14.800 --> 15:20.480
that's a five year commitment that amounts to about a trillion motor control steps that you
15:20.480 --> 15:26.160
will eventually be committed to. Including going up the stairs, opening doors,
15:26.160 --> 15:32.880
drinking water, typing. Yeah, I mean, every finger movement while you're typing every character
15:32.880 --> 15:37.280
of every paper and the thesis and everything. So you're not committing in advance to the specific
15:37.280 --> 15:43.760
motor control steps, but you're still reasoning on a timescale that will eventually reduce to
15:44.400 --> 15:50.000
trillions of motor control actions. And so for all these reasons,
15:50.000 --> 15:58.160
you know, AlphaGo and Deep Blue and so on don't represent any kind of threat to humanity. But
15:58.160 --> 16:08.320
they are a step towards it, right? And progress in AI occurs by essentially removing one by one
16:08.320 --> 16:14.640
these assumptions that make problems easy, like the assumption of complete observability
16:14.640 --> 16:21.120
of the situation, right? When we remove that assumption, you need a much more complicated kind of computing
16:22.160 --> 16:26.000
design and you need something that actually keeps track of all the things you can't see
16:26.000 --> 16:31.920
and tries to estimate what's going on. And there's inevitable uncertainty in that. So it becomes a
16:31.920 --> 16:38.160
much more complicated problem. But, you know, we are removing those assumptions, we are starting to
16:38.160 --> 16:44.400
have algorithms that can cope with much longer timescales, cope with uncertainty that can cope
16:44.400 --> 16:53.360
with partial observability. And so each of those steps sort of magnifies by a thousand the range
16:53.360 --> 16:58.400
of things that we can do with AI systems. So the way I started in AI, I wanted to be a psychiatrist
16:58.400 --> 17:03.840
for a long time and understand the mind in high school, and of course program and so on. And I
17:03.840 --> 17:10.640
showed up at the University of Illinois to an AI lab and they said, okay, I don't have time for you, but here
17:10.640 --> 17:18.480
is a book, AI: A Modern Approach, I think it was the first edition at the time. Here, go learn this.
17:18.480 --> 17:23.120
And I remember the lay of the land was, well, it's incredible that we solved chess, but we'll
17:23.120 --> 17:30.480
never solve go. I mean, it was pretty certain that go in the way we thought about systems that reason
17:31.520 --> 17:36.080
wasn't possible to solve. And now we've solved it. So it's a very... Well, I think I would have said
17:36.080 --> 17:44.080
that it's unlikely we could take the kind of algorithm that was used for chess and just get
17:44.080 --> 17:55.680
it to scale up and work well for go. And at the time, what we thought was that in order to solve
17:55.680 --> 18:01.600
go, we would have to do something similar to the way humans manage the complexity of go,
18:01.600 --> 18:06.960
which is to break it down into kind of sub games. So when a human thinks about a go board,
18:06.960 --> 18:12.480
they think about different parts of the board as sort of weakly connected to each other.
18:12.480 --> 18:16.880
And they think about, okay, within this part of the board, here's how things could go.
18:16.880 --> 18:20.560
In that part of the board, here's how things could go. And then you try to sort of couple those
18:20.560 --> 18:26.400
two analyses together and deal with the interactions and maybe revise your views of how things are
18:26.400 --> 18:32.160
going to go in each part. And then you've got maybe five, six, seven, 10 parts of the board. And
18:33.440 --> 18:40.640
that actually resembles the real world much more than chess does. Because in the real world,
18:41.440 --> 18:49.200
we have work, we have home life, we have sport, whatever different kinds of activities, shopping,
18:49.200 --> 18:57.040
these all are connected to each other, but they're weakly connected. So when I'm typing a paper,
18:58.480 --> 19:03.600
I don't simultaneously have to decide which order I'm going to get the milk and the butter.
19:04.400 --> 19:09.760
That doesn't affect the typing. But I do need to realize, okay, better finish this
19:10.320 --> 19:14.080
before the shops close because I don't have anything, I don't have any food at home.
19:14.080 --> 19:20.560
So there's some weak connection, but not in the way that chess works, where everything is tied
19:20.560 --> 19:27.600
into a single stream of thought. So the thought was that to solve go, we would have to make progress
19:27.600 --> 19:31.600
on stuff that would be useful for the real world. And in a way, AlphaGo is a little bit disappointing
19:32.480 --> 19:38.160
because the program designed for AlphaGo is actually not that different from
19:38.160 --> 19:45.840
Deep Blue or even from Arthur Samuel's checker playing program from the 1950s.
19:48.160 --> 19:54.560
And in fact, the two things that make AlphaGo work is one is this amazing ability to evaluate
19:54.560 --> 19:59.200
the positions. And the other is the meta reasoning capability, which allows it to
19:59.200 --> 20:06.960
explore some paths in the tree very deeply and to abandon other paths very quickly.
20:06.960 --> 20:14.640
So this word meta reasoning, while technically correct, inspires perhaps the wrong
20:16.000 --> 20:21.360
degree of power that AlphaGo has, for example, the word reasoning is a powerful word. So let me
20:21.360 --> 20:29.840
ask you sort of, you were part of the symbolic AI world for a while, like where AI was, there's
20:29.840 --> 20:38.960
a lot of excellent interesting ideas there that unfortunately met a winter. And so do you think
20:38.960 --> 20:46.800
it reemerges? Oh, so I would say, yeah, it's not quite as simple as that. So the AI winter,
20:46.800 --> 20:54.400
the first winter that was actually named as such was the one in the late 80s.
20:56.400 --> 21:00.880
And that came about because in the mid 80s, there was
21:03.280 --> 21:10.480
really a concerted attempt to push AI out into the real world using what was called
21:10.480 --> 21:17.280
expert system technology. And for the most part, that technology was just not ready for prime
21:17.280 --> 21:27.200
time. They were trying in many cases to do a form of uncertain reasoning, judgments, combinations of
21:27.200 --> 21:34.640
evidence, diagnosis, those kinds of things, which was simply invalid. And when you try to apply
21:34.640 --> 21:40.960
invalid reasoning methods to real problems, you can fudge it for small versions of the problem.
21:40.960 --> 21:47.200
But when it starts to get larger, the thing just falls apart. So many companies found that
21:49.040 --> 21:53.440
the stuff just didn't work. And they were spending tons of money on consultants to
21:53.440 --> 21:59.600
try to make it work. And there were other practical reasons, like they were asking
21:59.600 --> 22:07.760
the companies to buy incredibly expensive Lisp machine workstations, which were literally
22:07.760 --> 22:17.680
between $50,000 and $100,000 in 1980s money, which would be between $150,000 and $300,000 per
22:17.680 --> 22:24.000
workstation in current prices. Then the bottom line, they weren't seeing a profit from it.
22:24.000 --> 22:29.840
Yeah. In many cases, I think there were some successes. There's no doubt about that. But
22:30.880 --> 22:37.760
people, I would say, over invested. Every major company was starting an AI department just like
22:37.760 --> 22:45.840
now. And I worry a bit that we might see similar disappointments, not because the
22:45.840 --> 22:57.600
current technology is invalid, but it's limited in its scope. And it's almost the dual of the
22:57.600 --> 23:03.360
scope problems that expert systems had. What have you learned from that hype cycle? And
23:03.360 --> 23:09.760
what can we do to prevent another winter, for example? Yeah. So when I'm giving talks these
23:09.760 --> 23:17.520
days, that's one of the warnings that I give. So there's a two part warning slide. One is that
23:18.480 --> 23:24.000
rather than data being the new oil, data is the new snake oil. That's a good line. And then
23:26.000 --> 23:35.440
the other is that we might see a very visible failure in some of the major application areas.
23:35.440 --> 23:42.400
And I think self driving cars would be the flagship. And I think
23:43.600 --> 23:48.560
when you look at the history, so the first self driving car was on the freeway,
23:51.200 --> 24:00.400
driving itself, changing lanes, overtaking in 1987. And so it's more than 30 years.
24:00.400 --> 24:06.720
And that kind of looks like where we are today, right? Prototypes on the freeway,
24:06.720 --> 24:13.760
changing lanes and overtaking. Now, I think significant progress has been made, particularly
24:13.760 --> 24:20.560
on the perception side. So we worked a lot on autonomous vehicles in the early, mid 90s at
24:20.560 --> 24:29.040
Berkeley. And we had our own big demonstrations. We put congressmen into self driving cars and
24:29.040 --> 24:36.000
had them zooming along the freeway. And the problem was clearly perception.
24:37.520 --> 24:42.880
At the time, the problem was perception. Yeah. So in simulation, with perfect perception,
24:42.880 --> 24:47.200
you could actually show that you can drive safely for a long time, even if the other cars
24:47.200 --> 24:55.360
are misbehaving and so on. But simultaneously, we worked on machine vision for detecting cars and
24:55.360 --> 25:03.040
tracking pedestrians and so on. And we couldn't get the reliability of detection and tracking
25:03.040 --> 25:11.440
up to a high enough level, particularly in bad weather conditions, nighttime rainfall.
25:11.440 --> 25:16.000
Good enough for demos, but perhaps not good enough to cover the general operation.
25:16.000 --> 25:20.800
Yeah. So the thing about driving is, so suppose you're a taxi driver and you drive every day,
25:20.800 --> 25:27.360
eight hours a day for 10 years, that's 100 million seconds of driving. And any one of those
25:27.360 --> 25:33.280
seconds, you can make a fatal mistake. So you're talking about eight nines of reliability.
25:34.960 --> 25:43.840
Now, if your vision system only detects 98.3% of the vehicles, that's sort of one
25:43.840 --> 25:52.720
and a bit nines of reliability. So you have another seven orders of magnitude to go. And this is
25:52.720 --> 25:57.920
what people don't understand. They think, oh, because I had a successful demo, I'm pretty much
25:57.920 --> 26:07.440
done. But you're not even within seven orders of magnitude of being done. And that's the difficulty.
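Spelling out that arithmetic makes the gap vivid; here is a back-of-the-envelope sketch that treats every second of driving as an independent chance of a fatal mistake, which is of course a simplification.

```python
# Back-of-the-envelope version of the taxi-driver reliability argument.
# Treating each second of driving as an independent chance of a fatal mistake
# is a simplification, but it shows the scale of the gap.
import math

seconds_of_driving = 8 * 3600 * 365 * 10            # 8 hours a day for 10 years
print(f"seconds of driving: {seconds_of_driving:.2e}")                # ~1.1e8, i.e. ~100 million

required_failure_rate = 1 / seconds_of_driving      # at most ~1e-8 mistakes per second
print(f"required nines: {-math.log10(required_failure_rate):.1f}")    # ~8 nines

detection_rate = 0.983                               # vision detects 98.3% of vehicles
miss_rate = 1 - detection_rate
print(f"nines you actually have: {-math.log10(miss_rate):.1f}")        # ~1.8, "one and a bit"

gap = math.log10(miss_rate / required_failure_rate)
print(f"orders of magnitude still to go: {gap:.1f}")                   # roughly six to seven
```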
26:07.440 --> 26:14.320
And it's not, can I follow a white line? That's not the problem. We follow a white line all the
26:14.320 --> 26:22.160
way across the country. But it's the weird stuff that happens. It's all the edge cases. Yeah.
26:22.160 --> 26:30.640
The edge case, other drivers doing weird things. So if you talk to Google, so they had actually
26:30.640 --> 26:36.560
a very classical architecture where you had machine vision, which would detect all the
26:36.560 --> 26:41.920
other cars and pedestrians and the white lines and the road signs. And then basically,
26:42.480 --> 26:49.680
that was fed into a logical database. And then you had a classical 1970s rule based expert system
26:52.000 --> 26:55.680
telling you, okay, if you're in the middle lane, and there's a bicyclist in the right lane,
26:55.680 --> 27:03.040
who is signaling this, then do that, right? And what they found was that every day they'd go
27:03.040 --> 27:07.760
out and there'd be another situation that the rules didn't cover. So they come to a traffic
27:07.760 --> 27:11.680
circle and there's a little girl riding her bicycle the wrong way around the traffic circle.
27:11.680 --> 27:17.520
Okay, what do you do? We don't have a rule. Oh my God. Okay, stop. And then they come back
27:17.520 --> 27:24.400
and add more rules. And they just found that this was not really converging. And if you think about
27:24.400 --> 27:31.280
it, right, how do you deal with an unexpected situation, meaning one that you've never previously
27:31.280 --> 27:37.200
encountered, and the sort of reasoning required to figure out the solution for that
27:37.200 --> 27:42.800
situation has never been done. It doesn't match any previous situation in terms of the kind of
27:42.800 --> 27:49.520
reasoning you have to do. Well, in chess programs, this happens all the time. You're constantly
27:49.520 --> 27:54.560
coming up with situations you haven't seen before. And you have to reason about them and you have
27:54.560 --> 27:59.840
to think about, okay, here are the possible things I could do. Here are the outcomes. Here's how
27:59.840 --> 28:04.560
desirable the outcomes are and then pick the right one. In the 90s, we were saying, okay,
28:04.560 --> 28:08.160
this is how you're going to have to do automated vehicles. They're going to have to have a look
28:08.160 --> 28:14.400
ahead capability. But the look ahead for driving is more difficult than it is for chess. Because
28:14.400 --> 28:20.720
of humans. Right, there's humans and they're less predictable than chess pieces. Well,
28:20.720 --> 28:28.240
then you have an opponent in chess who's also somewhat unpredictable. But for example, in chess,
28:28.240 --> 28:33.600
you always know the opponent's intention. They're trying to beat you. Whereas in driving, you don't
28:33.600 --> 28:39.040
know, is this guy trying to turn left or has he just forgotten to turn off his turn signal? Or is
28:39.040 --> 28:45.680
he drunk? Or is he changing the channel on his radio or whatever it might be, you got to try and
28:45.680 --> 28:52.560
figure out the mental state, the intent of the other drivers to forecast the possible evolutions
28:52.560 --> 28:58.160
of their trajectories. And then you got to figure out, okay, which is the trajectory for me that's
28:58.160 --> 29:04.000
going to be safest. And those all interact with each other because the other drivers are going
29:04.000 --> 29:09.120
to react to your trajectory and so on. So, you know, they've got the classic merging onto the
29:09.120 --> 29:14.640
freeway problem where you're kind of racing a vehicle that's already on the freeway, and
29:14.640 --> 29:17.680
are you going to pull ahead of them or are you going to let them go first and pull in behind
29:17.680 --> 29:23.680
and you get this sort of uncertainty about who's going first. So all those kinds of things
29:23.680 --> 29:34.720
mean that you need a decision making architecture that's very different from either a rule based
29:34.720 --> 29:41.360
system or it seems to me a kind of an end to end neural network system. You know, so just as Alpha
29:41.360 --> 29:47.360
Go is pretty good when it doesn't do any look ahead, but it's way, way, way, way better when it does.
29:47.360 --> 29:52.720
I think the same is going to be true for driving. You can have a driving system that's pretty good
29:54.080 --> 29:59.280
when it doesn't do any look ahead, but that's not good enough. You know, and we've already seen
29:59.920 --> 30:07.440
multiple deaths caused by poorly designed machine learning algorithms that don't really
30:07.440 --> 30:13.600
understand what they're doing. Yeah, and on several levels, I think it's on the perception side,
30:13.600 --> 30:19.520
there's mistakes being made by those algorithms where the perception is very shallow on the
30:19.520 --> 30:26.720
planning side, the look ahead, like you said, and the thing that we come up against that's
30:28.560 --> 30:32.080
really interesting when you try to deploy systems in the real world is
30:33.280 --> 30:37.680
you can't think of an artificial intelligence system as a thing that responds to the world always.
30:38.320 --> 30:41.600
You have to realize that it's an agent that others will respond to as well.
30:41.600 --> 30:47.200
Well, so in order to drive successfully, you can't just try to do obstacle avoidance.
30:47.840 --> 30:51.520
You can't pretend that you're invisible, right? You're the invisible car.
30:52.400 --> 30:57.280
It doesn't work that way. I mean, but you have to assert, yet others have to be scared of you,
30:57.280 --> 31:04.160
just there's this tension, there's this game. So we did a lot of work with pedestrians.
31:04.160 --> 31:09.360
If you approach pedestrians as purely an obstacle avoidance, so you're doing look
31:09.360 --> 31:15.040
ahead as in modeling the intent, they're going to take advantage of you.
31:15.040 --> 31:20.080
They're not going to respect you at all. There has to be a tension, a fear, some amount of
31:20.080 --> 31:26.720
uncertainty. That's how we have created. Or at least just a kind of a resoluteness.
31:28.000 --> 31:32.000
You have to display a certain amount of resoluteness. You can't be too tentative.
31:32.000 --> 31:42.480
Yeah. So the solutions then become pretty complicated. You get into game theoretic
31:42.480 --> 31:50.960
analyses. So at Berkeley now, we're working a lot on this kind of interaction between machines
31:50.960 --> 32:03.600
and humans. And that's exciting. So my colleague, Anca Dragan, actually, if you formulate the problem
32:03.600 --> 32:08.800
game theoretically and you just let the system figure out the solution, it does interesting,
32:08.800 --> 32:16.640
unexpected things. Like sometimes at a stop sign, if no one is going first, the car will
32:16.640 --> 32:23.200
actually back up a little. It's just to indicate to the other cars that they should go. And that's
32:23.200 --> 32:28.480
something it invented entirely by itself. That's interesting. We didn't say this is the language
32:28.480 --> 32:36.240
of communication at stop signs. It figured it out. That's really interesting. So let me just
32:36.240 --> 32:42.960
step back for a second. Just this beautiful philosophical notion. So Pamela McCorduck in
32:42.960 --> 32:50.320
1979 wrote AI began with the ancient wish to forge the gods. So when you think about the
32:50.320 --> 32:57.520
history of our civilization, do you think that there is an inherent desire to create,
32:58.960 --> 33:05.680
let's not say gods, but to create superintelligence? Is it inherent to us? Is it in our genes,
33:05.680 --> 33:13.680
that the natural arc of human civilization is to create things that are of greater and greater
33:13.680 --> 33:21.680
power and perhaps echoes of ourselves? So to create the gods, as Pamela said.
33:21.680 --> 33:34.160
It may be. I mean, we're all individuals, but certainly we see over and over again in history
33:35.760 --> 33:42.320
individuals who thought about this possibility. Hopefully, I'm not being too philosophical here.
33:42.320 --> 33:48.560
But if you look at the arc of this, where this is going and we'll talk about AI safety,
33:48.560 --> 33:55.840
we'll talk about greater and greater intelligence, do you see that when you created the Othello
33:55.840 --> 34:01.680
program and you felt this excitement, what was that excitement? Was it the excitement of a tinkerer
34:01.680 --> 34:10.240
who created something cool, like a clock? Or was there a magic, or was it more like a child being
34:10.240 --> 34:17.520
born? Yeah. So I mean, I certainly understand that viewpoint. And if you look at the Lighthill
34:17.520 --> 34:26.640
report, so in the 70s, there was a lot of controversy in the UK about AI and whether it
34:26.640 --> 34:34.720
was for real and how much money the government should invest. So it's a long story, but the
34:34.720 --> 34:43.280
government commissioned a report by Lighthill, who was a physicist, and he wrote a very damning
34:43.280 --> 34:53.920
report about AI, which I think was the point. And he said that these are frustrated men who
34:54.480 --> 35:05.760
unable to have children would like to create life as a kind of replacement, which I think is
35:05.760 --> 35:21.600
really pretty unfair. But there is a kind of magic, I would say, when you build something
35:25.680 --> 35:29.760
and what you're building in is really just some understanding of the
35:29.760 --> 35:37.120
principles of learning and decision making. And to see those principles actually then
35:37.840 --> 35:47.920
turn into intelligent behavior in specific situations, it's an incredible thing. And
35:47.920 --> 35:58.480
that is naturally going to make you think, okay, where does this end?
36:00.080 --> 36:08.240
And so there's magical, optimistic views of the world, and whatever your view of optimism is,
36:08.240 --> 36:13.360
whatever your view of utopia is, it's probably different for everybody. But you've often talked
36:13.360 --> 36:26.080
about concerns you have of how things might go wrong. So I've talked to Max Tegmark. There's a
36:26.080 --> 36:33.360
lot of interesting ways to think about AI safety. You're one of the seminal people thinking about
36:33.360 --> 36:39.360
this problem amongst sort of being in the weeds of actually solving specific AI problems,
36:39.360 --> 36:44.080
you're also thinking about the big picture of where we're going. So can you talk about
36:44.080 --> 36:49.200
several elements of it? Let's just talk about maybe the control problem. So this idea of
36:50.800 --> 36:58.720
losing the ability to control the behavior of our AI systems. So how do you see that? How do you see
36:58.720 --> 37:04.480
that coming about? What do you think we can do to manage it?
37:04.480 --> 37:11.520
Well, so it doesn't take a genius to realize that if you make something that's smarter than you,
37:11.520 --> 37:20.320
you might have a problem. Alan Turing wrote about this and gave lectures about this,
37:21.600 --> 37:32.480
in 1951. He did a lecture on the radio. And he basically says, once the machine thinking method
37:32.480 --> 37:45.600
starts, very quickly, they'll outstrip humanity. And if we're lucky, we might be able to turn off
37:45.600 --> 37:52.160
the power at strategic moments, but even so, our species would be humbled. And actually,
37:52.160 --> 37:56.240
I think he was wrong about that. If it's a sufficiently intelligent machine, it's not
37:56.240 --> 38:00.160
going to let you switch it off. It's actually in competition with you.
38:00.160 --> 38:05.840
So what do you think he meant, just for a quick tangent, if we shut off this
38:05.840 --> 38:08.800
super intelligent machine that our species would be humbled?
38:11.840 --> 38:20.560
I think he means that we would realize that we are inferior, that we only survive by the skin
38:20.560 --> 38:27.440
of our teeth because we happen to get to the off switch just in time. And if we hadn't,
38:27.440 --> 38:34.400
then we would have lost control over the earth. So are you more worried when you think about
38:34.400 --> 38:41.600
this stuff about super intelligent AI or are you more worried about super powerful AI that's not
38:41.600 --> 38:49.760
aligned with our values? So the paperclip scenarios kind of... I think so the main problem I'm
38:49.760 --> 38:58.960
working on is the control problem, the problem of machines pursuing objectives that are, as you
38:58.960 --> 39:06.720
say, not aligned with human objectives. And this has been the way we've thought about AI
39:06.720 --> 39:15.120
since the beginning. You build a machine for optimizing and then you put in some objective
39:15.120 --> 39:25.520
and it optimizes. And we can think of this as the King Midas problem. Because if King Midas
39:26.480 --> 39:32.640
put in this objective, everything I touch should turn to gold and the gods, that's like the machine,
39:32.640 --> 39:39.360
they said, okay, done. You now have this power and of course his food and his drink and his family
39:39.360 --> 39:50.080
all turned to gold and then he dies of misery and starvation. It's a warning, it's a failure mode that
39:50.080 --> 39:56.160
pretty much every culture in history has had some story along the same lines. There's the
39:56.160 --> 40:01.920
genie that gives you three wishes and the third wish is always, please undo the first two wishes because
40:01.920 --> 40:11.920
I messed up. And when Arthur Samuel wrote his checker playing program, which learned to play
40:11.920 --> 40:16.800
checkers considerably better than Arthur Samuel could play and actually reached a pretty decent
40:16.800 --> 40:25.040
standard, Norbert Wiener, who was one of the major mathematicians of the 20th century, he's sort of
40:25.040 --> 40:32.560
the father of modern automation control systems. He saw this and he basically extrapolated
40:33.360 --> 40:43.680
as Turing did and said, okay, this is how we could lose control. And specifically that
40:45.520 --> 40:50.960
we have to be certain that the purpose we put into the machine is the purpose which we really
40:50.960 --> 40:59.760
desire. And the problem is, we can't do that. Right. You mean it's very difficult
40:59.760 --> 41:05.440
to encode? To put our values on paper is really difficult. Or are you just saying it's impossible?
41:09.120 --> 41:15.360
The line is gray between the two. So theoretically, it's possible, but in practice,
41:15.360 --> 41:23.520
it's extremely unlikely that we could specify correctly in advance the full range of concerns
41:23.520 --> 41:29.360
of humanity. You talked about cultural transmission of values; I think that's how human to human
41:29.360 --> 41:36.320
transmission of values happens, right? Well, we learn, yeah, I mean, as we grow up, we learn about
41:36.320 --> 41:42.640
the values that matter, how things should go, what is reasonable to pursue and what isn't
41:42.640 --> 41:47.920
reasonable to pursue. I think machines can learn in the same kind of way. Yeah. So I think that
41:49.120 --> 41:54.480
what we need to do is to get away from this idea that you build an optimizing machine and then you
41:54.480 --> 42:03.200
put the objective into it. Because if it's possible that you might put in a wrong objective, and we
42:03.200 --> 42:08.880
already know this is possible because it's happened lots of times, right? That means that the machine
42:08.880 --> 42:17.760
should never take an objective that's given as gospel truth. Because once it takes the objective
42:17.760 --> 42:26.800
as gospel truth, then it believes that whatever actions it's taking in pursuit of that objective
42:26.800 --> 42:31.200
are the correct things to do. So you could be jumping up and down and saying, no, no, no, no,
42:31.200 --> 42:36.480
you're going to destroy the world, but the machine knows what the true objective is and is pursuing
42:36.480 --> 42:42.640
it and tough luck to you. And this is not restricted to AI, right? This is, I think,
42:43.360 --> 42:48.880
many of the 20th century technologies, right? So in statistics, you minimize a loss function,
42:48.880 --> 42:54.320
the loss function is exogenously specified; in control theory, you minimize a cost function,
42:54.320 --> 42:59.840
in operations research, you maximize a reward function, and so on. So in all these disciplines,
42:59.840 --> 43:08.560
this is how we conceive of the problem. And it's the wrong problem. Because we cannot specify
43:08.560 --> 43:15.360
with certainty the correct objective, right? We need uncertainty, we need the machine to be
43:15.360 --> 43:19.440
uncertain about what it is that it's supposed to be maximizing.
43:19.440 --> 43:25.200
It's my favorite idea of yours. I've heard you say somewhere, well, I shouldn't pick favorites,
43:25.200 --> 43:32.640
but it just sounds beautiful. We need to teach machines humility. It's a beautiful way to put
43:32.640 --> 43:40.320
it. I love it. That they're humble. They know that they don't know what it is they're supposed
43:40.320 --> 43:48.240
to be doing. And that those objectives, I mean, they exist, they're within us, but we may not
43:48.240 --> 43:56.160
be able to explicate them. We may not even know how we want our future to go.
43:57.040 --> 44:06.800
So exactly. And a machine that's uncertain is going to be deferential to us. So if we say,
44:06.800 --> 44:11.840
don't do that, well, now the machine learns something a bit more about our true objectives,
44:11.840 --> 44:16.480
because something that it thought was reasonable in pursuit of our objective,
44:16.480 --> 44:20.800
it turns out not to be so now it's learned something. So it's going to defer because it
44:20.800 --> 44:30.240
wants to be doing what we really want. And that point, I think, is absolutely central
44:30.240 --> 44:37.920
to solving the control problem. And it's a different kind of AI when you take away this
44:37.920 --> 44:44.560
idea that the objective is known, then in fact, a lot of the theoretical frameworks that we're so
44:44.560 --> 44:53.520
familiar with, you know, Markov decision processes, goal based planning, you know,
44:53.520 --> 44:59.280
standard game research, all of these techniques actually become inapplicable.
45:01.040 --> 45:11.120
And you get a more complicated problem because now the interaction with the human becomes
45:11.120 --> 45:20.400
part of the problem. Because the human by making choices is giving you more information about
45:21.280 --> 45:25.360
the true objective and that information helps you achieve the objective better.
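A toy sketch of that coupling: the machine keeps a belief over candidate objectives, treats a human "don't do that" as evidence about the true objective, and defers when no single objective is believable enough to act on. This is only an illustration of the idea, not the actual assistance-game formulation; every name and number in it is made up.

```python
# Toy sketch of a machine that is uncertain about its objective: it updates its
# beliefs from human feedback and defers when it is not confident enough to act.
# Illustration only; not the real mathematics of the assistance-game setting.

class UncertainObjectiveAgent:
    def __init__(self, candidate_objectives):
        # Start with a uniform belief over the candidate objective functions.
        n = len(candidate_objectives)
        self.beliefs = {name: 1.0 / n for name in candidate_objectives}

    def observe_human_veto(self, action, value_under):
        """Human said 'don't do that': objectives that favoured `action` lose weight."""
        for name in self.beliefs:
            if value_under(name, action) > 0:
                self.beliefs[name] *= 0.1            # crude Bayesian-style downweighting
        total = sum(self.beliefs.values())
        self.beliefs = {k: v / total for k, v in self.beliefs.items()}

    def choose(self, actions, value_under, confidence=0.8):
        """Act only if one objective dominates the belief; otherwise defer to the human."""
        best_obj, best_p = max(self.beliefs.items(), key=lambda kv: kv[1])
        if best_p < confidence:
            return "defer: ask the human"
        return max(actions, key=lambda a: value_under(best_obj, a))
```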
45:26.640 --> 45:32.000
And so that really means that you're mostly dealing with game theoretic problems where you've
45:32.000 --> 45:38.000
got the machine and the human and they're coupled together, rather than a machine going off by itself
45:38.000 --> 45:43.600
with a fixed objective. Which is fascinating on the machine and the human level that we,
45:44.400 --> 45:51.920
when you don't have an objective, it means you're together coming up with an objective. I mean,
45:51.920 --> 45:56.160
there's a lot of philosophy that, you know, you could argue that life doesn't really have meaning.
45:56.160 --> 46:01.680
We together agree on what gives it meaning and we kind of culturally create
46:01.680 --> 46:08.560
things that give it meaning, why the heck we are on this earth anyway. We together as a society create
46:08.560 --> 46:13.680
that meaning and you have to learn that objective. And one of the biggest, I thought that's where
46:13.680 --> 46:19.200
you were going to go for a second. One of the biggest troubles we run into outside of statistics
46:19.200 --> 46:26.240
and machine learning and AI in just human civilization is when you look at, I came from,
46:26.240 --> 46:32.160
I was born in the Soviet Union. And the history of the 20th century, we ran into the most trouble,
46:32.160 --> 46:40.160
us humans, when there was a certainty about the objective. And you do whatever it takes to achieve
46:40.160 --> 46:46.480
that objective, whether you're talking about Germany or communist Russia, you get into trouble
46:46.480 --> 46:52.960
with humans. And I would say with corporations, in fact, some people argue that we don't have
46:52.960 --> 46:57.840
to look forward to a time when AI systems take over the world; they already have. And they're called
46:57.840 --> 47:04.880
corporations, right? That corporations happen to be using people as components right now.
47:05.920 --> 47:11.680
But they are effectively algorithmic machines, and they're optimizing an objective, which is
47:11.680 --> 47:18.080
quarterly profit, which isn't aligned with the overall well-being of the human race. And they are
47:18.080 --> 47:24.160
destroying the world. They are primarily responsible for our inability to tackle climate change.
47:24.960 --> 47:30.400
So I think that's one way of thinking about what's going on with corporations. But
47:31.840 --> 47:39.680
I think the point you're making is valid, that there are many systems in the real world where
47:39.680 --> 47:48.480
we've sort of prematurely fixed on the objective and then decoupled the machine from those that
47:48.480 --> 47:54.720
it's supposed to be serving. And I think you see this with government, right? Government is supposed
47:54.720 --> 48:02.720
to be a machine that serves people. But instead, it tends to be taken over by people who have their
48:02.720 --> 48:08.160
own objective and use government to optimize that objective, regardless of what people want.
48:08.160 --> 48:16.080
Do you find appealing the idea of almost arguing machines where you have multiple AI systems with
48:16.080 --> 48:22.400
a clear fixed objective? We have in government the red team and the blue team that are very fixed
48:22.400 --> 48:28.240
on their objectives. And they argue, and maybe you would disagree, but it kind of seems to
48:28.240 --> 48:39.440
make it work somewhat, that the duality of it. Okay, let's go 100 years back, when there was still
48:39.440 --> 48:44.480
whatever was going on, or at the founding of this country, there were disagreements, and that disagreement is
48:44.480 --> 48:52.160
where... So it was a balance between certainty and forced humility, because the power was distributed.
48:52.160 --> 49:05.280
Yeah, I think that the nature of debate and disagreement argument takes as a premise the idea
49:05.280 --> 49:12.800
that you could be wrong, which means that you're not necessarily absolutely convinced that your
49:12.800 --> 49:19.520
objective is the correct one. If you were absolutely convinced, there'd be no point
49:19.520 --> 49:24.160
in having any discussion or argument because you would never change your mind. And there wouldn't
49:24.160 --> 49:32.080
be any sort of synthesis or anything like that. So I think you can think of argumentation as an
49:32.080 --> 49:44.640
implementation of a form of uncertain reasoning. I've been reading recently about utilitarianism
49:44.640 --> 49:54.960
and the history of efforts to define, in a sort of clear mathematical way, if you like, a formula for
49:54.960 --> 50:02.320
moral or political decision making. And it's really interesting, the parallels between
50:02.320 --> 50:08.720
the philosophical discussions going back 200 years and what you see now in discussions about
50:08.720 --> 50:15.040
existential risk because it's almost exactly the same. So someone would say, okay, well,
50:15.040 --> 50:21.600
here's a formula for how we should make decisions. So utilitarianism is roughly each person has a
50:21.600 --> 50:27.680
utility function and then we make decisions to maximize the sum of everybody's utility.
50:28.720 --> 50:36.480
And then people point out, well, in that case, the best policy is one that leads to
50:36.480 --> 50:42.480
an enormously vast population, all of whom are living a life that's barely worth living.
50:43.520 --> 50:50.640
And this is called the repugnant conclusion. And another version is that we should maximize
50:51.200 --> 50:57.680
pleasure and that's what we mean by utility. And then you'll get people effectively saying,
50:57.680 --> 51:02.480
well, in that case, we might as well just have everyone hooked up to a heroin drip. And they
51:02.480 --> 51:08.720
didn't use those words. But that debate was happening in the 19th century, as it is now
51:09.920 --> 51:17.600
about AI, that if we get the formula wrong, we're going to have AI systems working towards
51:17.600 --> 51:22.080
an outcome that in retrospect, would be exactly wrong.
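To make the total-utilitarian formula and the repugnant conclusion concrete, here is a toy calculation; the populations and per-person utilities are invented purely for illustration, not figures from the conversation.

```python
def total_utility(population, utility_per_person):
    # Total utilitarianism: society's score is the sum of everyone's utility.
    return population * utility_per_person

world_a = total_utility(10_000_000, 100.0)        # modest population, excellent lives
world_b = total_utility(100_000_000_000, 0.5)     # vast population, lives barely worth living

print(world_a, world_b)      # roughly 1e9 vs 5e10
print(world_b > world_a)     # True: the sum-of-utilities formula prefers the "repugnant" world
```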
51:22.080 --> 51:26.960
That's beautifully put. So the echoes are there. But do you think,
51:26.960 --> 51:34.640
I mean, if you look at Sam Harris, his imagination worries about the AI version of that, because
51:34.640 --> 51:44.640
of the speed at which the things going wrong in the utilitarian context could happen.
51:45.840 --> 51:47.280
Is that a worry for you?
51:47.280 --> 51:55.360
Yeah, I think that in most cases, not in all, but if we have a wrong political idea,
51:55.360 --> 52:01.200
we see it starting to go wrong. And we're not completely stupid. And so we say, okay,
52:02.000 --> 52:10.160
maybe that was a mistake. Let's try something different. And also, we're very slow and inefficient
52:10.160 --> 52:15.520
about implementing these things and so on. So you have to worry when you have corporations
52:15.520 --> 52:20.800
or political systems that are extremely efficient. But when we look at AI systems,
52:20.800 --> 52:27.840
or even just computers in general, right, they have this different characteristic
52:28.400 --> 52:35.040
from ordinary human activity in the past. So let's say you were a surgeon. You had some idea
52:35.040 --> 52:40.480
about how to do some operation, right? Well, and let's say you were wrong, right, that that way
52:40.480 --> 52:45.840
of doing the operation would mostly kill the patient. Well, you'd find out pretty quickly,
52:45.840 --> 52:56.000
like after three, maybe three or four tries, right? But that isn't true for pharmaceutical
52:56.000 --> 53:03.040
companies, because they don't do three or four operations. They manufacture three or four billion
53:03.040 --> 53:08.800
pills and they sell them. And then they find out maybe six months or a year later that, oh,
53:08.800 --> 53:14.880
people are dying of heart attacks or getting cancer from this drug. And so that's why we have the FDA,
53:14.880 --> 53:22.960
right? Because of the scalability of pharmaceutical production. And there have been some unbelievably
53:22.960 --> 53:34.320
bad episodes in the history of pharmaceuticals and adulteration of products and so on that have
53:34.320 --> 53:37.520
killed tens of thousands or paralyzed hundreds of thousands of people.
53:39.360 --> 53:43.280
Now, with computers, we have that same scalability problem that you can
53:43.280 --> 53:49.520
sit there and type for i equals one to five billion, do, right? And all of a sudden,
53:49.520 --> 53:55.360
you're having an impact on a global scale. And yet we have no FDA, right? There's absolutely no
53:55.360 --> 54:02.480
controls at all over what a bunch of undergraduates with too much caffeine can do to the world.
54:03.440 --> 54:09.600
And, you know, we look at what happened with Facebook, well, social media in general, and
54:09.600 --> 54:18.480
click through optimization. So you have a simple feedback algorithm that's trying to just optimize
54:18.480 --> 54:24.080
click through, right? That sounds reasonable, right? Because you don't want to be feeding people
54:24.080 --> 54:33.200
ads that they don't care about or aren't interested in. And you might even think of that process as
54:33.200 --> 54:42.160
simply adjusting the feeding of ads or news articles or whatever it might be to match people's
54:42.160 --> 54:50.000
preferences, right? Which sounds like a good idea. But in fact, that isn't how the algorithm works,
54:50.880 --> 54:59.760
right? You make more money, the algorithm makes more money, if it can better predict what people
54:59.760 --> 55:06.400
are going to click on, because then it can feed them exactly that, right? So the way to maximize
55:06.400 --> 55:13.360
click through is actually to modify the people, to make them more predictable. And one way to do
55:13.360 --> 55:21.280
that is to feed them information which will change their behavior and preferences towards
55:21.920 --> 55:27.600
extremes that make them predictable. Whatever is the nearest extreme or the nearest predictable
55:27.600 --> 55:33.840
point, that's where you're going to end up. And the machines will force you there.
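Here is a toy simulation of that feedback loop; it is entirely my own construction, not a description of any real platform's code, and the sigmoid click model and learning rate are arbitrary assumptions. A recommender that greedily maximizes predicted click probability keeps showing the user whichever extreme they already lean toward, the user drifts toward that extreme, and their clicks become highly predictable.

```python
import math
import random

random.seed(0)

def click_prob(pref, content, sharpness=4.0):
    # Clicks are likelier when the content agrees strongly with the user's preference.
    return 1.0 / (1.0 + math.exp(-sharpness * pref * content))

x = 0.1        # user's preference on a -1..+1 axis, starting mildly off-center
eta = 0.05     # how far each consumed item nudges the user

for _ in range(200):
    # Greedy click-through maximization: show whichever extreme scores higher right now.
    content = 1.0 if click_prob(x, 1.0) >= click_prob(x, -1.0) else -1.0
    if random.random() < click_prob(x, content):   # the user clicks...
        x += eta * (content - x)                   # ...and drifts toward what was shown

print(f"final preference: {x:+.2f}, click probability now: {click_prob(x, 1.0):.2f}")
```

The objective never mentions changing the user, yet changing the user is exactly what maximizes it, which is the point being made above.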
55:34.480 --> 55:40.400
Now, and I think there's a reasonable argument to say that this, among other things, is
55:40.400 --> 55:48.880
contributing to the destruction of democracy in the world. And where was the oversight
55:50.160 --> 55:55.600
of this process? Where were the people saying, okay, you would like to apply this algorithm to
55:55.600 --> 56:01.120
five billion people on the face of the earth? Can you show me that it's safe? Can you show
56:01.120 --> 56:07.040
me that it won't have various kinds of negative effects? No, there was no one asking that question.
56:07.040 --> 56:14.800
There was no one placed between the undergrads with too much caffeine and the human race.
56:15.520 --> 56:20.480
It's just, they just did it. And, in some way outside the scope of my knowledge,
56:20.480 --> 56:27.120
so economists would argue that the invisible hand, so the capitalist system, it was the
56:27.120 --> 56:32.480
oversight. So if you're going to corrupt society with whatever decision you make as a company,
56:32.480 --> 56:38.640
then that's going to be reflected in people not using your product. That's one model of oversight.
56:39.280 --> 56:48.000
We shall see. But in the meantime, you might even have broken the political system
56:48.000 --> 56:54.960
that enables capitalism to function. Well, you've changed it. So we should see. Yeah.
56:54.960 --> 57:01.360
Change is often painful. So my question is absolutely, it's fascinating. You're absolutely
57:01.360 --> 57:07.840
right that there was zero oversight on algorithms that can have a profound civilization-changing
57:09.120 --> 57:15.440
effect. So do you think it's possible? I mean... have you seen government?
57:15.440 --> 57:22.800
So do you think it's possible to create regulatory bodies oversight over AI algorithms,
57:22.800 --> 57:28.400
which are inherently such a cutting-edge set of ideas and technologies?
57:30.960 --> 57:37.520
Yeah, but I think it takes time to figure out what kind of oversight, what kinds of controls.
57:37.520 --> 57:42.960
I mean, it took time to design the FDA regime. Some people still don't like it and they want
57:42.960 --> 57:50.400
to fix it. And I think there are clear ways that it could be improved. But the whole notion that
57:50.400 --> 57:56.400
you have stage one, stage two, stage three, and here are the criteria for what you have to do
57:56.400 --> 58:02.000
to pass a stage one trial, right? We haven't even thought about what those would be for algorithms.
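Purely as a hypothetical sketch of something the speaker says does not yet exist, here is what FDA-style staged gating might look like for deploying an algorithm; the stage names, user caps, and criteria are invented for illustration only.

```python
# Hypothetical staged-trial regime for an algorithm deployment (nothing like
# this exists today, which is the point being made above).
STAGES = [
    {"name": "stage 1", "max_users": 10_000,        "criterion": "no harm vs. control group"},
    {"name": "stage 2", "max_users": 1_000_000,     "criterion": "pre-registered safety metrics hold"},
    {"name": "stage 3", "max_users": 5_000_000_000, "criterion": "independent audit passed"},
]

def allowed_exposure(results):
    # results maps each criterion to True/False; exposure stops at the first unmet gate.
    allowed = 0
    for stage in STAGES:
        if not results.get(stage["criterion"], False):
            break
        allowed = stage["max_users"]
    return allowed

print(allowed_exposure({"no harm vs. control group": True}))  # 10000: still capped at stage 1 scale
```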
58:02.000 --> 58:10.320
So I mean, I think there are, there are things we could do right now with regard to bias, for
58:10.320 --> 58:19.040
example, we have a pretty good technical handle on how to detect algorithms that are propagating
58:19.040 --> 58:26.720
bias that exists in data sets, how to debias those algorithms, and even what it's going to cost you
58:26.720 --> 58:34.480
to do that. So I think we could start having some standards on that. I think there are things to do
58:34.480 --> 58:41.840
with impersonation and falsification that we could, we could work on. So I think, yeah.
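On the bias point just above, here is a minimal sketch of one common check, the demographic parity difference; it is only one of several fairness metrics in use, and the toy predictions and group labels below are invented for illustration.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    # predictions: 0/1 model outputs; groups: parallel list of group labels.
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates, gap)   # a large gap is one signal that the model is propagating bias from its data
```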
58:43.360 --> 58:50.240
Or, a very simple point: impersonation is a machine acting as if it were a person.
58:51.440 --> 58:58.800
I can't see a real justification for why we shouldn't insist that machines self-identify as
58:58.800 --> 59:08.480
machines. Where is the social benefit in fooling people into thinking that this is really a person
59:08.480 --> 59:15.120
when it isn't? I don't mind if it uses a human-like voice that's easy to understand. That's fine.
59:15.120 --> 59:22.000
But it should just say, I'm a machine, in some form. And now many people are speaking to that,
59:22.800 --> 59:26.400
I would think, relatively obvious fact. So I think most people... Yeah. I mean,
59:26.400 --> 59:32.400
there is actually a law in California that bans impersonation, but only in certain
59:33.280 --> 59:41.280
restricted circumstances. So for the purpose of engaging in a fraudulent transaction and for
59:41.280 --> 59:48.160
the purpose of modifying someone's voting behavior. So those are the circumstances where
59:48.160 --> 59:55.520
machines have to self identify. But I think, arguably, it should be in all circumstances.
59:56.400 --> 1:00:03.280
And then when you talk about deep fakes, we're just at the beginning. But already,
1:00:03.280 --> 1:00:10.480
it's possible to make a movie of anybody saying anything in ways that are pretty hard to detect.
1:00:11.520 --> 1:00:15.040
Including yourself because you're on camera now and your voice is coming through with high
1:00:15.040 --> 1:00:19.360
resolution. Yeah. So you could take what I'm saying and replace it with pretty much anything
1:00:19.360 --> 1:00:24.400
else you wanted me to be saying. And even it would change my lips and facial expressions to fit.
1:00:26.960 --> 1:00:35.280
And there's actually not much in the way of real legal protection against that.
1:00:35.920 --> 1:00:38.640
I think in the commercial area, you could say, yeah, that's...
1:00:38.640 --> 1:00:46.000
You're using my brand and so on. There are rules about that. But in the political sphere, I think,
1:00:47.600 --> 1:00:52.560
at the moment, it's anything goes. So that could be really, really damaging.
1:00:53.920 --> 1:01:02.400
And let me just try to make not an argument, but try to look back at history and say something
1:01:02.400 --> 1:01:09.680
dark, in essence, is while regulation seems to be... Oversight seems to be exactly the
1:01:09.680 --> 1:01:14.480
right thing to do here. It seems that human beings, what they naturally do is they wait
1:01:14.480 --> 1:01:20.080
for something to go wrong. If you're talking about nuclear weapons, you can't talk about
1:01:20.080 --> 1:01:26.000
nuclear weapons being dangerous until somebody actually, like the United States drops the bomb,
1:01:26.000 --> 1:01:34.960
or Chernobyl melting. Do you think we will have to wait for things going wrong in a way that's
1:01:34.960 --> 1:01:39.840
obviously damaging to society, not an existential risk, but obviously damaging?
1:01:42.320 --> 1:01:48.000
Or do you have faith that... I hope not. But I mean, I think we do have to look at history.
1:01:48.000 --> 1:01:57.600
So the two examples you gave, nuclear weapons and nuclear power, are very, very interesting because
1:01:59.520 --> 1:02:07.840
nuclear weapons, we knew in the early years of the 20th century that atoms contained a huge
1:02:07.840 --> 1:02:13.280
amount of energy. We had E equals MC squared. We knew the mass differences between the different
1:02:13.280 --> 1:02:20.640
atoms and their components, and we knew that you might be able to make an incredibly powerful
1:02:20.640 --> 1:02:28.000
explosive. So H.G. Wells wrote a science fiction book, I think, in 1912. Frederick Soddy, who was the
1:02:28.000 --> 1:02:35.760
guy who discovered isotopes, so a Nobel Prize winner, he gave a speech in 1915 saying that
1:02:37.840 --> 1:02:42.160
one pound of this new explosive would be the equivalent of 150 tons of dynamite,
1:02:42.160 --> 1:02:50.720
which turns out to be about right. And this was in World War I, so he was imagining how much worse
1:02:51.360 --> 1:02:57.600
the world war would be if we were using that kind of explosive. But the physics establishment
1:02:57.600 --> 1:03:05.760
simply refused to believe that these things could be made. Including the people who were making it.
1:03:06.400 --> 1:03:11.280
Well, so they were doing the nuclear physics. I mean, eventually they were the ones who made it.
1:03:11.280 --> 1:03:21.280
You talk about Fermi or whoever. Well, so up to then, the development was mostly theoretical. So it was
1:03:21.280 --> 1:03:28.320
people using sort of primitive kinds of particle acceleration and doing experiments at the level
1:03:28.320 --> 1:03:36.480
of single particles or collections of particles. They weren't yet thinking about how to actually
1:03:36.480 --> 1:03:40.160
make a bomb or anything like that. But they knew the energy was there and they figured if they
1:03:40.160 --> 1:03:46.720
understood it better, it might be possible. But the physics establishment, their view, and I think
1:03:46.720 --> 1:03:51.840
because they did not want it to be true, their view was that it could not be true.
1:03:53.360 --> 1:04:00.240
That this could not provide a way to make a super weapon. And there was this famous
1:04:01.120 --> 1:04:08.240
speech given by Rutherford, who was the sort of leader of nuclear physics. And it was on
1:04:08.240 --> 1:04:14.800
September 11, 1933. And he said, you know, anyone who talks about the possibility of
1:04:14.800 --> 1:04:21.600
obtaining energy from transformation of atoms is talking complete moonshine. And the next
1:04:22.800 --> 1:04:28.560
morning, Leo Szilard read about that speech and then invented the nuclear chain reaction.
1:04:28.560 --> 1:04:35.920
And so as soon as he invented, as soon as he had that idea, that you could make a chain reaction
1:04:35.920 --> 1:04:40.640
with neutrons because neutrons were not repelled by the nucleus so they could enter the nucleus
1:04:40.640 --> 1:04:48.480
and then continue the reaction. As soon as he has that idea, he instantly realized that the world
1:04:48.480 --> 1:04:58.160
was in deep doo doo. Because this is 1933, right? Hitler had recently come to power in Germany.
1:04:58.160 --> 1:05:09.280
Szilard was in London. He eventually became a refugee and he came to the US. And in the
1:05:09.280 --> 1:05:14.880
process of having the idea about the chain reaction, he figured out basically how to make
1:05:14.880 --> 1:05:22.800
a bomb and also how to make a reactor. And he patented the reactor in 1934. But because
1:05:22.800 --> 1:05:28.480
of the situation, the great power conflict situation that he could see happening,
1:05:29.200 --> 1:05:38.160
he kept that a secret. And so between then and the beginning of World War II,
1:05:39.600 --> 1:05:48.000
people were working, including the Germans, on how to actually create neutron sources,
1:05:48.000 --> 1:05:54.720
what specific fission reactions would produce neutrons of the right energy to continue the
1:05:54.720 --> 1:06:02.480
reaction. And that was demonstrated in Germany, I think in 1938, if I remember correctly. The first
1:06:03.760 --> 1:06:17.280
nuclear weapon patent was 1939 by the French. So this was actually going on well before World War
1:06:17.280 --> 1:06:22.720
II really got going. And then the British probably had the most advanced capability
1:06:22.720 --> 1:06:27.920
in this area. But for safety reasons, among others, and also just, sort of, resources,
1:06:28.720 --> 1:06:33.520
they moved the program from Britain to the US. And then that became the Manhattan Project.
1:06:34.480 --> 1:06:37.920
So the reason why we couldn't
1:06:37.920 --> 1:06:48.320
have any kind of oversight of nuclear weapons and nuclear technology was because we were basically
1:06:48.320 --> 1:06:57.520
already in an arms race in a war. But you mentioned in the 20s and 30s, so what are the echoes?
1:07:00.000 --> 1:07:04.320
The way you've described the story, I mean, there's clearly echoes. What do you think most AI
1:07:04.320 --> 1:07:11.440
researchers, folks who are really close to the metal, they really are not concerned about AI,
1:07:11.440 --> 1:07:16.960
they don't think about it, or whether it's that they don't want to think about it. But what are the,
1:07:16.960 --> 1:07:23.760
yeah, why do you think that is? What are the echoes of the nuclear situation to the current AI
1:07:23.760 --> 1:07:33.120
situation? And what can we do about it? I think there is a kind of motivated cognition, which is
1:07:33.120 --> 1:07:40.640
a term in psychology that means you believe what you would like to be true, rather than what is
1:07:40.640 --> 1:07:50.240
true. And it's unsettling to think that what you're working on might be the end of the human race,
1:07:50.800 --> 1:07:58.400
obviously. So you would rather instantly deny it and come up with some reason why it couldn't be
1:07:58.400 --> 1:08:07.440
true. And I collected a long list of reasons that extremely intelligent, competent AI scientists
1:08:08.080 --> 1:08:16.560
have come up with for why we shouldn't worry about this. For example, calculators are superhuman at
1:08:16.560 --> 1:08:21.680
arithmetic and they haven't taken over the world, so there's nothing to worry about. Well, okay,
1:08:21.680 --> 1:08:29.440
my five year old could have figured out why that was an unreasonable and really quite weak argument.
1:08:31.520 --> 1:08:40.800
Another one was, while it's theoretically possible that you could have superhuman AI
1:08:41.760 --> 1:08:46.480
destroy the world, it's also theoretically possible that a black hole could materialize
1:08:46.480 --> 1:08:51.760
right next to the earth and destroy humanity. I mean, yes, it's theoretically possible,
1:08:51.760 --> 1:08:56.720
quantum theoretically, but extremely unlikely that it would just materialize right there.
1:08:58.400 --> 1:09:04.720
But that's a completely bogus analogy because if the whole physics community on earth was working
1:09:04.720 --> 1:09:11.680
to materialize a black hole in near earth orbit, wouldn't you ask them, is that a good idea? Is
1:09:11.680 --> 1:09:19.040
that going to be safe? What if you succeed? And that's the thing. The AI community has sort of
1:09:19.040 --> 1:09:26.240
refused to ask itself, what if you succeed? And initially, I think that was because it was too
1:09:26.240 --> 1:09:35.520
hard, but Alan Turing asked himself that and he said, we'd be toast. If we were lucky, we might
1:09:35.520 --> 1:09:40.000
be able to switch off the power but probably we'd be toast. But there's also an aspect
1:09:40.000 --> 1:09:49.680
that because we're not exactly sure what the future holds, it's not clear exactly so technically
1:09:49.680 --> 1:09:58.480
what to worry about, sort of how things go wrong. And so there is something it feels like, maybe
1:09:58.480 --> 1:10:04.400
you can correct me if I'm wrong, but there's something paralyzing about worrying about something
1:10:04.400 --> 1:10:10.000
that logically is inevitable. But you don't really know what that will look like.
1:10:10.720 --> 1:10:19.440
Yeah, I think that's a reasonable point. And it's certainly in terms of existential risks,
1:10:19.440 --> 1:10:26.800
it's different from an asteroid colliding with the earth, which again is quite possible. It's
1:10:26.800 --> 1:10:33.040
happened in the past. It'll probably happen again. We don't know right now. But if we did detect an
1:10:33.040 --> 1:10:39.200
asteroid that was going to hit the earth in 75 years time, we'd certainly be doing something
1:10:39.200 --> 1:10:43.600
about it. Well, it's clear there's a big rock, and we'll probably have a meeting and see what
1:10:43.600 --> 1:10:49.040
do we do about the big rock. With AI? Right, with AI. I mean, there are very few people who think it's
1:10:49.040 --> 1:10:54.400
not going to happen within the next 75 years. I know Rod Brooks doesn't think it's going to happen.
1:10:55.200 --> 1:11:00.880
Maybe Andrew Ng doesn't think it's going to happen. But a lot of the people who work day to day,
1:11:00.880 --> 1:11:07.360
you know, as you say, at the rock face, they think it's going to happen. I think the median
1:11:08.640 --> 1:11:14.320
estimate from AI researchers is somewhere in 40 to 50 years from now. Or maybe, you know,
1:11:14.320 --> 1:11:20.000
I think in Asia, they think it's going to be even faster than that. I'm a little bit
1:11:21.280 --> 1:11:25.840
more conservative. I think it'll probably take longer than that. But I think, you know, as
1:11:25.840 --> 1:11:32.400
happened with nuclear weapons, it can happen overnight that you have these breakthroughs.
1:11:32.400 --> 1:11:38.240
And we need more than one breakthrough. But, you know, it's on the order of half a dozen.
1:11:38.800 --> 1:11:43.920
This is a very rough scale. But so half a dozen breakthroughs of that nature
1:11:45.840 --> 1:11:53.600
would have to happen for us to reach superhuman AI. But the AI research community is
1:11:53.600 --> 1:12:00.640
vast now, with massive investments from governments, from corporations, tons of really,
1:12:00.640 --> 1:12:05.760
really smart people. You just have to look at the rate of progress in different areas of AI
1:12:05.760 --> 1:12:10.800
to see that things are moving pretty fast. So to say, oh, it's just going to be thousands of years.
1:12:11.920 --> 1:12:18.160
I don't see any basis for that. You know, I see, you know, for example, the
1:12:18.160 --> 1:12:28.640
Stanford 100 year AI project, which is supposed to be sort of, you know, the serious establishment view,
1:12:29.520 --> 1:12:33.440
their most recent report actually said it's probably not even possible.
1:12:34.160 --> 1:12:34.720
Oh, wow.
1:12:35.280 --> 1:12:42.960
Right. Which if you want a perfect example of people in denial, that's it. Because, you know,
1:12:42.960 --> 1:12:49.760
for the whole history of AI, we've been saying to philosophers who said it wasn't possible. Well,
1:12:49.760 --> 1:12:53.520
you have no idea what you're talking about. Of course, it's possible. Right. Give me an
1:12:53.520 --> 1:12:59.760
argument for why it couldn't happen. And there isn't one. Right. And now, because people are
1:12:59.760 --> 1:13:04.000
worried that maybe AI might get a bad name, or I just don't want to think about this,
1:13:05.200 --> 1:13:09.520
they're saying, okay, well, of course, it's not really possible. You know, imagine, right? Imagine
1:13:09.520 --> 1:13:16.000
if, you know, the leaders of the cancer biology community got up and said, well, you know, of
1:13:16.000 --> 1:13:23.680
course, curing cancer, it's not really possible. There'd be a complete outrage and dismay. And,
1:13:25.040 --> 1:13:30.800
you know, I find this really a strange phenomenon. So,
1:13:30.800 --> 1:13:38.640
okay, so if you accept it as possible, and if you accept that it's probably going to happen,
1:13:40.560 --> 1:13:44.400
the point that you're making that, you know, how does it go wrong?
1:13:46.320 --> 1:13:51.600
A valid question. Without an answer to that question, you're stuck with what I
1:13:51.600 --> 1:13:56.160
call the gorilla problem, which is, you know, the problem that the gorillas face, right? They
1:13:56.160 --> 1:14:02.800
made something more intelligent than them, namely us a few million years ago, and now they're in
1:14:02.800 --> 1:14:09.360
deep doo doo. So there's really nothing they can do. They've lost control. They failed to solve
1:14:09.360 --> 1:14:16.240
the control problem of controlling humans. And so they've lost. So we don't want to be in that
1:14:16.240 --> 1:14:22.320
situation. And if the gorilla problem is the only formulation you have, there's not a lot you can do.
1:14:22.320 --> 1:14:28.240
Right. Other than to say, okay, we should try to stop. You know, we should just not make the humans
1:14:28.240 --> 1:14:33.120
or, in this case, not make the AI. And I think that's really hard to do.
1:14:35.120 --> 1:14:42.480
I'm not actually proposing that that's a feasible course of action. And I also think that,
1:14:43.040 --> 1:14:46.080
you know, if properly controlled AI could be incredibly beneficial.
1:14:46.080 --> 1:14:54.960
So, but it seems to me that there's a consensus that one of the major
1:14:54.960 --> 1:15:02.320
failure modes is this loss of control that we create AI systems that are pursuing incorrect
1:15:02.320 --> 1:15:11.040
objectives. And because the AI system believes it knows what the objective is, it has no incentive
1:15:11.040 --> 1:15:16.800
to listen to us anymore, so to speak, right? It's just carrying out the,
1:15:17.920 --> 1:15:22.160
the strategy that it, it has computed as being the optimal solution.
1:15:24.320 --> 1:15:31.040
And, you know, it may be that in the process, it needs to acquire more resources to increase the
1:15:31.600 --> 1:15:37.360
possibility of success or prevent various failure modes by defending itself against interference.
1:15:37.360 --> 1:15:42.640
And so that collection of problems, I think, is something we can address.
1:15:45.280 --> 1:15:55.360
The other problems are roughly speaking, you know, misuse, right? So even if we solve the control
1:15:55.360 --> 1:16:00.960
problem, we make perfectly safe, controllable AI systems. Well, why, you know, why is Dr.
1:16:00.960 --> 1:16:05.600
Evil going to use those, right? He wants to just take over the world and he'll make unsafe AI systems
1:16:05.600 --> 1:16:12.000
that then get out of control. So that's one problem, which is sort of a, you know, partly a
1:16:12.000 --> 1:16:20.880
policing problem, partly a sort of a cultural problem for the profession of how we teach people
1:16:21.760 --> 1:16:26.560
what kinds of AI systems are safe. You talk about autonomous weapon systems and how pretty much
1:16:26.560 --> 1:16:31.920
everybody agrees that there's too many ways that that can go horribly wrong. You have this great
1:16:31.920 --> 1:16:36.560
Slaughterbots movie that kind of illustrates that beautifully. Well, I want to talk about that.
1:16:36.560 --> 1:16:41.440
That's another topic I'm happy to talk about. I just want to mention that
1:16:41.440 --> 1:16:48.080
what I see is the third major failure mode, which is overuse, not so much misuse, but overuse of AI,
1:16:49.680 --> 1:16:55.280
that we become overly dependent. So I call this the WALL-E problem. If you've seen WALL-E,
1:16:55.280 --> 1:17:00.800
the movie, all right, all the humans are on the spaceship and the machines look after everything
1:17:00.800 --> 1:17:07.680
for them. And they just watch TV and drink Big Gulps. And they're all sort of obese and stupid.
1:17:07.680 --> 1:17:17.040
And they sort of totally lost any notion of human autonomy. And, you know, so in effect, right,
1:17:18.240 --> 1:17:23.520
this would happen like the slow boiling frog, right, we would gradually turn over
1:17:24.320 --> 1:17:28.480
more and more of the management of our civilization to machines as we are already doing.
1:17:28.480 --> 1:17:34.560
And this, you know, this, if this process continues, you know, we sort of gradually
1:17:34.560 --> 1:17:41.440
switch from sort of being the masters of technology to just being the guests, right?
1:17:41.440 --> 1:17:45.840
So, so we become guests on a cruise ship, you know, which is fine for a week, but not,
1:17:46.480 --> 1:17:53.520
not for the rest of eternity, right? You know, and it's almost irreversible, right? Once you,
1:17:53.520 --> 1:18:00.000
once you lose the incentive to, for example, you know, learn to be an engineer or a doctor
1:18:00.800 --> 1:18:08.000
or a sanitation operative or any other of the, the infinitely many ways that we
1:18:08.000 --> 1:18:13.200
maintain and propagate our civilization. You know, if you, if you don't have the
1:18:13.200 --> 1:18:18.400
incentive to do any of that, you won't. And then it's really hard to recover.
1:18:18.400 --> 1:18:23.440
And of course AI is just one of the technologies that could, through that third failure mode, result in that.
1:18:23.440 --> 1:18:27.360
There's probably others; technology in general detaches us from that.
1:18:28.400 --> 1:18:33.440
It does a bit, but the difference is that, in terms of the knowledge
1:18:34.080 --> 1:18:39.360
to run our civilization, you know, up to now we've had no alternative, but to put it into
1:18:39.360 --> 1:18:44.400
people's heads, right? And if you... software, with Google, I mean, so software in
1:18:44.400 --> 1:18:51.600
general, so computers in general, but, you know, the knowledge of how, you know, how
1:18:51.600 --> 1:18:56.560
a sanitation system works, you know, an AI has to understand that; it's no good putting it
1:18:56.560 --> 1:19:02.960
into Google. So, I mean, we've always put knowledge on paper, but paper doesn't run
1:19:02.960 --> 1:19:07.520
our civilization. It only runs when it goes from the paper into people's heads again, right? So
1:19:07.520 --> 1:19:13.920
we've always propagated civilization through human minds and we've spent about a trillion
1:19:13.920 --> 1:19:19.440
person-years doing that, literally, right? You can work it out. It's about right. There's
1:19:19.440 --> 1:19:25.120
about just over a hundred billion people who've ever lived and each of them has spent about 10
1:19:25.120 --> 1:19:30.640
years learning stuff to keep their civilization going. And so that's a trillion person years we
1:19:30.640 --> 1:19:35.760
put into this effort. Beautiful way to describe all of civilization. And now we're, you know,
1:19:35.760 --> 1:19:39.840
we're in danger of throwing that away. So this is a problem that AI can't solve. It's not a
1:19:39.840 --> 1:19:47.120
technical problem. It's a, you know, and if we do our job right, the AI systems will say, you know,
1:19:47.120 --> 1:19:52.800
the human race doesn't in the long run want to be passengers in a cruise ship. The human race
1:19:52.800 --> 1:19:59.840
wants autonomy. This is part of human preferences. So we, the AI systems are not going to do this
1:19:59.840 --> 1:20:05.440
stuff for you. You've got to do it for yourself, right? I'm not going to carry you to the top of
1:20:05.440 --> 1:20:11.840
Everest in an autonomous helicopter. You have to climb it if you want to get the benefit and so on. So
1:20:14.160 --> 1:20:20.160
but I'm afraid that because we are short sighted and lazy, we're going to override the AI systems.
1:20:20.880 --> 1:20:27.520
And, and there's an amazing short story that I recommend to everyone that I talk to about this
1:20:27.520 --> 1:20:36.080
called The Machine Stops, written in 1909 by E.M. Forster, who, you know, wrote novels about the
1:20:36.080 --> 1:20:41.200
British Empire and sort of things that became costume dramas on the BBC. But he wrote this one
1:20:41.200 --> 1:20:49.280
science fiction story, which is an amazing vision of the future. It has, it has basically iPads.
1:20:49.280 --> 1:20:57.680
It has video conferencing. It has MOOCs. It has computers and computer-induced obesity. I mean,
1:20:57.680 --> 1:21:02.960
literally, the whole thing is what people spend their time doing is giving online courses or
1:21:02.960 --> 1:21:07.920
listening to online courses and talking about ideas. But they never get out there in the real
1:21:07.920 --> 1:21:13.680
world. They don't really have a lot of face to face contact. Everything is done online.
1:21:13.680 --> 1:21:19.680
You know, so all the things we're worrying about now were described in the story. And then the
1:21:19.680 --> 1:21:26.640
human race becomes more and more dependent on the machine, loses knowledge of how things really run,
1:21:27.600 --> 1:21:35.200
and then becomes vulnerable to collapse. And so it's a pretty unbelievably amazing
1:21:35.200 --> 1:21:41.520
story for someone writing in 1909 to imagine all this. Plus, yeah. So there's very few people
1:21:41.520 --> 1:21:46.080
that represent artificial intelligence more than you, Stuart Russell.
1:21:46.960 --> 1:21:50.880
If you say so, okay, that's very kind. So it's all my fault.
1:21:50.880 --> 1:21:59.680
It's all your fault. No, right. You're often brought up as the person. Well, Stuart Russell,
1:22:00.560 --> 1:22:04.960
like, the AI person, is worried about this. That's why you should be worried about it.
1:22:06.080 --> 1:22:11.280
Do you feel the burden of that? I don't know if you feel that at all. But when I talk to people,
1:22:11.280 --> 1:22:16.800
like, you know, people outside of computer science, when they think about this,
1:22:16.800 --> 1:22:22.560
Stuart Russell is worried about AI safety, you should be worried too. Do you feel the burden
1:22:22.560 --> 1:22:31.520
of that? I mean, in a practical sense, yeah, because I get, you know, a dozen, sometimes
1:22:31.520 --> 1:22:39.600
25 invitations a day to talk about it, to give interviews, to write press articles and so on.
1:22:39.600 --> 1:22:47.120
So in that very practical sense, I'm seeing that people are concerned and really interested about
1:22:47.120 --> 1:22:53.680
this. Are you worried that you could be wrong, as all good scientists are? Of course. I worry about
1:22:53.680 --> 1:23:00.400
that all the time. I mean, that's always been the way that I've worked, you know, is like I have an
1:23:00.400 --> 1:23:06.320
argument in my head with myself, right? So I have some idea. And then I think, okay,
1:23:06.320 --> 1:23:11.680
okay, how could that be wrong? Or did someone else already have that idea? So I'll go and
1:23:12.800 --> 1:23:18.240
search in as much literature as I can to see whether someone else already thought of that
1:23:18.240 --> 1:23:25.600
or even refuted it. So, you know, right now, I'm reading a lot of philosophy because,
1:23:25.600 --> 1:23:37.920
you know, in the form of the debates over utilitarianism and other kinds of moral formulas,
1:23:37.920 --> 1:23:44.320
shall we say, people have already thought through some of these issues. But, you know,
1:23:44.320 --> 1:23:51.280
one of the things I'm not seeing in a lot of these debates is this specific idea about
1:23:51.280 --> 1:23:58.560
the importance of uncertainty in the objective, that this is the way we should think about machines
1:23:58.560 --> 1:24:06.800
that are beneficial to humans. So this idea of provably beneficial machines based on explicit
1:24:06.800 --> 1:24:15.200
uncertainty in the objective, you know, it seems to be, you know, my gut feeling is this is the core
1:24:15.200 --> 1:24:21.200
of it. It's going to have to be elaborated in a lot of different directions. And there are a lot
1:24:21.200 --> 1:24:27.440
of beneficial... Yeah, but, I mean, it has to be, right? We can't afford, you know,
1:24:27.440 --> 1:24:33.040
hand-wavy beneficial. Because, you know, whenever we do hand-wavy stuff, there are
1:24:33.040 --> 1:24:38.080
loopholes. And the thing about super intelligent machines is they find the loopholes. You know,
1:24:38.080 --> 1:24:44.320
just like, you know, tax evaders, if you don't write your tax law properly, people will find
1:24:44.320 --> 1:24:53.440
the loopholes and end up paying no tax. And so you should think of it this way. And getting those
1:24:53.440 --> 1:25:03.440
definitions right, you know, it is really a long process, you know, so you can define
1:25:03.440 --> 1:25:07.760
mathematical frameworks. And within that framework, you can prove mathematical theorems that, yes,
1:25:07.760 --> 1:25:12.800
this will, you know, this theoretical entity will be provably beneficial to that theoretical
1:25:12.800 --> 1:25:20.160
entity. But that framework may not match the real world in some crucial way. So it's a long process of
1:25:20.160 --> 1:25:27.120
thinking it through, iterating, and so on. Last question. Yep. You have 10 seconds to answer it.
1:25:27.120 --> 1:25:34.480
What is your favorite sci-fi movie about AI? I would say Interstellar has my favorite robots.
1:25:34.480 --> 1:25:42.160
Oh, beats Space Odyssey? Yeah, yeah, yeah. So TARS, one of the robots in Interstellar, is
1:25:42.160 --> 1:25:52.080
the way robots should behave. And I would say Ex Machina is in some ways the one, the one that
1:25:52.080 --> 1:25:58.000
makes you think, in a nervous kind of way, about where we're going.
1:25:58.000 --> 1:26:13.920
Well, Stuart, thank you so much for talking today. Pleasure.