WEBVTT
00:00.000 --> 00:06.400
You've studied the human mind, cognition, language, vision, evolution, psychology, from child to adult,
00:07.360 --> 00:11.120
from the level of individual to the level of our entire civilization,
00:11.680 --> 00:14.880
so I feel like I can start with a simple multiple choice question.
00:16.240 --> 00:21.840
What is the meaning of life? Is it A, to attain knowledge, as Plato said,
00:22.400 --> 00:28.000
B, to attain power, as Nietzsche said, C, to escape death, as Ernest Becker said,
00:28.000 --> 00:35.040
D, to propagate our genes, as Darwin and others have said, E, there is no meaning,
00:35.040 --> 00:41.440
as the nihilists have said, F, knowing the meaning of life is beyond our cognitive capabilities,
00:41.440 --> 00:47.360
as Steven Pinker said, based on my interpretation 20 years ago, and G, none of the above.
00:48.160 --> 00:54.720
I'd say A comes closest, but I would amend that to attaining not only knowledge, but fulfillment
00:54.720 --> 01:06.000
more generally. That is, life, health, stimulation, access to the living cultural and social world.
01:06.000 --> 01:10.720
Now, this is our meaning of life. It's not the meaning of life, if you were to ask our genes.
01:12.160 --> 01:17.600
Their meaning is to propagate copies of themselves, but that is distinct from the
01:17.600 --> 01:26.640
meaning that the brain that they lead to sets for itself. So, to you, knowledge is a small subset
01:26.640 --> 01:33.280
or a large subset? It's a large subset, but it's not the entirety of human striving, because we
01:33.280 --> 01:39.120
also want to interact with people. We want to experience beauty. We want to experience the
01:39.120 --> 01:47.840
richness of the natural world, but understanding what makes the universe tick is way up there.
01:47.840 --> 01:54.000
For some of us more than others, certainly for me, that's one of the top five.
01:54.560 --> 02:00.080
So, is that a fundamental aspect? Are you just describing your own preference, or is this a
02:00.080 --> 02:05.920
fundamental aspect of human nature, to seek knowledge? In your latest book, you talk about
02:05.920 --> 02:11.760
the power, the usefulness of rationality and reason and so on. Is that a fundamental
02:11.760 --> 02:16.160
nature of human beings, or is it something we should just strive for?
02:16.960 --> 02:21.840
Both. We're capable of striving for it, because it is one of the things that
02:22.640 --> 02:31.360
make us what we are, Homo sapiens, wise men. We are unusual among animals in the degree to
02:31.360 --> 02:39.760
which we acquire knowledge and use it to survive. We make tools. We strike agreements via language.
02:39.760 --> 02:47.760
We extract poisons. We predict the behavior of animals. We try to get at the workings of plants.
02:47.760 --> 02:52.640
And when I say we, I don't just mean we in the modern west, but we as a species everywhere,
02:52.640 --> 02:58.160
which is how we've managed to occupy every niche on the planet, how we've managed to drive other
02:58.160 --> 03:06.480
animals to extinction. And the refinement of reason in pursuit of human well being, of health,
03:06.480 --> 03:13.680
happiness, social richness, cultural richness, is our main challenge in the present. That is,
03:14.480 --> 03:19.280
using our intellect, using our knowledge to figure out how the world works, how we work,
03:19.280 --> 03:25.200
in order to make discoveries and strike agreements that make us all better off in the long run.
03:25.200 --> 03:31.920
Right. And you do that almost undeniably in a data driven way in your recent book,
03:31.920 --> 03:36.480
but I'd like to focus on the artificial intelligence aspect of things, and not just
03:36.480 --> 03:41.840
artificial intelligence, but natural intelligence too. So 20 years ago, in the book you've written,
03:41.840 --> 03:49.600
How the Mind Works, you conjecture, again, am I right to interpret things? You can correct me
03:49.600 --> 03:55.200
if I'm wrong, but you conjecture that human thought in the brain may be a result of a network, a massive
03:55.200 --> 04:00.560
network of highly interconnected neurons. So from this interconnectivity emerges thought,
04:01.280 --> 04:05.600
compared to artificial neural networks, which we use for machine learning today,
04:06.160 --> 04:12.640
is there something fundamentally more complex, mysterious, even magical about the biological
04:12.640 --> 04:19.440
neural networks versus the ones we've been starting to use over the past 60 years and
04:19.440 --> 04:24.960
that have come to success in the past 10? There is something a little bit mysterious about
04:25.840 --> 04:31.760
the human neural networks, which is that each one of us who is a neural network knows that we
04:31.760 --> 04:36.960
ourselves are conscious, conscious not in the sense of registering our surroundings or even
04:36.960 --> 04:42.720
registering our internal state, but in having subjective first person, present tense experience.
04:42.720 --> 04:49.840
That is, when I see red, it's not just different from green, but there's a redness to it that I
04:49.840 --> 04:54.720
feel. Whether an artificial system would experience that or not, I don't know and I don't think I
04:54.720 --> 05:00.480
can know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally
05:00.480 --> 05:06.800
indistinguishable from a human, would we attribute consciousness to it or ought we to attribute
05:06.800 --> 05:12.160
consciousness to it? And that's something that's very hard to know. But putting that aside,
05:12.160 --> 05:19.040
putting aside that largely philosophical question, the question is, is there some difference between
05:19.040 --> 05:23.920
the human neural network and the ones that we're building in artificial intelligence that will mean that,
05:23.920 --> 05:30.400
on the current trajectory, we're not going to reach the point where we've got a lifelike robot
05:30.400 --> 05:35.120
indistinguishable from a human because the way their so called neural networks are organized
05:35.120 --> 05:40.560
is different from the way ours are organized. I think there's overlap, but I think there are some
05:40.560 --> 05:48.720
big differences, in that the current neural networks, the current so-called deep learning systems, are in
05:48.720 --> 05:53.840
reality not all that deep. That is, they are very good at extracting high order statistical
05:53.840 --> 06:00.640
regularities. But most of the systems don't have a semantic level, a level of actual understanding
06:00.640 --> 06:06.400
of who did what to whom, why, where, how things work, what causes, what else.
06:06.400 --> 06:10.960
Do you think that kind of thing can emerge as it does? So, artificial neural networks are much
06:10.960 --> 06:16.480
smaller in the number of connections and so on than the current human biological networks. But do you
06:16.480 --> 06:22.640
think, sort of, to go to consciousness or to go to this higher level semantic reasoning about things,
06:22.640 --> 06:28.960
Do you think that can emerge with just a larger network with a more richly, weirdly interconnected
06:28.960 --> 06:33.280
network? Let's separate out consciousness, because whether consciousness is even a matter of complexity is
06:33.280 --> 06:37.920
A really weird one. Yeah, you could sensibly ask the question of whether shrimp are conscious,
06:37.920 --> 06:43.200
for example. They're not terribly complex, but maybe they feel pain. So let's just put that
06:43.200 --> 06:50.000
part of it aside. But I think sheer size of a neural network is not enough to give it
06:50.960 --> 06:57.360
structure and knowledge. But if it's suitably engineered, then why not? That is, we're neural
06:57.360 --> 07:03.680
networks. Natural selection did a kind of equivalent of engineering of our brains. So I don't think
07:03.680 --> 07:10.880
there's anything mysterious in the sense that no system made of silicon could ever do what a human
07:10.880 --> 07:16.080
brain can do. I think it's possible in principle. Whether it'll ever happen depends not only on
07:16.080 --> 07:21.040
how clever we are in engineering these systems, but whether we even want to, whether that's
07:21.040 --> 07:27.440
even a sensible goal. That is, you can ask the question, is there any locomotion system that is
07:28.320 --> 07:32.960
as good as a human? Well, we kind of want to do better than a human ultimately in terms of
07:32.960 --> 07:39.360
legged locomotion. There's no reason that humans should be our benchmark. There are tools that might
07:39.360 --> 07:49.280
be better in some ways. It may be that we can't duplicate a natural system because at some point,
07:49.280 --> 07:53.840
it's so much cheaper to use a natural system that we're not going to invest more brain power
07:53.840 --> 08:00.000
and resources. So for example, we don't really have an exact substitute for wood. We still build
08:00.000 --> 08:04.400
houses out of wood. We still build furniture out of wood. We like the look. We like the feel.
08:04.400 --> 08:09.280
Wood has certain properties that synthetics don't. It's not that there's anything magical or
08:09.280 --> 08:16.400
mysterious about wood. It's just that the extra steps of duplicating everything about wood are
08:16.400 --> 08:20.480
something we just haven't bothered with, because we have wood. Likewise, cotton. I'm wearing cotton
08:20.480 --> 08:26.880
clothing now. It feels much better than polyester. It's not that cotton has something magic in it,
08:27.600 --> 08:33.120
and it's not that we couldn't ever synthesize something exactly like cotton,
08:33.120 --> 08:37.760
but at some point, it's just not worth it. We've got cotton. Likewise, in the case of human
08:37.760 --> 08:43.520
intelligence, the goal of making an artificial system that is exactly like the human brain
08:43.520 --> 08:49.440
is a goal that probably no one is going to pursue to the bitter end, I suspect, because
08:50.080 --> 08:53.600
if you want tools that do things better than humans, you're not going to care whether it
08:53.600 --> 08:58.720
does it the way humans do. So for example, diagnosing cancer or predicting the weather,
08:58.720 --> 09:07.360
why set humans as your benchmark? But in general, I suspect you also believe that even if the human
09:07.360 --> 09:11.440
should not be a benchmark and we don't want to imitate humans in our systems, there's a lot
09:11.440 --> 09:16.800
to be learned about how to create an artificial intelligence system by studying the humans.
09:16.800 --> 09:23.440
Yeah, I think that's right. In the same way that to build flying machines, we want to understand
09:23.440 --> 09:28.880
the laws of aerodynamics, including birds, but not mimic the birds, but they're the same laws.
09:30.480 --> 09:38.400
You have a view on AI, artificial intelligence and safety, that from my perspective,
09:38.400 --> 09:49.360
is refreshingly rational, or perhaps more importantly, has elements of positivity to it,
09:49.360 --> 09:55.440
which I think can be inspiring and empowering as opposed to paralyzing. For many people,
09:55.440 --> 10:02.320
including AI researchers, the eventual existential threat of AI is obvious, not only possible but
10:02.320 --> 10:08.640
obvious. And for many others, including AI researchers, the threat is not obvious. So
10:09.520 --> 10:16.480
Elon Musk is famously in the highly concerned about AI camp, saying things like AI is far
10:16.480 --> 10:22.240
more dangerous than nuclear weapons, and that AI will likely destroy human civilization.
10:22.960 --> 10:30.400
So in February, you said that if Elon was really serious about AI, the threat of AI,
10:30.400 --> 10:34.960
he would stop building self driving cars, which he's doing very successfully as part of Tesla.
10:35.840 --> 10:40.880
Then he said, wow, if even Pinker doesn't understand the difference between narrow AI
10:40.880 --> 10:47.280
like a car and general AI, when the latter literally has a million times more compute power
10:47.280 --> 10:54.240
and an open ended utility function, humanity is in deep trouble. So first, what did you mean by
10:54.240 --> 10:59.200
the statement about Elon Musk should stop building self driving cars if he's deeply concerned?
10:59.200 --> 11:03.520
Well, that's not the first time that Elon Musk has fired off an intemperate tweet.
11:04.320 --> 11:07.600
Well, we live in a world where Twitter has power.
11:07.600 --> 11:16.640
Yes. Yeah, I think there are two kinds of existential threat that have been discussed
11:16.640 --> 11:19.760
in connection with artificial intelligence, and I think that they're both incoherent.
11:20.480 --> 11:28.800
One of them is a vague fear of AI takeover, that just as we subjugated animals and less
11:28.800 --> 11:33.360
technologically advanced peoples, so if we build something that's more advanced than us,
11:33.360 --> 11:39.200
it will inevitably turn us into pets or slaves or domesticated animal equivalents.
11:40.240 --> 11:46.720
I think this confuses intelligence with a will to power. It just so happens that in the
11:46.720 --> 11:52.240
intelligent system we are most familiar with, namely Homo sapiens, we are products of natural
11:52.240 --> 11:56.800
selection, which is a competitive process. And so bundled together with our problem solving
11:56.800 --> 12:05.200
capacity are a number of nasty traits like dominance and exploitation and maximization of
12:05.200 --> 12:11.040
power and glory and resources and influence. There's no reason to think that sheer problem
12:11.040 --> 12:16.640
solving capability will set that as one of its goals. Its goals will be whatever we set its goals
12:16.640 --> 12:21.760
as, and as long as someone isn't building a megalomaniacal artificial intelligence,
12:22.560 --> 12:25.360
then there's no reason to think that it would naturally evolve in that direction.
12:25.360 --> 12:31.600
Now you might say, well, what if we gave it the goal of maximizing its own power source?
12:31.600 --> 12:35.280
That's a pretty stupid goal to give an autonomous system. You don't give it that goal.
12:36.000 --> 12:41.360
I mean, that's just self evidently idiotic. So if you look at the history of the world,
12:41.360 --> 12:45.680
there have been a lot of opportunities where engineers could have instilled in a system destructive
12:45.680 --> 12:49.520
power, and they chose not to, because that's the natural process of engineering.
12:49.520 --> 12:52.880
Well, except for weapons. I mean, if you're building a weapon, its goal is to destroy
12:52.880 --> 12:58.400
people. And so I think there are good reasons to not build certain kinds of weapons. I think
12:58.400 --> 13:06.240
building nuclear weapons was a massive mistake. You do? So maybe pause on that, because that is
13:06.240 --> 13:12.880
one of the serious threats. Do you think that it was a mistake in a sense that it should have been
13:12.880 --> 13:19.200
stopped early on? Or do you think it's just an unfortunate event of invention that this was
13:19.200 --> 13:22.800
invented? Do you think it's possible to stop, I guess, is the question on that? Yeah, it's hard to
13:22.800 --> 13:27.440
rewind the clock because, of course, it was invented in the context of World War II and the
13:27.440 --> 13:33.120
fear that the Nazis might develop one first. Then once it was initiated for that reason,
13:33.120 --> 13:40.800
it was hard to turn off, especially since winning the war against the Japanese and the Nazis was
13:40.800 --> 13:46.160
such an overwhelming goal of every responsible person that there was just nothing that people
13:46.160 --> 13:51.440
wouldn't have done then to ensure victory. It's quite possible if World War II hadn't happened
13:51.440 --> 13:56.560
that nuclear weapons wouldn't have been invented. We can't know. But I don't think it was, by any
13:56.560 --> 14:01.760
means, a necessity any more than some of the other weapon systems that were envisioned but never
14:01.760 --> 14:09.040
implemented, like planes that would disperse poison gas over cities like crop dusters or systems to
14:09.040 --> 14:16.080
try to create earthquakes and tsunamis in enemy countries, to weaponize the weather,
14:16.080 --> 14:21.520
weaponize solar flares, all kinds of crazy schemes that we thought the better of. I think
14:21.520 --> 14:26.800
analogies between nuclear weapons and artificial intelligence are fundamentally misguided because
14:26.800 --> 14:31.520
the whole point of nuclear weapons is to destroy things. The point of artificial intelligence
14:31.520 --> 14:37.360
is not to destroy things. So the analogy is misleading. So there are the two kinds of artificial
14:37.360 --> 14:42.320
intelligence you mentioned. The first one was the highly intelligent or power-hungry one. Yeah,
14:42.320 --> 14:47.040
an assistant that we design ourselves where we give it the goals. Goals are external to the
14:48.320 --> 14:55.840
means to attain the goals. If we don't design an artificially intelligent system to maximize
14:56.560 --> 15:02.400
dominance, then it won't maximize dominance. It's just that we're so familiar with Homo sapiens
15:02.400 --> 15:08.800
where these two traits come bundled together, particularly in men, that we are apt to confuse
15:08.800 --> 15:16.720
high intelligence with a will to power. But that's just an error. The other fear is that
15:16.720 --> 15:23.040
we'll be collateral damage that will give artificial intelligence a goal like make paper clips
15:23.040 --> 15:28.320
and it will pursue that goal so brilliantly that before we can stop it, it turns us into paper
15:28.320 --> 15:34.400
clips. We'll give it the goal of curing cancer and it will turn us into guinea pigs for lethal
15:34.400 --> 15:40.000
experiments or give it the goal of world peace and its conception of world peace is no people,
15:40.000 --> 15:44.480
therefore no fighting and so it will kill us all. Now, I think these are utterly fanciful. In fact,
15:44.480 --> 15:49.600
I think they're actually self defeating. They first of all assume that we're going to be so
15:49.600 --> 15:54.880
brilliant that we can design an artificial intelligence that can cure cancer. But so stupid
15:54.880 --> 16:00.160
that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the
16:00.160 --> 16:06.720
process. And it assumes that the system will be so smart that it can cure cancer. But so
16:06.720 --> 16:11.520
idiotic that it can't figure out that what we mean by curing cancer is not killing
16:11.520 --> 16:17.920
everyone. So I think that the collateral damage scenario, the value alignment problem is also
16:17.920 --> 16:23.200
based on a misconception. So one of the challenges, of course, is that we don't know how to build either system
16:23.200 --> 16:27.440
currently, nor are we even close to knowing. Of course, those things can change overnight,
16:27.440 --> 16:33.840
but at this time, theorizing about them is very challenging in either direction. So that's
16:33.840 --> 16:39.600
probably at the core of the problem: without that ability to reason about the real engineering
16:39.600 --> 16:45.200
at hand, your imagination runs away with things. Exactly. But let me sort of ask,
16:45.920 --> 16:52.320
what do you think was the motivation, the thought process of Elon Musk? I build autonomous vehicles,
16:52.320 --> 16:58.000
I study autonomous vehicles, I study Tesla autopilot. I think it is one of the greatest
16:58.000 --> 17:02.880
current large-scale applications of artificial intelligence in the world.
17:02.880 --> 17:09.120
It has a potentially very positive impact on society. So how does a person who's creating this
17:09.120 --> 17:17.680
very good, quote unquote, narrow AI system also seem to be so concerned about this other
17:17.680 --> 17:21.120
general AI? What do you think is the motivation there? What do you think is the thing?
17:21.120 --> 17:30.640
Well, you probably have to ask him, but he is notoriously flamboyant, impulsive,
17:30.640 --> 17:35.120
as we have just seen, to the detriment of his own goals and the health of the company.
17:36.000 --> 17:41.600
So I don't know what's going on in his mind. You probably have to ask him. But I don't
17:41.600 --> 17:48.160
think the distinction between special purpose AI and so-called general AI is relevant, in
17:48.160 --> 17:54.400
the same way that special purpose AI is not going to do anything conceivable in order to
17:54.400 --> 18:00.560
attain a goal, all engineering systems are designed to trade off across multiple goals.
18:00.560 --> 18:05.920
When we build cars in the first place, we didn't forget to install brakes because the goal of a
18:05.920 --> 18:12.320
car is to go fast. It occurred to people, yes, you want to go fast, but not always. So you build
18:12.320 --> 18:18.960
in brakes too. Likewise, if a car is going to be autonomous and we program it to take the
18:18.960 --> 18:23.440
shortest route to the airport, it's not going to take the diagonal and mow down people and trees
18:23.440 --> 18:28.000
and fences because that's the shortest route. That's not what we mean by the shortest route when we
18:28.000 --> 18:34.720
program it. And that's just what an intelligent system is by definition. It takes into account
18:34.720 --> 18:40.640
multiple constraints. The same is true, in fact, even more true of so called general intelligence.
18:40.640 --> 18:47.040
That is, if it's genuinely intelligent, it's not going to pursue some goal single mindedly,
18:47.040 --> 18:53.280
omitting every other consideration and collateral effect. That's not artificial
18:53.280 --> 18:58.560
general intelligence. That's artificial stupidity. I agree with you, by the way,
18:58.560 --> 19:03.280
on the promise of autonomous vehicles for improving human welfare. I think it's spectacular.
19:03.280 --> 19:08.080
And I'm surprised at how little press coverage notes that in the United States alone,
19:08.080 --> 19:13.200
something like 40,000 people die every year on the highways, vastly more than are killed by
19:13.200 --> 19:19.440
terrorists. And we spend a trillion dollars on a war to combat deaths by terrorism,
19:19.440 --> 19:24.080
about half a dozen a year, whereas year in and year out, 40,000 people are
19:24.080 --> 19:27.600
massacred on the highways, which could be brought down to very close to zero.
19:28.560 --> 19:31.840
So I'm with you on the humanitarian benefit.
19:31.840 --> 19:36.240
Let me just mention that as a person who's building these cars, it is a little bit offensive to me
19:36.240 --> 19:41.680
to say that engineers would be clueless enough not to engineer safety into systems. I often
19:41.680 --> 19:46.400
stay up at night thinking about those 40,000 people that are dying. And everything I try to
19:46.400 --> 19:52.000
engineer is to save those people's lives. So every new invention that I'm super excited about,
19:52.000 --> 19:59.680
every new, all the deep learning literature and CVPR conferences and NIPS, everything I'm super
19:59.680 --> 20:08.320
excited about is all grounded in making it safe and helping people. So I just don't see how that
20:08.320 --> 20:13.200
trajectory can all of a sudden slip into a situation where intelligence will be highly
20:13.200 --> 20:17.840
negative. You and I certainly agree on that. And I think that's only the beginning of the
20:17.840 --> 20:23.760
potential humanitarian benefits of artificial intelligence. There's been enormous attention
20:23.760 --> 20:27.680
to what are we going to do with the people whose jobs are made obsolete by artificial
20:27.680 --> 20:31.600
intelligence. But very little attention given to the fact that the jobs that are going to be
20:31.600 --> 20:37.600
made obsolete are horrible jobs. The fact that people aren't going to be picking crops and making
20:37.600 --> 20:43.760
beds and driving trucks and mining coal, these are soul deadening jobs. And we have a whole
20:43.760 --> 20:51.280
literature sympathizing with the people stuck in these menial, mind deadening, dangerous jobs.
20:52.080 --> 20:56.160
If we can eliminate them, this is a fantastic boon to humanity. Now, granted,
20:56.160 --> 21:02.160
you solve one problem and there's another one, namely, how do we get these people a decent
21:02.160 --> 21:08.320
income? But if we're smart enough to invent machines that can make beds and put away dishes and
21:09.520 --> 21:14.080
handle hospital patients, I think we're smart enough to figure out how to redistribute income,
21:14.080 --> 21:20.960
to apportion some of the vast economic savings to the human beings who will no longer be needed to
21:20.960 --> 21:28.400
make beds. Okay. Sam Harris says that it's obvious that eventually AI will be an existential risk.
21:29.280 --> 21:36.240
He's one of the people who says it's obvious. We don't know when, the claim goes, but eventually
21:36.240 --> 21:41.760
it's obvious. And because we don't know when, we should worry about it now. It's a very interesting
21:41.760 --> 21:49.120
argument in my eyes. So how do we think about timescale? How do we think about existential
21:49.120 --> 21:55.040
threats when we know so little about the threat, unlike nuclear weapons, perhaps,
21:55.040 --> 22:02.400
about this particular threat, that it could happen tomorrow, right? But very likely it won't.
22:03.120 --> 22:08.320
Very likely it'd be 100 years away. So do we ignore it? How do we talk about it?
22:08.880 --> 22:13.040
Do we worry about it? How do we think about those? What is it?
22:13.040 --> 22:19.600
A threat that we can imagine, it's within the limits of our imagination, but not within our
22:19.600 --> 22:25.760
limits of our understanding sufficient to accurately predict it. But what is the "it"
22:25.760 --> 22:31.280
that we're referring to? Oh, AI, sorry, AI being the existential threat. AI can always...
22:31.280 --> 22:34.400
How? Like enslaving us or turning us into paperclips?
22:35.120 --> 22:38.800
I think the most compelling from the Sam Harris perspective would be the paperclip situation.
22:38.800 --> 22:44.000
Yeah. I mean, I just think it's totally fanciful. I mean, that is, don't build a system like that.
22:44.000 --> 22:50.400
First of all, the code of engineering is that you don't implement a system with massive
22:50.400 --> 22:55.040
control before testing it. Now, perhaps the culture of engineering will radically change,
22:55.040 --> 23:00.320
then I would worry, but I don't see any signs that engineers will suddenly do idiotic things,
23:00.320 --> 23:05.440
like put an electrical power plant in the control of a system that they haven't tested
23:05.440 --> 23:14.720
first. Also, all of these scenarios not only imagine an almost magically powered intelligence,
23:15.360 --> 23:20.000
you know, including things like curing cancer, which is probably an incoherent goal because
23:20.000 --> 23:25.440
there are so many different kinds of cancer, or bringing about world peace. I mean, how do you even specify
23:25.440 --> 23:31.360
that as a goal? But the scenarios also imagine some degree of control of every molecule in the
23:31.360 --> 23:38.480
universe, which not only is itself unlikely, but we would not start to connect these systems to
23:39.200 --> 23:45.840
infrastructure without testing, as we would any kind of engineering system. Now,
23:45.840 --> 23:53.920
maybe some engineers will be irresponsible and we need legal and regulatory
23:53.920 --> 23:59.440
responsibility implemented so that engineers don't do things that are stupid by their own standards.
23:59.440 --> 24:08.560
But I've never seen enough of a plausible scenario of existential threat to devote large
24:08.560 --> 24:14.720
amounts of brain power to forestall it. So you believe in the sort of power, en masse, of
24:14.720 --> 24:19.520
the engineering of reason, as you argue in your latest book, of reason and science, to sort of
24:20.400 --> 24:26.160
be the very thing that guides the development of new technology so it's safe and also keeps us
24:26.160 --> 24:32.480
safe. Yeah, granted, the same culture of safety that currently is part of the
24:32.480 --> 24:38.960
engineering mindset for airplanes, for example. So yeah, I don't think that that should
24:38.960 --> 24:44.800
be thrown out the window and that untested, all powerful systems should be suddenly implemented.
24:44.800 --> 24:47.360
But there's no reason to think they will be. And in fact, if you look at the
24:48.160 --> 24:51.760
progress of artificial intelligence, it's been, you know, it's been impressive, especially in
24:51.760 --> 24:56.960
the last 10 years or so. But the idea that suddenly there'll be a step function that all of a sudden
24:56.960 --> 25:02.160
before we know it, it will be all powerful, that there'll be some kind of recursive self
25:02.160 --> 25:11.200
improvement, some kind of foom, is also fanciful. Certainly not by the technology that
25:11.200 --> 25:16.720
now impresses us, such as deep learning, where you train something on hundreds of thousands or
25:16.720 --> 25:22.720
millions of examples. There are not hundreds of thousands of problems of which curing cancer is
25:24.320 --> 25:30.560
a typical example. And so the kind of techniques that have allowed AI to improve in the last
25:30.560 --> 25:37.600
five years are not the kind that are going to lead to this fantasy of exponential sudden
25:37.600 --> 25:43.680
self improvement. So I think it's a kind of magical thinking. It's not based on our understanding
25:43.680 --> 25:49.200
of how AI actually works. Now, give me a chance here. So you said fanciful, magical thinking.
25:50.240 --> 25:55.280
In his TED Talk, Sam Harris says that thinking about AI killing all human civilization is somehow
25:55.280 --> 26:00.400
fun intellectually. Now, I have to say as a scientist engineer, I don't find it fun.
26:01.200 --> 26:08.560
But when I'm having a beer with my non-AI friends, there is indeed something fun and appealing about
26:08.560 --> 26:14.720
it. Like talking about an episode of Black Mirror, considering if a large meteor is headed towards
26:14.720 --> 26:20.640
Earth, as if we were just told a large meteor is headed towards Earth, something like this. And can you
26:20.640 --> 26:25.840
relate to this sense of fun? And do you understand the psychology of it? Yes, great. Good question.
26:26.880 --> 26:33.440
I personally don't find it fun. I find it kind of actually a waste of time, because there are
26:33.440 --> 26:39.840
genuine threats that we ought to be thinking about, like pandemics, like cybersecurity
26:39.840 --> 26:47.040
vulnerabilities, like the possibility of nuclear war and certainly climate change. This is enough
26:47.040 --> 26:55.280
to fill many conversations without it. And I think Sam did put his finger on something, namely that
26:55.280 --> 27:03.120
there is a community, sometimes called the rationality community, that delights in using its
27:03.120 --> 27:10.160
brain power to come up with scenarios that would not occur to mere mortals, to less cerebral people.
27:10.160 --> 27:15.360
So there is a kind of intellectual thrill in finding new things to worry about that no one
27:15.360 --> 27:21.200
has worried about yet. I actually think, though, that not only is it a kind of fun that doesn't
27:21.200 --> 27:27.280
give me particular pleasure, but I think there can be a pernicious side to it, namely that you
27:27.280 --> 27:35.280
overcome people with such dread, such fatalism, that there's so many ways to die to annihilate
27:35.280 --> 27:40.160
our civilization that we may as well enjoy life while we can. There's nothing we can do about it.
27:40.160 --> 27:46.560
If climate change doesn't do us in, then runaway robots will. So let's enjoy ourselves now. We
27:46.560 --> 27:55.200
got to prioritize. We have to look at threats that are close to certainty, such as climate change,
27:55.200 --> 28:00.320
and distinguish those from ones that are merely imaginable, but with infinitesimal probabilities.
28:01.360 --> 28:07.120
And we have to take into account people's worry budget. You can't worry about everything. And
28:07.120 --> 28:13.920
if you sow dread and fear and terror and fatalism, it can lead to a kind of numbness. Well,
28:13.920 --> 28:18.240
these problems are just overwhelming and the engineers are just going to kill us all.
28:19.040 --> 28:25.760
So let's either destroy the entire infrastructure of science, technology,
28:26.640 --> 28:32.080
or let's just enjoy life while we can. So there's a certain line of worry, and I'm
28:32.080 --> 28:36.160
worried about a lot of things in engineering. There's a certain line of worry that, when you cross it,
28:36.160 --> 28:42.800
when you allow it to be crossed, it becomes paralyzing fear as opposed to productive fear. And that's
28:42.800 --> 28:49.760
kind of what you're highlighting. Exactly right. And we know that human effort is
28:49.760 --> 28:58.080
not well calibrated against risk, because a basic tenet of cognitive psychology is that
28:59.440 --> 29:05.120
perception of risk and hence perception of fear is driven by imaginability, not by data.
29:05.920 --> 29:11.200
And so we misallocate vast amounts of resources to avoiding terrorism,
29:11.200 --> 29:16.240
which kills on average about six Americans a year, with the one exception of 9/11. We invade
29:16.240 --> 29:23.920
countries, we invent entire new departments of government with massive, massive expenditure
29:23.920 --> 29:30.800
of resources and lives to defend ourselves against a trivial risk. Whereas guaranteed risks,
29:30.800 --> 29:36.720
you mentioned one of them, traffic fatalities, and even risks that are
29:36.720 --> 29:46.240
not here, but are plausible enough to worry about like pandemics, like nuclear war,
29:47.120 --> 29:51.760
receive far too little attention. In presidential debates, there's no discussion of
29:51.760 --> 29:56.720
how to minimize the risk of nuclear war, lots of discussion of terrorism, for example.
29:57.840 --> 30:05.520
And so I think it's essential to calibrate our budget of fear, worry, concern, and planning
30:05.520 --> 30:12.640
to the actual probability of harm. Yep. So let me ask this question.
30:13.520 --> 30:18.960
So speaking of imaginability, you said it's important to think about reason. And one of my
30:18.960 --> 30:26.560
favorite people who likes to dip into the outskirts of reason through fascinating exploration of his
30:26.560 --> 30:34.880
imagination is Joe Rogan. Oh, yes. So who has, through reason, used to believe a lot of conspiracies
30:34.880 --> 30:40.000
and through reason has stripped away a lot of his beliefs in that way. So it's fascinating actually
30:40.000 --> 30:47.920
to watch him, through rationality, kind of throw away the ideas of Bigfoot and 9/11 conspiracies. I'm not sure
30:47.920 --> 30:52.320
exactly. Chemtrails. I don't know what he believes in. Yes, okay. But he no longer believes in them,
30:52.320 --> 30:57.920
that's right. No, he's become a real force for good. So you were on the Joe Rogan podcast in
30:57.920 --> 31:02.880
February and had a fascinating conversation, but as far as I remember, didn't talk much about
31:02.880 --> 31:09.280
artificial intelligence. I will be on his podcast in a couple of weeks. Joe is very much concerned
31:09.280 --> 31:14.640
about the existential threat of AI. I'm not sure if you are, but this is why I was hoping that you'd get
31:14.640 --> 31:20.480
into that topic. And in this way, he represents quite a lot of people who look at the topic of AI
31:20.480 --> 31:27.840
from a 10,000-foot level. So as an exercise in communication, you said it's important to be
31:27.840 --> 31:33.280
rational and reason about these things. Let me ask, if you were to coach me as an AI researcher
31:33.280 --> 31:38.320
about how to speak to Joe and the general public about AI, what would you advise?
31:38.320 --> 31:42.400
Well, the short answer would be to read the sections that I wrote in Enlightenment Now.
31:44.080 --> 31:48.880
But the longer answer would be, I think, to emphasize, and I think you're very well positioned as an
31:48.880 --> 31:54.800
engineer to remind people about the culture of engineering, that it really is safety oriented,
31:54.800 --> 32:02.160
that in another discussion in Enlightenment Now, I plot rates of accidental death from various
32:02.160 --> 32:09.280
causes, plane crashes, car crashes, occupational accidents, even death by lightning strikes,
32:09.280 --> 32:16.560
and they all plummet, because the culture of engineering is: how do you squeeze out the lethal
32:16.560 --> 32:23.360
risks? Death by fire, death by drowning, death by asphyxiation, all of them drastically declined
32:23.360 --> 32:28.160
because of advances in engineering that, I've got to say, I did not appreciate until I saw those
32:28.160 --> 32:34.000
graphs. And it is exactly because of people like you who stay up at night thinking, oh my God,
32:36.000 --> 32:42.560
is what I'm inventing likely to hurt people, and who deploy ingenuity to prevent that from happening.
32:42.560 --> 32:47.360
Now, I'm not an engineer, although I spent 22 years at MIT, so I know something about the culture
32:47.360 --> 32:51.360
of engineering. My understanding is that this is the way you think if you're an engineer.
32:51.360 --> 32:58.160
And it's essential that that culture not be suddenly switched off when it comes to artificial
32:58.160 --> 33:02.080
intelligence. So I mean, that could be a problem, but is there any reason to think it would be
33:02.080 --> 33:07.360
switched off? I don't think so. And for one, there aren't enough engineers speaking up for this way,
33:07.360 --> 33:13.680
for the excitement, for the positive view of human nature, what you're trying to create is
33:13.680 --> 33:18.240
the positivity, like everything we try to invent is trying to do good for the world.
33:18.240 --> 33:23.600
But let me ask you about the psychology of negativity. It seems just objectively,
33:23.600 --> 33:27.680
not considering the topic, that being negative about the future makes you sound
33:27.680 --> 33:32.720
smarter than being positive about the future, in regard to this topic. Am I correct in this
33:32.720 --> 33:39.120
observation? And if so, why do you think that is? Yeah, I think there is that phenomenon,
33:39.120 --> 33:43.920
that as Tom Lehrer, the satirist said, always predict the worst and you'll be hailed as a
33:43.920 --> 33:51.840
prophet. It may be part of our overall negativity bias. We are as a species more attuned to the
33:51.840 --> 33:59.200
negative than the positive. We dread losses more than we enjoy gains. And that might open up a
33:59.200 --> 34:06.560
space for prophets to remind us of harms and risks and losses that we may have overlooked.
34:06.560 --> 34:15.040
So I think there is that asymmetry. So you've written some of my favorite books
34:16.080 --> 34:21.680
all over the place. So starting from Enlightenment Now, to The Better Angels of Our Nature,
34:21.680 --> 34:28.560
The Blank Slate, How the Mind Works, the one about language, The Language Instinct. Bill Gates,
34:28.560 --> 34:37.840
big fan too, said of your most recent book that it's my new favorite book of all time. So for
34:37.840 --> 34:44.000
you as an author, what was the book early on in your life that had a profound impact on the way
34:44.000 --> 34:50.560
you saw the world? Certainly this book, Enlightenment Now, is influenced by David Deutsch's The Beginning
34:50.560 --> 34:57.520
of Infinity, a rather deep reflection on knowledge and the power of knowledge to improve
34:57.520 --> 35:02.960
the human condition. It ends with bits of wisdom, such as that problems are inevitable,
35:02.960 --> 35:07.760
but problems are solvable given the right knowledge and that solutions create new problems
35:07.760 --> 35:12.480
that have to be solved in their turn. That's I think a kind of wisdom about the human condition
35:12.480 --> 35:16.960
that influenced the writing of this book. There's some books that are excellent but obscure,
35:16.960 --> 35:22.080
some of which I have on my page on my website. I read a book called The History of Force,
35:22.080 --> 35:27.920
self published by a political scientist named James Payne on the historical decline of violence and
35:27.920 --> 35:35.120
that was one of the inspirations for The Better Angels of Our Nature. What about early on, if
35:35.120 --> 35:40.640
you look back when you were maybe a teenager? I loved a book called One, Two, Three... Infinity.
35:40.640 --> 35:45.920
When I was a young adult, I read that book by George Gamow, the physicist, which had very
35:45.920 --> 35:55.120
accessible and humorous explanations of relativity, of number theory, of dimensionality, of
35:56.080 --> 36:02.240
higher, multiple-dimensional spaces in a way that I think is still delightful 70 years after it was published.
36:03.120 --> 36:09.280
I liked the Time-Life Science series. These were books that arrived every month that my mother
36:09.280 --> 36:15.600
subscribed to. Each one on a different topic. One would be on electricity, one would be on
36:15.600 --> 36:21.440
forests, one would be on evolution, and then one was on the mind. I was just intrigued that there
36:21.440 --> 36:27.040
could be a science of mind. That book, I would cite as an influence as well. Then later on.
36:27.040 --> 36:30.960
That's when you fell in love with the idea of studying the mind. Was that the thing that grabbed
36:30.960 --> 36:38.560
you? It was one of the things, I would say. I read as a college student the book Reflections on
36:38.560 --> 36:44.800
Language by Noam Chomsky. He spent most of his career here at MIT. Richard Dawkins,
36:44.800 --> 36:48.800
two books, The Blind Watchmaker and the Selfish Gene were enormously influential,
36:49.520 --> 36:56.640
partly mainly for the content, but also for the writing style, the ability to explain
36:56.640 --> 37:03.760
abstract concepts in lively prose. Stephen Jay Gould's first collection, Ever Since Darwin, also
37:05.040 --> 37:11.120
an excellent example of lively writing. George Miller, the psychologist that most psychologists
37:11.120 --> 37:17.440
are familiar with, came up with the idea that human memory has a capacity of seven plus or minus
37:17.440 --> 37:21.920
two chunks. That's probably his biggest claim to fame. He wrote a couple of books on language
37:21.920 --> 37:27.520
and communication that I'd read as an undergraduate. Again, beautifully written and intellectually deep.
37:28.400 --> 37:31.840
Wonderful. Steven, thank you so much for taking the time today.
37:31.840 --> 37:42.960
My pleasure. Thanks a lot, Lex.