WEBVTT
00:00.000 --> 00:03.040
The following is a conversation with Pieter Abbeel.
00:03.040 --> 00:07.760
He's a professor at UC Berkeley and the director of the Berkeley Robot Learning Lab.
00:07.760 --> 00:13.200
He's one of the top researchers in the world working on how to make robots understand and
00:13.200 --> 00:18.480
interact with the world around them, especially using imitation and deep reinforcement learning.
00:19.680 --> 00:24.160
This conversation is part of the MIT course on artificial general intelligence
00:24.160 --> 00:29.040
and the artificial intelligence podcast. If you enjoy it, please subscribe on YouTube,
00:29.040 --> 00:34.160
iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex
00:34.160 --> 00:40.560
Fridman, spelled F R I D. And now here's my conversation with Pieter Abbeel.
00:41.440 --> 00:46.480
You've mentioned that if there was one person you could meet, it would be Roger Federer. So let
00:46.480 --> 00:52.720
me ask, when do you think we will have a robot that fully autonomously can beat Roger Federer
00:52.720 --> 00:59.840
at tennis? Roger Federer level player at tennis? Well, first, if you can make it happen for me
00:59.840 --> 01:07.840
to meet Roger, let me know. In terms of getting a robot to beat him at tennis, it's kind of an
01:07.840 --> 01:15.280
interesting question because for a lot of the challenges we think about in AI, the software
01:15.280 --> 01:22.800
is really the missing piece. But for something like this, the hardware is nowhere near either. To
01:22.800 --> 01:28.240
really have a robot that can physically run around, the Boston Dynamics robots are starting to get
01:28.240 --> 01:34.560
there, but still not really human level ability to run around and then swing a racket.
01:36.720 --> 01:40.160
So you think that's a hardware problem? I don't think it's a hardware problem only. I think it's
01:40.160 --> 01:45.600
a hardware and a software problem. I think it's both. And I think they'll have independent progress.
01:45.600 --> 01:53.360
So I'd say the hardware maybe in 10, 15 years. On clay, not grass. I mean, grass is probably hard.
01:53.360 --> 02:00.080
With the sliding? Yeah. Well, clay, I'm not sure what's harder, grass or clay. The clay involves
02:00.080 --> 02:09.360
sliding, which might be harder to master actually. Yeah. But you're not limited to bipedal. I mean,
02:09.360 --> 02:12.560
I'm sure there's no... Well, if we can build a machine, it's a whole different question, of
02:12.560 --> 02:18.000
course. If you can say, okay, this robot can be on wheels, it can move around on wheels and
02:18.000 --> 02:24.880
can be designed differently, then I think that can be done sooner probably than a full humanoid
02:24.880 --> 02:30.400
type of setup. What do you think about swinging a racket? So you've worked on basic manipulation.
02:31.120 --> 02:36.480
How hard do you think is the task of swinging a racket, to be able to hit a nice backhand
02:36.480 --> 02:44.240
or a forehand? Let's say we just set up a stationary, nice robot arm, let's say. You know,
02:44.240 --> 02:49.440
a standard industrial arm, and it can watch the ball come and then swing the racket.
02:50.560 --> 02:57.600
It's a good question. I'm not sure it would be super hard to do. I mean, I'm sure it would require
02:57.600 --> 03:01.520
a lot... If we do it with reinforcement learning, it would require a lot of trial and error. It's
03:01.520 --> 03:06.960
not going to swing it right the first time around, but yeah, I don't see why it couldn't
03:08.240 --> 03:12.320
swing it the right way. I think it's learnable. I think if you set up a ball machine, let's say
03:12.320 --> 03:18.960
on one side and then a robot with a tennis racket on the other side, I think it's learnable
03:20.160 --> 03:25.360
and maybe a little bit of pre-training in simulation. Yeah, I think that's feasible.
03:25.360 --> 03:28.880
I think the swinging the racket is feasible. It'd be very interesting to see how much precision it
03:28.880 --> 03:37.760
can get. I mean, that's where... I mean, some of the human players can hit it on the lines,
03:37.760 --> 03:44.320
which is very high precision. With spin. The spin is an interesting question, whether RL can learn to
03:44.320 --> 03:48.160
put a spin on the ball. Well, you got me interested. Maybe someday we'll set this up.
03:51.040 --> 03:55.440
Your answer is basically, okay, for this problem, it sounds fascinating, but for the general problem
03:55.440 --> 03:59.840
of a tennis player, we might be a little bit farther away. What's the most impressive thing
03:59.840 --> 04:06.720
you've seen a robot do in the physical world? So physically, for me, it's
04:08.720 --> 04:16.560
the Boston Dynamics videos always just hit home, and I'm just super impressed. Recently, the robot
04:16.560 --> 04:22.160
running up the stairs, doing the parkour type thing. I mean, yes, we don't know what's underneath.
04:22.160 --> 04:26.400
They don't really write a lot of detail, but even if it's hard coded underneath,
04:27.040 --> 04:30.800
which it might or might not be, just the physical ability of doing that parkour,
04:30.800 --> 04:36.000
that's a very impressive robot right there. So have you met SpotMini or any of those robots in
04:36.000 --> 04:43.040
person? I met SpotMini last year in April at the MARS event that Jeff Bezos organizes. They
04:43.040 --> 04:49.840
brought it out there and it was nicely following Jeff around. When Jeff left the room, they had it
04:49.840 --> 04:55.680
following him along, which is pretty impressive. So I think there's some confidence to know that
04:55.680 --> 05:00.080
there's no learning going on in those robots. The psychology of it, so while knowing that,
05:00.080 --> 05:03.360
while knowing there's not, if there's any learning going on, it's very limited,
05:03.920 --> 05:08.720
I met SpotMini earlier this year and knowing everything that's going on,
05:09.520 --> 05:12.400
having one on one interaction, so I get to spend some time alone.
05:14.400 --> 05:18.720
And there's immediately a deep connection on the psychological level,
05:18.720 --> 05:22.320
even though you know the fundamentals, how it works, there's something magical.
05:23.280 --> 05:29.040
So do you think about the psychology of interacting with robots in the physical world,
05:29.040 --> 05:36.000
even, you just showed me the PR2 robot, and it had a little bit something like a face,
05:37.040 --> 05:40.480
there's something that immediately draws you to it.
05:40.480 --> 05:45.040
Do you think about that aspect of the robotics problem?
05:45.040 --> 05:50.560
Well, it's very hard with BRETT here. We gave him a name: Berkeley Robot
05:50.560 --> 05:56.480
for the Elimination of Tedious Tasks. It's very hard to not think of the robot as a person,
05:56.480 --> 06:00.560
and it seems like everybody calls him a he for whatever reason, but that also makes it more
06:00.560 --> 06:07.200
a person than if it was an it. And it seems pretty natural to think of it that way.
06:07.200 --> 06:12.400
This past weekend it really struck me. I've seen Pepper many times in videos,
06:12.400 --> 06:18.640
but then I was at an event organized by, this was by Fidelity, and they had scripted Pepper to help
06:19.280 --> 06:24.880
moderate some sessions, and they had scripted Pepper to have the personality of a child a
06:24.880 --> 06:31.360
little bit. And it was very hard to not think of it as its own person in some sense, because it
06:31.360 --> 06:35.120
was just kind of jumping in, it would just jump into the conversation, making it very interactive.
06:35.120 --> 06:38.720
The moderator would be saying something, and Pepper would just jump in: hold on, how about me,
06:38.720 --> 06:43.600
how about me, can I participate in this? And you're just like, okay, this is like a person,
06:43.600 --> 06:48.800
and that was 100% scripted. And even then it was hard not to have that sense of somehow
06:48.800 --> 06:55.120
there is something there. So as we have robots interact in this physical world, is that a signal
06:55.120 --> 07:00.160
that could be used in reinforcement learning? You've worked a little bit in this direction,
07:00.160 --> 07:05.920
but do you think that that psychology can be somehow pulled in? Yes, that's a question I would
07:05.920 --> 07:12.800
say a lot, a lot of people ask. And I think part of why they ask it is they're thinking about
07:14.160 --> 07:18.560
how unique are we really still as people, like after they see some results, they see
07:18.560 --> 07:23.200
a computer play Go, they see a computer do this or that, and they're like, okay, but can it really have
07:23.200 --> 07:28.960
emotion? Can it really interact with us in that way? And then once you're around robots,
07:28.960 --> 07:33.760
you already start feeling it. And I think that kind of maybe methodologically, the way that I
07:33.760 --> 07:38.560
think of it is, if you run something like reinforcement learning, it's about optimizing some
07:38.560 --> 07:48.240
objective, and there's no reason that the objective couldn't be tied into how much
07:48.240 --> 07:53.120
does a person like interacting with this system? And why couldn't the reinforcement learning system
07:53.120 --> 07:59.040
optimize for the robot being fun to be around? And why wouldn't it then naturally become more
07:59.040 --> 08:03.920
and more interactive and more and more maybe like a person or like a pet? I don't know what it would
08:03.920 --> 08:08.720
exactly be, but more and more have those features and acquire them automatically. As long as you
08:08.720 --> 08:16.320
can formalize an objective of what it means to like something. How do you elicit it, what's the ground
08:16.320 --> 08:21.360
truth? How do you get the reward from the human? Because you have to somehow collect that information
08:21.360 --> 08:27.120
from the human. But you're saying if you can formulate it as an objective, it can be learned.
08:27.120 --> 08:30.800
There's no reason it couldn't emerge through learning. And maybe one way to formulate it as an
08:30.800 --> 08:35.840
objective, you wouldn't have to necessarily score it explicitly. So standard rewards are
08:35.840 --> 08:41.920
numbers. And numbers are hard to come by. This is a 1.5 or 1.7 on some scale. It's very hard to do
08:41.920 --> 08:47.680
for a person. But much easier is for a person to say, okay, what you did the last five minutes
08:47.680 --> 08:53.600
was much nicer than what you did the previous five minutes. And that now gives a comparison. And in fact,
08:53.600 --> 08:58.080
there have been some results on that. For example, Paul Christiano and collaborators at OpenAI had
08:58.080 --> 09:05.840
the hopper, the MuJoCo hopper, a one legged robot, do backflips purely from feedback: I like
09:05.840 --> 09:11.280
this better than that. That's kind of equally good. And after a bunch of interactions, it figured
09:11.280 --> 09:15.200
out what the person was asking for, namely a backflip. And so I think the same thing.
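(A minimal sketch of the comparison-based reward learning idea described here, in the spirit of the Christiano et al. preference work; the featurizer, shapes, and learning rate are illustrative assumptions, not the actual OpenAI implementation.)

```python
# Sketch: learn a reward model from pairwise human comparisons ("A was better
# than B"), then hand the learned reward to an ordinary RL algorithm.
import numpy as np

def segment_features(segment):
    # Hypothetical featurizer: summarize a trajectory segment (array of states)
    # into a fixed-length vector; a real system would use a neural network.
    return np.mean(segment, axis=0)

class PreferenceRewardModel:
    def __init__(self, feature_dim, lr=0.1):
        self.w = np.zeros(feature_dim)   # linear reward: r(seg) = w . phi(seg)
        self.lr = lr

    def reward(self, segment):
        return self.w @ segment_features(segment)

    def update(self, seg_a, seg_b, a_preferred):
        # Bradley-Terry model: P(A preferred over B) = sigmoid(r(A) - r(B)).
        # One gradient step on the log-likelihood of the human's label.
        p_a = 1.0 / (1.0 + np.exp(-(self.reward(seg_a) - self.reward(seg_b))))
        label = 1.0 if a_preferred else 0.0
        grad = (label - p_a) * (segment_features(seg_a) - segment_features(seg_b))
        self.w += self.lr * grad

# Toy usage: two 3-step segments of 2-D states; the human prefers the first.
model = PreferenceRewardModel(feature_dim=2)
seg_a = np.array([[0.0, 1.0], [0.1, 1.2], [0.2, 1.4]])
seg_b = np.array([[0.0, 1.0], [0.0, 0.9], [0.0, 0.8]])
model.update(seg_a, seg_b, a_preferred=True)
```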
09:16.080 --> 09:20.880
It wasn't trying to do a backflip. It was just getting a comparison score from
09:20.880 --> 09:27.760
the person, based on the person having in their own mind: I want it to do a backflip. But
09:27.760 --> 09:32.480
the robot didn't know what it was supposed to be doing. It just knew that sometimes the person
09:32.480 --> 09:37.120
said, this is better, this is worse. And then the robot figured out what the person was actually
09:37.120 --> 09:42.560
after was a backflip. And I imagine the same would be true for things like more interactive
09:42.560 --> 09:47.520
robots that the robot would figure out over time. Oh, this kind of thing apparently is appreciated
09:47.520 --> 09:54.720
more than this other kind of thing. So when I first picked up Richard Sutton's
09:54.720 --> 10:02.480
reinforcement learning book, before sort of this deep learning, before the reemergence
10:02.480 --> 10:07.600
of neural networks as a powerful mechanism for machine learning, RL seemed to me like magic.
10:07.600 --> 10:18.000
It was beautiful. So that seemed like what intelligence is: RL, reinforcement learning. So how
10:18.000 --> 10:24.320
do you think we can possibly learn anything about the world when the reward for the actions is delayed
10:24.320 --> 10:32.160
is so sparse? Like, why do you think RL works? Why do you think you can learn anything
10:32.160 --> 10:37.600
under such sparse rewards, whether it's regular reinforcement learning or deep reinforcement
10:37.600 --> 10:45.600
learning? What's your intuition? So part of that is, why does RL need
10:45.600 --> 10:51.040
so many samples, so many experiences to learn from? Because really what's happening is when you
10:51.040 --> 10:56.240
have a sparse reward, you do something maybe for like, I don't know, you take 100 actions and then
10:56.240 --> 11:01.920
you get a reward, or maybe you get like a score of three. And I'm like, okay, three. Not sure what
11:01.920 --> 11:06.960
that means. You go again and now you get two. And now you know that that sequence of 100 actions
11:06.960 --> 11:10.640
that you did the second time around somehow was worse than the sequence of 100 actions you did
11:10.640 --> 11:15.040
the first time around. But that's tough to now know which one of those were better or worse.
11:15.040 --> 11:19.680
Some might have been good and bad in either one. And so that's why you need so many experiences.
11:19.680 --> 11:24.080
But once you have enough experiences, effectively RL is teasing that apart. It's starting to say,
11:24.080 --> 11:28.640
okay, what is consistently there when you get a higher reward and what's consistently there when
11:28.640 --> 11:34.080
you get a lower reward? And then kind of the magic of the policy gradient update is to say,
11:34.720 --> 11:39.520
now let's update the neural network to make the actions that were kind of present when things are
11:39.520 --> 11:44.960
good, more likely, and make the actions that are present when things are not as good, less likely.
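(A minimal REINFORCE-style sketch of the policy gradient update just described: actions present in higher-return episodes are made more likely, those in lower-return episodes less likely. The tabular softmax policy and the toy episodes are illustrative assumptions, not any specific system.)

```python
# Sketch: after collecting whole episodes, nudge a tabular softmax policy so
# actions from higher-return episodes become more likely and actions from
# lower-return episodes less likely (vanilla REINFORCE with a mean baseline).
import numpy as np

n_states, n_actions, lr = 4, 3, 0.05
logits = np.zeros((n_states, n_actions))        # tabular softmax policy

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def policy_gradient_update(episodes):
    # episodes: list of (steps, total_return), steps = list of (state, action)
    returns = np.array([ret for _, ret in episodes])
    baseline = returns.mean()                    # reduces variance of the update
    for steps, ret in episodes:
        advantage = ret - baseline
        for state, action in steps:
            grad = -softmax(logits[state])       # d log pi(a|s) / d logits
            grad[action] += 1.0
            logits[state] += lr * advantage * grad

# Toy usage mirroring the "you get a three, then a two" example above:
episodes = [([(0, 1), (2, 0)], 3.0), ([(0, 2), (2, 1)], 2.0)]
policy_gradient_update(episodes)
```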
11:44.960 --> 11:50.480
So that is the counterpoint. But it seems like you would need to run it a lot more than
11:50.480 --> 11:55.120
you do. Even though right now, people could say that RL is very inefficient. But it seems to be
11:55.120 --> 12:01.200
way more efficient than one would imagine on paper, that the simple updates to the policy,
12:01.760 --> 12:07.520
the policy gradient, that somehow you can learn, exactly as you said, what are the common actions
12:07.520 --> 12:11.680
that seem to produce some good results, that that somehow can learn anything.
12:12.640 --> 12:16.800
It seems counterintuitive, at least. Is there some intuition behind it?
12:16.800 --> 12:24.720
Yeah, so I think there's a few ways to think about this. The way I tend to think about it
12:24.720 --> 12:29.920
mostly originally. And so when we started working on deep reinforcement learning here at Berkeley,
12:29.920 --> 12:36.880
which was maybe 2011, 12, 13, around that time, John Schulman was a PhD student initially kind of
12:36.880 --> 12:44.480
driving it forward here. And kind of the way we thought about it at the time was if you think
12:44.480 --> 12:51.360
about rectified linear units or kind of rectifier type neural networks, what do you get? You get
12:51.360 --> 12:56.320
something that's piecewise linear feedback control. And if you look at the literature,
12:56.960 --> 13:02.080
linear feedback control is extremely successful, can solve many, many problems surprisingly well.
13:03.520 --> 13:07.200
I remember, for example, when we did helicopter flight, if you're in a stationary flight regime,
13:07.200 --> 13:12.080
not a non stationary, but a stationary flight regime like hover, you can use linear feedback
13:12.080 --> 13:16.960
control to stabilize the helicopter, a very complex dynamical system. But the controller
13:16.960 --> 13:22.240
is relatively simple. And so I think that's a big part of it: if you do feedback control,
13:22.240 --> 13:25.280
even though the system you control can be very, very complex, often,
13:26.000 --> 13:31.520
relatively simple control architectures can already do a lot. But then also just linear
13:31.520 --> 13:35.840
is not good enough. And so one way you can think of these neural networks is that they in some sense
13:35.840 --> 13:40.880
tile the space, which people were already trying to do more by hand or with finite state machines,
13:40.880 --> 13:44.560
say, this linear controller here, that linear controller there. The neural network
13:44.560 --> 13:48.160
learns to tile the space and say, linear controller here, another linear controller there,
13:48.160 --> 13:52.000
but it's more subtle than that. And so it's benefiting from this linear control aspect, it's
13:52.000 --> 13:57.760
benefiting from the tiling, but it's somehow tiling it one dimension at a time. Because if
13:57.760 --> 14:04.160
let's say you have a two layer network, in that hidden layer, you make a transition from active
14:04.160 --> 14:09.600
to inactive or the other way around, that is essentially one axis, but not axis aligned, but
14:09.600 --> 14:15.200
one direction that you change. And so you have this kind of very gradual tiling of the space,
14:15.200 --> 14:19.840
with a lot of sharing between the linear controllers that tile the space. And that was
14:19.840 --> 14:25.280
always my intuition as to why to expect that this might work pretty well. It's essentially
14:25.280 --> 14:30.000
leveraging the fact that linear feedback control is so good. But of course, not enough. And this
14:30.000 --> 14:35.520
is a gradual tiling of the space with linear feedback controls that share a lot of expertise
14:35.520 --> 14:41.120
across them. So that's a really nice intuition. But do you think that scales to the
14:41.120 --> 14:47.040
more and more general problems of when you start going up the number of control dimensions,
14:48.160 --> 14:55.280
when you start going down in terms of how often you get a clean reward signal,
14:55.280 --> 15:00.960
does that intuition carry forward to those crazy or weirder worlds that we think of as the real
15:00.960 --> 15:10.000
world? So I think where things get really tricky in the real world compared to the things we've
15:10.000 --> 15:13.920
looked at so far with great success and reinforcement learning is
15:16.160 --> 15:21.920
the time scales, which get taken to an extreme. So when you think about the real world, I mean,
15:22.800 --> 15:28.560
I don't know, maybe some student decided to do a PhD here, right? Okay, that's a decision,
15:28.560 --> 15:34.000
that's a very high level decision. But if you think about their lives, I mean, any person's life,
15:34.000 --> 15:39.360
it's a sequence of muscle fiber contractions and relaxations. And that's how you interact with
15:39.360 --> 15:44.480
the world. And that's a very high frequency control thing. But it's ultimately what you do
15:44.480 --> 15:49.280
and how you affect the world. Until I guess we have brain readings, you can maybe do it slightly
15:49.280 --> 15:55.120
differently. But typically, that's how you affect the world. And the decision of doing a PhD is
15:55.120 --> 16:00.240
like so abstract relative to what you're actually doing in the world. And I think that's where
16:00.240 --> 16:07.360
credit assignment becomes just completely beyond what any current RL algorithm can do. And we need
16:07.360 --> 16:13.360
hierarchical reasoning at a level that is just not available at all yet. Where do you think we can
16:13.360 --> 16:19.360
pick up hierarchical reasoning by which mechanisms? Yeah, so maybe let me highlight what I think the
16:19.360 --> 16:27.600
limitations are of what already was done 20, 30 years ago. In fact, you'll find reasoning systems
16:27.600 --> 16:33.200
that reason over relatively long horizons. But the problem is that they were not grounded in the real
16:33.200 --> 16:43.040
world. So people would have to hand design some kind of logical, dynamical descriptions of the
16:43.040 --> 16:49.120
world. And that didn't tie into perception. And so that didn't tie into real objects and so forth.
16:49.120 --> 16:57.920
And so that was a big gap. Now with deep learning, we start having the ability to really see with
16:57.920 --> 17:02.800
sensors, process that, and understand what's in the world. And so it's a good time to try to
17:02.800 --> 17:08.080
bring these things together. I see a few ways of getting there. One way to get there would be to say
17:08.080 --> 17:12.160
deep learning can get bolted on somehow to some of these more traditional approaches.
17:12.160 --> 17:16.160
Now bolted on would probably mean you need to do some kind of end to end training,
17:16.160 --> 17:21.840
where you say, my deep learning processing somehow leads to a representation that in turn
17:22.720 --> 17:29.680
uses some kind of traditional underlying dynamical systems that can be used for planning.
17:29.680 --> 17:33.920
And that's, for example, the direction Aviv Tamar and Thanard Kurutach here have been pushing
17:33.920 --> 17:38.800
with Causal InfoGAN. And of course, other people too. That's one way: can we
17:38.800 --> 17:43.520
somehow force it into the form factor that is amenable to reasoning?
17:43.520 --> 17:50.160
Another direction we've been thinking about for a long time and didn't make any progress on
17:50.160 --> 17:56.880
was more information theoretic approaches. So the idea there was that what it means to take
17:56.880 --> 18:03.840
high level action is to choose a latent variable now that tells you a lot about what's
18:03.840 --> 18:08.640
going to be the case in the future, because that's what it means to take a high level action.
18:08.640 --> 18:14.480
I say, okay, I decide I'm going to navigate to the gas station because I need to get
18:14.480 --> 18:18.800
gas for my car. Well, that'll now take five minutes to get there. But the fact that I get
18:18.800 --> 18:23.200
there, I could already tell that from the high level action I took much earlier.
18:24.480 --> 18:30.080
That we had a very hard time getting success with, not saying it's a dead end,
18:30.080 --> 18:34.160
necessarily, but we had a lot of trouble getting that to work. And then we started revisiting
18:34.160 --> 18:39.600
the notion of what are we really trying to achieve? What we're trying to achieve is
18:39.600 --> 18:42.880
not necessarily a hierarchy per se, but you could think about what does hierarchy give us?
18:44.160 --> 18:50.560
What we hope it would give us is better credit assignment. What better credit assignment
18:50.560 --> 18:58.640
gives us is faster learning. And so faster learning is ultimately maybe
18:58.640 --> 19:03.840
what we're after. And so that's where we ended up with the RL squared paper on learning to
19:03.840 --> 19:10.640
reinforcement learn, which at the time Rocky Duan led. And that's exactly the meta learning
19:10.640 --> 19:15.040
approach where we say, okay, we don't know how to design hierarchy. We know what we want to get
19:15.040 --> 19:20.000
from it. Let's just enter and optimize for what we want to get from it and see if it might emerge.
19:20.000 --> 19:24.720
And we saw things emerge. In maze navigation, it had consistent motion down hallways,
19:25.920 --> 19:29.520
which is what you want. A hierarchical controller should say, I want to go down this hallway.
19:29.520 --> 19:33.040
And then when there is an option to take a turn, I can decide whether to take a turn or not and
19:33.040 --> 19:38.480
repeat. It even had the notion of where it had been before, to not revisit places it had
19:38.480 --> 19:45.840
been before. It still didn't scale yet to the real world kind of scenarios I think you had in mind,
19:45.840 --> 19:50.000
but it was some sign of life that maybe you can meta learn these hierarchical concepts.
19:51.040 --> 19:58.000
I mean, it seems like through these meta learning concepts, we get at the, what I think is one of
19:58.000 --> 20:05.120
the hardest and most important problems of AI, which is transfer learning. So it's generalization.
20:06.240 --> 20:12.160
How far along this journey towards building general systems are we being able to do transfer
20:12.160 --> 20:18.320
learning? Well, so there's some signs that you can generalize a little bit. But do you think
20:18.320 --> 20:25.360
we're on the right path or totally different breakthroughs are needed to be able to transfer
20:25.360 --> 20:34.000
knowledge between different learned models? Yeah, I'm pretty torn on this in that I think
20:34.000 --> 20:44.400
there are some very impressive results already, right? I mean, I would say even with the
20:44.400 --> 20:50.160
initial kind of big breakthrough in 2012 with AlexNet, right, the initial thing is,
20:50.160 --> 20:57.600
okay, great. This does better on ImageNet, hence image recognition. But then immediately thereafter,
20:57.600 --> 21:04.080
there was of course the notion that, wow, with what was learned on ImageNet, if you now want to solve
21:04.080 --> 21:11.280
a new task, you can fine tune AlexNet for the new task. And that was often found to be the even
21:11.280 --> 21:15.920
bigger deal that you learn something that was reusable, which was not often the case before
21:15.920 --> 21:19.520
usually in machine learning, you learned something for one scenario, and that was it. And that's
21:19.520 --> 21:23.200
really exciting. I mean, that's just a huge application. That's probably the biggest
21:23.200 --> 21:28.960
success of transfer learning today in terms of scope and impact. That was a huge breakthrough.
21:28.960 --> 21:37.040
And then recently, I feel like, similarly, by scaling things up, it seems like this has been
21:37.040 --> 21:41.440
expanded upon like people training even bigger networks, they might transfer even better. If
21:41.440 --> 21:46.480
you look at, for example, some of the OpenAI results on language models, and also the recent
21:46.480 --> 21:53.600
Google results on language models, they are learned for just prediction. And then they get
21:54.320 --> 21:59.600
reused for other tasks. And so I think there is something there where somehow if you train a
21:59.600 --> 22:05.200
big enough model on enough things, it seems to transfer. Some DeepMind results that I thought
22:05.200 --> 22:12.160
were very impressive, the UNREAL results, where it was learning to navigate mazes in ways where
22:12.160 --> 22:16.880
it wasn't just doing reinforcement learning, but it had other objectives it was optimizing for. So I
22:16.880 --> 22:23.680
think there's a lot of interesting results already. I think maybe where it's hard to wrap my head
22:23.680 --> 22:30.160
around it is, to which extent or when do we call something generalization, right? Or the levels
22:30.160 --> 22:37.360
of generalization involved in these different tasks, right? So you draw this, by the way, just
22:37.360 --> 22:43.280
to frame things. I've heard you say somewhere, it's the difference in learning to master versus
22:43.280 --> 22:49.680
learning to generalize. It's a nice line to think about. And I guess you're saying it's a gray
22:49.680 --> 22:54.640
area of where learning to master ends and learning to generalize starts.
22:54.640 --> 22:58.800
I think I might have heard this. I might have heard it somewhere else. And I think it might have
22:58.800 --> 23:05.120
been one of your interviews, maybe the one with Yoshua Bengio, not 100% sure. But I like the example
23:05.120 --> 23:12.000
and I'm not sure who it was, but the example was essentially if you use current deep
23:12.000 --> 23:20.480
learning techniques, what we're doing to predict, let's say the relative motion of our planets,
23:20.480 --> 23:27.680
it would do pretty well. But now if a massive new mass enters our solar system,
23:28.320 --> 23:32.880
it would probably not predict what will happen, right? And that's a different kind of
23:32.880 --> 23:38.400
generalization. That's a generalization that relies on the ultimate simplest explanation
23:38.400 --> 23:42.640
that we have available today to explain the motion of planets, whereas just pattern recognition
23:42.640 --> 23:48.160
could predict our current solar system motion pretty well. No problem. And so I think that's
23:48.160 --> 23:53.920
an example of a kind of generalization that is a little different from what we've achieved so far.
23:54.480 --> 24:01.360
And it's not clear if just, you know, regularizing more and forcing it to come up with a simpler,
24:01.360 --> 24:05.280
simpler, simpler explanation. Look, this is not simple, but that's what physics researchers do,
24:05.280 --> 24:10.000
right, to say, can I make this even simpler? How simple can I get this? What's the simplest
24:10.000 --> 24:14.560
equation that can explain everything, right? The master equation for the entire dynamics of the
24:14.560 --> 24:20.960
universe. We haven't really pushed that direction as hard in deep learning, I would say. Not sure
24:20.960 --> 24:24.960
if it should be pushed, but it seems a kind of generalization you get from that that you don't
24:24.960 --> 24:30.400
get in our current methods so far. So I just talked to Vladimir Vapnik, for example, who was
24:30.400 --> 24:39.200
a statistician in statistical learning, and he kind of dreams of creating the E equals Mc
24:39.200 --> 24:44.400
squared for learning, right, the general theory of learning. Do you think that's a fruitless pursuit
24:46.480 --> 24:50.560
in the near term, within the next several decades?
24:51.680 --> 24:56.800
I think that's a really interesting pursuit. And in the following sense, in that there is a
24:56.800 --> 25:05.440
lot of evidence that the brain is pretty modular. And so I wouldn't maybe think of it as the theory,
25:05.440 --> 25:12.480
maybe, the underlying theory, but more kind of the principle where there have been findings where
25:14.160 --> 25:20.240
people who are blind will use the part of the brain usually used for vision for other functions.
25:20.240 --> 25:26.800
And even after some kind of, if people get rewired in some way, they might be able to reuse parts of
25:26.800 --> 25:35.040
their brain for other functions. And so what that suggests is some kind of modularity. And I think
25:35.040 --> 25:41.120
it is a pretty natural thing to strive for to see, can we find that modularity? Can we find this
25:41.120 --> 25:45.440
thing? Of course, it's not every part of the brain is not exactly the same. Not everything can be
25:45.440 --> 25:50.080
rewired arbitrarily. But if you think of things like the neocortex, which is a pretty big part of
25:50.080 --> 25:56.880
the brain, that seems fairly modular from the findings so far. Can you design something
25:56.880 --> 26:01.840
equally modular? And if you can just grow it, it becomes more capable, probably. I think that would
26:01.840 --> 26:07.200
be the kind of interesting underlying principle to shoot for that is not unrealistic.
26:07.200 --> 26:14.400
Do you prefer math or empirical trial and error for the discovery of the essence of what
26:14.400 --> 26:19.680
it means to do something intelligent? So reinforcement learning embodies both groups, right?
26:19.680 --> 26:25.760
To prove that something converges, prove the bounds. And then at the same time, a lot of those
26:25.760 --> 26:31.280
successes are, well, let's try this and see if it works. So which do you gravitate towards? How do
26:31.280 --> 26:40.960
you think of those two parts of your brain? So maybe I would prefer we could make the progress
26:41.600 --> 26:46.560
with mathematics. And the reason maybe I would prefer that is because often if you have something you
26:46.560 --> 26:54.080
can mathematically formalize, you can leapfrog a lot of experimentation. And experimentation takes
26:54.080 --> 27:01.440
a long time to get through. And there's a lot of trial and error in the reinforcement learning research
27:01.440 --> 27:05.040
process. But you need to do a lot of trial and error before you get to a success. So if you can
27:05.040 --> 27:10.400
leapfrog that, to my mind, that's what the math is about. And hopefully once you do a bunch of
27:10.400 --> 27:15.600
experiments, you start seeing a pattern, you can do some derivations that leapfrog some experiments.
27:16.240 --> 27:20.160
But I agree with you. I mean, in practice, a lot of the progress has been such that we have not
27:20.160 --> 27:25.840
been able to find the math that allows us to leapfrog ahead. And we are kind of making gradual
27:25.840 --> 27:30.480
progress one step at a time. A new experiment here, a new experiment there that gives us new
27:30.480 --> 27:35.280
insights and gradually building up, but not getting to something yet where we're just, okay,
27:35.280 --> 27:39.920
here's an equation that now explains what, you know, would otherwise have been two years of
27:39.920 --> 27:44.880
experimentation to get there. But this tells us what the result is going to be. Unfortunately,
27:44.880 --> 27:52.800
not so much yet. Not so much yet, but your hope is there. In trying to teach robots
27:52.800 --> 28:01.200
or systems to do everyday tasks, or even in simulation, what do you think you're more excited
28:01.200 --> 28:10.560
about? Imitation learning or self play? So letting robots learn from humans, or letting robots learn on
28:10.560 --> 28:18.240
their own, try to figure it out in their own way, and eventually interact with humans,
28:18.240 --> 28:23.200
or solve whatever the problem is. What's more exciting to you? What do you think is more promising
28:23.200 --> 28:34.240
as a research direction? So when we look at self play, what's so beautiful about it is, it
28:34.240 --> 28:37.680
goes back to kind of the challenges in reinforcement learning. So the challenge
28:37.680 --> 28:43.200
of reinforcement learning is getting signal. And if you never succeed, you don't get any signal.
28:43.200 --> 28:49.040
In self play, you're on both sides. So one of you succeeds. And the beauty is also one of you
28:49.040 --> 28:53.520
fails. And so you see the contrast, you see the one version of me that did better than the other
28:53.520 --> 28:58.400
version. And so every time you play yourself, you get signal. And so whenever you can turn
28:58.400 --> 29:04.160
something into self play, you're in a beautiful situation where you can naturally learn much
29:04.160 --> 29:10.080
more quickly than in most other reinforcement learning environments. So I think, I think if
29:10.080 --> 29:15.760
somehow we can turn more reinforcement learning problems into self play formulations, that would
29:15.760 --> 29:21.760
go really, really far. So far, self play has been largely around games where there are natural
29:21.760 --> 29:25.440
opponents. But if we could do self play for other things, and let's say, I don't know,
29:25.440 --> 29:29.360
a robot learns to build a house, I mean, that's a pretty advanced thing to try to do for a robot,
29:29.360 --> 29:34.240
but maybe it tries to build a hut or something. If that can be done through self play, it would
29:34.240 --> 29:38.560
learn a lot more quickly if somebody can figure it out. And I think that would be something where
29:38.560 --> 29:42.560
it goes closer to kind of the mathematical leapfrogging where somebody figures out a
29:42.560 --> 29:47.680
formalism to say, okay, for any RL problem, by applying this and this idea, you can turn it
29:47.680 --> 29:50.480
into a self play problem where you get signal a lot more easily.
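(A minimal sketch of the self-play loop described above: the current policy plays a frozen copy of itself, so every game yields a win/loss signal. The toy game, parameters, and update rule are purely illustrative assumptions.)

```python
# Sketch: self-play in a made-up symmetric game. The agent plays a frozen copy
# of itself, so one side wins and one loses, and there is a learning signal
# after every single game, with no hand-designed reward shaping.
import copy
import random

class Policy:
    def __init__(self):
        self.skill = 0.0                          # toy scalar parameter

    def act(self):
        return self.skill + random.gauss(0, 0.1)  # noisy "move quality"

    def update(self, outcome):
        # Toy update: move the parameter up after a win, down after a loss.
        self.skill += 0.01 * outcome

def play_match(a, b):
    # Higher move quality wins; +1 if a wins, -1 if b wins, 0 for a tie.
    move_a, move_b = a.act(), b.act()
    return 1 if move_a > move_b else (-1 if move_b > move_a else 0)

policy = Policy()
for _ in range(1000):
    opponent = copy.deepcopy(policy)              # frozen copy of the current self
    policy.update(play_match(policy, opponent))   # signal from every game
```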
29:52.400 --> 29:57.680
The reality is, for many problems, we don't know how to turn them into self play. And so either we need to provide
29:57.680 --> 30:02.640
a detailed reward that doesn't just reward achieving a goal, but rewards making progress,
30:02.640 --> 30:06.480
and that becomes time consuming. And once you're starting to do that, let's say you want a robot
30:06.480 --> 30:09.920
to do something, you need to give all this detailed reward. Well, why not just give a
30:09.920 --> 30:15.920
demonstration? Why not just show the robot? And now the question is, how do you show
30:15.920 --> 30:20.240
the robot? One way to show it is to teleoperate the robot, and then the robot really experiences things.
30:20.800 --> 30:24.480
And that's nice, because that's really high signal to noise ratio data. And we've done a lot
30:24.480 --> 30:29.360
of that. And you teach your robot skills. In just 10 minutes, you can teach your robot a new basic
30:29.360 --> 30:33.360
skill, like, okay, pick up the bottle, place it somewhere else. That's a skill, no matter where
30:33.360 --> 30:38.000
the bottle starts, maybe it always goes on to a target or something. That's fairly easy to teach
30:38.000 --> 30:43.120
your robot with teleop. Now, what's even more interesting, if you can now teach your robot
30:43.120 --> 30:48.480
through third person learning, where the robot watches you do something, and doesn't experience
30:48.480 --> 30:52.880
it, but just watches it and says, okay, well, if you're showing me that, that means I should
30:52.880 --> 30:56.880
be doing this. And I'm not going to be using your hand, because I don't get to control your hand,
30:56.880 --> 31:02.000
but I'm going to use my hand, I do that mapping. And so that's where I think one of the big breakthroughs
31:02.000 --> 31:07.520
has happened this year. This was led by Chelsea Finn here. It's almost like learning a machine
31:07.520 --> 31:12.000
translation for demonstrations where you have a human demonstration and the robot learns to
31:12.000 --> 31:17.440
translate it into what it means for the robot to do it. And that was a meta learning formulation,
31:17.440 --> 31:23.440
learn from one to get the other. And that I think opens up a lot of opportunities to learn a lot
31:23.440 --> 31:28.080
more quickly. So my focus is on autonomous vehicles. Do you think this approach of third
31:28.080 --> 31:33.040
person watching, is autonomous driving amenable to this kind of approach?
31:33.840 --> 31:42.080
So for autonomous driving, I would say third person is slightly easier. And the reason I'm
31:42.080 --> 31:48.320
going to say it's slightly easier to do with third person is because the car dynamics are very well
31:48.320 --> 31:56.560
understood. So easier than first person, you mean, or easier than...? So I think the distinction
31:56.560 --> 32:01.680
between third person and first person is not a very important distinction for autonomous driving.
32:01.680 --> 32:07.760
They're very similar. Because the distinction is really about who turns the steering wheel.
32:07.760 --> 32:15.280
And or maybe let me put it differently. How to get from a point where you are now to a point,
32:15.280 --> 32:19.120
let's say a couple of meters in front of you. And that's a problem that's very well understood.
32:19.120 --> 32:22.480
And that's the only distinction between third and first person there. Whereas with the robot
32:22.480 --> 32:26.720
manipulation, interaction forces are very complex. And it's still a very different thing.
32:27.840 --> 32:33.840
For autonomous driving, I think there's still the question of imitation versus RL.
32:33.840 --> 32:39.520
Well, so imitation gives you a lot more signal. I think where imitation is lacking and needs
32:39.520 --> 32:47.600
some extra machinery is that it, in its normal format, doesn't think about goals or objectives.
32:48.480 --> 32:52.240
And of course, there are versions of imitation learning, inverse reinforcement learning type
32:52.240 --> 32:57.440
imitation, which also thinks about goals. I think then we're getting much closer. But I think it's
32:57.440 --> 33:05.120
very hard to think of a fully reactive car generalizing well, if it really doesn't have a notion
33:05.120 --> 33:10.720
of objectives, to generalize well in the way that you would want. You want more than
33:10.720 --> 33:15.200
just that reactivity that you get from just behavioral cloning slash supervised learning.
33:17.040 --> 33:22.560
So a lot of the work, whether it's self play or even imitation learning would benefit
33:22.560 --> 33:27.440
significantly from simulation, from effective simulation, and you're doing a lot of stuff
33:27.440 --> 33:32.400
in the physical world and in simulation, do you have hope for greater and greater
33:33.520 --> 33:40.160
power of simulation being boundless, eventually, to where most of what we need
33:40.160 --> 33:45.600
to operate in the physical world could be simulated to a degree that's directly
33:45.600 --> 33:54.720
transferable to the physical world? Are we still very far away from that? So I think
33:55.840 --> 34:03.200
we could even rephrase that question in some sense, please. And so the power of simulation,
34:04.720 --> 34:09.760
as simulators get better and better, of course, becomes stronger, and we can learn more in
34:09.760 --> 34:13.760
simulation. But there's also another version, which is where you say the simulator doesn't
34:13.760 --> 34:19.120
even have to be that precise. As long as it's somewhat representative. And instead of trying
34:19.120 --> 34:24.480
to get one simulator that is sufficiently precise to learn and transfer really well to the real
34:24.480 --> 34:29.200
world, I'm going to build many simulators, an ensemble of simulators, where
34:30.080 --> 34:35.120
not any single one of them is sufficiently representative of the real world such that
34:35.120 --> 34:41.760
it would work if you train in there. But if you train in all of them, then there is something
34:41.760 --> 34:47.840
that's good in all of them. The real world will just be, you know, another one of them. That's,
34:47.840 --> 34:50.720
you know, not identical to any one of them, but just another one of them.
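(A minimal sketch of the ensemble-of-simulators idea just described, often called domain randomization: train one policy across many randomly perturbed simulators so the real world looks like just another sample. The parameter ranges and the placeholder training step are illustrative assumptions, not from any specific system.)

```python
# Sketch: train one policy across many randomly perturbed simulators so that
# the real world is, with luck, "just another one of them".
import random

def make_randomized_simulator():
    # Each "simulator" is just a dict of randomly perturbed physical parameters.
    return {
        "mass": random.uniform(0.8, 1.2),
        "friction": random.uniform(0.5, 1.5),
        "sensor_noise": random.uniform(0.0, 0.05),
    }

def train_step(policy_params, simulator):
    # Placeholder for one RL update inside this particular simulator; a real
    # implementation would roll out the policy under these dynamics and apply,
    # e.g., a policy gradient step.
    return policy_params

policy_params = {}
for _ in range(10000):
    sim = make_randomized_simulator()     # sample a fresh simulator every time
    policy_params = train_step(policy_params, sim)
```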
34:50.720 --> 34:53.120
Now, this is a sample from the distribution of simulators.
34:53.120 --> 34:53.360
Exactly.
34:53.360 --> 34:57.600
We do live in a simulation. So this is just one, one other one.
34:57.600 --> 35:03.440
I'm not sure about that. But yeah, it's definitely a very advanced simulator if it is.
35:03.440 --> 35:08.960
Yeah, it's a pretty good one. I've talked to Russell. It's something you think about a little bit
35:08.960 --> 35:13.120
too. Of course, you're like really trying to build these systems. But do you think about the future
35:13.120 --> 35:18.880
of AI? A lot of people have concern about safety. How do you think about AI safety as you build
35:18.880 --> 35:24.960
robots that are operating the physical world? What is, yeah, how do you approach this problem
35:24.960 --> 35:27.440
in an engineering kind of way in a systematic way?
35:29.200 --> 35:36.720
So when a robot is doing things, you kind of have a few notions of safety to worry about. One is that
35:36.720 --> 35:43.760
the robot is physically strong and of course could do a lot of damage. Same for cars, which we can
35:43.760 --> 35:49.360
think of as robots too in some way. And this could be completely unintentional. So it could be not
35:49.360 --> 35:54.240
the kind of long term AI safety concerns that, okay, AI is smarter than us. And now what do we do?
35:54.240 --> 35:57.760
But it could be just very practical. Okay, this robot, if it makes a mistake,
35:58.800 --> 36:04.080
what are the results going to be? Of course, simulation comes in a lot there to test in simulation.
36:04.080 --> 36:10.960
It's a difficult question. And I'm always wondering, like I always wonder, let's say you look at,
36:10.960 --> 36:14.000
let's go back to driving, because a lot of people know driving well, of course.
36:15.120 --> 36:20.800
What do we do to test somebody for driving, right, to get a driver's license? What do they
36:20.800 --> 36:27.680
really do? I mean, you fill out some tests, and then you drive. And, I mean, for a few minutes
36:27.680 --> 36:34.800
in suburban California, that driving test is just: you drive around the block, pull over, you
36:34.800 --> 36:39.280
do a stop sign successfully, and then, you know, you pull over again, and you're pretty much done.
36:40.000 --> 36:46.720
And you're like, okay, if a self driving car did that, would you trust it that it can drive?
36:46.720 --> 36:49.840
And I'd be like, no, that's not enough for me to trust it. But somehow for humans,
36:50.560 --> 36:54.480
we've figured out that somebody being able to do that is representative
36:54.480 --> 36:59.840
of them being able to do a lot of other things. And so I think somehow for humans,
36:59.840 --> 37:05.200
we figured out representative tests of what it means if you can do this, what you can really do.
37:05.760 --> 37:09.840
Of course, testing humans, humans don't want to be tested at all times. Self driving cars or
37:09.840 --> 37:13.760
robots could be tested more often probably, you can have replicas that get tested and are known
37:13.760 --> 37:19.600
to be identical because they use the same neural net and so forth. But still, I feel like we don't
37:19.600 --> 37:25.040
have these kinds of unit tests or proper tests for robots. And I think there's something very
37:25.040 --> 37:29.440
interesting to be thought about there, especially as you update things, your software improves,
37:29.440 --> 37:34.640
you have a better self driving car suite, you update it. How do you know it's indeed more
37:34.640 --> 37:41.440
capable on everything than what you had before that you didn't have any bad things creep into it?
37:41.440 --> 37:45.680
So I think that's a very interesting direction of research that there is no real solution yet,
37:45.680 --> 37:50.640
except that somehow for humans, we do because we say, okay, you have a driving test, you passed,
37:50.640 --> 37:55.760
you can go on the road now, and humans have accidents every, like, a million or 10 million miles,
37:55.760 --> 38:01.520
something pretty phenomenal compared to that short test that is being done.
38:01.520 --> 38:06.000
So let me ask, you've mentioned that Andrew Ng, by example,
38:06.000 --> 38:11.440
showed you the value of kindness. And do you think the space of
38:11.440 --> 38:20.240
policies, good policies for humans and for AI, is populated by policies that
38:21.440 --> 38:28.880
come with kindness, or ones that are the opposite: exploitation, even evil. So if you just look
38:28.880 --> 38:34.400
at the sea of policies we operate under as human beings, or if AI system had to operate in this
38:34.400 --> 38:39.440
real world, do you think it's really easy to find policies that are full of kindness,
38:39.440 --> 38:44.480
like we naturally fall into them? Or is it like a very hard optimization problem?
38:47.920 --> 38:52.720
I mean, there is kind of two optimizations happening for humans, right? So for humans,
38:52.720 --> 38:57.440
there's kind of the very long term optimization, which evolution has done for us. And we're kind of
38:57.440 --> 39:02.640
predisposed to like certain things. And that's in some sense, what makes our learning easier,
39:02.640 --> 39:10.000
because I mean, we know things like pain and hunger and thirst. And the fact that we know about those
39:10.000 --> 39:13.840
is not something that we were taught. That's kind of innate. When we're hungry, we're unhappy.
39:13.840 --> 39:20.720
When we're thirsty, we're unhappy. When we have pain, we're unhappy. And ultimately evolution
39:20.720 --> 39:25.040
built that into us to think about those things. And so I think there is a notion that it seems
39:25.040 --> 39:33.840
somehow humans evolved in general to prefer to get along in some ways. But at the same time,
39:33.840 --> 39:43.040
also to be very territorial and kind of centric to their own tribe. It seems like that's the kind
39:43.040 --> 39:47.360
of space we converged on to. I mean, I'm not an expert in anthropology, but it seems like we're
39:47.360 --> 39:54.480
very kind of good within our own tribe, but need to be taught to be nice to other tribes.
39:54.480 --> 39:58.000
Well, if you look at Steven Pinker, he highlights this pretty nicely in
40:00.720 --> 40:05.520
The Better Angels of Our Nature, where he talks about violence decreasing over time consistently.
40:05.520 --> 40:11.360
So whatever tension, whatever teams we pick, it seems that the long arc of history goes
40:11.360 --> 40:17.840
towards us getting along more and more. So do you think that
40:17.840 --> 40:27.280
do you think it's possible to teach RL based robots this kind of kindness, this kind of ability
40:27.280 --> 40:33.040
to interact with humans, this kind of policy? Even, let me ask, let me ask a fun one: do you think
40:33.040 --> 40:38.800
it's possible to teach an RL based robot to love a human being and to inspire that human to love
40:38.800 --> 40:48.080
the robot back? So, like, an RL based algorithm that leads to a happy marriage? That's an interesting
40:48.080 --> 40:56.080
question. Maybe I'll answer it with another question, right? Because I mean, but I'll come
40:56.080 --> 41:02.000
back to it. So another question you can have is okay. I mean, how close does some people's
41:02.000 --> 41:09.760
happiness get from interacting with just a really nice dog? Like, I mean, dogs, you come home,
41:09.760 --> 41:14.000
that's what dogs do. They greet you. They're excited. It makes you happy when you come home
41:14.000 --> 41:17.600
to your dog. You're just like, okay, this is exciting. They're always happy when I'm here.
41:18.160 --> 41:22.560
I mean, if they don't greet you, because maybe whatever, your partner took them on a trip or
41:22.560 --> 41:27.600
something, you might not be nearly as happy when you get home, right? And so the kind of,
41:27.600 --> 41:33.600
it seems like the level of reasoning a dog has is pretty sophisticated, but then it's still not yet
41:33.600 --> 41:38.240
at the level of human reasoning. And so it seems like we don't even need to achieve human level
41:38.240 --> 41:44.320
reasoning to get like very strong affection with humans. And so my thinking is, why not, right?
41:44.320 --> 41:51.360
Why, with an AI, couldn't we achieve the kind of level of affection that humans feel
41:51.360 --> 41:59.280
among each other or with friendly animals and so forth? So question, is it a good thing for us
41:59.280 --> 42:07.040
or not? That's another thing, right? Because I mean, but I don't see why not. Why not? Yeah.
42:07.040 --> 42:12.640
So Elon Musk says love is the answer. Maybe he should say love is the objective function and
42:12.640 --> 42:19.280
then RL is the answer, right? Well, maybe. Oh, Pieter, thank you so much. I don't want to take
42:19.280 --> 42:23.360
up more of your time. Thank you so much for talking today. Well, thanks for coming by.
42:23.360 --> 42:53.200
Great to have you visit.