|
WEBVTT |
|
|
|
00:00.000 --> 00:04.320 |
|
What difference between biological neural networks and artificial neural networks |
|
|
|
00:04.320 --> 00:07.680 |
|
is most mysterious, captivating and profound for you? |
|
|
|
00:11.120 --> 00:15.280 |
|
First of all, there's so much we don't know about biological neural networks, |
|
|
|
00:15.280 --> 00:21.840 |
|
and that's very mysterious and captivating because maybe it holds the key to improving |
|
|
|
00:21.840 --> 00:29.840 |
|
artificial neural networks. One of the things I studied recently is something that |
|
|
|
00:29.840 --> 00:36.160 |
|
we don't know how biological neural networks do, but would be really useful for artificial ones, |
|
|
|
00:37.120 --> 00:43.440 |
|
is the ability to do credit assignment through very long time spans. |
|
|
|
00:44.080 --> 00:49.680 |
|
There are things that we can in principle do with artificial neural nets, but it's not very |
|
|
|
00:49.680 --> 00:55.920 |
|
convenient and it's not biologically plausible. And this mismatch, I think this kind of mismatch, |
|
|
|
00:55.920 --> 01:03.600 |
|
may be an interesting thing to study, to A, understand better how brains might do these |
|
|
|
01:03.600 --> 01:08.720 |
|
things because we don't have good corresponding theories with artificial neural nets, and B, |
|
|
|
01:10.240 --> 01:19.040 |
|
maybe provide new ideas that we could explore about things that brains do differently and |
|
|
|
01:19.040 --> 01:22.160 |
|
that we could incorporate in artificial neural nets. |
|
|
|
01:22.160 --> 01:27.680 |
|
So let's break credit assignment up a little bit. It's a beautifully technical term, |
|
|
|
01:27.680 --> 01:34.560 |
|
but it could incorporate so many things. So is it more on the RNN memory side, |
|
|
|
01:35.840 --> 01:39.760 |
|
thinking like that, or is it something about knowledge, building up common sense knowledge |
|
|
|
01:39.760 --> 01:46.560 |
|
over time, or is it more in the reinforcement learning sense that you're picking up rewards |
|
|
|
01:46.560 --> 01:50.080 |
|
over time to achieve a certain kind of goal? |
|
|
|
01:50.080 --> 01:58.080 |
|
So I was thinking more about the first two meanings whereby we store all kinds of memories, |
|
|
|
01:59.120 --> 02:09.680 |
|
episodic memories in our brain, which we can access later in order to help us both infer |
|
|
|
02:10.560 --> 02:19.520 |
|
causes of things that we are observing now and assign credit to decisions or interpretations |
|
|
|
02:19.520 --> 02:26.960 |
|
we came up with a while ago when those memories were stored. And then we can change the way we |
|
|
|
02:26.960 --> 02:34.800 |
|
would have reacted or interpreted things in the past, and now that's credit assignment used for learning. |
|
|
|
02:36.320 --> 02:43.760 |
|
So in which way do you think artificial neural networks, the current LSTM, |
|
|
|
02:43.760 --> 02:52.240 |
|
the current architectures are not able to capture that? Presumably you're thinking of the very long term? |
|
|
|
02:52.240 --> 03:00.720 |
|
Yes. So the current nets are doing a fairly good job for sequences with dozens or, say, |
|
|
|
03:00.720 --> 03:06.560 |
|
hundreds of time steps. And then it gets sort of harder and harder, depending on what you |
|
|
|
03:06.560 --> 03:13.120 |
|
have to remember and so on as you consider longer durations. Whereas humans seem to be able to |
|
|
|
03:13.120 --> 03:18.080 |
|
do credit assignment through essentially arbitrary times like I could remember something I did last |
|
|
|
03:18.080 --> 03:23.360 |
|
year. And then now because I see some new evidence, I'm going to change my mind about |
|
|
|
03:23.360 --> 03:29.040 |
|
the way I was thinking last year, and hopefully not make the same mistake again. |
|
|
|
03:31.040 --> 03:36.800 |
|
I think a big part of that is probably forgetting. You're only remembering the really important |
|
|
|
03:36.800 --> 03:43.680 |
|
things. That's very efficient forgetting. Yes. So there's a selection of what we remember. |
|
|
|
03:43.680 --> 03:49.120 |
|
And I think there are really cool connections to higher-level cognition here regarding |
|
|
|
03:49.120 --> 03:55.760 |
|
consciousness, deciding and emotions. So deciding what comes to consciousness and what gets stored |
|
|
|
03:55.760 --> 04:04.800 |
|
in memory, which are not trivial either. So you've been at the forefront there all along |
|
|
|
04:04.800 --> 04:10.800 |
|
showing some of the amazing things that neural networks, deep neural networks can do in the |
|
|
|
04:10.800 --> 04:16.560 |
|
field of artificial intelligence, just broadly in all kinds of applications. But we can talk |
|
|
|
04:16.560 --> 04:23.200 |
|
about that forever. But, because we're thinking towards the future, what in your view is the weakest |
|
|
|
04:23.200 --> 04:29.120 |
|
aspect of the way deep neural networks represent the world? What is that? What in your view |
|
|
|
04:29.120 --> 04:41.200 |
|
is missing? So current state-of-the-art neural nets trained on large quantities of images or text |
|
|
|
04:43.840 --> 04:49.760 |
|
have some level of understanding of what explains those data sets, but it's very |
|
|
|
04:49.760 --> 05:01.440 |
|
basic. It's very low level. And it's not nearly as robust and abstract and general as our understanding. |
|
|
|
05:02.960 --> 05:09.760 |
|
Okay, so that doesn't tell us how to fix things. But I think it encourages us to think about |
|
|
|
05:09.760 --> 05:21.200 |
|
how we can maybe train our neural nets differently, so that they would focus, for example, on causal |
|
|
|
05:21.200 --> 05:30.000 |
|
explanations, something that we don't do currently with neural net training. Also, one thing I'll |
|
|
|
05:30.000 --> 05:37.920 |
|
talk about in my talk this afternoon is instead of learning separately from images and videos on |
|
|
|
05:37.920 --> 05:45.600 |
|
one hand and from texts on the other hand, we need to do a better job of jointly learning about |
|
|
|
05:45.600 --> 05:54.320 |
|
language and about the world to which it refers. So that, you know, both sides can help each other. |
|
|
|
05:54.880 --> 06:02.480 |
|
We need to have good world models in our neural nets for them to really understand sentences |
|
|
|
06:02.480 --> 06:10.000 |
|
which talk about what's going on in the world. And I think we need language input to help |
|
|
|
06:10.640 --> 06:17.760 |
|
provide clues about what high level concepts like semantic concepts should be represented |
|
|
|
06:17.760 --> 06:26.400 |
|
at the top levels of these neural nets. In fact, there is evidence that the purely unsupervised |
|
|
|
06:26.400 --> 06:33.840 |
|
learning of representations doesn't give rise to high level representations that are as powerful |
|
|
|
06:33.840 --> 06:40.320 |
|
as the ones we're getting from supervised learning. And so the clues we're getting just with the labels, |
|
|
|
06:40.320 --> 06:46.960 |
|
not even sentences, are already very powerful. Do you think that's an architecture challenge |
|
|
|
06:46.960 --> 06:55.920 |
|
or is it a data set challenge? Neither. I'm tempted to just end it there. |
|
|
|
07:02.960 --> 07:06.800 |
|
Of course, data sets and architectures are something you want to always play with. But |
|
|
|
07:06.800 --> 07:13.040 |
|
I think the crucial thing is more the training objectives, the training frameworks. For example, |
|
|
|
07:13.040 --> 07:20.240 |
|
going from passive observation of data to more active agents, which |
|
|
|
07:22.320 --> 07:27.280 |
|
learn by intervening in the world, the relationships between causes and effects, |
|
|
|
07:28.480 --> 07:36.240 |
|
the sort of objective functions which could be important to allow the highest level |
|
|
|
07:36.240 --> 07:44.000 |
|
of explanations to rise from the learning, which I don't think we have now. The kinds of |
|
|
|
07:44.000 --> 07:50.320 |
|
objective functions which could be used to reward exploration, the right kind of exploration. So |
|
|
|
07:50.320 --> 07:56.160 |
|
these kinds of questions are neither in the data set nor in the architecture, but more in |
|
|
|
07:56.800 --> 08:03.920 |
|
how we learn, under what objectives, and so on. Yeah. I've heard you mention in several |
|
|
|
08:03.920 --> 08:08.080 |
|
contexts the idea of the way children learn: they interact with objects in the world. |
|
|
|
08:08.080 --> 08:15.040 |
|
And it seems fascinating because in some sense, except with some cases in reinforcement learning, |
|
|
|
08:15.760 --> 08:23.600 |
|
that idea is not part of the learning process in artificial neural networks. It's almost like |
|
|
|
08:24.320 --> 08:33.120 |
|
do you envision something like an objective function saying, you know what, if you poke this |
|
|
|
08:33.120 --> 08:38.800 |
|
object in this kind of way, it would be really helpful for me to further learn. |
|
|
|
08:39.920 --> 08:44.880 |
|
Sort of almost guiding some aspect of learning. Right, right, right. So I was talking to Rebecca |
|
|
|
08:44.880 --> 08:54.240 |
|
Saxe just an hour ago, and she was talking about lots and lots of evidence that infants seem to |
|
|
|
08:54.240 --> 09:04.880 |
|
clearly pick what interests them in a directed way. And so they're not passive learners. |
|
|
|
09:04.880 --> 09:11.680 |
|
They focus their attention on aspects of the world which are most interesting, |
|
|
|
09:11.680 --> 09:17.760 |
|
surprising in a non-trivial way, that make them change their theories of the world. |
|
|
|
09:17.760 --> 09:29.120 |
|
So that's a fascinating view of future progress. But on a maybe more boring question, |
|
|
|
09:30.000 --> 09:37.440 |
|
do you think going deeper and larger, just increasing the size of the things |
|
|
|
09:37.440 --> 09:43.520 |
|
that have been increasing a lot in the past few years, will also make significant progress? |
|
|
|
09:43.520 --> 09:49.760 |
|
So some of the representational issues that you mentioned, they're kind of shallow |
|
|
|
09:50.560 --> 09:54.880 |
|
in some sense. Oh, you mean in the sense of abstraction? |
|
|
|
09:54.880 --> 09:59.040 |
|
In the sense of abstraction, they're not getting... I don't think that having |
|
|
|
10:00.400 --> 10:05.520 |
|
more depth in the network, in the sense of instead of 100 layers we have 10,000, is going to solve |
|
|
|
10:05.520 --> 10:13.120 |
|
our problem. You don't think so? Is that obvious to you? Yes. What is clear to me is that |
|
|
|
10:13.120 --> 10:21.600 |
|
engineers and companies and labs, grad students will continue to tune architectures and explore |
|
|
|
10:21.600 --> 10:27.520 |
|
all kinds of tweaks to make the current state of the art ever slightly better. But |
|
|
|
10:27.520 --> 10:31.840 |
|
I don't think that's going to be nearly enough. I think we need some fairly drastic changes in |
|
|
|
10:31.840 --> 10:39.680 |
|
the way that we're considering learning to achieve the goal that these learners actually |
|
|
|
10:39.680 --> 10:45.680 |
|
understand in a deep way the environment in which they are, you know, observing and acting. |
|
|
|
10:46.480 --> 10:51.920 |
|
But I guess I was trying to ask a question that's more interesting than just more layers |
|
|
|
10:53.040 --> 11:00.800 |
|
which is basically: once you figure out a way to learn through interacting, how many parameters does |
|
|
|
11:00.800 --> 11:07.760 |
|
it take to store that information? I think our brain is quite a bit bigger than most neural networks. |
|
|
|
11:07.760 --> 11:13.120 |
|
Right, right. Oh, I see what you mean. Oh, I'm with you there. So I agree that in order to |
|
|
|
11:14.240 --> 11:19.760 |
|
build neural nets with the kind of broad knowledge of the world that typical adult humans have, |
|
|
|
11:20.960 --> 11:24.880 |
|
probably the kind of computing power we have now is going to be insufficient. |
|
|
|
11:25.600 --> 11:30.320 |
|
So the good news is there are hardware companies building neural net chips. And so |
|
|
|
11:30.320 --> 11:39.280 |
|
it's going to get better. However, the good news, which in a way is also bad news, is that even |
|
|
|
11:39.280 --> 11:47.840 |
|
our state of the art deep learning methods fail to learn models that understand even very simple |
|
|
|
11:47.840 --> 11:53.680 |
|
environments like some grid worlds that we have built. Even these fairly simple environments, |
|
|
|
11:53.680 --> 11:57.120 |
|
I mean, of course, if you train them with enough examples, eventually they get it, |
|
|
|
11:57.120 --> 12:05.200 |
|
but it's just that, instead of the dozens of examples humans might need, these things will |
|
|
|
12:05.200 --> 12:12.720 |
|
need millions, right, for very, very, very simple tasks. And so I think there's an opportunity |
|
|
|
12:13.520 --> 12:18.080 |
|
for academics who don't have the kind of computing power that say Google has |
|
|
|
12:19.280 --> 12:25.360 |
|
to do really important and exciting research to advance the state of the art in training |
|
|
|
12:25.360 --> 12:32.720 |
|
frameworks, learning models, agent learning in even simple environments that are synthetic, |
|
|
|
12:33.440 --> 12:37.200 |
|
that seem trivial, but that current machine learning still fails on. |
|
|
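A minimal sketch, purely illustrative, of the kind of synthetic grid world referred to here (not any specific benchmark from Bengio's group): a tiny environment plus a random policy, just to make concrete what "a very simple environment" and the number of interactions mean. The layout, reward, and agent are all assumptions made up for the example.

```python
import random

class GridWorld:
    """A 5x5 grid: the agent starts at (0, 0) and must reach the goal at (4, 4)."""
    def __init__(self, size=5):
        self.size = size
        self.goal = (size - 1, size - 1)
        self.reset()

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        # actions: 0 = up, 1 = down, 2 = left, 3 = right
        dx, dy = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        done = self.pos == self.goal
        return self.pos, (1.0 if done else 0.0), done

# A purely random policy: count how many interactions it takes to stumble on the
# goal once, as a crude illustration of how sample-hungry learning from scratch is.
random.seed(0)
env = GridWorld()
state, steps, done = env.reset(), 0, False
while not done:
    state, reward, done = env.step(random.randrange(4))
    steps += 1
print(f"Random policy reached the goal after {steps} steps.")
```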
|
12:38.240 --> 12:48.240 |
|
We talked about priors and common sense knowledge. It seems like we humans take a lot of knowledge |
|
|
|
12:48.240 --> 12:57.040 |
|
for granted. So what's your view of these priors, of forming this broad view of the world, this |
|
|
|
12:57.040 --> 13:02.560 |
|
accumulation of information, and how can we teach neural networks or learning systems to pick that |
|
|
|
13:02.560 --> 13:10.880 |
|
knowledge up? So knowledge, you know... for a while in artificial intelligence, maybe in the 80s, |
|
|
|
13:10.880 --> 13:16.880 |
|
there was a time when knowledge representation, knowledge acquisition, expert systems, I mean, |
|
|
|
13:16.880 --> 13:24.080 |
|
the symbolic AI view, was an interesting problem set to solve. And it was kind |
|
|
|
13:24.080 --> 13:29.440 |
|
of put on hold a little bit, it seems like, because it doesn't work. It doesn't work, that's right. |
|
|
|
13:29.440 --> 13:37.840 |
|
That's right. But the goals of that remain important. Yes, they remain important. And how do you |
|
|
|
13:37.840 --> 13:45.920 |
|
think those goals can be addressed? Right. So first of all, I believe that one reason why the |
|
|
|
13:45.920 --> 13:52.560 |
|
classical expert systems approach failed is because a lot of the knowledge we have, so you talked |
|
|
|
13:52.560 --> 14:01.760 |
|
about common sense intuition, there's a lot of knowledge like this which is not consciously |
|
|
|
14:01.760 --> 14:06.320 |
|
accessible. There are lots of decisions we're taking that we can't really explain, even if |
|
|
|
14:06.320 --> 14:16.160 |
|
sometimes we make up a story. And that knowledge is also necessary for machines to take good |
|
|
|
14:16.160 --> 14:22.320 |
|
decisions. And that knowledge is hard to codify in expert systems, rule based systems, and, you |
|
|
|
14:22.320 --> 14:27.920 |
|
know, classical AI formalisms. And there are other issues, of course, with the old AI, like |
|
|
|
14:29.680 --> 14:34.320 |
|
not really good ways of handling uncertainty. And, I would say, something more subtle, |
|
|
|
14:34.320 --> 14:40.480 |
|
which we understand better now but I think still isn't appreciated enough in the minds of people. |
|
|
|
14:41.360 --> 14:48.480 |
|
There's something really powerful that comes from distributed representations, the thing that really |
|
|
|
14:49.120 --> 14:58.480 |
|
makes neural nets work so well. And it's hard to replicate that kind of power in a symbolic world. |
|
|
|
14:58.480 --> 15:05.200 |
|
The knowledge in expert systems and so on is nicely decomposed into like a bunch of rules. |
|
|
|
15:05.760 --> 15:11.280 |
|
Whereas if you think about a neural net, it's the opposite. You have this big blob of parameters |
|
|
|
15:11.280 --> 15:16.480 |
|
which work intensely together to represent everything the network knows. And it's not |
|
|
|
15:16.480 --> 15:22.880 |
|
sufficiently factorized. And so I think this is one of the weaknesses of current neural nets, |
|
|
|
15:22.880 --> 15:30.080 |
|
that we have to take lessons from classical AI in order to bring in another kind of |
|
|
|
15:30.080 --> 15:35.920 |
|
compositionality, which is common in language, for example, and in these rules. But that isn't |
|
|
|
15:35.920 --> 15:45.040 |
|
so native to neural nets. And on that line of thinking, disentangled representations. Yes. So |
|
|
|
15:46.320 --> 15:51.680 |
|
let me connect with disentangled representations, if you don't mind. Yes, exactly. |
|
|
|
15:51.680 --> 15:58.080 |
|
Yeah. So for many years, I thought, and I still believe that it's really important that we come |
|
|
|
15:58.080 --> 16:04.080 |
|
up with learning algorithms, whether unsupervised or supervised or reinforcement, whatever, |
|
|
|
16:04.720 --> 16:11.600 |
|
that build representations in which the important factors, hopefully causal factors are nicely |
|
|
|
16:11.600 --> 16:16.240 |
|
separated and easy to pick up from the representation. So that's the idea of disentangled |
|
|
|
16:16.240 --> 16:22.560 |
|
representations. It says: transform the data into a space where everything becomes easy; we can maybe |
|
|
|
16:22.560 --> 16:29.360 |
|
just learn with linear models about the things we care about. And I still think this is important, |
|
|
|
16:29.360 --> 16:36.880 |
|
but I think this is missing out on a very important ingredient, which classical AI systems can remind |
|
|
|
16:36.880 --> 16:41.920 |
|
us of. So let's say we have these disentangled representations, you still need to learn about |
|
|
|
16:41.920 --> 16:47.120 |
|
the relationships between the variables, those high-level semantic variables; they're not |
|
|
|
16:47.120 --> 16:52.000 |
|
going to be independent. I mean, this is like too much of an assumption. They're going to have some |
|
|
|
16:52.000 --> 16:56.400 |
|
interesting relationships that allow us to predict things in the future and to explain what happened in |
|
|
|
16:56.400 --> 17:01.840 |
|
the past. The kind of knowledge about those relationships in a classical AI system is |
|
|
|
17:01.840 --> 17:06.640 |
|
encoded in the rules, like a rule is just like a little piece of knowledge that says, oh, I have |
|
|
|
17:06.640 --> 17:12.160 |
|
these two, three, four variables that are linked in this interesting way. Then I can say something |
|
|
|
17:12.160 --> 17:17.280 |
|
about one or two of them given a couple of others, right? In addition to disentangling |
|
|
|
17:18.880 --> 17:23.520 |
|
the elements of the representation, which are like the variables in a rule based system, |
|
|
|
17:24.080 --> 17:33.200 |
|
you also need to disentangle the mechanisms that relate those variables to each other. |
|
|
|
17:33.200 --> 17:37.760 |
|
So, like the rules. If the rules are neatly separated, like each rule is, you know, living |
|
|
|
17:37.760 --> 17:44.960 |
|
on its own, then when I change a rule because I'm learning, it doesn't need to break other rules. |
|
|
|
17:44.960 --> 17:49.280 |
|
Whereas current neural nets, for example, are very sensitive to what's called catastrophic |
|
|
|
17:49.280 --> 17:54.800 |
|
forgetting, where after they've learned some things and then they learn new things, they can destroy |
|
|
|
17:54.800 --> 18:00.480 |
|
the old things that they had learned, right? If the knowledge was better factorized and |
|
|
|
18:00.480 --> 18:08.240 |
|
separated, disentangled, then you would avoid a lot of that. Now, you can't do this in the |
|
|
|
18:08.880 --> 18:17.200 |
|
sensory domain, in, like, a pixel space. But my idea is that when you project the |
|
|
|
18:17.200 --> 18:22.560 |
|
data into the right semantic space, it becomes possible to represent this extra knowledge |
|
|
|
18:23.440 --> 18:27.760 |
|
beyond the transformation from input to representations, which is how representations |
|
|
|
18:27.760 --> 18:33.120 |
|
act on each other and predict the future and so on, in a way that can be neatly |
|
|
|
18:34.560 --> 18:38.560 |
|
disentangled. So now it's the rules that are disentangled from each other and not just the |
|
|
|
18:38.560 --> 18:43.680 |
|
variables that are disentangled from each other. And you draw a distinction between semantic space |
|
|
|
18:43.680 --> 18:48.400 |
|
and pixel space; does there need to be an architectural difference? Well, yeah. So |
|
|
|
18:48.400 --> 18:51.840 |
|
there's the sensory space, like pixels, where everything is entangled, |
|
|
|
18:51.840 --> 18:58.000 |
|
and the information, like the variables are completely interdependent in very complicated |
|
|
|
18:58.000 --> 19:03.760 |
|
ways. And also computation: it's not just the variables, it's also how they are |
|
|
|
19:03.760 --> 19:10.240 |
|
related to each other; it's all intertwined. But I'm hypothesizing that in the right |
|
|
|
19:10.240 --> 19:16.800 |
|
high level representation space, both the variables and how they relate to each other |
|
|
|
19:16.800 --> 19:22.960 |
|
can be disentangled and that will provide a lot of generalization power. Generalization power. |
|
|
|
19:22.960 --> 19:29.760 |
|
Yes. The distribution of the test set is assumed to be the same as the distribution of the training |
|
|
|
19:29.760 --> 19:36.640 |
|
set. Right. This is where current machine learning is too weak. It doesn't tell us anything, |
|
|
|
19:36.640 --> 19:41.120 |
|
it's not able to tell us anything, about how our neural nets, say, are going to generalize to a |
|
|
|
19:41.120 --> 19:46.160 |
|
new distribution. And, you know, people may think, well, but there's nothing we can say if |
|
|
|
19:46.160 --> 19:51.840 |
|
we don't know what the new distribution will be. The truth is, humans are able to generalize to |
|
|
|
19:51.840 --> 19:56.560 |
|
new distributions. Yeah, how are we able to do that? So yeah, because there is something, these |
|
|
|
19:56.560 --> 20:00.720 |
|
new distributions, even though they could look very different from the training distributions, |
|
|
|
20:01.520 --> 20:05.360 |
|
they have things in common. So let me give you a concrete example. You read a science fiction |
|
|
|
20:05.360 --> 20:12.560 |
|
novel; the science fiction novel maybe, you know, brings you to some other planet where |
|
|
|
20:12.560 --> 20:17.760 |
|
things look very different on the surface, but it's still the same laws of physics. |
|
|
|
20:18.560 --> 20:21.440 |
|
All right. And so you can read the book and you understand what's going on. |
|
|
|
20:22.960 --> 20:29.200 |
|
So the distribution is very different. But because you can transport a lot of the knowledge you had |
|
|
|
20:29.200 --> 20:35.680 |
|
from Earth about the underlying cause and effect relationships and physical mechanisms and all |
|
|
|
20:35.680 --> 20:40.880 |
|
that, and maybe even social interactions, you can now make sense of what is going on on this |
|
|
|
20:40.880 --> 20:43.920 |
|
planet where like visually, for example, things are totally different. |
|
|
|
20:45.920 --> 20:52.000 |
|
Taking that analogy further and distorting it, let's enter a science fiction world of, say, |
|
|
|
20:52.000 --> 21:00.720 |
|
2001: A Space Odyssey with HAL. Yeah, which is probably one of my favorite AI movies. |
|
|
|
21:00.720 --> 21:06.080 |
|
Me too. And then there's another one that a lot of people love that may be a little bit outside |
|
|
|
21:06.080 --> 21:13.120 |
|
of the AI community is Ex Machina. I don't know if you've seen it. Yes. By the way, what are your |
|
|
|
21:13.120 --> 21:19.600 |
|
views on that movie? Are you able to enjoy it? So there are things I like and things I hate. |
|
|
|
21:21.120 --> 21:25.760 |
|
So let me... you could talk about that in the context of a question I want to ask, |
|
|
|
21:25.760 --> 21:31.920 |
|
which is: there's quite a large community of people from different backgrounds, often outside of AI, |
|
|
|
21:31.920 --> 21:36.480 |
|
who are concerned about the existential threat of artificial intelligence. Right. You've seen |
|
|
|
21:36.480 --> 21:41.920 |
|
now this community develop over time. You have a perspective. So what do you think is |
|
|
|
21:41.920 --> 21:47.680 |
|
the best way to talk about AI safety, to think about it, to have discourse about it within the AI |
|
|
|
21:47.680 --> 21:53.920 |
|
community and outside, grounded in the fact that Ex Machina is one of the main sources of |
|
|
|
21:53.920 --> 21:59.040 |
|
information for the general public about AI? So I think you're putting it right. There's a big |
|
|
|
21:59.040 --> 22:04.400 |
|
difference between the sort of discussion we ought to have within the AI community |
|
|
|
22:05.200 --> 22:11.600 |
|
and the sort of discussion that really matters in the general public. So I think the picture of |
|
|
|
22:11.600 --> 22:19.040 |
|
Terminator and, you know, AI loose and killing people and super intelligence that's going to |
|
|
|
22:19.040 --> 22:26.320 |
|
destroy us, whatever we try, isn't really so useful for the public discussion because |
|
|
|
22:26.320 --> 22:32.960 |
|
for the public discussion, the things I believe really matter are the short-term and |
|
|
|
22:32.960 --> 22:40.560 |
|
medium-term, very likely negative impacts of AI on society, whether it's from security, |
|
|
|
22:40.560 --> 22:45.680 |
|
like, you know, big brother scenarios with face recognition or killer robots, or the impact on |
|
|
|
22:45.680 --> 22:52.400 |
|
the job market, or concentration of power and discrimination, all kinds of social issues, |
|
|
|
22:52.400 --> 22:58.240 |
|
which could actually, some of them could really threaten democracy, for example. |
|
|
|
22:58.800 --> 23:04.000 |
|
Just to clarify, when you said killer robots, you mean autonomous weapons as a weapon system? |
|
|
|
23:04.000 --> 23:10.400 |
|
Yes, that's right. So I think these short- and medium-term concerns |
|
|
|
23:11.280 --> 23:18.560 |
|
should be important parts of the public debate. Now, existential risk, for me, is a very unlikely |
|
|
|
23:18.560 --> 23:26.880 |
|
consideration, but still worth academic investigation. In the same way that you could say, |
|
|
|
23:26.880 --> 23:32.640 |
|
should we study what could happen if a meteorite, you know, came to Earth and destroyed it? |
|
|
|
23:32.640 --> 23:37.680 |
|
So I think it's very unlikely that this is going to happen, or happen in a reasonable future. |
|
|
|
23:37.680 --> 23:45.520 |
|
The sort of scenario of an AI getting loose goes against my understanding of at least |
|
|
|
23:45.520 --> 23:50.160 |
|
current machine learning and current neural nets and so on. It's not plausible to me. |
|
|
|
23:50.160 --> 23:54.320 |
|
But of course, I don't have a crystal ball and who knows what AI will be in 50 years from now. |
|
|
|
23:54.320 --> 23:59.280 |
|
So I think it is worthwhile for scientists to study those problems. It's just not a pressing question, |
|
|
|
23:59.280 --> 24:04.880 |
|
as far as I'm concerned. So before I continue down that line, I have a few questions there, but |
|
|
|
24:06.640 --> 24:11.440 |
|
what do you like and not like about Ex Machina as a movie? Because I actually watched it for the |
|
|
|
24:11.440 --> 24:17.840 |
|
second time and enjoyed it. I hated it the first time and I enjoyed it quite a bit more the second |
|
|
|
24:17.840 --> 24:26.080 |
|
time, when I sort of learned to accept certain pieces of it and see it as a concept movie. What |
|
|
|
24:26.080 --> 24:36.160 |
|
was your experience? What were your thoughts? So the negative is the picture it paints of science |
|
|
|
24:36.160 --> 24:41.760 |
|
is totally wrong. Science in general and AI in particular. Science is not happening |
|
|
|
24:43.120 --> 24:51.840 |
|
in some hidden place by some really smart guy. One person. One person. This is totally unrealistic. |
|
|
|
24:51.840 --> 24:58.240 |
|
This is not how it happens. Even a team of people in some isolated place will not make it. |
|
|
|
24:58.240 --> 25:07.920 |
|
Science moves by small steps thanks to the collaboration and community of a large number |
|
|
|
25:07.920 --> 25:16.000 |
|
of people interacting. And all the scientists who are experts in their field kind of know what is |
|
|
|
25:16.000 --> 25:24.000 |
|
going on even in the industrial labs. Information flows and leaks and so on. And the spirit of |
|
|
|
25:24.000 --> 25:30.320 |
|
it is very different from the way science is painted in this movie. Yeah, let me ask on that |
|
|
|
25:30.320 --> 25:36.400 |
|
point. It's been the case to this point that kind of even if the research happens inside |
|
|
|
25:36.400 --> 25:42.000 |
|
Google or Facebook, inside companies, it still kind of comes out. Do you think that will always be |
|
|
|
25:42.000 --> 25:48.960 |
|
the case with AI? Is it possible to bottle ideas to the point where there's a set of breakthroughs |
|
|
|
25:48.960 --> 25:53.120 |
|
that go completely undiscovered by the general research community? Do you think that's even |
|
|
|
25:53.120 --> 26:02.240 |
|
possible? It's possible, but it's unlikely. It's not how it is done now. It's not how I can foresee |
|
|
|
26:02.240 --> 26:13.120 |
|
it in the foreseeable future. But of course, I don't have a crystal ball. And so who knows, |
|
|
|
26:13.120 --> 26:18.240 |
|
this is science fiction after all. But it was ominous that the lights went off during |
|
|
|
26:18.240 --> 26:24.320 |
|
that discussion. So, the problem, again... there's, you know, one thing is the movie, and |
|
|
|
26:24.320 --> 26:28.720 |
|
you could imagine all kinds of science fiction. The problem for me, maybe similar to the |
|
|
|
26:28.720 --> 26:37.120 |
|
question about existential risk, is that this kind of movie paints such a wrong picture of what is |
|
|
|
26:37.120 --> 26:43.520 |
|
actually, you know, the actual science and how it's going on, that it can have unfortunate effects |
|
|
|
26:43.520 --> 26:49.040 |
|
on people's understanding of current science. And so that's kind of sad. |
|
|
|
26:50.560 --> 26:56.800 |
|
There's an important principle in research, which is diversity. So in other words, |
|
|
|
26:58.000 --> 27:02.720 |
|
research is exploration, research is exploration in the space of ideas. And different people |
|
|
|
27:03.440 --> 27:09.920 |
|
will focus on different directions. And this is not just good, it's essential. So I'm totally fine |
|
|
|
27:09.920 --> 27:16.640 |
|
with people exploring directions that are contrary to mine or look orthogonal to mine. |
|
|
|
27:18.560 --> 27:24.880 |
|
I am more than fine, I think it's important. I and my friends don't claim we have universal |
|
|
|
27:24.880 --> 27:29.680 |
|
truth about, especially about, what will happen in the future. Now that being said, |
|
|
|
27:30.320 --> 27:37.600 |
|
we have our intuitions and then we act accordingly, according to where we think we can be most useful |
|
|
|
27:37.600 --> 27:43.360 |
|
and where society has the most to gain or to lose. We should have those debates and |
|
|
|
27:45.920 --> 27:50.080 |
|
not end up in a society where there's only one voice and one way of thinking, and |
|
|
|
27:51.360 --> 27:59.120 |
|
research money is spread out. So disagreement is a sign of good research, good science. So |
|
|
|
27:59.120 --> 28:08.560 |
|
yes. The idea of bias in the human sense of bias. How do you think about instilling in machine |
|
|
|
28:08.560 --> 28:15.440 |
|
learning something that's aligned with human values in terms of bias? We intuitively assume |
|
|
|
28:15.440 --> 28:21.680 |
|
beings have a concept of what bias means, of what fundamental respect for other human beings means, |
|
|
|
28:21.680 --> 28:25.280 |
|
but how do we instill that into machine learning systems, do you think? |
|
|
|
28:25.280 --> 28:32.720 |
|
So I think there are short term things that are already happening and then there are long term |
|
|
|
28:32.720 --> 28:39.040 |
|
things that we need to do. In the short term, there are techniques that have been proposed and |
|
|
|
28:39.040 --> 28:44.800 |
|
I think will continue to be improved and maybe alternatives will come up to take data sets |
|
|
|
28:45.600 --> 28:51.200 |
|
in which we know there is bias, we can measure it. Pretty much any data set where humans are |
|
|
|
28:51.200 --> 28:56.080 |
|
being observed taking decisions will have some sort of bias, discrimination against particular |
|
|
|
28:56.080 --> 29:04.000 |
|
groups and so on. And we can use machine learning techniques to try to build predictors, classifiers |
|
|
|
29:04.000 --> 29:11.920 |
|
that are going to be less biased. We can do it for example using adversarial methods to make our |
|
|
|
29:11.920 --> 29:19.520 |
|
systems less sensitive to these variables we should not be sensitive to. So these are clear, |
|
|
|
29:19.520 --> 29:24.240 |
|
well defined ways of trying to address the problem, maybe they have weaknesses and more |
|
|
|
29:24.240 --> 29:30.400 |
|
research is needed and so on, but I think in fact they're sufficiently mature that governments should |
|
|
|
29:30.400 --> 29:36.160 |
|
start regulating companies where it matters, say, like insurance companies, so that they use those |
|
|
|
29:36.160 --> 29:43.840 |
|
techniques, because those techniques will probably reduce the bias, but at a cost: for example, maybe |
|
|
|
29:43.840 --> 29:47.920 |
|
their predictions will be less accurate and so companies will not do it until you force them. |
|
|
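A minimal sketch of the adversarial approach Bengio describes here: an auxiliary adversary tries to recover a protected attribute from the model's internal representation, and the main model is trained to predict its target while making that recovery fail. The toy data, network sizes, and the weight `lam` are illustrative assumptions, not details from the conversation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 8 features, a binary label y, and a binary protected attribute s.
X = torch.randn(256, 8)
y = torch.randint(0, 2, (256,)).float()
s = torch.randint(0, 2, (256,)).float()

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU())  # shared representation
predictor = nn.Linear(16, 1)                          # predicts the label y
adversary = nn.Linear(16, 1)                          # tries to recover s

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # how strongly leakage of the protected attribute is penalized

for step in range(200):
    # 1) Train the adversary to recover s from the (detached) representation.
    z = encoder(X).detach()
    adv_loss = bce(adversary(z).squeeze(1), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train encoder + predictor to predict y while fooling the adversary:
    #    subtracting the adversary's loss pushes the representation to carry
    #    less information about s.
    z = encoder(X)
    main_loss = bce(predictor(z).squeeze(1), y) - lam * bce(adversary(z).squeeze(1), s)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```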
|
29:47.920 --> 29:56.000 |
|
All right, so this is short term. Long term, I'm really interested in thinking how we can |
|
|
|
29:56.000 --> 30:02.160 |
|
instill moral values into computers. Obviously this is not something we'll achieve in the next five |
|
|
|
30:02.160 --> 30:11.680 |
|
or 10 years. There's already work on detecting emotions, for example in images and sounds and |
|
|
|
30:11.680 --> 30:21.520 |
|
texts and also studying how different agents interacting in different ways may correspond to |
|
|
|
30:22.960 --> 30:30.000 |
|
patterns of, say, injustice, which could trigger anger. So these are things we can do in the |
|
|
|
30:30.000 --> 30:42.160 |
|
medium term and eventually train computers to model for example how humans react emotionally. I would |
|
|
|
30:42.160 --> 30:49.920 |
|
say the simplest thing is unfair situations which trigger anger. This is one of the most basic |
|
|
|
30:49.920 --> 30:55.360 |
|
emotions that we share with other animals. I think it's quite feasible within the next few years that |
|
|
|
30:55.360 --> 31:00.800 |
|
we can build systems that can detect these kinds of things, to the extent, unfortunately, that they |
|
|
|
31:00.800 --> 31:07.840 |
|
understand enough about the world around us, which is a long time away. But maybe we can initially do |
|
|
|
31:07.840 --> 31:14.800 |
|
this in virtual environments so you can imagine like a video game where agents interact in some |
|
|
|
31:14.800 --> 31:21.760 |
|
ways and then some situations trigger an emotion. I think we could train machines to detect those |
|
|
|
31:21.760 --> 31:27.920 |
|
situations and predict that the particular emotion will likely be felt if a human was playing one |
|
|
|
31:27.920 --> 31:34.080 |
|
of the characters. You have shown excitement and done a lot of excellent work with unsupervised |
|
|
|
31:34.080 --> 31:42.800 |
|
learning, but there's been a lot of success on the supervised learning side. One of the things I'm |
|
|
|
31:42.800 --> 31:48.800 |
|
really passionate about is how humans and robots work together and in the context of supervised |
|
|
|
31:48.800 --> 31:54.800 |
|
learning that means the process of annotation. Do you think about the problem of annotation, or, |
|
|
|
31:55.520 --> 32:04.080 |
|
put in a more interesting way, of humans teaching machines? Yes, I think it's an important subject. |
|
|
|
32:04.880 --> 32:11.280 |
|
Reducing it to annotation may be useful for somebody building a system tomorrow but |
|
|
|
32:12.560 --> 32:17.600 |
|
longer term the process of teaching I think is something that deserves a lot more attention |
|
|
|
32:17.600 --> 32:21.840 |
|
from the machine learning community. So there are people who have coined the term machine teaching. |
|
|
|
32:22.560 --> 32:30.480 |
|
So what are good strategies for teaching a learning agent and can we design, train a system |
|
|
|
32:30.480 --> 32:38.000 |
|
that is going to be a good teacher? So in my group we have a project called BabyAI, or the BabyAI game, |
|
|
|
32:38.640 --> 32:46.000 |
|
where there is a game or a scenario where there's a learning agent and a teaching agent |
|
|
|
32:46.000 --> 32:54.400 |
|
presumably the teaching agent would eventually be a human, but we're not there yet. And the |
|
|
|
32:56.000 --> 33:00.880 |
|
role of the teacher is to use its knowledge of the environment which it can acquire using |
|
|
|
33:00.880 --> 33:09.680 |
|
whatever way, brute force, to help the learner learn as quickly as possible. So the learner |
|
|
|
33:09.680 --> 33:13.920 |
|
is going to try to learn by itself maybe using some exploration and whatever |
|
|
|
33:13.920 --> 33:21.520 |
|
but the teacher can choose, can have an influence on the interaction with the learner |
|
|
|
33:21.520 --> 33:28.960 |
|
so as to guide the learner maybe teach it the things that the learner has most trouble with |
|
|
|
33:28.960 --> 33:34.320 |
|
or just at the boundary between what it knows and doesn't know, and so on. So there's a tradition |
|
|
|
33:34.320 --> 33:41.280 |
|
of these kinds of ideas from other fields, like tutorial systems, for example, in AI, |
|
|
|
33:41.280 --> 33:46.880 |
|
and of course people in the humanities have been thinking about these questions but I think |
|
|
|
33:46.880 --> 33:52.560 |
|
it's time that machine learning people look at this because in the future we'll have more and more |
|
|
|
33:53.760 --> 33:59.680 |
|
human-machine interaction with the human in the loop, and I think understanding how to make this |
|
|
|
33:59.680 --> 34:04.080 |
|
work better... all the problems around that are very interesting and not sufficiently addressed. |
|
|
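A toy sketch of the teacher/learner idea sketched here, not the actual BabyAI setup: a teacher that knows the true concept repeatedly hands the learner the example it is currently most wrong about, one simple reading of "teach it the things the learner has most trouble with". The threshold concept, the logistic learner, and the learning rate are all assumptions made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# True concept: a 1-D threshold at 0.6. The teacher knows it, the learner does not.
X = rng.uniform(0, 1, size=500)
y = (X > 0.6).astype(float)

w, b = 0.0, 0.0  # learner: logistic regression on a single feature

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

for round_ in range(200):
    # Teacher: pick the example the learner currently gets most wrong.
    i = int(np.argmax(np.abs(predict(X) - y)))
    # Learner: one gradient step on that example (log-loss gradient is (p - y) * x).
    p = predict(X[i])
    w -= 0.5 * (p - y[i]) * X[i]
    b -= 0.5 * (p - y[i])

accuracy = np.mean((predict(X) > 0.5) == (y > 0.5))
print(f"Accuracy after teaching: {accuracy:.2f}")
```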
|
34:04.080 --> 34:11.440 |
|
You've done a lot of work with language too. What aspect of the traditionally formulated |
|
|
|
34:11.440 --> 34:17.040 |
|
Turing test, a test of natural language understanding and generation, in your eyes is the |
|
|
|
34:17.040 --> 34:22.960 |
|
most difficult? What in your eyes is the hardest part of conversation to solve for |
|
|
|
34:22.960 --> 34:30.640 |
|
machines? So I would say it's everything having to do with the non-linguistic knowledge which |
|
|
|
34:30.640 --> 34:36.400 |
|
implicitly you need in order to make sense of sentences. Things like the Winograd schemas, |
|
|
|
34:36.400 --> 34:42.400 |
|
so these sentences that are semantically ambiguous. In other words you need to understand enough about |
|
|
|
34:42.400 --> 34:48.720 |
|
the world in order to really interpret properly those sentences. I think these are interesting |
|
|
|
34:48.720 --> 34:55.840 |
|
challenges for machine learning because they point in the direction of building systems that |
|
|
|
34:55.840 --> 35:02.880 |
|
both understand how the world works and its causal relationships, and associate |
|
|
|
35:03.520 --> 35:09.760 |
|
that knowledge with how to express it in language either for reading or writing. |
|
|
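For readers unfamiliar with the Winograd schemas mentioned a moment ago, here is the classic trophy/suitcase pair from Levesque's Winograd Schema Challenge, written out as plain data: changing a single word flips the referent of "it", and resolving the pronoun takes world knowledge rather than grammar alone.

```python
# Each entry is one half of a Winograd schema pair.
schema = [
    {"sentence": "The trophy doesn't fit in the suitcase because it is too big.",
     "pronoun": "it", "answer": "the trophy"},
    {"sentence": "The trophy doesn't fit in the suitcase because it is too small.",
     "pronoun": "it", "answer": "the suitcase"},
]

for item in schema:
    print(f'{item["sentence"]}  ->  "{item["pronoun"]}" refers to {item["answer"]}')
```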
|
35:11.840 --> 35:17.600 |
|
You speak French? Yes, it's my mother tongue. It's one of the romance languages. Do you think |
|
|
|
35:17.600 --> 35:23.040 |
|
passing the Turing test and all the underlying challenges we just mentioned depend on language? |
|
|
|
35:23.040 --> 35:28.000 |
|
Do you think it might be easier in French than it is in English, or is it independent of language? |
|
|
|
35:28.800 --> 35:37.680 |
|
I think it's independent of language. I would like to build systems that can use the same |
|
|
|
35:37.680 --> 35:45.840 |
|
principles, the same learning mechanisms to learn from human agents, whatever their language. |
|
|
|
35:45.840 --> 35:53.600 |
|
Well, certainly us humans can talk more beautifully and smoothly in poetry. So I'm Russian originally. |
|
|
|
35:53.600 --> 36:01.360 |
|
I know in Russian poetry it's maybe easier to convey complex ideas than it is in English, |
|
|
|
36:02.320 --> 36:09.520 |
|
but maybe I'm showing my bias and some people could say that about French. But of course the |
|
|
|
36:09.520 --> 36:16.400 |
|
goal ultimately is that our human brain is able to utilize any of those languages, to use them |
|
|
|
36:16.400 --> 36:21.040 |
|
as tools to convey meaning. Yeah, of course there are differences between languages and maybe some |
|
|
|
36:21.040 --> 36:25.920 |
|
are slightly better at some things but in the grand scheme of things where we're trying to understand |
|
|
|
36:25.920 --> 36:31.040 |
|
how the brain works and language and so on, I think these differences are minute. |
|
|
|
36:31.040 --> 36:42.880 |
|
So you've lived perhaps through an AI winter of sorts. Yes. How did you stay warm and continue |
|
|
|
36:42.880 --> 36:48.480 |
|
with your research? Stay warm with friends. With friends. Okay, so it's important to have friends |
|
|
|
36:48.480 --> 36:57.200 |
|
and what have you learned from the experience? Listen to your inner voice. Don't, you know, be |
|
|
|
36:57.200 --> 37:07.680 |
|
trying to just please the crowds and the fashion. And if you have a strong intuition about something |
|
|
|
37:08.480 --> 37:15.520 |
|
that is not contradicted by actual evidence, go for it. I mean, it could be contradicted by people. |
|
|
|
37:16.960 --> 37:21.920 |
|
Not your own instinct based on everything you've learned. So of course you have to adapt |
|
|
|
37:21.920 --> 37:29.440 |
|
your beliefs when your experiments contradict those beliefs but you have to stick to your |
|
|
|
37:29.440 --> 37:36.160 |
|
beliefs otherwise. It's what allowed me to go through those years. It's what allowed me to |
|
|
|
37:37.120 --> 37:44.480 |
|
persist in directions that, you know, whatever other people think, took time to mature |
|
|
|
37:44.480 --> 37:53.680 |
|
and bear fruit. So the history of AI is marked with these... of course it's marked with technical |
|
|
|
37:53.680 --> 37:58.880 |
|
breakthroughs but it's also marked with these seminal events that capture the imagination |
|
|
|
37:58.880 --> 38:06.000 |
|
of the community. Most recently, I would say, AlphaGo beating the world-champion human Go player |
|
|
|
38:06.000 --> 38:14.000 |
|
was one of those moments. What do you think the next such moment might be? Okay, so first of all, |
|
|
|
38:14.000 --> 38:24.880 |
|
I think that these so-called seminal events are overrated. As I said, science really moves by |
|
|
|
38:24.880 --> 38:33.760 |
|
small steps. Now what happens is you make one more small step and it's like the drop that, |
|
|
|
38:33.760 --> 38:40.560 |
|
you know, fills the bucket, and then you have drastic consequences, because now |
|
|
|
38:40.560 --> 38:46.240 |
|
you're able to do something you were not able to do before. Or, say, the cost of building some |
|
|
|
38:46.240 --> 38:51.920 |
|
device or solving a problem becomes cheaper than what existed and you have a new market that opens |
|
|
|
38:51.920 --> 39:00.080 |
|
up. So especially in the world of commerce and applications, the impact of a small scientific |
|
|
|
39:00.080 --> 39:07.520 |
|
progress could be huge, but in the science itself, I think it's very, very gradual. And |
|
|
|
39:07.520 --> 39:15.280 |
|
where are these steps being taken now? So there's unsupervised learning, right? So if I look at one trend |
|
|
|
39:15.280 --> 39:24.080 |
|
that I like in my community, for example, at Mila, my institute: what are the two hottest |
|
|
|
39:24.080 --> 39:32.800 |
|
topics? GANs and reinforcement learning, even though in Montreal in particular, like reinforcement |
|
|
|
39:32.800 --> 39:39.600 |
|
learning was something pretty much absent just two or three years ago. So it is really a big |
|
|
|
39:39.600 --> 39:48.400 |
|
interest from students and there's a big interest from people like me. So I would say this is |
|
|
|
39:48.400 --> 39:54.960 |
|
something where we're going to see more progress even though it hasn't yet provided much in terms of |
|
|
|
39:54.960 --> 40:01.280 |
|
actual industrial fallout. Like, even though there's AlphaGo, there's no... like, Google is not making |
|
|
|
40:01.280 --> 40:06.320 |
|
money on this right now. But I think over the long term, this is really, really important for many |
|
|
|
40:06.320 --> 40:13.760 |
|
reasons. So in other words, I would say reinforcement learning, or maybe more generally agent learning, |
|
|
|
40:13.760 --> 40:17.520 |
|
because it doesn't have to be with rewards. It could be in all kinds of ways that an agent |
|
|
|
40:17.520 --> 40:23.040 |
|
is learning about its environment. Now, reinforcement learning, you're excited about. Do you think |
|
|
|
40:23.040 --> 40:32.320 |
|
GANs could provide something, some moment in it? Yes. Well, GANs or other |
|
|
|
40:33.760 --> 40:41.360 |
|
generative models, I believe, will be crucial ingredients in building agents that can understand |
|
|
|
40:41.360 --> 40:48.880 |
|
the world. A lot of the successes in reinforcement learning in the past have been with policy |
|
|
|
40:48.880 --> 40:53.360 |
|
gradient, where you just learn a policy; you don't actually learn a model of the world. But |
|
|
|
40:53.360 --> 40:58.640 |
|
there are lots of issues with that. And we don't know how to do model-based RL right now. But I |
|
|
|
40:58.640 --> 41:06.080 |
|
think this is where we have to go in order to build models that can generalize faster and better, |
|
|
|
41:06.080 --> 41:13.200 |
|
like to new distributions that capture, to some extent, at least the underlying causal |
|
|
|
41:13.200 --> 41:20.320 |
|
mechanisms in the world. Last question. What made you fall in love with artificial intelligence? |
|
|
|
41:20.960 --> 41:28.400 |
|
If you look back, what was the first moment in your life when you were fascinated by either |
|
|
|
41:28.400 --> 41:33.600 |
|
the human mind or the artificial mind? You know, when I was an adolescent, I was reading a lot. |
|
|
|
41:33.600 --> 41:41.920 |
|
And then I started reading science fiction. There you go. That's it. That's where I got hooked. |
|
|
|
41:41.920 --> 41:50.160 |
|
And then, you know, I had one of the first personal computers and I got hooked on programming. |
|
|
|
41:50.960 --> 41:55.040 |
|
And so it just, you know, started with fiction and then you make it a reality. That's right. |
|
|
|
41:55.040 --> 42:12.080 |
|
Yoshua, thank you so much for talking to me. My pleasure. |
|
|
|
|