{"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "individuallyselected_w5cb5-by Vael Gates-date 20220318", "authors": ["Vael Gates"], "date_published": "2022-03-18", "text": "# Interview with AI Researchers individuallyselected_w5cb5 by Vael Gates\n\n**Interview with w5cb5, on 3/18/22**\n====================================\n\n**0:00:02.2 Vael:** Alright, so my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:08.5 Interviewee:** I worked in \\[subfield\\] originally, but I guess I branched out more broadly into AI research, because I\\'m \\[high-level research role\\] now at an AI company.\n\n**0:00:19.9 Vael:** Great, yeah. And then what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:00:28.3 Interviewee:** So I think, yeah, the world is going to change quite a lot with AI technology, and I think mostly in good ways, just because we\\'re going to empower people with this technology. And it\\'s going to be empowering I think in similar ways to the Internet, where people can do faster search, they have an assistant who can help them with all kinds of stuff. They have friends who maybe are not real, and all kinds of ways to make people happier, I think, or more efficient, or to give them time back and that sort of stuff. But obviously, there are also risks and the main risks are, I think that the field is too dominated by tech bros from Silicon Valley, so I guess I fall under that in a way. And so I think that\\'s a real problem, so we need to take democratization of the technology much more seriously, that\\'s also what my company is doing. And I think if we think about the ethical implications of our technology from first principles, and if we make them first-class citizens rather than just treating them as an afterthought, where you submit your paper and then, \\\"Oh, I also need to write a broader impact statement,\\\" but if you take that very seriously from the beginning as a core principle of your organization, then I think you can do much better research in a much more responsible way.\n\n**0:01:56.5 Vael:** Interesting. Alright, so that was the question of \\\"what are you most excited about and what are you most worried about in AI\\\", okay. I heard\\-- Lots of things they can go, lots of places they can go, lots of directions they can go, but you\\'re worried about domination from specific areas and then people not caring about\\... ethics enough? or---\n\n**0:02:14.6 Interviewee:** Yeah, so misuse of technology. Do you want me to give you concrete examples? So I think very often, the technology that we develop, even if it\\'s meant for benevolent purposes, can also be re-applied for not so benevolent purposes. And so like speech recognition or face recognition, things like that, you have to just be very careful with how you treat this technology. So that\\'s why I think if people take responsible AI seriously from the beginning, that that is a good thing too.\n\n**0:02:53.0 Vael:** Interesting. So you think if people incorporate responsible AI from the beginning of the process, then there will be less risk of misuse by any agent in the future?\n\n**0:03:04.5 Interviewee:** Yeah, yeah. So you mentioned your IRB, so for a lot of technological research happening in industry, there is no real IRB. 
Some companies have sort of IRBs but most of them are so commercial and so driven by money in the end. And I think maybe we need an independent AI IRB for the broader research community, where anybody can go there and have somebody look at the potential applications of their work.\n\n**0:03:39.6 Vael:** I see, cool. And then just having that sort of mindset seems good, in addition to the object-level effects. Alright. Makes sense. So focusing on future AI, putting on a science fiction forecasting hat, say we are 50 years, 50 plus years into the future. So at least 50 years into the future, what does that future look like?\n\n**0:04:00.3 Interviewee:** At least 50 years in the future. So I still don\\'t think we will have AGI, and that\\'s I guess, I\\'m probably unusual in the field because I think a lot of my colleagues would disagree, especially if they\\'re at OpenAI or DeepMind because they think that it\\'s like two years away. (Vael: \\\"Two years, huh!\\\") Yeah, well it depends on who you ask, they have some crazy people. \\[chuckle\\] I think in the next decade, we\\'re going to realize what the limitations are of our current technology. I think what we\\'ve been doing now has been very efficient in terms of scaling with data and scaling with compute, but it\\'s very likely that we\\'re just going to need entirely new algorithms that just require pure scientific breakthroughs. And so I don\\'t think there\\'s going to be another AI winter, but I do think that things are going to cool down a little bit again, because right now it\\'s just been super hyped up. For good reason too, because we are really making really great progress. But there is still things that we really don\\'t know how to do, so we have language models and they can do things and they\\'re amazing, but we don\\'t know how to make the language model do what we want it to do. So we\\'re all just sort of hacking it a little bit, but it\\'s not really anywhere close to being like a proper assistant, for example, who actually understands what you\\'re saying, who actually understands the world. I think where we want to be 50 years from now is where we have machines who understand the world in the same way that humans understand it, so maybe something like Neuralink. So if I\\'m being very futuristic, connecting AI to human brains and human perception of reality, that could be a way to get AI to have a much richer understanding of the world in the same way that humans understand it. So like dolphins are also very intelligent, but they also don\\'t understand humans and they are not very useful assistants, right? I don\\'t know if you\\'ve ever had any dolphin assistant. So it\\'s not really bad intelligence, it\\'s specifically about human intelligence that makes AI potentially useful for us, and so that\\'s something that I think is often overlooked.\n\n**0:06:26.9 Vael:** So it sounds like, so you\\'re thinking about when AGI will happen. And you said that you don\\'t think we\\'re gonna hit some sort of ceiling or slow down on the current deep learning paradigm or just like keep on scaling\\--\n\n**0:06:39.6 Interviewee:** Yeah, it\\'s going to be asymptotic, and at some point, we\\'re just going to hit the limits of what we can do with scaling data and scaling compute. 
And in order to get the next leap to real AGI I think we just need radically different ideas.\n\n**0:06:55.1 Vael:** Yeah, when do you think we\\'re going to\\-- what kind of systems do you think we\\'re going to have when we cap out on the current scaling paradigm?\n\n**0:07:02.0 Interviewee:** Well, I think like the ones we have now, but yeah, in 50 years, I don\\'t know. But in like 5 to 10 years, it will just be much bigger versions of this. And so what we have seen is that if you scale these systems, they generalize much better. If that keeps happening, then we would just have much better versions of what we have now. But still it\\'s a language model that doesn\\'t understand the world, and so still it\\'s the component that is very limited in seeing only the training data that is in images on the internet, which is not all of the images that we have in the world, right? So I think the real problem is data, not so much scaling the compute.\n\n**0:07:49.7 Vael:** What if we had a system that has cameras and can process auditory stuff that is happening all around it or something and it\\'s not just using internet data, do you think that would eventually have enough data?\n\n**0:08:03.3 Interviewee:** Yeah, so that\\'s what I was just saying. If you have something that\\'s embodied in the world in the same way as a human and where humans treat it as another human, sort of like cyborg style, things like that, that\\'s a good way to get lots of very high quality data in the same way that humans get it. What are they called? Androids, right?\n\n**0:08:24.9 Vael:** Yeah.\n\n**0:08:25.3 Interviewee:** So if we actually had android robots walking around and being raised by humans and then we figured out how the learning algorithms would work in those settings, then you would get something that is very close to human intelligence. A good example I always like to use is the smell of coffee. So I know that you know what coffee smells like, but can you describe it to me in one sentence?\n\n**0:08:54.2 Vael:** Probably not, no.\n\n**0:08:55.7 Interviewee:** You can\\'t, right? But the same goes for the taste of banana or things like that. I know that you know, so I\\'ve never had to express this in words. So this is one of the fundamental parts of your brain; smell and taste are even older than sight and hearing. And so there\\'s a lot of stuff happening in your brain that is just taken for granted. You can call this common sense or whatever you want, but it\\'s like an evolutionary prior that all humans share with each other, and so that prior governs a lot of our behavior and a lot of our communication. So if you want machines to learn language but they don\\'t have that prior, it becomes really, really hard for them to really understand what we\\'re saying, right?\n\n**0:09:38.7 Vael:** Yeah. I think when I think about AGI, I think about AGI that can do\\-- or, just, generalizable systems that can do things that humans want them to do. So imagine we have like a CEO AI or a scientist AI. I don\\'t think I need my CEO or scientist AI enough to know what coffee smells like per se, but I do need it to be able to like break down experiments and think kind of creative thoughts and figure out things.\n\n**0:09:58.7 Interviewee:** Yeah, but I think what I\\'m saying is that if they don\\'t know what coffee smells like, that\\'s just one example, but there are millions of these things that are just things we take for granted, that we don\\'t really talk about. 
And so this will not be borne out in the data in any way, so that means that a lot of the underlying assumptions are never really in the data, right? They\\'re in our behavior, and so for an AI to pick up on those is going to be very difficult.\n\n**0:10:27.6 Vael:** What if there were cameras everywhere, and it got to record everyone and process those?\n\n**0:10:32.3 Interviewee:** Yeah, maybe. So the real question is, if you just throw infinite data at it, then will it work with current machine learning algorithms, is I guess what you\\'re asking, right? And so I don\\'t know. I mean, I know that our learning algorithm is very different from a neural net, but I think if you look at it from a mathematical perspective, then gradient descent is probably more efficient than Hebbian learning anyway. So mathematically, it\\'s definitely possible that if you have infinite data and infinite compute, then you can get something really amazing. Sure, we are the proof of that, right? So whether that also immediately makes it useful for us is a different question, I think.\n\n**0:11:20.8 Vael:** Interesting. Yeah, I think I\\'m trying to probe \\"do we need something like embodied AI in order to get AGI\\" or something. And then your last comment was like, whether that makes it useful for us. I\\'m like, well, presumably we\\'re going to\\... feeding it a lot of data lets it do grounding, so like relationships between language and what actually exists in the world and how physics works. But presumably, we\\'re going to be training them to do what we want, right? So that it will be useful to us?\n\n**0:11:43.5 Interviewee:** Well, it depends, right? Can we do that? Probably the way they will learn this stuff is through self-supervised learning, not through us supervising them. We don\\'t know how to specify reward signals and things like that anyway. I\\'m not sure, if we actually are able to train up these huge systems that are actually intelligent through self-supervised learning, if they are then going to listen to us, right? Why would they?\n\n**0:12:15.2 Vael:** Right. Okay, cool. Yeah, so this kind of leads right into my next question here. So imagine we\\'re in the future and we\\'ve got some AGIs and we\\'ve got a CEO AI, and I\\'m like, \\"Okay, CEO AI, I want you to maximize profits and not run out of money and not try to exploit people and try to avoid side effects,\\" and it seems like this would currently be extremely challenging for many reasons. But one is that we\\'re not very good at taking human values and putting them\\-- and like goals and preferences\\-- and putting them in mathematical formulations that AI can currently work with. And I worry that this is gonna happen in the future as well. So the question is: what do you think of the argument, \\"Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous\\"?\n\n**0:12:53 Interviewee:** Well, yeah. I agree with that. I don\\'t think\\... I think there are two separate questions here. So one you\\'re asking about is the paperclip maximizer argument from Nick Bostrom. So like if you have a system and you tell it like \\"you need to make as many paperclips as you possibly can\\" then it\\'s going to like destroy the earth to make as many paperclips as possible.\n\n**0:13:15 Vael:** Well that would be doing maybe\\-- oh, I see. Not quite what I intended. 
Yeah, all right.\n\n**0:13:19.8 Interviewee:** Yeah, so\\-- okay, so if that\\'s not what the underlying question was, then\\... We don\\'t really\\... I also think that we are\\... some of us are fooling ourselves into believing that we know everything as humans and I think human values are changing all the time. I don\\'t think we can capture correct human values. I don\\'t think there is an absolute moral truth that we should all adhere to. I think that just morality itself is a very cultural concept. But I\\'m \\[interested in\\] philosophy, so I\\'m a bit different from most AI researchers, I guess. So I think that we could try to encode some very basic principles, so this is like Asimov\\'s laws and things like that, but I don\\'t think we can really go much further than that. And I think even in those cases, like you said, we don\\'t know how to mathematically encode them in a way where you enforce whatever this dynamical system is that you\\'re training, so a neural net, but then probably more complicated than the current neural nets\\-- how do we impose a particular set of values? I don\\'t think we know how to do that. I don\\'t think there\\'s a mathematical way to do that either actually, because it\\'s all \\[inaudible\\]\\--\n\n**0:14:44.7 Vael:** Yeah, do you think we are eventually going to be able to?\n\n**0:14:50.0 Interviewee:** So I think if you ask Yann LeCun or someone like that, he would say that probably, if we ever get to systems of this sort of level of intelligence, then they would be benevolent, because they\\'re very smart and able to sort of understand how weak humans are.\n\n**0:15:09.4 Vael:** Interesting. Yeah. So when I hear that argument, I\\'m like, okay, it seems like Yann LeCun thinks that as you get more intelligent, you have morals that are very similar to humans, and this just kind of comes\\--\n\n**0:15:21.7 Interviewee:** No, not necessarily. No, but just better morals, right? So I think that the argument is sort of that if you look at human progress, then we\\'ve also been getting better and better moral systems and a better understanding of what human values really matter. And like 100 years from now, probably everybody\\'s gonna look back at us and say, \\\"They were eating meat. They were killing all these animals.\\\" So we are on the path of enlightenment. I don\\'t know if I agree with this, but that\\'s one way of saying it. And so a sign of an organism or a culture becoming more and more enlightened is also that you become more and more benevolent I think for others, but maybe that\\'s a bit of a naive take.\n\n**0:16:05.9 Vael:** Yeah. I think in my mind\\-- certainly we have\\-- well, actually, I don\\'t know that we have the correlation that humans are getting smarter and also at the same rate, or, like\\... Like humans are pretty smart. And we\\'re getting better at IQ tests, but I don\\'t know that we\\'re vastly increasing our intelligence per se.\n\n**0:16:20.4 Interviewee:** Yeah. That\\'s for different reasons, right. Yeah.\n\n**0:16:24.9 Vael:** Yeah. And meanwhile, we have, over\\-- centuries, like not that many centuries, we\\'ve been increasing our moral circle and putting in animals and people far away from us, etcetera. But I kind of think of the axes of intelligence and morality as kind of orthogonal, where if we have a system that is getting much smarter, I don\\'t expect it to have\\... 
I expect kind of a lot of human morality runs from evolutionary pressures and also coordination difficulties, such that you need to be able to not kill people, otherwise the species is gonna go extinct. And you know, there\\'s a bunch of stuff that are kind of built into humans that I wouldn\\'t expect to happen just natively with intelligence; where intelligence, I would think of something like\\... the ability to solve problems well, to make multi-step plans, to think in the future, to take out correlations and figure out predictions, and I don\\'t expect that to naively correlate with---\n\n**0:17:19.9 Interviewee:** Yeah, so I think that\\'s a very narrow definition of intelligence, and so I don\\'t know if that definition of intelligence you have, if that actually is the most useful kind of intelligence for humans. So I think that in our society there is this concept where intelligence just means like mathematical reasoning capabilities almost, right? (Vael: \\\"Yeah.\\\") And that is a very, very narrow definition, and most of our intelligence is not that, right? (Vael: \\\"Yes.\\\") So for regimes to be useful to us\\... so I think what you\\'re talking about is sort of like this good old-fashioned AI concept of intelligence, where you have symbolic reasoners, and you\\'re like\\... you\\'re very good at very fast symbol manipulation. And like, \\\"This is what computers are for.\\\" So we should just have super smart computers who can do the stuff that we don\\'t want to do or can\\'t do. It\\'s possible that our intelligence is a direct consequence, not of our mathematical reasoning capabilities, but of something else, of our cultural interactions. So I definitely think if humans were not a multi-agent society, that we would not be nearly as intelligent. So a lot of our intelligence comes from sharing knowledge and communicating knowledge and having to abstract knowledge so that you can convey it to other agents and that sort of stuff.\n\n**0:18:50.0 Vael:** Cool. Yeah. So when I think about how I define intelligence, I\\'m like, \\\"What is the thing I care about?\\\" The thing I care about is how we develop AI. And I\\'m like, \\\"How are we gonna develop AI?\\\" We\\'re gonna develop it so that it completes economic incentives. So we want robots that do tasks that humans don\\'t want to do. We want computers\\--\n\n**0:19:09.2 Interviewee:** Yeah. But is that AI or is that just machine learning? We\\'re trying to have a\\... like input-output black box, and we want that black box to be as optimal as possible for making money or whatever the goal is, right? So that\\'s also a worry I have, is that a lot of people are conflating these different concepts. So artificial intelligence\\...yeah, it depends on how you define it. Some people think of it more as like AGI. If you ask Yann again and all the old school deep learners, they would say, it used to be that they were explicitly not doing AI. So AI is like Simon and Newell and all that sort of stuff, so like pure symbol manipulation, symbolic AI. And pattern recognition is not AI. And now, since deep learning became very popular, some of the people were like, \\\"Oh yeah, this is AI now,\\\" but they used to be machine learning and not AI. So one thing is just like this black box. It can be anything and we just want to have the best possible black box for our particular problem mapping X to Y. And this could be any kind of problem, it could be like image recognition or whatever. 
In some cases, you want to have a symbolic approach, in other cases, you want to have a learning approach, it sort of just depends. So it\\'s just software. Right? But in one case, the software is well defined, and in the other case, it\\'s a bit fuzzier.\n\n**0:20:37.9 Vael:** Yeah. So this all kind of depends on your frame, of course. I think my frame, or the reason why I care, is I\\'m like, I think machine learning, AI, I don\\'t know, whatever this thing is where humans are pouring a lot of investment and effort into making software better, and by better I mean better able to accomplish tasks that we want it to do\\-- I think that this will be\\-- it is very powerful, it has affected society a lot already and it will continue to affect society a lot. Such that like 50 years out, I expect this to be\\... Whatever we developed to be very important in how\\... Affect just a lot of things.\n\n**0:21:10.8 Interviewee:** But we\\'re notoriously bad at predicting the future, right? So if you asked in the \\'60s, people would say like, there\\'s flying cars, and like we\\'re living on Mars and all that stuff. And we\\'re getting a bit closer, but we\\'re still not there yet. But none of these people would have seen the internet coming. And so I think maybe the next version of the internet is going to be more AI driven. So that is a sort of\\... first use case that I would see for AI, which is like a better internet.\n\n**0:21:50.0 Vael:** Interesting. Yeah, I think kind of\\... people will find whatever economic niches will get them a lot of profit, is sort of how I expect things to continue to go, given that that seems to be \\... Given that society works kind of the same way, and people have a lot of time and energy and have the capability to invest in this stuff, we will continue to develop machine learning, AI software, etcetera, such that it\\--\n\n**0:22:13.2 Interviewee:** We\\'ve been doing that for like 30 years or even more. From the Perceptron, Rosenblatt. We\\'ve been already doing this and so it\\'s not really a question of like AI taking over the world, it\\'s software taking over the world, and AI in some cases is better than like rule-based software. But it\\'s still software taking over the world.\n\n**0:22:35.8 Vael:** Yeah, yeah, certainly. And then the current paradigm of like, gigantic neural nets, seems to be better at doing things that we want it to do. And so we\\'re continuing on in that direction, and at some point, as you say, it becomes less able to do what we want it to do, given the amount of resources that we\\'re pouring into it, like that ratio trades off. Okay\\--\n\n**0:22:54.3 Interviewee:** Yeah. So there\\'s other trade offs too, right? So as you become bigger as a neural net, you also become a lot more inefficient. This is already the case for something like GPT-3; latency is a big problem. For us to be able to talk like this to a machine, if the machine has 100 trillion parameters, it\\'s going to be way too slow. It\\'s going to take, I don\\'t know, 10 minutes to generate an answer to a simple question. So it\\'s not only a tradeoff of\\... Best does not just mean accuracy. Best also is like, how efficient are you? How fair are you? How robust are you? How much environmental impact do you have? All of these different sort of metrics that all matter for choosing what defines \\\"best\\\" for a system. 
I think this is something we need to improve a lot on as a community, where we stop thinking beyond this pure accuracy thing, which is like an academic concept, to an actual\\... like how can we deploy these systems in a responsible way, where we think about all the possible metrics that matter for deployment. So we want to be at the Pareto frontier of like 10 different metrics, not just accuracy.\n\n**0:24:06.8 Vael:** Cool. Alright, that makes sense. So still thinking ahead in the future, do you think we\\'ll ever get something like a CEO AI?\n\n**0:24:14.0 Interviewee:** So, if\\-- so a CEO AGI or a CEO AI?\n\n**0:24:18.8 Vael:** Um, some sort of software system that can do the things that a CEO can do.\n\n**0:24:25.6 Interviewee:** No.\n\n**0:24:26.1 Vael:** No. Okay.\n\n**0:24:28.6 Interviewee:** So not before we get AGI. So I think that is an AI complete problem. But I do think we\\'ll get a very good CEO AI assistant. \\[inaudible\\] \\...real human. It\\'s like a plane, right? So like a plane is flown by a pilot but it\\'s really flown by a computer. So I think the same could be true for a company where the company has like, a CEO pilot whose job is also to inspire people and do all of the human soft skills. And they have an assistant who does a lot of measurement stuff and tries to give advice for like where the company should be headed and things like that.\n\n**0:25:05.1 Vael:** Okay, awesome. And you do think that you could have a CEO AGI, it sounds like.\n\n**0:25:10.3 Interviewee:** Yeah, but if you have an AGI, then we don\\'t need CEOs anymore.\n\n**0:25:14.3 Vael:** What happens when we get AGI?\n\n**0:25:16.9 Interviewee:** All the humans die.\n\n**0:25:17.5 Vael:** All the humans die. Okay! \\[laughs\\]\n\n**0:25:20.1 Interviewee:** \\[laughs\\] So I think it depends. I think actually the most likely scenario, as I said, for AGI to come into existence is when humans merge with AI. And so I don\\'t think that it\\'s a bad thing for AGI to emerge. So if there is an AGI, then it will be a beautiful thing, and we will have made it as a society. So yeah, if that thing takes over, then that thing is going to be insane, it\\'s going to take over the universe, and then we will be sort of like the cute little people who made it happen. So either we become very redundant very quickly or we sort of merge with AI into this new species kind of.\n\n**0:26:14.1 Vael:** Interesting, okay. And you don\\'t necessarily see a connection between, like, the current\\... \\[you think\\] if we just push really hard on the current machine learning paradigm for 50 years, we won\\'t have an AGI. We need to do something different for an AGI, which sounds like embodiment / combination with humans, biological merging?\n\n**0:26:31.7 Interviewee:** So it could be embodiment and combination with humans, but also just better, different learning algorithms. So probably more sparsity is something that scales better. More efficient learning. So the problem with gradient descent is that you need too much data for it. Maybe we need some like Bayesian things where we can very quickly update belief systems. But maybe that needs to happen at a symbolic level. I still think we have to fix symbolic processing happening on neural networks\\-- so we\\'re still very good at pattern recognition, and I think one of the things you see with things like GPT-3 is that humans are amazing at anthropomorphizing anything. 
I don\\'t know if you\\'ve ever read any Daniel Dennett, but what we do is we take an intentional stance towards things, and so we are ascribing intentionality even to inanimate objects. His theory is essentially that consciousness comes from that. So we are taking an intentional stance towards ourselves and thinking of ourselves as a rational agent and that loop is what consciousness is. But actually we\\'re sort of biological machines who perceive their own actions and over time this became what we consider consciousness. So\\... where was I going with this? \\[laughs\\] What was the question?\n\n**0:27:57.2 Vael:** Yeah, okay. So I\\'m like, alright, we\\'ve got AI, we\\'ve got lots of machine learning\\--\n\n**0:28:00.8 Interviewee:** \\--oh yeah, so do you need new learning algorithms? Yeah. So I think what we need to solve is the sort of System 2, higher-level thinking and how to implement that on the neural net. The neural symbolic divide is still very much an open problem. There are lots of problems we need to solve, where I really don\\'t think we can just easily solve them by scaling. And that\\'s\\-- like there is very little other research happening actually in field right now.\n\n**0:28:35.3 Vael:** Alright. So say we do scaling, but we also have a bunch of software. Like algorithmic improvements at the rate we\\'re seeing, and we\\'ve got hardware improvements as well. I guess this is just more scaling, but we have optical, we have quantum computing. And then we have some sort of fast learning systems, we know how to do symbolic processing, we\\'re much more efficient. Here we now have a system that generalizes very well and is pretty efficient, and I don\\'t know, maybe we\\'re hundred years out. Say maybe we\\'re in a different paradigm, maybe we\\'re kind of in the same paradigm. We now have a system that is\\--\n\n**0:29:05.5 Interviewee:** We would be in a different paradigm for sure.\n\n**0:29:07.4 Vael:** Okay. We are in a different paradigm, because\\... because all these learning algorithms\\--?\n\n**0:29:11.4 Interviewee:** Paradigms don\\'t really last that long, if you look at the history of science.\n\n**0:29:16.2 Vael:** Okay, cool. But are we still operating under like, here\\'s software with faster learning algorithms, more efficient learning algorithms, like symbolic reasoning, Bayesian stuff\\--\n\n**0:29:24.7 Interviewee:** Maybe. But I mean it could be that neuromorphic hardware finally lives up to its promise, or that we can do photonic chips at the speed of light computation and things like that. We\\'re also very good in AI at fooling ourselves into thinking that we are responsible for all of these amazing breakthroughs, but without hardware engineers at NVIDIA, none of this stuff would have happened, right? They are doing very different things.\n\n**0:29:55.1 Vael:** Alright, so we\\'ve got this AI system which is quite general, we\\'re in maybe a different paradigm, but we\\'re still like\\-- faster learning systems. Here we are, these things are very capable, very general, when they generate stories, they model physics in the world and then use that to generate their stories. Maybe they can do a lot of social stuff, maybe they know how to interact with people. And here we are with our system. Is this now an AGI?\n\n**0:30:18.0 Interviewee:** No, no, so\\-- Okay, now I remember what I was gonna say about the Dennett thing. So we anthropomorphize everything, we take this intentional stance at everything. 
We do this to ourselves, we do this to everything, especially when it speaks language. So when we see a language model and it\\'s like, \\"whoa, it\\'s amazing, it does this thing,\\" but all it\\'s really doing is negative log likelihood, maximum likelihood estimation. It\\'s basically just trying to fit \\"what is the most likely word to go here\\". So you can ask yourself whether we are so impressed by this system because it\\'s so amazing, or because we are sort of programmed to have a lot of respect for things that speak language, because things that speak language tend to be humans. What you were just saying made it sound like you were saying, when these systems are sort of like humans, when they can do this and when they do that, and when they understand the world. So how do you define \\"understanding the world\\" there\\--\n\n**0:31:18.7 Vael:** I mostly mean like they could sub in for human jobs, for example\\--\n\n**0:31:25.0 Interviewee:** Yeah, but that\\'s not the same thing as\\-- stepping in for a human, they can already do that. But it depends on the problem. They\\'re very good at counting, but\\--\n\n**0:31:34.5 Vael:** Yeah, but I don\\'t think we could have like a mathematician AI right now per se. I guess I forgot to define my interpretation of AGI, but like a system that is very capable of replacing all current human day jobs.\n\n**0:31:51.6 Interviewee:** Including yours and mine?\n\n**0:31:55.9 Vael:** Yup.\n\n**0:31:57.8 Interviewee:** Okay. But then who would it be useful for? Would the president still have a job or not?\n\n**0:32:09.7 Vael:** Uh\\... It doesn\\'t have to. I think you could just spend\\-- humans wouldn\\'t have to work anymore, for example, and they could just go around doing whatever they do.\n\n**0:32:16.7 Interviewee:** Yeah. But that\\'s not at all what humans do. We\\'re all so programmed to compete with each other.\n\n**0:32:24.7 Vael:** Yeah, we can have games, we can have competitions, we can do all sorts of things, we have sports.\n\n**0:32:29.1 Interviewee:** I think it\\'s gonna be very quickly my AI versus your AI, basically.\n\n**0:32:33.9 Vael:** Okay, we can have big fights with AIs, that seems very dangerous.\n\n**0:32:37.3 Interviewee:** Yeah, I know, yeah. So that is a more likely scenario, I think, than everybody being nice and friendly and playing games. (Vael: \\"Yeah.\\") If people want to have power, and whoever controls the AGI will have the most power, (Vael: \\"That seems right,\\") then I think we\\'re going to be developing our own AGIs at the same time. And then those AGIs at some point are going to be fighting with each other.\n\n**0:33:02.0 Vael:** Yeah, yeah, I think we might even get problems before that, where we\\'re not able to get AIs aligned with us. Have you heard of AI alignment?\n\n**0:33:10.9 Interviewee:** Yeah, so \\[close professional relationship\\] wrote a nice thesis about it. \\[Name\\], I don\\'t know if you know \\[them\\] by any chance. So yeah, alignment is important, but my concern with all this alignment stuff is that it\\'s very ill-defined, I think. Either it means the same as correctness, so is your system just correct, or good at what you want it to be good at\\... alignment is sort of like a reinvention of just correctness. I can see why this is useful for some people to put a new name on it. But I think it\\'s a very old concept where it\\'s just, okay, we\\'re measuring things on a very narrow static test set, but we should be thinking about all these other things. 
You want your system to be really good when you deploy it in the real world. So it needs to be a good system or a correct or an aligned system. And so alignment maybe is a useful concept, only in the sense that the systems are getting so good now that you can start thinking about different kinds of goodness that we didn\\'t think about before, and we can call that alignment, like human value-style things. But I think the concept itself is very old; it\\'s just like, is your system correct?\n\n**0:34:40.0 Vael:** Yeah. And then it\\'s nowadays being thought about in terms of very far future systems and aligning with all values and preferences. (Interviewee: Yeah.) Cool. Yeah, do you work on any sort of AI safety or what would convince you to work on this or not work on this, etcetera?\n\n**0:34:56.5 Interviewee:** Yeah so, I\\'m not sure. AI safety is a bit of a weird concept to me, but I do work on responsible AI and ethical AI, yeah.\n\n**0:35:06.9 Vael:** Hm. And what does that mean\\--\n\n**0:35:09.1 Interviewee:** So these are things like\\... I\\'m trying to get better fairness metrics for systems. So in \\[company\\] we built this provisional fairness metric where we do some heuristic swaps. And so right now we\\'re working on a more sophisticated method for doing this where, let\\'s say, you have something, a sentence or some sort of natural language inference example, so a premise and a hypothesis and it\\'s about James, like if you change James to Jamal, that shouldn\\'t change your prediction at all. Or if you change the gender from James and you turn it into a woman, that shouldn\\'t change anything there. And it does, actually, if you look at restaurant reviews, if you changed the restaurant to a Mexican restaurant and the person who\\'s eating there to Jamal, then your sentiment goes down. So this is the sort of stuff that shouldn\\'t happen in these systems that is a direct consequence of us just scaling the hell out of our systems on as much data as we can, including all of the biases that exist in this data. So I\\'m working on trying to do better measurement for these sorts of things. And so I think if we are not getting better at measurement, then all of this stuff is basically a pointless discussion.
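\\[Note: a minimal sketch of the heuristic-swap check described above. The predict_sentiment stub and the example swaps are hypothetical stand-ins rather than the actual metric built at \\[company\\]; the point is only that swapping a name or a cuisine in otherwise identical text should not move the prediction.\\]

```python
# Illustrative counterfactual-substitution ("heuristic swap") fairness check.
# predict_sentiment is a dummy stand-in for whatever model is being audited.

def predict_sentiment(text: str) -> float:
    """Stand-in scorer; a real audit would call the deployed model here."""
    return 0.5  # constant so the sketch runs end to end

def counterfactual_gaps(texts, swaps):
    """Average |score(original) - score(swapped)| for each (source, target) swap.

    A model that treats "James" and "Jamal" (or "Italian" and "Mexican")
    the same should show gaps near zero.
    """
    gaps = {}
    for source, target in swaps:
        diffs = []
        for text in texts:
            if source not in text:
                continue
            swapped = text.replace(source, target)
            diffs.append(abs(predict_sentiment(text) - predict_sentiment(swapped)))
        gaps[(source, target)] = sum(diffs) / len(diffs) if diffs else 0.0
    return gaps

if __name__ == "__main__":
    reviews = ["James loved the Italian restaurant.", "James said the food was bland."]
    print(counterfactual_gaps(reviews, [("James", "Jamal"), ("Italian", "Mexican")]))
```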
\n\n**0:36:29.1 Vael:** Great, thank you. And then my last question is, have you changed your mind on anything during this interview and how was this interview for you?\n\n**0:36:35.9 Interviewee:** It was fun. Yeah, I\\'ve done a few of these with various people and it\\'s always a bit like, I don\\'t know. It feels a bit like\\... we\\'re getting ahead of ourselves a little bit. But maybe I\\'m also just old. So when I talked to \\[close professional relationship\\] and how \\[they\\] think about stuff, I\\'m like, I just don\\'t understand how \\[they\\] think about AI.\n\n**0:37:06.2 Vael:** Got it. \\[They\\'re\\] like way out here, and we need to make sure that systems do our correct\\--\n\n**0:37:11.9 Interviewee:** Yeah, \\[they\\'re\\] really.. Yeah, \\[they\\] put a lot more faith also in AI, which I think is very interesting. So I asked \\[them\\] like, \\"Okay, so this alignment stuff, in the end who should we ask what is right or what is wrong? When we\\'re trying to design the best AI systems, who should we ask for what\\'s right and wrong?\\" And then \\[their\\] answer was, \\"We should ask the AI.\\"\n\n**0:37:38.7 Vael:** What? No, we should ask humans.\n\n**0:37:41.0 Interviewee:** Yeah, no, so \\[they\\] think that basically AGI or AI is going to get so good, these language models are gonna get so good that they can tell us how we should think about our own moral philosophical values so that we can impose them onto AI systems. That to me just sounds crazy, like batshit crazy, but that\\'s one way to think about it. I mean, I respect \\[their\\] opinion. I just can\\'t understand it.\n\n**0:38:11.7 Vael:** Interesting. Yeah, I think if I try to model what I would imagine \\[they\\] would be saying, under the alignment paradigm, I would say that you need to ask for human feedback, but it\\'s hard to get human feedback on very intelligent systems. And so you should ask AI to summarize human feedback, but it should always be grounded in a human otherwise we\\'re in trouble, so.\n\n\\[ending comments\\]\n", "filename": "individuallyselected_w5cb5-by Vael Gates-date 20220318.md", "id": "3a3964000110e6fbc09a0c4efd28d3bb", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "individuallyselected_zlzai-by Vael Gates-date 20220318", "authors": ["Vael Gates"], "date_published": "2022-03-18", "text": "# Interview with AI Researchers individuallyselected_zlzai by Vael Gates\n\n**Interview with zlzai, on 3/18/22**\n\n\\[Note: The interviewee\\'s Zoom connection was very bad since they were calling in from an Uber, so there\\'s a lot of missing content (and \\"I lost you\\...\\" parts of the conversation that have been removed for clarity).\\]\n\n**0:00:02.5 Vael:** Cool. All right. So my first question is: can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:09.2 Interviewee:** Definitely. The way that I describe it is, first of all, I work on deep learning, let\\'s get that out of the way. And then beyond that, I\\'m interested in understanding \\[inaudible - long\\]. If I had to describe it in a word, I would call it the science of deep learning. This is a topic that there are few other researchers interested in. We try to understand\\... We treat these as complex systems that have emergent properties due to that complexity, in the same way you might think of biology as an emergent property of physics. It\\'s not something you might predict from first principles, but once it\\'s there, there\\'s a lot you can say about it that\\'s pretty interesting. And then from there, I\\'m interested in trying to understand how do these things learn and how can you make them learn more efficiently. You can think of that\\... A metaphor I like to use is like pharmaceuticals. Once you understand a biological organism, you can design an intervention that takes advantage of the patterns that you\\'ve seen in order to get a certain behavior or a certain outcome.\n\n**0:01:00.8 Vael:** Great. Cool. I missed maybe the second sentence there. I assume that was like large NLP systems or foundations or something?\n\n**0:01:08.1 Interviewee:** Yes. So I work on deep learning, so I\\'m interested in the neural networks as they are in practice.\n\n**0:01:12.7 Vael:** Got it. Cool. All right. And then my next question is, what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:01:23.4 Interviewee:** So the thing that I\\'m most excited about is that this can do extraordinary things that we don\\'t know how to program, but we know we can do. 
Let me try to elaborate on that a little bit. What I mean by that is there\\'s a world of things that we don\\'t know whether anything can accomplish, tasks that are sufficiently complicated or tasks where given the input data, you may not know whether you can predict the output data. But neural networks are closing the gap between things that we know are possible to do because we do them as humans all the time, but we don\\'t know how to program, or there\\'s no kind of discrete handwritten program that we can write easily that will describe this. And so I\\'m really excited about the fact that we can take data and actually do something that starts to resemble things that require human complexity in order to do that. So to me, that\\'s the exciting part. Self-driving cars are one example, though we\\'re not that good at that, but even just a mere handwritten digit recognition like the MNIST task, which is the most basic machine learning task in the world, now that I\\'m in the field, but when I first took a machine learning class and saw that task, my mind was blown that you could just, with this tiny little data set, learn how to do handwritten digit recognition. That\\'s pretty profound, as far as I\\'m concerned, and I\\'ve never quite lost my fascination for that.\n\n**0:02:35.9 Interviewee:** On the risk side, I\\'m concerned about a few things. I\\'m not an AGI believer or an AGI concerned person, so you can cross that one off. I think \\[inaudible\\] huge risks, I am always concerned about\\... \\[inaudible - very long\\]. \\[Vael saying what was missed\\] \\...Let me start over. It was me trying to search for the right words so you missed absolutely nothing.\n\n**0:03:47.2 Interviewee:** Now that I found the right words though, the two things that scare me most, number one, the fact that the systems fail in ways that are unintuitive to humans or that we wouldn\\'t be able to reason about intuitively. So the example I give is, you think an automobile driver is likely to drive more poorly at night or when they\\'re tired or when they\\'re impaired or what have you. It\\'s really hard to have an intuition about when the Tesla is going to think of the bus as a cloud, and then slam into the back of it. And it\\'s those rare failures when we use a system a lot of times that lead to really bad outcomes and lead to mistrust. At the end of the day, AI needs to be trustworthy and we need to, as humans, trust it, otherwise, it will never be useful, and we\\'ll never accept it or terrible things will happen, and so I think a lot about the notion of what does it mean for it to be trusted; where are the gaps between where we are right now and what it would take to be trustworthy.\n\n**0:04:41.3 Interviewee:** The other big concern I have is that I think we don\\'t take the problem of understanding how these systems work seriously. It used to be that when we design a big complicated piece of software, think like Windows or PowerPoint or a browser or something like that, we assumed it would take 1000 engineers to build it, and then maybe let\\'s say 50 or 100 to test it. Deep learning, we have this really incredible situation where \\[inaudible - long\\] \\... the way that I would put it, the system kind of \\[?verb\\] on its own based on the data. 
And we\\'re lazy in computer science, we like to think that, \\"Oh, the system came to life on its own, so we should have this one hammer that we can bang against it and see whether it\\'s good or not, or whether it\\'s fair or not, or use the explainability bot to understand how to explain the system.\\"\n\n**0:05:32.1 Interviewee:** But in reality, I think the ratios are just going to be reversed. The software, if you\\'re using\\... 100 researchers may be able to build a very exciting piece of software. You\\'re going to need 1000 or 2000 engineers to tear it apart down to the studs and try to understand how each individual piece of one specific system works. So I think the entire literature around explainability and understanding of deep learning is completely wrong-headed because people are looking for general solutions to a problem that doesn\\'t have general solutions. It\\'s like writing a program that will debug any program for you. There\\'s just no such thing. It\\'s not how this game works. So that scares me a lot, that we\\'re thinking about this completely the wrong way. And we do need some degree of explainability, we need some degree of understanding of how these systems work, but we\\'re not going to get there from the way that we\\'re currently thinking about this, and it\\'s just going to take a lot more effort and resources than we\\'re currently remotely considering giving to it.\n\n**0:06:24.5 Vael:** Interesting. And it sounded like\\... So how many engineers per researcher do we need, or per model do we need, do you think?\n\n**0:06:33.8 Interviewee:** Well, this is going to be complicated. The 1000 and 100 are just kind of\\... Every piece of software needs a different number of engineers, and three engineers can actually build a pretty big piece of software, but the amount of\\... I\\'ll go out there on a limb and say it\\'s going to be a 10 to 1 ratio.\n\n**0:06:53.8 Interviewee:** That\\'s for every 10\\... Let\\'s call it 10 testers. Whatever we call them, whether it\\'s engineers or researchers or what have you, let\\'s say that 10 people need to try to understand the system or 100 people need to understand the system. For every \\[inaudible\\] building the system. \\...Feel free to tell me I\\'m wrong if \\[inaudible\\].\n\n**0:07:16.5 Vael:** No, I\\'m just trying to actually hear you, so it\\'s 10 to\\... Or 10 testers or 100 testers to one capabilities person making it happen. Wow. \\...Are you still here? (Interviewee: \\"Yep, still here.\\")\n\n**0:07:34.0 Vael:** Great, alright, so that\\'s a pretty extreme ratio. Do you think we\\'ll get there? And what happens if we don\\'t get there?\n\n**0:07:42.3 Interviewee:** I think we\\'ll have to get there. At the end of the day, we evaluate these systems by their capabilities. If you have an autopilot system and it crashes the plane a lot, at the end of the day, you\\'re not going to use it and it\\'s not going to be allowed to fly. In order to get the kinds of capabilities we\\'re hoping for, and to get the degree of understanding of failures that we typically expect in any high assurance system, we\\'re going to need to have\\... If your self-driving car crashes enough times, someone will require you to tear that thing down to the studs to understand that, and you\\'re going to have to do this regardless of whether you decide to do this proactively, or eventually your system won\\'t be allowed on the road until you do. 
So I think there\\'s going to be a huge amount of manual labor in understanding how these systems work, and I think that either through failures that lead to bans of certain uses of technology until it\\'s better understood or until it\\'s more resilient, or someone who\\'s being proactive and actually wants to ensure their self-driving car works. Either way, you\\'re going to end on this situation. No better way to put it.\n\n**0:08:48.5 Vael:** Okay, that\\'s super fascinating. Okay, so you think that we are going to need\\... The way we\\'re doing interpretability is not good right now, it needs to be specific to each system, and currently, we\\'re probably doing much\\... Yeah, we\\'re not even going in the right direction. I kind of expect that self-driving cars will be deployed without this huge amount of intervention on it, but you say that for most systems or something, we will just continue to have failures until it becomes obvious from society that we need people to do interpretability and explainability type of work in this correct way.\n\n**0:09:22.0 Interviewee:** I think so, I think that\\'s a good summary. The only thing I\\'ll add to this, and I spend a big chunk of my life doing AI policy, so I have to think about this stuff a lot. Is that the amount of work we put into the assurance is proportional to the risk that the system has if it fails. So my favorite example of this is\\-- I\\'ve done a lot of work in the past on facial recognition policy. Obviously a hot topic, facial bias issues, etc. So let me give you two examples. Google Photos has this clustering algorithm where it will basically find all the faces of grandma and cluster them so that you click on one picture of grandma, and you can find them all if you want to.\n\n**0:09:58.4 Interviewee:** Suppose there\\'s racial bias in that algorithm, and suppose that instead the police department is using a facial recognition system to try to identify the perpetrator of a crime via a driver\\'s license database. These are two applications that might both have the same technology underneath, they might both have the same biases. We\\'re going to be much more worried about one than the other because one could lead people to lose their freedom, and the other may lead to people rightly feeling offended and people rightly feeling hurt, but not someone being thrown in jail potentially. So we have to handle the consequences of the systems in line with their risks. I\\'m a lot less concerned about interpretability of Google\\'s facial recognition system than I am about a police\\'s facial recognition system.\n\n**0:10:47.8 Vael:** I see. And since they need to be treated differently anyway, according to your view\\... Yeah, just do it\\-- do the interpretability in accordance with their importance. That makes sense.\n\n**0:10:56.7 Interviewee:** Exactly.\n\n**0:10:57.7 Vael:** Yeah. Cool. Alright, so my next question is about future AI. So putting on a science fiction forecasting hat, say we\\'re 50 plus years into the future. So at least 50 years in the future, what does that future look like?\n\n**0:11:11.4 Interviewee:** I think it\\'s honestly not going to look that different from the present, just a little bit more extreme. I don\\'t think we\\'ll be in an AGI world. I don\\'t think we will have systems that rival human intelligence. 
I think we will be able to specify what we want out of systems in much higher level terms and actually get them to do this\\-- \\[I\\'m\\] thinking, like, a Roomba that is actually a useful intelligent Roomba. A Roomba that knows when to vacuum, knows when not to disturb you, that you can say, \\\"Hey, can you please do an extra touch-up on that area,\\\" and it\\'ll go do it. I think the current wave of machine learning is great at pattern recognition; I don\\'t think pattern recognition gets us to AGI. I think the technology is going to have to look fundamentally different, and I don\\'t know that we\\'re any closer now than we were 50 years ago in that respect, beyond that we know there are a lot of dead ends.\n\n**0:12:05.0 Interviewee:** That\\'s not to say what we\\'re doing right now isn\\'t exceedingly useful. And \\[that\\] we\\'re not going to push this to the nth degree, and \\[that\\] we won\\'t have the New York Times of the future where you click on a headline and it will generate an article for you based on your interests, your knowledge level, your background on the topic and the amount of time you have to read it; I think that\\'s an application that might be 10 or 15 years in the future, if not sooner than that. But I don\\'t think we\\'ll be in a place where we\\'re worried about giving robots rights or things like that. I don\\'t think that a huge amount of pattern recognition will get us to cognition, and I don\\'t think the current track we\\'re on will get us there. That doesn\\'t bother me personally because I don\\'t really care about that. I\\'m much more interested in what we can do for people now and in the near future, and not how we create intelligence. That\\'s more of a scary question for me than an exciting question for me, but it\\'s\\... \\[inaudible - short\\]\n\n**0:13:00.2 Interviewee:** \\...thinks that, heading to an AGI world, even if\\... And we may be in a world where we do have self-driving cars that actually work, even if it may not be here for 10 or 15 years, \\'cause it\\'s a hard problem. We may have actual smart digital assistants, but that\\'s \\[inaudible - short\\] intelligent beings who are peers or \\[hard to parse\\] in that respect.\n\n**0:13:25.1 Vael:** Interesting. I don\\'t know if it will help for me to turn my video off, but I\\'m going to try it to hope it gets a little bit less choppy. Great. Okay. So you\\'re talking about within the frame of AGI, like lots of people presumably talk about AGI around you. What are people\\'s opinions? What do you think of their opinions? Etcetera.\n\n**0:13:46.5 Interviewee:** Oh, I think there are a lot of nut cases. I mean, there are also a lot of optimists and a lot of people who have read a lot of science fiction and want to bring that to life, which is great. And a lot of people who have spent too much time in San Francisco and are surrounded by peer groups who\\... Basically, there\\'s a lot of monoculture in San Francisco. And I say this having just come back to San Francisco, and as someone who refuses to move \\[there\\] despite \\[\\...\\]. 
I have friends who, if I recall correctly, go and worship at the Church of the AGI or something like that, from what I understand, under the belief that eventually there will be artificial intelligences that are smarter and more powerful than us, and therefore they should get ahead of the game and start worshipping them now, since they\\'ll be our overlords later.\n\n**0:14:32.4 Interviewee:** I\\'m a little more concerned with the here and now, and I think that the technological leaps between great pattern recognizers that we have today and \\[AGI\\] are way far away. Just because you\\'re a nutcase doesn\\'t mean you can\\'t change the world. And there are plenty of examples of that, and it\\'s really good to have that point of view constantly echoed in the community. But I don\\'t worry about the end of the world because of AGI in the way that I think some of my friends do. I don\\'t consider that to be the biggest existential risk that we have, far from it. And I don\\'t think it\\'s a risk that we should really be spending any time worrying about at the moment.\n\n**0:15:07.6 Vael:** Got it. Is that because, like you said, there\\'s other existential risks to prioritize, or you don\\'t think this could be an existential risk?\n\n**0:15:15.1 Interviewee:** I don\\'t think this is\\... In the very, very long tail of ultra low probability risks to civilization, this is so far out in the tail that it\\'s not worth spending any time on, independent of the fact that there are also much greater risks.\n\n**0:15:29.6 Interviewee:** It\\'s not just a matter of priority. It\\'s also a matter of\\... it\\'s not a good use of any resources.\n\n**0:15:34.8 Vael:** Got it. That makes sense, yeah. Unfortunately, a lot of my questions are about AGI so you\\... \\[chuckle\\] Here we go.\n\n**0:15:42.1 Interviewee:** No, I\\'m happy to give you a strong contrasting opinion to many that I\\'m sure you\\'ve heard. So come at it.\n\n**0:15:48.1 Vael:** Lovely, lovely. Okay, cool. So how I\\'m defining AGI here is like any sort of very general system that could, for example, replace all human jobs, current-day human jobs, whether or not we choose to or don\\'t choose to do that. And the frame I usually take is like, 2012, deep learning revolution, here we are. We\\'ve only been doing AI for like 70 years or something, and here we are 10 years later, and we have got systems like GPT-3, which have some weirdly general capabilities. But regardless of how you get there, because I can imagine that we hit some ceiling on the current deep learning revolution and we need to have paradigm shifts\\-- my impression is generally that if we keep on pouring in the amount of human talent and have software\\-- algorithmic improvements at the rate we\\'ve seen, hardware improvements, etcetera, and just like, the driving human desire to continue to follow economic incentives to earn money and replace things and make life more convenient, which I think is what a lot of what ML is aimed at right now, that eventually we will get some sort of AGI system. I don\\'t really know when. Do you think we will at some point get some very general system?\n\n**0:16:55.8 Interviewee:** I think if you take the time to infinity and humanity lasts in time going to infinity, yes, we will. I do think we have one example of a truly general intelligence system, namely human beings. 
And eventually we will probably get to the point where we could replicate that intelligence manually, if we have to literally photocopy somebody\\'s brain \\[onto\\] transistors at some point, when we get technology advanced enough for that. Do I think that will happen this century? No. Do I think it will happen next century? Probably not. Do I think it might happen in the ones after that? Maybe.\n\n**0:17:28.8 Interviewee:** So the answer is yes, in the limit, but no in any kind of limit that you or I would think about or be able to conceptualize.\n\n**0:17:35.7 Vael:** Yeah, that makes sense. So you think there\\'s probably going to be a bunch more paradigm shifts needed before we get there?\n\n**0:17:42.8 Interviewee:** I think at least one\\-- it\\'s hard to know how many paradigm shifts because it\\'s hard to know what they are, but I do not think this paradigm is the right one. I think this paradigm is amazing, and I\\'m really excited about the kinds of machines we can build. But you can be excited about the kinds of machines we can build and recognize the limits of those machines. In the same way that\\-- how are we going to get to AGI in 10 years if we\\'ve been pouring\\... You were talking about economic incentives. How much investment do you think has gone into self-driving cars over the past, let\\'s say, 10 years?\n\n**0:18:14.2 Interviewee:** Let\\'s call it \\$100 billion plus or minus. That\\'s probably the largest single investment in AI technology for an application anywhere. Ever, very likely. And where are we today on self-driving cars? They\\'re 90% of the way there, \\[inaudible - short\\] 10 times as difficult as the first 90%. And I don\\'t think you\\'ll be seeing fully autonomous self-driving cars in the general case on general roads in the next 10 or 15 years. So the thought that there would be an AGI at that point, let alone in 50 years, is completely nonsensical to me, personally.\n\n**0:18:56.0 Vael:** Yeah, that makes sense. Especially since self-driving cars are, like, robotics and robotics is behind as well. But even GPT and stuff doesn\\'t really have good grounding with anything that\\'s happening in the world and how\\--\n\n**0:19:09.2 Interviewee:** GPT\\'s capabilities are also wildly overstated. You can pull a lot of good examples out of GPT if you really want to, and you can pull out a lot of crappy ones. But we\\'re not going to just brute-force large language models to get our way to general intelligence. That\\'s BS you\\'ll only hear from someone who works at OpenAI who wants their equity to be worth more, quite frankly. The only people who say this are the ones who have an economic incentive to say this, and the people who follow the hype. Otherwise, I don\\'t really know of anyone who thinks GPT is the road to AGI, especially given that we can\\'t scale up any bigger. I mean, this is something, this is my whole push right now, is that the only way that Nvidia is going to come out with new GPUs next week, and they are going to come out with new GPUs next week that will be twice as fast as the ones that came out two years ago, is if they doubled the amount of power. It\\'s not like we\\'re doubling the amount of hardware we have available.\n\n**0:20:02.4 Vael:** Got it. Do you know how good optical or quantum computing will be? I know that those are in the pipeline.\n\n**0:20:08.9 Interviewee:** They\\'re in the pipeline. Quantum\\'s been in the pipeline for a long time, and we\\'re up to what, four qubits? Cool.
Again, this is one of those cases where we\\'ve been awful close to nuclear fusion for a long time. We\\'re going to have nuclear fusion in the limit. I\\'m 100% certain of that. When we have that, call me.\n\n**0:20:32.7 Interviewee:** It could be next year. I mean, we\\'re very close to crossing that threshold, but we\\'ve been really close to crossing that threshold for several decades. And so, trying to call the year that it\\'s going to happen within a five-, 10- or 50-year time horizon, it\\'s really, really tough when it comes to these technologies. And I\\'m in that same place about quantum. We are making progress on quantum and I\\'m really excited about it. I\\'m a little bit scared of it, but I\\'m excited about it, but that doesn\\'t mean\\... Like, crossing that threshold is really difficult. The difference between 10 years and 50 years and 100 years away may be very small improvements in technology, but it may take an exceedingly long time for us to accomplish. Optical, same thing. So I\\'m not sitting here expecting that breakthroughs are just going to happen left and right. These breakthroughs often take a very long time; they need a lot of incremental advances and all sorts of other technical advances in other fields to make them happen. Material science especially, in the case of quantum. And we may get there in five years, or we may get there in 50 or 100 years or longer, and it\\'s hard to say.\n\n**0:21:36.6 Vael:** That makes sense. So what would convince you that you should start working on AI alignment? It sounds like there\\'s probably going to be some breakthrough that would make you think that it\\'s important, but we\\'re not necessarily anywhere near that breakthrough right now. Do you have an idea of what that might be?\n\n**0:21:56.9 Interviewee:** Give me your personal definition of AI alignment.\n\n**0:22:00.0 Vael:** Yeah. Well, actually I want your definition first. \\[chuckle\\]\n\n**0:22:03.3 Interviewee:** So this is not a field that I follow that closely. The entire concept of alignment has really come to the fore in the past three or four months while I\\'ve been trying to \\[job-related task\\]. So I haven\\'t been paying as much attention, so I\\'d actually appreciate your definition. I can tell you the kinds of people who I see talking about alignment and the kinds of papers that I\\'ve seen, the paper titles that I\\'ve seen go across my desk. But I couldn\\'t give you a good definition even if \\[inaudible - short\\].\n\n**0:22:33.9 Vael:** Yeah. So one of the definitions I use, and I\\'ll give you a problem setting I usually think about as well\\-- so, \\[the\\] definition is building models that represent and safely optimize \\[inaudible\\] specify human values. Alternatively, ensuring that AI behavior aligns with system designer intentions. And one of the examples I use for what an alignment problem would be is the idea that highly intelligent systems would fail to optimize exactly what their designers intended them to and instead do what we tell them to do. So the example of OpenAI, trying to\\... have that boat win a race and then it getting caught on like some little\\-- but optimizing instead for a number of points and instead ending up in this little\\-- side, collecting-points area, instead of winning the race. So doing what the designers told it to do instead of what they intended it to.\n\n**0:23:26.2 Interviewee:** So, I mean, this sounds like a\\...
If you want my honest frank opinion.\n\n**0:23:34.8 Vael:** Yeah.\n\n**0:23:35.3 Interviewee:** A BS-ey rebranding of a simple fact of life in computing for\\... Since the dawn of computing, the computer does what you tell it to do, not what you want it to do. (Vael: \\\"Yes.\\\")\n\n**0:23:47.5 Interviewee:** And so, I don\\'t\\... I know a lot of people at OpenAI think they\\'re very deep and profound for calling it AI alignment. \\[inaudible - very long\\].\n\n**0:25:02.7 Interviewee:** So picking up where I was, there are probably a lot of folks at OpenAI, and I know exactly who they are, who think they\\'re very deep and profound who are wondering about this question, but this is kind of the obvious fundamental thing that every first year programmer learns and is a question that everybody has been asking in every context whenever they develop a loss function for a neural network. Your loss function never reflects exactly the value that you want the system to carry exactly, even just the outcome. We optimize language models to be good at predicting the next word and yet they somehow also generate\\... We want them to have properties that go above and beyond that, we want them to be able to even just transfer the representations for downstream tasks.\n\n**0:25:44.0 Interviewee:** This is just a question of to what extent is the thing you\\'re optimizing a system for going to actually align with the task that you want. I just, I don\\'t see what\\'s interesting or new about this question from a research perspective. I think of course it\\'s an important one, but it\\'s not a\\... \\[inaudible - long\\] Saying, oh, we should worry about computer security. Well, yes, but there\\'s no security robot that fixes all computer security. It\\'s a complex context-specific problem. Again, people say things that are aligned with their economic incentives and that is certainly true for my friends at OpenAI but I don\\'t see any profundity in this observation that\\'s just the nature of computing.\n\n**0:26:38.1 Vael:** Yeah. I think it has to be paired with the idea that if you have a very intelligent system that can plan ahead, that can model itself in the world, that it may have an incentive to preserve itself just as an agent pursuing any goal, because it doesn\\'t want to decrease its chance of it succeeding at its goal. I think that has to be paired. Otherwise, it\\'s not particularly special. But I do think creating an agent that in the far future, whenever AGI develops, that has an incentive to not be modified makes it much more dangerous than anything we\\'ve seen previously.\n\n**0:27:16.2 Interviewee:** I agree with that factor, when we get to a point where this becomes a problem. But I think you can see that the amount I care about this question is proportional to the amount that I think that AGI is or will be a concern in your lifetime, or your grandchildren\\'s lifetime. And so I think there are a lot more fundamental basic questions. Like, we don\\'t even understand why a neural network actually is able to recognize handwritten digits in the first place. And until we get these basic things down, we\\'re never going to get to build the kind of systems that have these properties anyway.\n\n**0:27:44.7 Interviewee:** So, I would make the analogy of, let me see, I\\'m trying to think of the right metaphor\\... 
I don\\'t know, for the folks who worry about how we\\'re going to communicate with the alien civilizations we inevitably come into contact with\\-- we better build some spaceships that can get us to space first before we start worrying. And then figure out whether there are alien civilizations out there before we start worrying about how we\\'re going to communicate with them.\n\n**0:28:05.6 Vael:** Yeah, this sounds like a very coherent worldview, I\\'m like, yep, makes sense, is logical.\n\n**0:28:11.7 Interviewee:** Yeah, I have strong opinions, and researchers are always incentivized to have strong opinions; it\\'s what moves science forward. We argue and we disagree with each other, and we\\'re all right, and we\\'re all wrong. But these are my strong opinions. And if you were to chat with any of my friends at OpenAI, you\\'d hear the opposite view and we could still go out for drinks and have a good time.\n\n**0:28:32.5 Vael:** Great. So I think this line of questioning was originally developed by me asking what would you see\\... What would you want to see in the world before you\\'re like, \\\"Oh actually, people are right, this thing is coming earlier than I expected.\\\"\n\n**0:28:47.6 Interviewee:** That\\'s a good question. Honestly, I would want to see a\\... I\\'m thinking of examples of a task, so like a Turing test of some sort that would fit here. Obviously, the Turing test is not a very effective test given that GPT-3 probably passes it with flying colors, despite the fact that GPT-3 is \\[just\\] a very good language model. What I would look for is long-lived machine learning systems that can learn continually. We don\\'t even have continual learning down. We have reinforcement learning systems that eventually become effective agents at solving one particular task; they can\\'t move on to a second task; we don\\'t have any general kind of learning process. Even the reinforcement learning agents that we have have to be really, really specially hand-calibrated for a specific task, to the point where even the specific random seed you use is an important hyperparameter in determining whether the agent will succeed or fail at a given task.\n\n**0:29:50.0 Interviewee:** If we\\'re in that world, we\\'re very far away from an agent that can learn many tasks or has a generalized learning process. This is, I guess, why people are excited about meta-learning in general. Again, I think we shouldn\\'t worry about light-speed travel until we can get off the planet, but that\\'s a whole different\\... You know where I stand on that. So\\... \\[inaudible - short\\] and improve themselves over time; reinforcement learning is kind of a shadow of what we would expect of a real system. A smart home assistant where you can teach it new tasks by explaining them to it, and it will be able to figure out how to accomplish them. We don\\'t have any kind of abstract or general learning capabilities right now. And we talk a very big game about things like AlphaFold or what have you, or any of the Alpha stuff coming out of DeepMind. But these are requiring lots and lots and lots of engineers to get a system to work on one very specific setting in ways that don\\'t transfer to any other setting or any other task. There are principles we\\... \\[inaudible - short\\] being really hard trying a bunch of stuff. And that\\'s a\\... \\[inaudible - long\\] real general learning process that will keep me up at night.\n\n**0:31:03.5 Vael:** Got it. Okay. Interesting.
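The point above that even the random seed acts like a hyperparameter in current reinforcement learning can be made concrete with a toy sketch. This illustration is added for clarity and is not part of the interview; it assumes nothing beyond NumPy, and the deep RL systems under discussion show far larger seed effects than this simple bandit does.

```python
import numpy as np

# Fixed toy task: a 10-armed bandit with hidden mean rewards.
TRUE_MEANS = np.linspace(-1.0, 1.0, 10)

def train_agent(seed, steps=500, epsilon=0.1):
    """Epsilon-greedy agent on the fixed task; only its random seed varies."""
    rng = np.random.default_rng(seed)
    estimates = np.zeros(len(TRUE_MEANS))   # running estimate of each arm's value
    counts = np.zeros(len(TRUE_MEANS))
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = int(rng.integers(len(TRUE_MEANS)))   # explore
        else:
            arm = int(np.argmax(estimates))            # exploit current estimates
        reward = rng.normal(TRUE_MEANS[arm], 1.0)      # noisy reward from the chosen arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

# Same task, same algorithm, same hyperparameters -- only the seed changes.
for seed in range(5):
    print(f"seed={seed}: mean reward per step {train_agent(seed):.3f}")
```

Even in this toy setting, identical agents with identical hyperparameters can land on noticeably different average rewards purely because of the seed, which is the sense in which the seed itself behaves like a hyperparameter.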
So do you think you and your colleagues probably have the same information but are just reacting to it differently? Other people are also not seeing these very general systems or maybe\\-- yeah, presumably, because you have access to the same information, but you\\'re just interpreting it differently based on the incentives that your company or whatever is in?\n\n**0:31:29.9 Interviewee:** I wouldn\\'t always blame the incentives and all that. Again, throwing some shade at San Francisco in particular: When you spend all of your waking hours and all of your sleeping hours around exclusively people who are working on the same \\[inaudible\\], same field; when you work, as they do, not only in the tech industry, but especially in the very tight-knit AI community, where we all go to the same birthday parties when I\\'m out there, and everybody goes to the same spas, and runs into each other on the street, despite the fact that San Francisco is a big city; when you live in that kind of monoculture, quite frankly, you\\'ll lose touch with reality, to some extent. And I think, a lot of people start believing that\\... \\[inaudible - very long\\]\n\n**0:32:31.9 Vael:** Okay, cool. My question was, although I missed that last bit, but my question was, how do people stay in touch with reality, how do they\\...\n\n**0:32:43.7 Interviewee:** I think you have to talk to people who are doing things other than building AI systems. You have to actually maybe interact with someone who works in healthcare or someone who works in finance, or someone who works in any other field besides AI or besides machine learning. And you have to remember that they are real human beings out there. I think this is just a general San Francisco problem. You see people, you see 20-somethings who are wealthy, producing systems that are convenient for them. We see this all the time with startups coming out of the Bay Area.\n\n**0:33:24.1 Interviewee:** My experience in Boston and New York is that it\\'s a very different startup culture of healthtech, fintech, edtech, what have you, things that are quite frankly real and useful. So when I hang out with\\... The times I\\'ve been working at \\[large tech company\\] in California, for example, all of my friends are worried that the sky is falling for some reason. Maybe we\\'re about to enter an AI winter or we\\'re about to create AGI; it\\'s usually one or the other, depending on who you talk to and what mood they\\'re in, and what\\'s going on in their organization at that point in time. But people are just so caught up in the inside baseball, they\\'ve lost the bigger context for everything we\\'ve accomplished, but also how far away we are from some of the things people talk about. And you gotta actually taste the real world from time to time, and remember that you don\\'t just live in this little bubble of lots of neural networks and lots of people with PhDs working on neural networks.\n\n**0:34:19.7 Vael:** And interacting with other people and seeing the wider world gives the context to believe more\\... mainline views? Because it sounds like people end up in either direction, you said they believe that everything is about AI, everything\\'s going to go badly: everything is going to go badly and too fast, or everything\\'s going badly and too slow. And there\\'s some regularity effect that happens if you\\'re hanging out with other people?\n\n**0:34:50.4 Interviewee:** It\\'s almost the opposite of\\... \\[inaudible - short\\] in that respect.
I think it\\'s partially because people will say things like, \\\"Well, there hasn\\'t been a big advance or there hasn\\'t been a big breakthrough in 18 months.\\\" And my reaction is, it\\'s been a\\... Yes, the transformer paper came out 18 months ago. Yeah, let\\'s be patient, and the field does not always continually accelerate. No scientific field continually accelerates. We have periods of big progress and then periods of stagnation and paradigm shift. This is the structure of scientific revolution. If you want to\\... To the extent that you take that view of the world seriously, that is, in my impression, simply how science moves: in fits and starts, not at a steady pace. And in computers we\\'re used to exponentials. That doesn\\'t mean that these systems are going to get exponentially better over time. It may mean we had 10 really good years of progress based on a bunch of different factors, between big data being accessible, lots of computing being accessible, and some improvements in deep learning, that all came together really nicely to give us a big burst of progress.\n\n**0:36:00.1 Interviewee:** Maybe things will slow down a little bit. Or maybe there will be a new architecture in five years that will be another big burst, the way that convolutional neural networks were around the AlexNet breakthrough and then the way that transformers were. But this happens in fits and starts. But around \\[large tech company\\], partially because of \\[large tech company\\]\\'s promotion process and partially because of the very internally competitive atmosphere that that fosters, people are always dour and feel like the sky is falling. And at OpenAI, the incentive is to continue the hype and build really big systems. It\\'s partially in the culture, partially in the leadership, partially in the brand of the place and what it takes to make your stock go up in value. So I think people really are shaped by their environments in that respect, and you gotta get out and talk to other kinds of people and get other perspectives, and I don\\'t think there\\'s that much you can do when you\\'re in a big organization like \\[large tech company\\], and there\\'s not a lot you can do when you\\'re in San Francisco, where, with everybody you talk to, it\\'s not \\\"what do you do\\\", it\\'s \\\"which tech company do you work for\\\" or, more likely, \\\"which AI project do you work on\\\".\n\n**0:37:04.4 Vael:** I see, so this is probably, like you said, why you haven\\'t moved and also presumably why you have friends who are not AI researchers.\n\n**0:37:12.4 Interviewee:** Yeah, my office right now is at \\[university\\] \\[non-science field\\]. I just sit by a bunch of \\[non-science field\\] professors. I\\'ve worked there at times in the past. I live in \\[city\\] right now, and they were kind enough to offer me a place to sit and work for a little bit. So I\\'m hanging out with a bunch of \\[non-science field\\] faculty all day. It\\'s a very different perspective on the world. They\\'re worried about very mundane things compared to AGI coming and killing us all, or a quantum computing breakthrough that leads to the end of civilization when we train quantum neural networks or something like that. They\\'re much more worried about bail reform, or very basic mundane day-to-day problems that are actually affecting people around them.
It\\'s hard to get caught up in the hype.\n\n**0:37:56.5 Vael:** Yeah, and it seems like it\\'s important to make sure that people are working on things that actually affect more than just their local environment. I think you mentioned something like that earlier, right?\n\n**0:38:05.9 Interviewee:** Exactly. And there\\'s also a little bit of\\... as computer scientists, we love to calculate things, we love to measure things. Effective Altruism is one great example of this. We love to say, \\\"What is the way that I can put my dollar the furthest along?\\\" And I think sometimes people lose touch with the fact that there are humans out there. There are people right now, you ask questions like, well, either I can help the humans today or I can put all my resources toward saving the entirety of civilization. And civilization and the future is obviously\\-- there are many more humans there than there are here today, so I\\'ll worry about, on the 0.1% chance that we develop AGI and it comes to threaten us all, I\\'ll worry about that problem, because in expectation, that will save more lives than what have you. And I think\\--\n\n**0:38:51.5 Vael:** \\--Yeah, and this is the wrong way to think about things?\n\n**0:38:56.5 Interviewee:** I don\\'t think this is necessarily the wrong way to think about things, but I think it\\'s a little bit\\... Sometimes, it\\'s more important to focus on what\\'s right in front of you and what\\'s tangible and what\\'s here and now. Or the other way I\\'ll put it is, for exactly this reason, MIT is pathologically bad as an institution at dealing with topics that can\\'t be measured. It\\'s really hard to talk to people at MIT about values, unless a value can be measured or optimized.\n\n**0:39:25.0 Interviewee:** And this is just, for a lot of folks who get an empirical education and who are training systems all day, for whom this is a problem, they are just as much\\... The people are just as much subject to this problem as the systems they build. So, if we want to talk about alignment, it\\'s an issue not just of the systems, but of the people building the systems.\n\n**0:39:43.5 Vael:** Alright, well, we are at time. Thank you so much, this is a very different opinion than the ones I\\'ve received so far, and yeah, very well presented. Awesome.\n\n**0:39:52.6 Interviewee:** Thank you, that\\'s what I\\'m here for. You know where to find me if you need to chat more. I\\'m very excited to see what the outcome of the project is.\n\n**0:39:58.4 Vael:** Great. Alright. Well, thank you so much.\n", "filename": "individuallyselected_zlzai-by Vael Gates-date 20220318.md", "id": "abad64bdf7f439b5c0627fdda3b44e3c", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "The Windfall Clause - Sharing the benefits of advanced AI _ Cullen O’Keefe-by Centre for Effective Altruism-video_id vFDL-NxY610-date 20190829", "authors": ["Cullen O'Keefe"], "date_published": "2019-08-29", "text": "# Cullen O'Keefe The Windfall Clause — sharing the benefits of advanced AI - EA Forum\n\n_The potential upsides of advanced AI are enormous, but there’s no guarantee they’ll be distributed optimally.
In this talk, Cullen O’Keefe, a researcher at the_ [_Centre for the Governance of AI_](https://www.fhi.ox.ac.uk/govai/)_, discusses one way we could work toward equitable distribution of AI’s benefits — the Windfall Clause, a commitment by artificial intelligence (AI) firms to share a significant portion of their future profits — as well as the legal validity of such a policy and some of the challenges to implementing it._\n\n_Below is a transcript of the talk, which we’ve lightly edited for clarity. You can also watch it on_ [_YouTube_](https://www.youtube.com/watch?v=vFDL-NxY610) _or read it on_ [_effectivealtruism.org_](https://effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai)_._\n\n## The Talk\n\nThank you. Today I'll be talking about a research project that I’m leading at the Centre for the Governance of Artificial Intelligence (AI) on [the Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/).\n\n![](https://images.ctfassets.net/ohf186sfn6di/7kcc3peZb8bYEL1pUQiECC/b48ff8b72b17f453bb282d39deed89f4/Slide03.png)\n\nMany \\[people in the effective altruism movement\\] believe that AI could be a big deal. As a result, we spend a lot of time focusing on its potential downsides — so-called x-risks \\[existential risks\\] and s-risks \\[suffering risks\\]. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4eXmYnvvmLwYMr4uaA7Yu9/a987aec8a0a5d4ca71f308d30a47a1a4/Slide04.png)\n\nBut if we manage to avoid those risks, AI could be a very good thing. It could generate wealth on a scale never before seen in human history. And therefore, if we manage to avoid the worst downsides of artificial intelligence, we have a lot of work ahead of us.\n\nRecognizing this opportunity (and challenge), Nick Bostrom, at the end of [_Superintelligence_](https://www.amazon.com/dp/B00LOOCGB2/), laid out what he called “the common good principle” — the premise that advanced AI should be developed only for the benefit of all humanity. \n\n![](https://images.ctfassets.net/ohf186sfn6di/79oCHgmBoiaUgAvtjEnrmL/9690e46375dea4ba3780045b66288923/Slide05.png)\n\nIt's in service of the common good principle that we've been working on the Windfall Clause at the Centre for the Governance of AI. \n\n![](https://images.ctfassets.net/ohf186sfn6di/29XkeBrIdolcFRiuyD0AUU/27bd9ac6d0e9059b863edee6c91eb7f0/Slide06.png)\n\nWe've been working on it for about a year \\[as of summer 2019\\], and I'd like to share with you some of our key findings.\n\nI'll start by defining the project’s goal, then describe how we're going to pursue that goal, and end by sharing some open questions.\n\n![](https://images.ctfassets.net/ohf186sfn6di/5hTsi3YTedlfp5v9KRDBaK/b96397af25551134bc6ef8efc32c6e3b/Slide07.png)\n\n**The goal of the Windfall Clause project**\n\n![](https://images.ctfassets.net/ohf186sfn6di/5tgfQ6gFIkvYs6Lqq2q4ow/92bae0439bd5a48cbc17f7699f544741/Slide09.png)\n\nOur goal with this project is to work toward distributing the gains from AI optimally. Obviously, this is both easier said than done and underdefined, as it invokes deep questions of moral and political philosophy. We don't aim to answer those questions \\[at the time of this update\\], though hopefully our friends from the [Global Priorities Institute](https://globalprioritiesinstitute.org/) will help us do that \\[in the future\\]. But we do think that this goal is worth pursuing, and one we can make progress on, for a few reasons.\n\nFirst, it's not a goal that we expect will be achieved naturally.
The gains from the current global economy are distributed very unequally, as graphs like \\[the one below indicate\\]. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4BYytNk9tF0wynorZRCP7/27c4bd60ff58cd6866105b592defa971/Slide10.png)\n\nAI could further exacerbate these trends by primarily benefiting the world's wealthiest economies, and also by devaluing human labor. Indeed, industrialization has been a path to development for a number of economies, including the one in which we sit today. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4GZzJympTdubjy4q4Dx7vr/4ec3b92dcdccb6b07703e2810724fcfe/Slide11.png)\n\nAnd by eroding the need for human labor with complementary technologies like robotics, AI could remove that path to development.\n\nIndustry structure is also very relevant. The advanced tech industries tend to be quite concentrated \\[with a few large companies dominating most major markets\\]. \n\n![](https://images.ctfassets.net/ohf186sfn6di/3uUKMoXOLlZpr8tf9Jv6NW/481bd60af93ccd989e60b82cd0396f88/Slide12.png)\n\nA number of people have speculated that due to increasing returns on data and other input factors, AI could be a natural monopoly or oligopoly. If so, we should expect oligopoly pricing to take effect, which would erode social surplus, transferring it from consumers to shareholders of technology producers.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4pHp4813aN9k31W3M39Vi2/07f3c76ddcc52529693c97cf8578a071/Slide13.png)\n\nWorking toward the common good principle could serve as a useful way to signal that people in the technology fields are taking the benefits of AI seriously — thereby establishing a norm of beneficial AI development. And on an international level, it could credibly signal that the gains from cooperative (or at least non-adversarial) AI development outweigh the potential benefits of an AI race.\n\nOne caveat: I don't want to fall victim to the Luddite fallacy, which is the prediction throughout history that new technologies would erode the value of human labor, cause mass unemployment, and more. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4O6BTdn8QfmsNBhKJ8zpMQ/caad04682d2983738371793ccfbb61a2/Slide14.png)\n\nThose predictions have repeatedly been proven wrong, and could be proven wrong again with AI. The answer will ultimately turn on complex economic factors that are difficult to predict _a priori_. So instead of making predictions about what the impacts of AI will be, I merely want to assert that there are plausible reasons to worry about the gains from AI.\n\n**How we intend to pursue the Windfall Clause**\n\nOur goal is to optimally distribute the gains from AI, and I've shared some reasons why we think that goal is worth pursuing. Now I’ll talk about our mechanism for pursuing it, which is the Windfall Clause.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6Rr66dR1h923zA7uAhNsol/6f0f0a231439461d32a17d0622ad950f/Slide17.png)\n\nIn a phrase, the Windfall Clause is an ex ante commitment to share extreme benefits from AI. I'm calling it an “ex ante commitment” because it's something that we want \\[companies working on AI\\] to agree to before \\[any firm\\] reaches, or comes close to reaching, extreme benefits. \n\n![](https://images.ctfassets.net/ohf186sfn6di/2mSnMtTz9TQ2uz4DIF0V0g/9dc307fcc42290273b0ba17e26c5d288/Slide20.png)\n\nIt's a commitment mechanism, not just an agreement in principle or a nice gesture. It’s something that, in theory, would be legally binding for the firms that sign it. 
So ultimately, the Windfall Clause is about distributing benefits in a way that's closer to the goal of optimal distribution than we would have without the Windfall Clause. \n\nA lot turns on the phrase “extreme benefits.” It’s worth defining this a bit more. \n\n![](https://images.ctfassets.net/ohf186sfn6di/6q4Wgt96ODGmckXcxCOHko/8fc85bac679bfd5578f05b4467907bb8/Slide21.png)\n\nFor the purposes of this talk, the phrase is synonymous with “windfall profits,” or just “windfall” generally. Qualitatively, you can think of it as something like “benefits beyond what we would expect an AI developer to achieve without achieving a fundamental breakthrough in AI” — something along the lines of AGI \\[artificial general intelligence\\] or [transformative AI](https://www.youtube.com/watch?v=9GxVIf3FNJk). Quantitatively, you can think of it as on the order of trillions of dollars of annual profit, or as profits that exceed 1% of the world’s GDP \\[gross domestic product\\].\n\nA key part of the Windfall Clause is translating the definition of “windfall” into meaningful, firm obligations. And we're doing that with something that we're calling a “windfall function.” That's how we translate the amount of money a firm is earning into obligations that are in accordance with the Windfall Clause.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6FHjdQxvcRMd6KAYcUr31j/0bdf008f90c3bdce600458542f94cd1b/Slide22.png)\n\nWe wanted to develop a windfall function that was clear, had low expected costs, scaled up with firm size, and was hard for firms to manipulate or game to their advantage. We also wanted to ensure that it would not \\[competitively\\] disadvantage the signatories, for reasons I'll talk about later.\n\n![](https://images.ctfassets.net/ohf186sfn6di/Iw6xUDyCBpa9bSQTi6E7c/272c67a607c31979bd0797f2f6c85787/Slide24.png)\n\nTo make this more concrete, you can think of \\[the windfall function\\] like this: When firm profits are at normal levels (i.e., the levels that we currently see), obligations remain low, nominal, or nothing at all. But as a firm reaches windfall profits, over time their obligations would scale up on the margin. It’s like income taxes; the more you earn, the more on the margin you're obligated to pay. \n\nJust as a side note, this particular example \\[in the slide above\\] uses a step function on the margin, but you can also think of it as smoothly increasing over time. And there might be strategic benefits to that.\n\nThe next natural question is: Is this worth pursuing? \n\n![](https://images.ctfassets.net/ohf186sfn6di/dgmAewzmLDuibqX1mpI97/c1050c00466bd40c9ad386b314741400/Slide26.png)\n\nA key input to that question is whether it could actually work. Since we're members of the effective altruism community, we want \\[the change we recommend to actually have an effect\\].\n\n![](https://images.ctfassets.net/ohf186sfn6di/6ZiavrVJKR1GvqBCBlNFS6/cb580d87907ae6857d4ff2b1b74f787f/Slide27.png)\n\nOne good reason to think that the Windfall Clause might work is that it’s a matter of corporate law. This was a problem that we initially had to confront, because in American corporate law, firm directors — the people who control and make decisions for corporations — are generally expected to act in the best interests of their shareholders. After all, the shareholders are the ones who provide money to start the firm. At the same time, the directors have a lot of discretion in how they make decisions for the firm. 
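Returning briefly to the windfall function: a minimal sketch may help make the marginal, income-tax-style structure described above concrete. Every threshold and rate below is an illustrative assumption chosen for exposition, not a figure proposed in the talk; the real obligations would be fixed by the legal text of the Clause itself.

```python
# Illustrative marginal windfall function: brackets are defined as shares of
# gross world product, and each rate applies only to the profits that fall
# inside its bracket (like income tax brackets).
BRACKETS = [
    (0.001, 0.00),          # up to 0.1% of world GDP: nothing owed
    (0.01, 0.05),           # 0.1%-1%: 5% of the profits in this slice
    (0.10, 0.20),           # 1%-10%: 20% on the margin
    (float("inf"), 0.50),   # beyond 10%: 50% on the margin
]

def windfall_obligation(annual_profits: float, world_gdp: float) -> float:
    """Dollars owed under the illustrative marginal windfall function above."""
    profit_share = annual_profits / world_gdp
    owed_share = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if profit_share <= lower:
            break
        taxable_slice = min(profit_share, upper) - lower
        owed_share += rate * taxable_slice
        lower = upper
    return owed_share * world_gdp

WORLD_GDP = 100e12  # roughly $100 trillion; order of magnitude only
print(windfall_obligation(50e9, WORLD_GDP))   # ordinary large-firm profits: nothing owed
print(windfall_obligation(5e12, WORLD_GDP))   # windfall-scale profits: roughly $845 billion owed
```

At normal profit levels this function returns zero, and obligations ramp up only on the margin as profits approach windfall territory; the smoothly increasing variant mentioned as an alternative would simply replace the bracket table with a continuous function of the firm's profit share.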
The shareholders are not supposed to second-guess every decision that corporate directors make. And so traditionally, courts have been quite deferential to firm boards of directors and executives, applying a very high standard to find that they have violated their duties to shareholders. In addition, corporate philanthropy has traditionally been seen as an acceptable means of pursuing the best interests of shareholders.\n\nIn fact, in all seven cases in which shareholders have challenged firms on their corporate philanthropy, courts have upheld it as permissible. That’s a good track record. Why is it permissible if firm directors are supposed to be acting in the best interest of shareholders? Traditionally, firms have noted that corporate philanthropy can bring benefits like public relations value, improved relations with the government, and improved employee relations.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4EdU6AE3FqJxUv0uAbBfDN/ba76bf3e4033a33d7e864904ca8ee75d/Slide32.png)\n\nThe Windfall Clause could bring all of these as well. We know that there is increasing scrutiny \\[of firms’ actions\\] by the public, the government, and their own employees. We can think of several different examples. Amazon comes to mind. And the Windfall Clause could help \\[advance this type of scrutiny\\]. When you add in executives who are sympathetic to examining the negative implications of artificial intelligence, then there's a plausible case that they would be interested in signing the Windfall Clause.\n\n![](https://images.ctfassets.net/ohf186sfn6di/53Q1e3vbW6WIC5fkZmJiuL/f2889d1971b3e4721b054943d91ccc26/Slide33.png)\n\nAnother important consideration: We think that the Windfall Clause could be made binding as a matter of contract law, at least in theory. Obviously, we're thinking about how a firm earning that much money might be able to circumvent, delay, or hinder performing its obligations under the Windfall Clause. And that invokes questions of internal governance and rule of law. The first step, at least theoretically, is making the Windfall Clause binding.\n\n**Open questions**\n\nWe’ve done a lot of work on this project, but many open questions remain. \n\n![](https://images.ctfassets.net/ohf186sfn6di/Y7ozpLEeLYW76N4tVChEM/d1b12062bb586dcb8e6177352d9b2507/Slide35.png)\n\nSome of the hard ones that we've grappled with so far are: \n\n![](https://images.ctfassets.net/ohf186sfn6di/1A8ijdO9Hg2gMywh9iqepd/eed74267e5848f0598d3ca35e90d9697/Slide37.png)\n\n**1\\. What's the proper measure of “bigness” or “windfall”?** I previously defined it in relation to profits, but there's also a good case to be made that market cap is the right measure of whether a firm has achieved windfall status, since market cap is a better predictor of a firm's long-term expected value. A related question: Should windfall be defined relative to the world economy or in absolute terms? We’ve made the assumption that it would be relative, but it remains an open question.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6hsvrKsUk5z1kOoI5f4ZVl/2a55fb05fb0297c0f5344ce72eb13f38/Slide39.png)\n\n**2\\. How do we make sure that the Windfall Clause doesn’t disadvantage benevolent firms, competitively speaking?** This is a more important question. It would be quite bad if multiple firms were potential contenders for achieving AGI, but only the most benevolent ones signed onto the Windfall Clause. 
They’d be putting themselves at a competitive disadvantage by giving themselves less money to reinvest \\[if they activated the Windfall Clause\\], or making themselves less able to attract capital to invest in that goal \\[because investor returns would probably be lower\\]. Therefore, it would become more likely that amoral firms would achieve windfall profits or AGI \\[before benevolent firms\\]. We’ve had some ideas for how to prevent this \\[see slide below\\], and think these could go a long way toward solving the problem. But it’s unclear whether these ideas are sufficient.\n\nThose are questions that we have explored. \n\n![](https://images.ctfassets.net/ohf186sfn6di/19aVwZZp56nndOGfDAOcaa/daee458a0d3c7b7557277e92a8098eec/Slide40.png)\n\nSome questions that are still largely open — and that we intend to continue to pursue throughout the lifetime of this project — include: \n\n**1\\. How does the Windfall Clause interact with different policy measures that have been proposed to address some of the same problems?**\n\n![](https://images.ctfassets.net/ohf186sfn6di/5zvKx0HuSyNt2wIbVjhplz/04d98c49adce9ae03a14164de19e54fe/Slide41.png)\n\n**2\\. How do we distribute the windfall?** I think that's a bigger question, and a pretty natural question for us to ponder as EAs. There are also some related questions: Who has input into this process? How flexible should this be to input at later stages in the lifetime of the Clause? How do we ensure that the windfall is being spent in accordance with the common good principle?\n\n![](https://images.ctfassets.net/ohf186sfn6di/1NMK1LaaSIWJWLG6Vtqqes/fdb8bb272a4d46c279838e84a57e8c2f/Slide45.png)\n\nLuckily, as members of the EA community, we have a lot of experience answering these questions through charity design and governance. I think that if this project goes well and we decide to pursue it further, this could be a relevant project for the EA community as a whole to undertake — to think about how to spend the gains from AI.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7s4xoF2XhdvRULjEnODCZN/278542e708a3d54ae19014a9db66bcb5/Slide46.png)\n\nAccordingly, the Centre for the Governance of AI, in collaboration with partners like [OpenAI](https://openai.com/) and the [Partnership on AI](https://www.partnershiponai.org/), intends to make this a flagship policy investigation. It is one in a series of investigations on the general question of how to ensure that the gains from AI are spent and distributed well.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2nsLefzZnfeo2UGz7wJK2L/40cbfd2d30abf1d0b77bf117b08de958/Slide47.png)\n\nIn closing, I'd like to reiterate the common good principle and make sure that when we think about the potential downside risks of AI, we don't lose sight of the fact that \\[a safe and powerful AI could address many of the world’s problems\\]. \\[Keeping this fact in sight\\] is a task worth pursuing on its own.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6VjKdUaaxKVUiVxNcdE6Mv/0ccb5b4b749d77671a1e3fe433691b05/Slide48.png)\n\nIf you're interested in learning more about this project and potentially contributing to it, I invite you to [email me](mailto:cullen@openai.com). Thank you so much.\n\n**Moderator:** Thanks for your talk. I think this is an \\[overlooked\\] area, so it is illuminating to \\[hear about\\]. Going back to the origins of the project, did you find historical case studies — other examples where companies came into a lot of wealth or power, and tried to distribute it more broadly?\n\n**Cullen:** Yeah. 
We found one interesting one, which is in what is now the Democratic Republic of the Congo. When that area was under Belgian colonial rule, a mineral company came into so much wealth from its extractive activities there that it felt embarrassed about how much money it had. The company tried to do charitable work in the Congo with the money as a way of defusing tensions.\n\nI don't think that turned out well for the company. But it is also quite common for firms throughout the world to engage in corporate social responsibility campaigns in the communities in which they work, as a means of improving community relations and ultimately mitigating the risk of expropriation, or activist action, or adverse governmental actions of other sorts. There's a range of cases — from those that are very analogous to more common ones.\n\n**Moderator:** What mechanisms currently exist in large companies that are closest to the kind of distribution you're thinking about?\n\n**Cullen:** A number of companies make commitments that are contingent on their profit levels. For example, Paul Newman has a line of foods. They give all of their profits to charity; it's \\[Newman’s\\] charitable endeavor. That's a close analogy to this. \n\nIt’s also quite common to see companies making commitments along the lines of a certain percentage of a product’s purchase going to charity. That's similar, although it doesn't involve the firm’s relative profit levels.\n\nIt's not super common to see companies make commitments contingent on profit levels. But OpenAI just restructured into a capped-profit model. That’s somewhat similar to \\[what we’re proposing in the Windfall Clause project\\]. They're giving all profits above a certain level to the nonprofit that continues to govern them. \n\n**Moderator:** You mentioned toward the end of your talk that there are some legal binding commitments that a company could be held to, assuming that they decided to enter into this sort of social contract. But you could also imagine that they become so powerful from having come into so many resources that the law is a lot weaker. Can you say a little bit more on the mechanisms you have in mind that might be used to hold their feet to the fire?\n\n**Cullen:** Yeah, absolutely. I think this is an outstanding problem in AI governance that's worth addressing not just for this project, but in general. We don’t want companies to be above the law once they achieve AGI. So it’s worth addressing for reasons beyond just the Windfall Clause. \n\nThere are very basic things you could do, like have the Windfall Clause set up in a country with good rule of law and enough of a police force to plausibly enforce the clause. But I don't expect this project to resolve this question. \n\nAnother point along the same lines is that this involves questions of corporate governance. Who in a corporation has the authority to tell an AGI, or an AGI agent, what to do? That will be relevant to whether we can expect AGI-enabled corporations to follow the rule of law. It also involves safety questions around whether AI systems are designed to be inherently constrained by the rule of law, regardless of what their operators tell them to do. I think that's worth investigating from a technical perspective as well.\n\n**Moderator:** Right. Have you had a chance to speak with large companies that look like they might come into a windfall, as a result of AI, and see how receptive they are to an idea like this?\n\n**Cullen:** We haven't done anything formal along those lines yet. 
We've informally pitched this at a few places, and have received generally positive reactions to it, so we think it's worth pursuing for that reason. I think that we’ve laid the foundation for further discussions, negotiations, and collaborations to come. That's how we see the project at this stage. \n\nPresumably, the process of getting commitments (if we want to pursue it that far) will involve further discussions and tailoring to the specific receptiveness of different firms based on where they perceive themselves to be, what the executives’ personal views are, and so forth.\n\n**Moderator:** Right. One might think that if you're trying to take a vast amount of resources and make sure that they get distributed to the public at large — or at least distributed to a large fraction of the public — that you might want those resources to be held by a government whose job it is to distribute them. Is that part of your line of thinking?\n\n**Cullen:** Yeah. I think there are reasons that the Windfall Clause could be preferable to taxation. But that is also not to say that we don't think governments should have a role in input for democratic accountability, and also pragmatic reasons to avoid nationalization. \n\nOne general consideration is that tax dollars tend not to be spent as effectively as charitable dollars, for a number of reasons. And, for quite obvious reasons involving stakeholder incentives, taxes tend to be spent primarily on the voters of that constituency. On the other hand, the common good principle, to which we're trying to stick with this project, demands that we distribute resources more evenly. \n\nBut as a pragmatic consideration, making sure that governments feel like they have influence in this process is something that we are quite attentive to.\n\n**Moderator:** Right. And is there some consideration of internationalizing the project?\n\n**Cullen:** Yeah. One thing that this project doesn't do is talk about control of AGI. And one might reasonably think that AGI should be controlled by humanity collectively, or through some decision-making body that is representative of a wide variety of needs. \\[Our project\\] is more to do with the benefits of AI, which is a bit different from control. I think that's definitely worth thinking about more, and might be very worthwhile. This project just doesn't address it.\n\n**Moderator:** As a final question, the whole talk is predicated on the notion that there would be a windfall from having a more advanced AI system. What are the circumstances in which you wouldn't get such a windfall, and all of this is for naught?\n\n**Cullen:** That’s definitely a very good question. I'm not an economist, so what I'm saying here might be more qualitative than I would like. But if you're living in a post-scarcity economy, then money might not be super relevant. But it's hard to imagine this in a case where corporations remain accountable to their shareholders. Their shareholders are going to want to benefit in some way, and so there's going to have to be some way to distribute those benefits to shareholders.\n\nWhether that looks like money as we currently conceive it, or vouchers for services that the corporation is itself providing, is an interesting question — and one that I think current corporate law and corporate governance are not well-equipped to handle, since money is the primary mode of benefit. But you can think of the Windfall Clause as capturing other sorts of benefits as well. 
\n\nA more likely failure mode is just that firms begin to primarily structure themselves to benefit insiders, without meaningful accountability from shareholders. This is also a rule-of-law question. Because if that begins to happen, then the normal thing that you expect is for shareholders to vote out or sue the bad directors. And whether they're able to do that turns on whether the rule of law holds up, and whether there's meaningful accountability from a corporate-governance perspective. \n\nIf that fails to happen, you could foresee that the benefits might accrue qualitatively inside of corporations for the corporate directors.\n\n**Moderator:** Right. Well, on that note, thank you so much for your talk.\n\n**Cullen:** Thank you.", "filename": "The Windfall Clause - Sharing the benefits of advanced AI _ Cullen OΓÇÖKeefe-by Centre for Effective Altruism-video_id vFDL-NxY610-date 20190829.md", "id": "174f266a92c382f3dbc8abf38a6872d3", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Fireside chat - AI governance _ Markus Anderljung _ Ben Garfinkel _ EA Global - Virtual 2020-by Centre for Effective Altruism-video_id bSTYiIgjgrk-date 20200321", "authors": ["Markus Anderljung", "Ben Garfinkel"], "date_published": "2020-03-21", "text": "# Markus Anderljung and Ben Garfinkel Fireside chat on AI governance - EA Forum\n\n**Getting involved in AI governance**\n\n**Markus:** Ben, how did you get into the field of AI governance?\n\n**Ben:** I majored in physics and philosophy at Yale, and was considering working in the philosophy of physics. I started thinking about it not being the most useful or employable field. At the same time, I got interested in EA \\[effective altruism\\]. Then, Allan Dafoe was transitioning to focus on a new Centre for the Governance of AI. He was looking for researchers. I seized the opportunity — it seemed important, with not enough people working in \\[the field\\]. That’s how I got involved. \n**Markus:** What happened next?\n\n**Ben:** I was a Research Assistant there for about a year, and, at the same time, held a job at the Centre for Effective Altruism. I then had the opportunity to transition to the AI governance area full-time.\n\n**Markus:** Sounds a bit random — just having this opportunity pop up.\n\n**Ben:** There was indeed an element of randomness, but it wasn’t a random thing to do. I got really interested in long-termism and EA, so AI governance was on my radar — and yes, an opportunity lined up at the same time. \n**Markus:** How has your work in the field changed?\n\n**Ben:** Quite a lot. I still have a broad focus in the area, but when I started, there was this sense that AI was going to be a very important field. It was around 2016. [AlphaGo](https://deepmind.com/research/case-studies/alphago-the-story-so-far) had just come out. [_Superintelligence_](https://www.amazon.com/dp/B00LOOCGB2/) had been written, so a good fraction of long-termist concern was on AI. AI seemed to be this really transformative technology, with risks we didn’t understand very well yet.\n\nAlmost no one was working on AI safety at the time or thinking about long-term AI governance challenges, and not that much AI governance was going on. 
So the early questions were “What is going on here?” and “What are we trying to do?” A lot of the early research was probably more naive, with people transitioning to the field and not knowing that much about AI.\n\n**Markus:** And you’re now doing a DPhil at Oxford?\n\n**Ben:** Yes, I just started \\[working toward a DPhil\\] in international relations. My highest credential so far in the area \\[of AI governance\\] remains an undergraduate degree in an unrelated field, so it seemed useful to have a proper degree in a domain more relevant to governance than physics.\n\nLet’s turn the question on you, Markus. How did you get involved?\n\n**Markus:** It’s a slightly convoluted story. I was first involved in EA issues at university, around 2011-2013. I got involved with [Giving What We Can](https://www.givingwhatwecan.org/) and \\[other EA\\] organizations. Then I spent a few years transitioning from \\[the belief that\\] long-termism — or the closest \\[approximation\\] of it back then — was true, but there wasn’t much to do about it in the emerging technology field, to becoming increasingly convinced that there _were_ things to do.\n\nWhen I graduated from university in 2016, I moved back to Sweden. I considered building career capital, so I went into management consulting for a few years, which was very fun and interesting. After a while it felt that the trajectory had leveled off, and that I could do more in the cause areas I cared about.\n\nI transitioned into work for EA Sweden. Building a community seemed like a good idea, and heading up the organization was a great opportunity. I got funding for that and did it for about a year. I then became convinced that I could do more with my career outside of Sweden.\n\nI was looking for other options, thinking that my comparative advantage was “I’m a management consultant who understands the research, someone who _gets it_.” I applied to organizations where I thought my broader skill set of operations and recruitment could help. GovAI was one of those organizations.\n\nIt wasn’t like AI governance was the most important thing and I had to help there; it was just part of a broader class of areas that seemed useful.\n\n**The relative importance of AI governance research**\n\n**Markus:** Let’s talk about your recent research. At EA Global: London 2018, you gave a talk.\n\n**Ben:** “[How sure are we about this AI stuff?](https://www.effectivealtruism.org/articles/ea-global-2018-how-sure-are-we-about-this-ai-stuff/)”\n\n**Markus:** Right. So, I want to ask: How sure are we about this AI stuff?\n\n**Ben:** There are two ways to be sure. One is robust. \\[It centers on the question\\] “If we had all of the facts and considerations, would this still be a top priority for EA?” The other \\[centers on the question\\] “Given our limited information and the expectations we have, are we sure it still makes sense to put a lot of resources into this area?”\n\nWith the first question, it’s really hard to be sure. There’s so much we don’t know: what future AI systems will look like, the rate of progress, which institutions will matter, the timelines. That’s true of any transformative technology; we don’t have a great picture.\n\n**Markus:** Will it be different for AI versus other causes?\n\n**Ben:** If you compare AI to climate change, there’s a lot of uncertainty in climate models. We don’t know everything about the feedback loops, or how to think about extreme events — is there a one-in-100 probability or a one-in-1,000 probability? 
We still have a basic sense of the parameters of the problem, such as how hot things will get (to some extent) and how bad it is.\n\nWith AI, if human labor becomes essentially unnecessary in the long term, as it’s replaced by AI systems, we don’t know what that world looks like or how it’s organized. It’s very difficult to picture. It’s like being in the 1500s and describing the internet in very rough terms, as \\[something that will be\\] more efficient \\[and enable\\] faster communication and information retrieval. You could have reasoned a bit about this future — maybe there would be larger businesses, since you can communicate on a larger scale. But you wouldn’t be visualizing the internet, but rather very fast carrier pigeons or something. You’re going to be way off. It’s hard to even find single dimensions where the answer is clear.\n\nI think that’s about where we are with AI, which is a long-winded way of saying that it’s hard to be sure. \n\nI actually feel pretty good about the weaker standard (“How sure are we, given these considerations, that a decent chunk of the EA movement should be focused on this?”). Overall, I think a smaller portion of the EA portfolio should be focused on this, but at least a few dozen people should be actively thinking about long-term applications of AI, and we’re not far from that number at the moment.\n\n**Markus:** That sounds smaller than I expected. When you say a smaller portion of the EA portfolio should be focused on AI, what’s your current ballpark estimate of what that percentage is?\n\n**Ben:** I think it’s really hard to think about the spread between things. Maybe it’s something along the lines of having one in five people who are fully engaged and oriented on the long term thinking about AI. That would be great.\n\n**Markus:** Whereas now, you think the number is more like three or four in five people?\n\n**Ben:** Yeah, it feels to me that it might be more than half, but I’m not sure that’s correct.\n\n**Markus:** What interventions would you like these people to do instead?\n\n**Ben:** There’s a lot of uncertainty. A lot of it is based on the skill set that these people have and the sort of work they’d be inclined to do. There being a lot of math and computer science \\[in EA\\] may justify the strong AI focus, but let’s imagine completely fungible people who are able to work on anything.\n\nI really think fundamental cause prioritization research is still pretty neglected. There's a lot of good work being done at the [Global Priorities Institute](https://globalprioritiesinstitute.org/). There are some broad topics that seem relevant to long-term thinking that not many people are considering. They include questions like those I was working on: “Should we assume that we don’t have really great opportunities to influence the future now, relative to what future people might have if we save our money?” and “Should we pass resources on?” These seem crucial for the long-termist community.\n\n**The importance and difficulty of meta-level research**\n\n**Ben:** Even within AI, there are strangely not that many people thinking about, at the meta-level, the pathways for influence in AI safety and governance. What exactly is the nature of the risk? 
I think there’s a handful of people doing this sort of work, part-time, on the side of other things.\n\nFor example, Rohin Shah is doing a lot of good \\[by thinking through\\] “What exactly is the case for AI risk?” But there are not that many people on that list compared to the set of people working on AI safety. There’s an abstract argument to be made: Before you put many resources into object-level work, it’s quite useful, early on, to put them toward prioritizing different kinds of object-level work, in order to figure out what, exactly, is motivating the object-level work.\n\n**Markus:** One of your complaints in \\[your past\\] talk was that people seemed to be putting a lot of research into this topic, but haven’t produced many proper writeups laying out the argument motivating people’s choices. Do you think we’ve seen improvement there? There are a few things that have been published since then.\n\n**Ben:** Yeah, there’s definitely been an improvement since I gave the talk. I think the time at which I gave the talk was kind of a low point.\n\nThere was an initial period of time, after the publication of _Superintelligence_, when the motivation for AI governance, for a lot of people, corresponded to the nature of AI risk. Over time, there was some sort of transition; people have very different visions of this. The change happened along a lot of dimensions. One of them is that _Superintelligence_ focuses on a very discrete transition to advanced AI, in which not much is happening, and then we transition to quite advanced systems in a matter of days or weeks. A lot of people moved away from that.\n\nAlso, a lot of people, myself included, started thinking about risks that weren’t specifically safety-oriented. _Superintelligence_ discusses these but they’re not the main focus.\n\n**Markus:** What do you mean by “not safety-oriented”?\n\n**Ben:** There’s a lot of concern you might have about the future of AI not necessarily being great. For example, in a scenario in which human labor and government functions have been automated, it may not be a great world in terms of democracy and representation of the will of the people. \n\nAnother category is ethical \\[and concerns\\] the moral status of AI systems. Maybe those decisions are made wrongly, or are substantial deviations from the best possible case.\n\n**Markus:** So these are risks that don’t \\[involve\\] accidents with very powerful systems.\n\n**Ben:** We’ve had major technological transitions in history which haven’t been uniformly good. The classic one is the Neolithic Revolution — the introduction of agriculture — having a few aftereffects like the rise of the state. It’s difficult to do an overall assessment. Some of the results were positive, and some were very much not, like slavery becoming a massive institution, disease, and the establishment of hierarchies instead of decentralized decision making.\n\nIt’s not hard to imagine that if there’s another transition, in which human labor is replaced by capital, that \\[transition\\] may have various effects that aren’t exactly what we want.\n\n**Markus:** Yes, and in these previous transitions, the bad consequences have been permanent structural effects, like slavery being more economically viable. \n\nSo \\[the time of your EA Global talk\\] — November 2018 — was the low point? 
In what sense?\n\n**Ben:** It was the low point in the sense of people having changed their justifications quite a bit in a lot of different areas, \\[without those changes being\\] reflected in much published writing \\[other than\\] maybe some blog posts.\n\nThere hasn’t been a massive improvement, but there definitely has been useful stuff published since then. For example, Paul Christiano wrote a [series of blog posts](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd) arguing for AI safety even in the context of a continuous transition; Richard Ngo did [some work](https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety) to taxonomize different arguments and lay out \\[current thinking in\\] the space; Tom Sittler did [similar work](https://fragile-credences.github.io/prioritising-ai/); and Rohin Shah presented a case for AI risk in a good sequence called “[Value Learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc),” a series of essays that laid out the nature of the alignment problem. \n\nI think that was after — \n\n**Markus:** “[Reframing Superintelligence](https://www.fhi.ox.ac.uk/reframing/)”?\n\n**Ben:** Yeah, Eric Drexler’s work at FHI \\[the Future of Humanity Institute\\] also came out framing his quite different picture of AI progress and the nature of the risks. Also MIRI \\[the Machine Intelligence Research Institute\\] put out [a paper](https://intelligence.org/embedded-agency/) on what they call “mesa-optimization,” which corresponds to one of their main arguments for why they’re worried about AI risk, and which wasn’t in _Superintelligence_.\n\nThere have been a decent number of publications, but quite a bit fewer than I would ideally want. There are still a lot of viewpoints that aren’t captured in any existing writing, and a lot of writing is fairly short blog posts. Those are useful, but I’m not very comfortable with putting a lot of time and work into an area where justifications are short blog posts.\n\nIt’s obviously very difficult to communicate clearly about this. We don’t have the right picture of how things will go. It’s not uncommon to have arguments about what a given post is actually saying, which is not a great signal for our community being on the same page about the landscape of arguments and considerations.\n\n**Markus:** Why do you think this is? Is it due to individual mistakes? Not spending enough time on this meta-level question?\n\n**Ben:** To some extent, yes. There are complications. Working on this stuff is fairly difficult. It requires an understanding of the current landscape, of arguments in this area, of what’s going on in AI safety and in machine learning. It also requires the ability to do conceptual analysis and synthesis. There are perhaps not that many people right now \\[who meet these criteria\\].\n\nAnother complicating factor is that most people currently working in this area have just recently come into it, so there’s an unfortunate dynamic where people have the false sense that the problem framing is a lot more \\[advanced\\] than it actually is — that it just hasn’t been published, and that the people in the know have a good understanding.\n\nWhen you enter an area, you aren’t usually in a great position to do this high-level framing work, because you don’t really know what exists in terms of unpublished Google Docs.
It’s quite easy, and maybe sensible, when entering the area, to not do high-level \\[thinking and writing\\], and instead pick an object-level topic to work on.\n\nSome of us might be better off dropping the object-level research program and \\[addressing\\] more high-level problems. Some have been doing this in their spare time, while their \\[main area of study\\] is object-level. It does seem like a difficult transition: to stop an object-level project and embark on a loose, “what-are-we-even-doing” project.\n\n**Markus:** Are there particular viewpoints that you feel haven’t been accurately represented, or written up thoroughly enough?\n\n**Ben:** Paul Christiano, for example, has written a few blog posts. One is called “[What Failure Looks Like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like).” It shows what a bad outcome would look like, even in a scenario with a slow, gradual transition. However, there’s a limit on how thoroughly you can communicate in the form of a blog post. There is still a lot of ambiguity about what is being described — an active disaster? A lost opportunity? What is the argument for this being plausible? There’s a lot more that could be done there.\n\nI feel similarly about this idea of mesa-optimization, which is now, I think, one of the primary justifications that MIRI has for assigning a high probability to AI safety risk. I saw a lot of ambiguity around what this concept exactly is. Maybe different people are trying to characterize it differently or are misunderstanding the paper, or the paper is ambiguous. It doesn’t seem like everyone is on the same page about what exactly mesa-optimization is. The paper argues that there might be this phenomenon called mesa-optimization, but doesn’t try to make the argument that, because this phenomenon might arise, then we should view it as a plausible existential risk. I think that work still hasn’t been done.\n\n**Markus:** So the arguments that _are_ out there ought to be scrutinized more. Are there arguments or classes of viewpoints that you feel don’t even have an initial writeup?\n\n**Ben:** I think there are a couple. For example, Allan \\[Dafoe\\] is thinking quite a lot about structural risks. Maybe the situation starts getting bad or disappointing in terms of our current values. It’s a bit nebulous like the Neolithic Revolution — not a concrete disaster. Some structural forces could push things in a direction you ideally wouldn’t want.\n\nSimilarly, there’s not much writing on whether AI systems will eventually have some sort of moral status. If they do, is that a plausible risk, and one that will be important enough for longtermists to focus on?\n\nThose are probably two of the main things that stand out in my mind as plausible justifications for focusing on AI, but where I can’t point to a longer writeup.\n\n**Markus:** What if I were to turn this around on you? You’re here, you’re working on these issues. What is your stab at a justification?\n\n**Ben:** The main point is that it’s very likely that AI will be really transformative. We will eventually get to the point where human labor is no longer necessary for most things, and that world will look extremely different.\n\nThen, there are around a half-dozen vaguely sketched arguments for why there might be some areas with EA leverage that would make the future much better or much worse — or why it could go either way.\n\nIt is hard to \\[determine\\] which topics may have long-term significance. 
I don’t think they \\[comprise\\] a large set, and there’s value in \\[surfacing\\] that information right now, in getting clearer on “what’s going on in AI.” It seems like one of the few places where there’s currently the opportunity to do useful, \\[future-focused\\] work.\n\n**Markus:** So the argument is: “If you’re a long-termist, you should work in the areas that hold great leverage over the future, and this seems like one of the best bets.”\n\n**Ben:** Yeah, that’s basically my viewpoint. The influence of historical events is extremely ambiguous. How plausible is it for us to know what impact our actions today will have 100 years from now? In the 1300s, people’s focus may have been on stopping Genghis Khan. I think that would have been ethically justified, but from a long-termist perspective, things are less clear. Genghis Khan’s impact may have ultimately been good because of the trade networks established and \\[other such factors\\]. We’re unable to discern good from bad from a long-termist perspective. Pick any century from more than five centuries ago, and you’ll be in the same position.\n\nI think we should have a strong prior that justifies working on issues for their present influence, but for the long-term view, we shouldn’t prioritize issues where we can’t predict what difference they’ll make hundreds of years in the future.\n\nThere are not many candidates for relevant long-term interventions. Insofar as there are semi-plausible arguments for why AI could be one of them, I think it’s more useful to put resources into figuring out what is going on in the space and \\[improving\\] the value of information, rather than putting resources into object-level issues.\n\n**The role of GovAI**\n\n**Ben:** So, Markus, could you tell me what GovAI is currently up to in this space?\n\n**Markus:** Broadly, we’re working on AI governance questions from a long-termist perspective, and I spend most of my time doing what I can to make sure we build an organization that does the best research.\n\nIn practice, we have a lot of different research projects going on. I personally spend a lot of time with recruiting, growing the organization. We run a [GovAI Fellowship](https://www.fhi.ox.ac.uk/governance-of-ai-fellowship/), where people spend three months doing research on topics that relate to the kinds of things we’re interested in. That’s a path into the AI governance space, and something we’ll continue doing for the foreseeable future. We \\[award\\] 10 fellowships every year. We’ll see whether we’ll be able to bring people \\[on-site\\] this summer. I’m pretty excited so far about this as a way of getting people into the field.\n\nSince 2016, we’ve not been able to build up institutions in the field that provide clear pathways for people. I think this fellowship is one example of how we can do that.\n\nMy hope is to have, a few years down the line, a team of a dozen great researchers in Oxford doing great research. In terms of the research that we’ll do, we’re \\[an unusual\\] organization, in that we could define the very broad problem of AI governance as “everything required to make AI go well that isn’t technical AI safety.” That’s a _lot_.\n\nIt will span fields ranging from economics, to law, to policy — a tremendous number of different topics. I’m hoping that, over time, we’ll build narrower expertise as we get clearer on the meta picture and the specific fields that people need to work on.
\n\nA few years down the line, I’d really like for us and others in this space to have at least somewhat solid \\[recommendations\\] in \\[situations\\] like a corporation asking what their publication norms should be, or what sorts of internal mechanisms they should have to make sure they’re held accountable to the beautiful principles they’ve written up (e.g., “benefit humanity with our research”).\n\nI don’t think we have that yet. Those are the kinds of questions I’m hoping we can make progress on in the next few years.\n\n**Career recommendations**\n\n**Ben:** Besides just applying for the GovAI Fellowship, do you have other career recommendations for people interested in \\[entering or exploring whether to enter\\] this space?\n\n**Markus:** In general, I \\[subscribe to the view\\] that if you’re trying to figure out if you should be doing something, then do a bit of it. Find some bit of research that you can do and try it.\n\nThere aren’t a lot of opportunities that look like the GovAI Fellowship in the long-termist AI governance space, but others that are similar are at the [Center for Security and Emerging Technology](https://cset.georgetown.edu/), based in Washington, DC. There are also junior roles at DeepMind and OpenAI, where you might do \\[some exploratory\\] research. But there aren’t many such roles — probably fewer than a dozen a year.\n\nI would encourage people to think much more broadly. You might try to work at a wider set of technology companies like Microsoft or Facebook as they start building up ethics and policy teams. It would be awesome to have people in these types of “council” roles.\n\nAnother good idea would be to use your studies to dip your toe into the water. You could do your bachelor’s or master’s dissertation on a relevant topic. Some are looking into PhDs in this area as well, and that may be a good way to expand the field.\n\nThe other main tip is to engage with a lot of the research. Read everything coming out of institutions like ours, or [CSET](https://cset.georgetown.edu/) \\[the Center for Security and Emerging Technology\\]. Try to really engage with it, keeping in mind that the authors may be smart and good at what they do, but aren’t oracles and don’t have that much knowledge. There’s a lot of uncertainty in this space, so read things with an open mind. Consider what may be wrong, stay critical, and try to form your own views instead of directly \\[adopting\\] other people’s conclusions. Form your own model of how you think the world could go.\n\n**Ben:** Sounds good. Very wise.\n\n**Markus:** Do you have any tips? What would you have told your past self?\n\n**Ben:** I can’t think of anything beyond what you just said, other than to have checked my email in case Allan Dafoe was looking for research assistants in this area, with a fairly non-competitive process at that time. 
Not many people were interested in it back then.\n\n**Markus:** Right, so _get really lucky_.\n\n**Ben:** Get really lucky — that’s exactly what I would tell my past self.\n\n**Markus:** Cool.\n\n**Ben:** Well, it’s been fun chatting!\n\n**Markus:** Yes, and I’ll add my new sign-off: Stay safe, stay sane, and see you later!", "filename": "Fireside chat - AI governance _ Markus Anderljung _ Ben Garfinkel _ EA Global - Virtual 2020-by Centre for Effective Altruism-video_id bSTYiIgjgrk-date 20200321.md", "id": "f637b02129bc74b01332733e696dc18c", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Yudkowsky vs Hanson — Singularity Debate-by Jane Street-video_id TuXl-iidnFY-date 20110101", "authors": [], "date_published": "2011-01-01", "text": "# Yudkowsky-Hanson Jane Street Debate 2011\n\nSpeakers: Eliezer Yudkowsky and Robin Hanson\n\nTranscriber(s): Ethan Dickinson and John Maxwell\n\nModerator: ...say what the statement is?\n\nEliezer Yudkowsky: I forget what the exact form of it was. The question is, \"After all sorts of interesting technological things happen at some undetermined point in the future, are we going to see a very small nucleus that can or does control all the resources, or do we see a general, more civilization-wide, large fraction of society participating in all these things going down?\"\n\nRobin Hanson: I think, if I remember it, it was, \"Compared to the industrial and farming revolutions, intelligence explosion first movers will soon dominate a larger fraction of the future world.\"\n\nEliezer: That's what I remember.\n\nModerator: There was a whole debate to get to this statement.\n\n\\[laughter\\]\n\nModerator: Right, so, \"for\"...\n\nRobin: We'll try to explain what those mean.\n\nModerator: \"For\" -- you're saying that you believe that the first movers will gain a large lead relative to first movers in the industrial and farming revolutions.\n\nRobin: Right.\n\nModerator: If you agree with that statement, you're \"for.\"\n\nRobin: This side. \\[gestures to Eliezer\\]\n\nModerator: If you think it's going to be more broad-based...\n\nRobin: Con. \\[gestures toward self\\]\n\nEliezer: Maybe a one-word thing would be \"highly centralized,\" \"highly decentralized.\" Does that sound like a one-word \\[inaudible 1:27\\]?\n\nRobin: There has to be a cut-off in between \"highly,\" so – \\[laughs\\] There's a – middle ground.\n\nEliezer: With the cut-off point being the agricultural revolution, for example. Or no, that's actually not the cut-off point. That's your side.\n\nModerator: On the yellow sheet, if you're in favor, you write your name and \"I'm in favor.\" If you're against, you write your name and \"I'm against.\" Then pass them that way. Keep the colored sheet, that's going to be your vote afterwards. Eliezer and Robin are hoping to convert you.\n\nRobin: Or have fun.\n\nModerator: What?\n\nRobin: Or have fun trying.\n\nModerator: We're very excited at Jane Street today to have Eliezer Yudkowsky, Robin Hanson.\n\n\\[applause\\]\n\nModerator: I'll keep the intros short so we can jump into the debate. Both are very highly regarded intellectuals who have been airing this debate for some time, so it should be a lot of fun.\n\n\\[gestures to Robin Hanson\\] Professor of economics at George Mason University, one of the pioneers in prediction markets, all the way back to 1988. Avid publisher. 
Both a co-founder of \"Overcoming Bias,\" now, he's moved over to \"Less Wrong.\"\n\nEliezer: Oh, I moved over to \"Less Wrong,\" and he's at \"Overcoming Bias.\"\n\nModerator: Eliezer, a co-founder of the Singularity Institute. Many, many publications. Without further ado, on to the debate, and the first five minutes.\n\n\\[laughter\\]\n\nEliezer: Quick question. How many people here are already familiar with the difference between what Ray Kurzweil means when he uses the word \"singularity\" and what the Singularity Institute means when they use the word \"singularity\"? Raise your hand if you're already familiar with the difference. OK. I don't see a sea of hands. That means that I designed this talk correctly.\n\nYou've probably run across the word \"singularity.\" People use it with a lot of different and mutually incompatible meanings. When we named the Singularity Institute for Artificial Intelligence in 2000, it meant something pretty different then than now.\n\nThe original meaning was, a mathematician and science fiction writer named Vernor Vinge originally coined the word \"singularity\" to describe the breakdown in his ability to model and imagine the future, when he tried to extrapolate that model past the point where it predicted the technological creation of smarter-than-human intelligence. In this particular case, he was trying to write a story about a human with a brain computer interface increasing his intelligence. The rejection letter he got from John Campbell said, \"Sorry. You can't write this story. Neither can anyone else.\"\n\nIf you asked an ancient Greek from 2,500 years ago to imagine the modern world, in point of fact they wouldn't be able to, but they'd have much better luck imagining our world and would manage to get more things right than, say, a chimpanzee would. There are stories from thousands of years ago that still resonate with us today, because the minds, the brains haven't really changed over that time. If you change the brain, the mind, that implies a difference in the future that is different in kind from faster cars or interplanetary travel or curing cancer or bionic arms or similar such neat, cool, technological trivia, because that would not really have an impact on the future comparable to the rise of human intelligence 50,000 years ago.\n\nThe other thing is that intelligence is the source of technology – that is, it is ultimately the factor that produces the chairs, the floor, the projectors, this computer in front of me. If you tamper with this, then you would expect that to ripple down the causal chain; in other words, if you make this more powerful, you get a different kind of technological impact than you get from any one breakthrough.\n\nI. J. Good, another mathematician, coined a related concept of the singularity when he pointed out that if you could build an artificial intelligence that was smarter than you, it would also be better than you at designing and programming artificial intelligence. This AI builds an even smarter AI, or, instead of building a whole other AI, just reprograms modules within itself; then that AI builds an even smarter one.\n\nI. J. Good suggested that you'd get a positive feedback loop leading to what I. J. Good termed \"ultraintelligence\" but what is now generally called \"superintelligence,\" and the general phenomenon of smarter minds building even smarter minds is what I. J. Good termed the \"intelligence explosion.\"\n\nYou could get an intelligence explosion outside of AI. 
For example, humans with brain computer interfaces designing the next generation of brain computer interfaces, but the purest and fastest form of the intelligence explosion seems to be likely to be an AI rewriting its own source code.\n\nThis is what the Singularity Institute is actually about. If we'd foreseen what the word \"singularity\" was going to turn into, we'd have called ourselves the \"Good Institute\" or \"The Institute for Carefully Programmed Intelligence Explosions.\"\n\n\\[laughter\\]\n\nEliezer: Here at \"The Institute for Carefully Programmed Intelligence Explosions,\" we do not necessarily believe or advocate that, for example, there was more change in the 40 years between 1970 and 2010 than the 40 years between 1930 and 1970.\n\nI myself do not have a strong opinion that I could argue on this subject, but our president, Michael Vassar, our major donor, Peter Thiel, and Thiel's friend, Kasparov, who, I believe, recently spoke here, all believe that it's obviously wrong that technological change has been accelerating at all, let alone that it's been accelerating exponentially. This doesn't contradict the basic thesis that we would advocate, because you do not need exponentially accelerating technological progress to eventually get an AI. You just need some form of technological progress, period.\n\nWhen we try to visualize how all this is likely to go down, we tend to visualize a scenario that someone else once termed \"a brain in a box in a basement.\" I love that phrase, so I stole it. In other words, we tend to visualize that there's this AI programming team, a lot like the sort of wannabe AI programming teams you see nowadays, trying to create artificial general intelligence, like the artificial general intelligence projects you see nowadays. They manage to acquire some new deep insights which, combined with published insights in the general scientific community, let them go down into their basement and work in it for a while and create an AI which is smart enough to reprogram itself, and then you get an intelligence explosion.\n\nOne of the strongest critics of this particular concept of a localized intelligence explosion is Robin Hanson. In fact, it's probably fair to say that he is the strongest critic by around an order of magnitude and a margin so large that there's no obvious second contender.\n\n\\[laughter\\]\n\nEliezer: How much time do I have left in my five minutes? Does anyone know, or..?\n\nModerator: You just hit five minutes, but...\n\nEliezer: All right. In that case, I'll turn you over to Robin.\n\n\\[laughter\\]\n\nRobin: We're going to be very flexible here, going back and forth, so there'll be plenty of time. I thank you for inviting us. I greatly respect this audience and my esteemed debate opponent here. We've known each other for a long time. We respect each other, we've talked for a lot. It's a lot of fun to talk about this here with you all.\n\nThe key question here, as we agree, is this idea of a local intelligence explosion. That's what the topic's about. We're not talking about this idea of gradually accelerating change, where in 30 years everything you've ever heard about will all be true or more. 
We're talking about a world where we've had relatively steady change over a century, roughly, and we might have steady change for a while, and then the hypothesis is there'll be this sudden dramatic event with great consequences, and the issue is what is the nature of that event, and how will it play out.\n\nThis \"brain in a box in a basement\" scenario is where something that starts out very small, very quickly becomes very big. And the way it goes from being small to being very big is it gets better. It gets more powerful. So, in essence, during this time this thing in the basement is outcompeting the entire rest of the world.\n\nNow, as you know, or maybe you don't know, the world today is vastly more powerful than it has been in the past. The long-term history of your civilization, your species, has been a vast increase in capacity. From primates to humans with language, eventually developing farming, then industry and who knows where, over this very long time, lots and lots of things have been developed, lots of innovations have happened.\n\nThere's lots of big stories along the line, but the major, overall, standing-from-a-distance story is of relatively steady, gradual growth. That is, there's lots of inventions here, changes there, that add up to disruptions, but most of the disruptions are relatively small and on the distance scale there's relatively steady growth. It's more steady even on the larger scales. If you look at a company like yours, or a city, even, like this, you'll have ups and downs, or even a country, but on the long time scale...\n\nThis is central to the idea of where innovation comes from, and that's the center of this debate, really. Where does innovation come from, where can it come from, and how fast can it come?\n\nSo with the brain in the box in the basement, within a relatively short time a huge amount of innovation happens; that is, this thing hardly knows anything, it's hardly able to do anything, and then within a short time it's able to do so much that it basically can take over the world and do whatever it wants, and that's the problem.\n\nNow, let me stipulate right up front, there is a chance he's right. OK? Somebody ought to be working on that chance. He looks like a good candidate to me, so I'm fine with him working on this chance. I'm fine with there being a bunch of people working on the chance. My only dispute is about the perceived probability. Some people seem to think this is the main, most likely thing that's going to happen. I think it's a small chance that's worth looking into, and protecting against, so we all agree there. Our dispute is more about the chance of this scenario.\n\nIf you remember the old Bond villain, he had an island somewhere with jumpsuited minions, all wearing the same color, if I recall. They had some device they invented and Bond had to go in and shut it off. Usually, they had invented a whole bunch of devices back there, and they just had a whole bunch of stuff going on.\n\nSort of the epitome of this might be Captain Nemo, from \"20,000 Leagues Under the Sea.\" One guy off on his own island with a couple of people invented the entire submarine technology, if you believe the movie, undersea cities, nuclear weapons, et cetera, all within a short time.\n\nNow, that makes wonderful fiction. 
You'd like to have a great powerful villain that everybody can go fight and take down, but in the real world it's very hard to imagine somebody isolated on an island with a few people inventing large amounts of technology, innovating, and competing with the rest of the world.\n\nThat's just not going to happen, it doesn't happen in the real world. In our world, so far, in history, it's been very rare for any one local place to have such an advantage in technology that it really could do anything remotely like take over the world.\n\nIn fact, if we look for major disruptions in history, which might be parallel to what's being hypothesized here, the three major disruptions you might think about would be the introduction of something special about humans, perhaps language, the introduction of farming, and the introduction of industry.\n\nWhatever was special about those three events, we're not sure, but for those three events the growth rate of the world economy suddenly, within a very short time, changed from something that was slow to something 100 or more times faster. We're not sure exactly what those were, but those would be candidates, things I would call singularities, that is, big, enormous disruptions.\n\nIn those singularities, the places that first had the new technology got varying degrees of advantage from it. Edinburgh gained some advantage by being the beginning of the Industrial Revolution, but it didn't take over the world. Northern Europe came closer to taking over the world, but even then it didn't so much take over the world. Edinburgh and parts of Northern Europe needed each other. They needed a large economy to build things together, so that limited... Also, people could copy. Even in the farming revolution, it was more like a 50/50 split between the initial farmers spreading out and taking over territory and the other locals copying them and interbreeding with them.\n\nIf you go all the way back to the introduction of humans, that was much more about one displacing all the rest because there was relatively little way in which they could help each other, complement each other, or share technology.\n\nWhat the issue here is – and obviously I'm done with my five minutes – is this: in this new imagined scenario, how plausible is it that something that's very small could have that much of an advantage, that whatever it has that's new and better gives it such an advantage that it can grow from something that's small, on even a town scale, to being bigger than the world when it's competing against the entire rest of the world – when in these previous innovation situations, even the most disruptive things that ever happened, the new first mover still only gained a modest advantage in terms of being a larger fraction of the new world.\n\nI'll end my five minutes there.\n\nEliezer: The fundamental question of rationality is, what do you think you know and how do you think you know it? This is rather interesting and in fact, it's rather embarrassing, because it seems to me like there's very strong reason to believe that we're going to be looking at a localized intelligence explosion.\n\nRobin Hanson feels there's pretty strong reason to believe that we're going to be looking at a non-local general economic growth mode changeover. Calling it a singularity seems... Putting them all into the category of singularity is slightly begging the definitional question. 
I would prefer to talk about the intelligence explosion as a possible candidate for the reference class, economic growth mode changeovers.\n\nRobin: OK.\n\nEliezer: The embarrassing part is that both of us know the theorem which shows that two rational agents cannot have common knowledge of a disagreement, called Aumann's Agreement Theorem. So we're supposed to, since we know that the other person believes something different, we're supposed to have agreed by now, but we haven't. It's really quite embarrassing.\n\nBut the underlying question is, is the next big thing going to look more like the rise of human intelligence or is it going to look more like the Industrial Revolution? If you look at modern AI projects, the leading edge of artificial intelligence does not look like the product of an economy among AI projects.\n\nThey tend to rewrite their own code. They tend to not use very much cognitive content that other AI projects have developed. They've been known to import libraries that have been published, but you couldn't look at that and say that an AI project which just used what had been published and then developed its own further code would suffer a disadvantage analogous to a country that tried to go its own way apart from the rest of the world economy.\n\nRather, AI projects nowadays look a lot like species, which only share genes within a species and then the other species are all off going their own way.\n\n\\[gestures to Robin\\] What is your vision of the development of intelligence or technology where things are getting traded very quickly, analogous to the global economy?\n\nRobin: Let's back up and make sure we aren't losing people with some common terminology. I believe, like most of you do, that in the near future, within a century, we will move more of the knowledge and intelligence in our society into machines. That is, machines have a lot of promise as hardware substrate for intelligence. You can copy them. You can reproduce them. You can make them go faster. You can have them in environments. We are in complete agreement that eventually hardware, nonbiological hardware, silicon, things like that, will be a more dominant substrate of where intelligence resides. By intelligence, I just mean whatever mental capacities exist that allow us to do mental tasks.\n\nWe are a powerful civilization able to do many mental tasks, primarily because we rely heavily on bodies like yours with heads like yours where a lot of that stuff happens inside biological heads. But we agree that in the future there will be much more of that happening in machines. The question is the path to that situation.\n\nNow, our heritage, what we have as a civilization, a lot of it is a lot of the things inside people's heads. Part of it isn't what was in people's heads 50,000 years ago, but a lot of it is also just what was in people's heads 50,000 years ago. We have this common heritage of brains and minds that go back millions of years to animals and built up with humans and that's part of our common heritage.\n\nThere's a lot in there. Human brains contain an enormous amount of things. I think it's not just one or two clever algorithms or something, it's this vast pool of resources. It's like comparing it to a city, like New York City. 
New York City is a vast, powerful thing because it has lots and lots of stuff in it.\n\nWhen you think in the future there will be these machines and they will have a lot of intelligence in them, one of the key questions is, \"Where will all of this vast mental capacity that's inside them come from?\" Where Eliezer and I differ, I think, is that I think we all have this vast capacity in our heads and these machines are just way, way behind us at the moment, and basically they have to somehow get what's in our heads transferred over to them. Because if you just put one box in a basement and ask it to rediscover the entire world, it's just way behind us. Unless it has some almost inconceivable advantage over us at learning and growing and discovering things for itself, it's just going to remain way behind unless there's some way it can inherit what we have.\n\nEliezer: OK. I gave a talk here at Jane Street that was on the speed of evolution. Raise your hand if you were here for this and remember some of it. OK.\n\n\\[laughter\\]\n\nEliezer: There's a single, simple algorithm which produced the design for the human brain. It's not a very good algorithm, it's extremely slow. It took it millions and millions and billions of years to cough up this artifact over here \\[gestures to head\\]. Evolution is so simple and so slow that we can even make mathematical statements about how slow it is, such as the two separate bounds that I've seen calculated for how fast evolution can work, one of which is on the order of one bit per generation.\n\nIn the sense that, let's say two parents have 16 children, then on average, all but 2 of those children must die or fail to reproduce or the population goes to zero or infinity very rapidly. 16 cut down to 2, that would be three bits of selection pressure per generation. There's another argument which says that it's faster than this.\n\nBut if you actually look at the genome, then we've got about 30,000 genes in here, most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it, and the brain is simply not a very complicated artifact by comparison to, say, Windows Vista. Now, the complexity that it does have, it uses a lot more effectively than Windows Vista does. It probably contains a number of design principles which Microsoft knows not.\n\nBut nonetheless, what I'm trying to say is... I'm not saying that it's that small because it's 750 megabytes, I'm saying it's got to be that small because most of it, at least 90 percent of the 750 megabytes is junk and there's only 30,000 genes for the whole body, never mind the brain.\n\nThat something that simple can be this powerful and this hard to understand is a shock. But if you look at the brain design, it's got 52 major areas on each side of the cerebral cortex, distinguishable by the local pattern, the tiles and so on. It just doesn't really look all that complicated. It's very powerful. It's very mysterious. What we can say about it is that it probably doesn't involve 1,000 different deep major mathematical insights into the nature of intelligence that we'd need to comprehend before we can build it.\n\nThis is probably one of the more intuitive, less easily quantified, less argued-by-reference-to-large-bodies-of-experimental-evidence type things. It's more a sense of, well, you read through \"The MIT Encyclopedia of Cognitive Sciences\" and you read Judea Pearl's \"Probabilistic Reasoning in Intelligent Systems.\" Here's an insight. It's an insight into the nature of causality. 
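As a quick arithmetic check on the figures quoted in this turn (a sketch only: the two-parents-sixteen-children setup is the speaker's illustration, and the genome size below uses the common rough estimate of about 3 billion base pairs at 2 bits each):

```python
import math

# Selection pressure: 16 children per pair, of whom on average 2 survive to
# reproduce, so selection can retain roughly log2(16 / 2) bits per generation.
children, survivors = 16, 2
bits_per_generation = math.log2(children / survivors)
print(bits_per_generation)  # 3.0 -- the quoted "three bits of selection pressure"

# Genome size: ~3 billion base pairs at 2 bits per base (A/C/G/T).
base_pairs = 3e9
megabytes = base_pairs * 2 / 8 / 1e6
print(megabytes)  # 750.0 -- the quoted "750 megabytes of DNA"
```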
How many more insights of this size do we need given that this is what the \"The MIT Encyclopedia of Cognitive Sciences\" seems to indicate we already understand and what it doesn't? You take a gander on it, and you say there's probably about 10 more insights. Definitely not 1. Not 1,000. Probably not 100 either.\n\nRobin: Clarify what's at issue. The question is, what makes your human brain powerful?\n\nMost people who look at the brain and compare it to other known systems have said things like \"It's the most complicated system we know,\" or things like that. Automobiles are also powerful things, but they're vastly simpler than the human brain, at least in terms of the fundamental constructs.\n\nBut the question is, what makes the brain powerful? Because we won't have a machine that competes with the brain until we have it have whatever the brain has that makes it so good. So the key question is, what makes the brain so good?\n\nI think our dispute in part comes down to an inclination toward architecture or content. That is, one view is that there's just a clever structure and if you have that basic structure, you have the right sort of architecture, and you set it up that way, then you don't need very much else, you just give it some sense organs, some access to the Internet or something, and then it can grow and build itself up because it has the right architecture for growth. Here we mean architecture for growth in particular, what architecture will let this thing grow well?\n\nEliezer hypothesizes that there are these insights out there, and you need to find them. And when you find enough of them, then you can have something that competes well with the brain at growing because you have enough of these architectural insights.\n\nMy opinion, which I think many AI experts will agree with at least, including say Doug Lenat who did the Eurisko program that you most admire in AI \\[gesturing toward Eliezer\\], is that it's largely about content. There are architectural insights. There are high-level things that you can do right or wrong, but they don't, in the end, add up to enough to make vast growth. What you need for vast growth is simply to have a big base.\n\nIn the world, there are all these nations. Some are small. Some are large. Large nations can grow larger because they start out large. Cities, like New York City, can grow larger because they start out as a larger city.\n\nIf you took a city like New York and you said, \"New York's a decent city. It's all right. But look at all these architectural failings. Look how this is designed badly or that's designed badly. The roads are in the wrong place or the subways are in the wrong place or the building heights are wrong, the pipe format is wrong. Let's imagine building a whole new city somewhere with the right sort of architecture.\" How good would that better architecture have to be?\n\nYou clear out some spot in the desert. You have a new architecture. You say, \"Come, world, we have a better architecture here. You don't want those old cities. You want our new, better city.\" I predict you won't get many comers because, for cities, architecture matters, but it's not that important. 
It's just lots of people being there and doing lots of specific things that makes a city better.\n\nSimilarly, I think that for minds, what matters is that it just has lots of good, powerful stuff in it, lots of things it knows, routines, strategies, and there isn't that much at the large architectural level.\n\nEliezer: The fundamental thing about our modern civilization is that everything you've ever met that you bothered to regard as any sort of ally or competitor had essentially exactly the same architecture as you.\n\nThe logic of evolution in a sexually reproducing species, you can't have half the people having a complex machine that requires 10 genes to build because then if all the individual genes are at 50 percent frequency, the whole thing only gets assembled 0.1 percent of the time. Everything evolves piece by piece, piecemeal. This, by the way, is standard evolutionary biology. It's not a creationist argument. I just thought I would emphasize that in case anyone was... This is bog standard evolutionary biology.\n\nEveryone you've met, unless they've suffered specific brain damage or a specific genetic deficit, they have all the same machinery as you. They have no complex machine in their brain that you do not have.\n\nOur nearest neighbors, the chimpanzees, who have 95 percent shared DNA with us...  Now, in one sense, that may be a little misleading because what they don't share is probably more heavily focused on brain than body type stuff, but on the other hand, you can look at those brains. You can put the brains through an MRI. They have almost exactly the same brain areas as us. We just have larger versions of some brain areas. I think there's one sort of neuron that we have and they don't, or possibly even they had it but only in very tiny quantities.\n\nThis is because there have been only five million years since we split off from the chimpanzees. There simply has not been time to do any major changes to brain architecture in five million years. It's just not enough to do really significant complex machinery. The intelligence we have is the last layer of icing on the cake and yet, if you look at the sort of curve of evolutionary optimization into the hominid line versus how much optimization power put out, how much horsepower was the intelligence, it goes like this. \\[gestures a flat line, then a sharp vertical increase, then another flat line\\]\n\nIf we look at the world today, we find that taking a little bit out of the architecture produces something that is just not in the running as an ally or a competitor when it comes to doing cognitive labor. Chimpanzees don't really participate in the economy at all, in fact, but the key point from our perspective is that although they are in a different environment, they grow up learning to do different things, there are genuinely skills that chimpanzees have that we don't, such as being able to poke a branch into an anthill and draw it out in such a way as to  have it covered with lots of tasty ants. Nonetheless, there are no branches of science where the chimps do better because they have mostly the same architecture and more relevant content.\n\nIt seems to me at least, that if we look at the present cognitive landscape, we're getting really strong information that, pardon me... 
You can imagine that we're trying to reason from one sample, but then pretty much all of this is reasoning from one sample in one way or another. We're seeing that, in this particular case at least, humans can develop all sorts of content that lets them totally outcompete other animal species who have been doing things for millions of years longer than we have by virtue of architecture, and anyone who doesn't have the architecture isn't really in the running for it.\n\nRobin: So something happened to humans. I'm happy to grant that humans are outcompeting all the rest of the species on the planet.\n\nWe don't know exactly what it is about humans that was different. We don't actually know how much of it was architecture, in a sense, versus other things. But what we can say, for example, is that chimpanzees actually could do a lot of things in our society, except they aren't domesticated.\n\nThe animals we actually use are a very small fraction of the animals out there. It's not because they're smarter, per se, it's because they are just more willing to be told what to do. Most animals aren't willing to be told what to do. If chimps were willing to be told what to do, there's a lot of things we could have them do. \"Planet of the Apes\" would actually be a much more feasible scenario. It's not clear that their cognitive abilities are really that lagging, more that their social skills are lacking.\n\nThe more fundamental point is to say that, since a million years ago when humans probably had language, we are a vastly more powerful species, and that's because we used this ability to collect cultural content and built up a vast society that contains so much more. I think that if you took humans and made some better architectural innovations to them and put a pile of them off in the forest somewhere, we're still going to outcompete them if they're isolated from us because we just have this vaster base that we have built up since then.\n\nAgain, the issue comes down to: how important is architecture? Even if something happened such that some architectural thing finally enabled humans to have culture, to share culture, to have language, to talk to each other, that was powerful. The question is, how many more of those are there? Because we have to hypothesize not just that there are one or two, but there are a whole bunch of these things, because that's the whole scenario, remember?\n\nThe scenario is box in a basement, somebody writes the right sort of code, turns it on. This thing hardly knows anything, but because it has all these architectural insights, it can, in a short time, take over the world. There have to be a lot of really powerful architectural low-hanging fruit to find in order for that scenario to work. It's not just that there are a few ways in which architecture helps, it's that architecture dominates.\n\nEliezer: I'm not sure I would agree that you need lots of architectural insights like that. 
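A quick check of the ten-gene figure quoted a couple of exchanges above (a sketch; the ten genes and the 50 percent frequencies are the speaker's illustrative assumptions):

```python
# Probability that one individual carries a complete 10-gene machine when each
# gene is independently present at 50 percent frequency in the population.
genes = 10
frequency = 0.5
assembled = frequency ** genes
print(f"{assembled:.4%}")  # 0.0977% -- roughly the quoted "0.1 percent of the time"
```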
I mean, to me, it seems more like you just need one or two.\n\nRobin: But one architectural insight allows a box in a basement that hardly knows anything to outcompete the entire rest of the world?\n\nEliezer: Well, if you look at humans, they outcompeted everything evolving, as it were, in the sense that there was this one optimization process, natural selection, that was building up content over millions and millions and millions of years, and then there's this new architecture which can all of the sudden generate vast amounts...\n\nRobin: So humans can accumulate culture, but you're thinking there's another thing that's meta-culture that these machines will accumulate that we aren't accumulating?\n\nEliezer: I'm pointing out that the time scale for generating content underwent this vast temporal compression. In other words, content that used to take millions of years to do now can now be done on the order of hours.\n\nRobin: So cultural evolution can happen a lot faster?\n\nEliezer: Well, for one thing, I could say, unimpressively non-abstract observation, but this thing \\[picks up laptop\\] does run at around 2 billion hertz and this thing \\[points at head\\] runs at about 200 hertz.\n\nRobin: Right.\n\nEliezer: If you can have architectural innovations which merely allow this thing \\[picks up laptop\\] to do the same sort of thing that this thing is doing \\[points to head\\], only a million times faster, then that million times faster means that that 31 seconds works out to about a subjective year and all the time between ourselves and Socrates works out to about eight hours. It may look like it's –\n\nRobin: Lots of people have those machines in their basements. You have to imagine that your basement has something better. They have those machines. You have your machines. Your machine has to have this architectural advantage that beats out everybody else's machines in their basements.\n\nEliezer: Hold on, there's two sort of separate topics here. Previously, you did seem to me to be arguing that we just shouldn't expect that much of a speedup. Then there's the separate question of, \"Well, suppose the speedup was possible, would one basement get it ahead of other basements?\"\n\nRobin: To be clear, the dispute here is that I grant fully that these machines are wonderful and we will move more and more of our powerful content to them and they will execute rapidly and reliably in all sorts of ways to help our economy grow quickly, and in fact, I think it's quite likely that the economic growth rate could accelerate and become much faster. That's with the entire world economy working together, sharing these things, exchanging them and using them.\n\nBut now the scenario is, in a world where people are using these as best they can with their best architecture, best software, best approaches for the computers, one guy in a basement has a computer that's not really much better than anybody else's computer in a basement except that it's got this architectural thing that allows it to within a few weeks take over the world. That's the scenario.\n\nEliezer: Again, you seem to be conceding much more probability. 
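A quick check of the speed figures quoted just above (a sketch; the 2 GHz, 200 Hz, and million-fold numbers are the speaker's round figures):

```python
# Clock-rate ratio between the quoted laptop (~2 GHz) and brain (~200 Hz).
laptop_hz, brain_hz = 2e9, 200
print(laptop_hz / brain_hz)  # 1e7 -- so "a million times faster" is the conservative case

# At a million-fold speedup, one subjective year corresponds to this much wall-clock time:
seconds_per_year = 365.25 * 24 * 3600  # about 3.16e7 seconds
speedup = 1e6
print(seconds_per_year / speedup)  # ~31.6 seconds -- the quoted "31 seconds ... a subjective year"
```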
I'm not sure to what degree you think it's likely, but you do seem to be conceding much more probability that there is, in principle, some program where if it was magically transmitted to us, we could take a modern day large computing cluster and turn it into something that could generate what you call content a million times faster.\n\nTo the extent that that is possible, the whole brain in a box scenario thing does seem to become intuitively more credible. To put it another way, if you just couldn't have an architecture better than this \\[points to head\\], if you couldn't run at faster speeds than this, if all you could do was use the same sort of content that had been laboriously developed over thousands of years of civilization and you couldn't really generate, and there wasn't really any way to generate content faster than that, then the \"foom\" scenario does go out the window.\n\nIf, on the other hand, there's this gap between where we are now and this place where you can generate content millions of times faster, then there is a further issue of whether one basement gets that ahead of other basements, but it suddenly does become a lot more plausible if you had a civilization that was ticking along just fine for thousands of years, generating lots of content, and then something else came along and just sucked all that content that it was interested in off the Internet, and...\n\nRobin: We've had computers for a few decades now. This idea that once we have computers, innovation will speed up, we've already been able to test that idea, right? Computers are useful in some areas as complementary inputs, but they haven't overwhelmingly changed the growth rate of the economy. We've got these devices. They run a lot faster, but where we can use them, we use them, but overall limitations to innovation are much more about having good ideas and trying them out in the right places, and pure computation isn't, in our world, that big an advantage in doing innovation.\n\nEliezer: Yes, but it hasn't been running this algorithm, only faster \\[gestures to head\\]. It's been running spreadsheet algorithms. I fully agree that spreadsheet algorithms are not as powerful as the human brain. I mean, I don't know if there's any animal that builds spreadsheets, but if they do, they would not have taken over the world thereby.\n\nRobin: Right. When you point to your head, you say, \"This algorithm.\" There's million of algorithms in there. We are slowly making your laptops include more and more kind of algorithms that are the sorts of things in your head. The question is, will there be some sudden threshold where entire heads go into the laptops all at once, or do laptops slowly accumulate the various kinds of innovations that heads contain?\n\nEliezer: Let me try to take it down a level in concreteness. The idea is there are key insights, you can use them to build an AI. You've got a brain in the box in a basement team. They take the key insights, they build the AI, the AI goes out, sucks a lot of information off the Internet, duplicating a lot of content that way because it's stored in a form where it can understand it on its own and download it very rapidly and absorb it very rapidly.\n\nThen, in terms of taking over the world, nanotechnological progress is not that far ahead of its current level, but this AI manages to crack the protein folding problem so it can email something off to one of those places that will take an email DNA strain and FedEx you back the proteins in 72 hours. 
There are places like this. Yes, we have them now.\n\nRobin: So, we grant that if there's a box somewhere that's vastly smarter than anybody on Earth, or vastly smarter than any million people on Earth, then we've got a problem. The question is, how likely is that scenario?\n\nEliezer: No, what I'm trying to distinguish here is the question of does that potential exist versus is that potential centralized. To the extent that that you say, \"OK. There would in principle be some way to know enough about intelligence that you could build something that could learn and absorb existing content very quickly.\"\n\nIn other words, the question, I'm trying to separate out the question of, \"How dumb is this thing, \\[points to head\\] how much smarter can you build an agent, if that agent were teleported into today's world, could it take over?\" versus the question of \"Who develops it, in what order, and were they all trading insights or was it more like a modern\\-day financial firm where you don't show your competitors your key insights, and so on, or, for that matter, modern artificial intelligence programs?\"\n\nRobin: I grant that a head like yours could be filled with lots more stuff, such that it would be vastly more powerful. I will call most of that stuff \"content,\" you might call it \"architecture,\" but if it's a million little pieces, architecture is kind of content. The key idea is, is there one or two things, such that, with just those one or two things, your head is vastly, vastly more powerful?\n\nEliezer: OK. So what do you think happened between chimps and humans?\n\nRobin: Something happened, something additional. But the question is how many more things are there like that?\n\nEliezer: One obvious thing is just the speed. You do –\n\nRobin: Between chimps and humans, we developed the ability to transmit culture, right? That's the obvious explanation for why we've been able to grow faster. Using language, we've been able to transmit insights and accumulate them socially rather than in the genes, right?\n\nEliezer: Well, people have tried raising chimps in human surroundings, and they absorbed this mysterious capacity for abstraction that sets them apart from other chimps. There's this wonderful book about one of these chimps, Kanzi was his name. Very, very famous chimpanzee, probably the world's most famous chimpanzee, and probably the world's smartest chimpanzee as well. They were trying to teach his mother to do these human things. He was just a little baby chimp, he was watching. He picked stuff up. It's amazing, but nonetheless he did not go on to become the world's leading chimpanzee scientist using his own chimpanzee abilities separately.\n\nIf you look at human beings, then we have this enormous processing object containing billions upon billions of neurons, and people still fail the Wason selection task. They cannot figure out which playing card they need to turn over to verify the rule, \"If a card has an even number on one side, it has a vowel on the other.\" They can't figure out which cards they need to turn over to verify whether this rule is true or false.\n\nRobin: Again, we're not distinguishing architecture and content here. I grant that you can imagine boxes the size of your brain that are vastly more powerful than your brain. The question is, what could create a box like that? The issue here is I'm saying the way something like that happens is through the slow accumulation of improvement over time the hard way. 
There's no shortcut of having one magic innovation that jumps you there all at once. I'm saying that –\n\nI wonder if we should ask for questions and see if we've lost the audience by now.\n\nEliezer: Yeah. It does seem to me that you're sort of equivocating between arguing that the gap doesn't exist or isn't crossable versus saying the gap is crossed in a decentralized fashion. But I agree that taking some sort of question from the audience might help refocus this.\n\nRobin: Help us.\n\nEliezer: Yes. Does anyone want to..?\n\nRobin: We lost you?\n\nAudience Member: Isn't one of the major advantages..?\n\nEliezer: Voice, please.\n\nMan 1: Isn't one of the major advantages that humans have over animals the prefrontal cortex? More of the design than content?\n\nRobin: I don't think we know, exactly.\n\nWoman 1: Robin, you were hypothesizing that it would be a series of many improvements that would lead to this vastly smarter meta-brain.\n\nRobin: Right.\n\nWoman 1: But if the idea is that each improvement makes the next improvement that much easier, then wouldn't it quickly, quickly look like just one or two improvements?\n\nRobin: The issue is the spatial scale on which improvement happens. For example, if you look at, say, programming languages, a programming language with a lot of users, compared to a programming language with a small number of users, the one with a lot of users can accumulate improvements more quickly, because there are many...\n\n\\[laughter\\]\n\nRobin: There are ways you might resist it too, of course. But there are just many people who could help improve it. Or similarly, with something other that gets used by many users, they can help improve it. It's not just what kind of thing it is, but how large a base of people are helping to improve it.\n\nEliezer: Robin, I have a slight suspicion that Jane Street Capital is using its own proprietary programming language.\n\n\\[laughter\\]\n\nRobin: Right.\n\nEliezer: Would I be correct in that suspicion?\n\nRobin: Well, maybe get advantages.\n\nMan 2: It's not proprietary – esoteric.\n\nRobin: Esoteric. But still, it's a tradeoff you have. If you use your own thing, you can be specialized. It can be all yours. But you have fewer people helping to improve it.\n\nIf we have the thing in the basement, and it's all by itself, it's not sharing innovations with the rest of the world in some large research community that's building on each other, it's just all by itself, working by itself, it really needs some other advantage that is huge to counter that. Because otherwise we've got a scenario where people have different basements and different machines, and they each find a little improvement and they share that improvement with other people, and they include that in their machine, and then other people improve theirs, and back and forth, and all the machines get better and faster.\n\nEliezer: Well, present-day artificial intelligence does not actually look like that. So you think that in 50 years artificial intelligence or creating cognitive machines is going to look very different than it does right now.\n\nRobin: Almost every real industrial process pays attention to integration in ways that researchers off on their own trying to do demos don't. People inventing new cars, they didn't have to make a car that matched a road and a filling station and everything else, they just made a new car and said, \"Here's a car. 
Maybe we should try it.\" But once you have an automobile industry, you have a whole set of suppliers and manufacturers and filling stations and repair shops and all this that are matched and integrated to each other. In a large, actual economy of smart machines with pieces, they would have standards, and there would be strong economic pressures to match those standards.\n\nEliezer: Right, so a very definite difference of visualization here is that I expect the dawn of artificial intelligence to look like someone successfully building a first-of-its-kind AI that may use a lot of published insights and perhaps even use some published libraries but it's nonetheless a prototype, it's a one-of-a-kind thing, it was built by a research project.\n\nAnd you're visualizing that at the time interesting things start to happen, or maybe even there is no key threshold, because there's no storm of recursive self-improvements, you're visualizing just like everyone gets slowly better and better at building smarter and smarter machines. There's no key threshold.\n\nRobin: I mean, it is the sort of Bond villain, Captain Nemo on his own island doing everything, beating out the rest of the world isolated, versus an integrated...\n\nEliezer: Or rise of human intelligence. One species beats out all the other species. We are not restricted to fictional examples.\n\nRobin: Humans couldn't share with the other species, so there was a real limit.\n\nMan 3: In one science fiction novel, I don't remember its name, there was a very large storm of nanobots. These nanobots had been created so long ago that no one knew what the original plans were. You could ask the nanobots for their documentation, but there was no method, they'd sometimes lie. You couldn't really trust the manual they gave you. I think one question that's happening here is when we have a boundary where we hit the point where suddenly someone's created software that we can't actually understand, like it's not actually \\[inaudible 46:13\\] –\n\nRobin: We're there. \\[laughs\\]\n\nMan 3: Well, so are we actually there... so, Hanson –\n\nRobin: We've got lots of software we don't understand. Sure. \\[laughs\\]\n\nMan 3: But we can still understand it at a very local level, disassemble it. It's pretty surprising to what extent Windows has been reverse engineered by the millions of programmers who work on it. I was going to ask you if getting to that point was key to the resulting exponential growth, which is not permitting the transfer of information. Because if you can't understand the software, you can't transmit the insights using your own \\[inaudible 46:53\\].\n\nEliezer: That's not really a key part of my visualization. I think that there's a sort of mysterian tendency, like people who don't know how neural networks work are very impressed by the fact that you can train neural networks to do something without knowing how it works. As if your ignorance of how they worked was responsible for making them work better somehow. So ceteris paribus, not being able to understand your own software is a bad thing.\n\nRobin: Agreed.\n\nEliezer: I wasn't really visualizing there being a key threshold where incomprehensible software is a... Well OK. The key piece of incomprehensible software in this whole thing is the brain. This thing is not end\\-user modifiable. If something goes wrong you can't just swap out one module and plug in another one, and that's why you die.
You die, ultimately, because your brain is not end\\-user modifiable and doesn't have IO ports or hot\\-swappable modules or anything like that.\n\nThe reason why I expect localist sort of things is that I expect one project to go over the threshold for intelligence in much the same way that chimps went over the threshold of intelligence and became humans. Yes, I know that's not evolutionarily accurate.\n\nThen, even though they now have this functioning mind, to which they can make all sorts of interesting improvements and have it run even better and better. Whereas, meanwhile all the other cognitive work on the planet is being done by these non-end\\-user\\-modifiable human intelligences which cannot really make very good use of the insights, although it is an intriguing fact that after spending some time trying to figure out artificial intelligence I went off and started blogging about human rationality.\n\nMan 4: I just wanted to clarify one thing. Would you guys both agree – well, I know you would agree, would you agree, Robin, that in your scenario, if one – just imagine one had a time machine that could carry a physical object the size of this room, and you could go forward 1,000 years into the future and essentially create and bring back to the present day an object, say, the size of this room, that you could take over the world with that?\n\nRobin: Aye aye without doubt.\n\nMan 4: OK. The question is whether that object is –\n\nEliezer: Point of curiosity. Does this work too? \\[holds up cell phone\\] Object of this size?\n\nRobin: Probably.\n\nEliezer: Yeah. I figured \\[inaudible 49:21\\] \\[laughs\\]\n\n \nMan 4: The question is, does the development of that object essentially happen in a very asynchronous way or more broadly?\n\nRobin: I think I should actually admit that there is a concrete scenario that I can imagine that fits much more of his concerns. I think that the most likely way that the content that's in our heads will end up in silicon is something called \"whole brain emulation,\" where you take actual brains, scan them, and make a computer model of that brain, and then you can start to hack them to take out the inefficiencies and speed them up.\n\nIf the time at which it was possible to scan a brain and model it sufficiently was a time when the computer power to actually run those brains was very cheap, then you have more of a computing cost overhang, where the first person who can manage to do that can then make a lot of it very fast, and then you have more of your scenario. It's because, with emulation, there is this sharp threshold. Until you have a functioning emulation, you just have shit, because it doesn't work, and then when you have it work, it works as well as \\[indecipherable 50:22\\].\n\nEliezer: Right. So, in other words, we get a centralized economic shock, because there's a curve here that has a little step function in it. If I can step back and describe what you're describing on a higher level of abstraction, you have emulation technology that is being developed all over the world, but there's this very sharp threshold in how well the resulting emulation runs as a function of how good your emulation technology is. The output of the emulation experiences a sharp threshold.\n\nRobin: Exactly.\n\nEliezer: In particular, you can even imagine there's a lab that builds the world's first correctly functioning scanner. It would be a prototype, one\\-of\\-its\\-kind sort of thing. 
It would use lots of technology from around the world, and it would be very similar to other technology from around the world, but because they got it, you know, there's one little extra year they added on, they are now capable of absorbing all of the content in here \\[points at head\\] at an extremely great rate of speed, and that's where the first\\-mover effect would come from.\n\nRobin: Right. The key point is for an emulation there's this threshold. If you get it almost right, you just don't have something that works. When you finally get enough, then it works, and you get all the content through. It's like if some aliens were sending a signal and we just couldn't decode their signal. It was just noise, and then finally we figured out the code, and then we got a high bandwidth rate and they're telling us lots of technology secrets. That would be another analogy, a sharp threshold where suddenly you get lots of stuff.\n\nEliezer: So you think there's a mainline, like, higher-than-50-percent probability that we get this sort of threshold with emulations?\n\nRobin: It depends on which is the last technology to be ready with emulations. If computing is cheap when the thing is ready, then we have this risk. I actually think that's relatively unlikely, that the computing will still be expensive when the other things are ready, but...\n\nEliezer: But there'd still be a speed\\-of\\-content\\-absorption effect, it just wouldn't give you lots of emulations very quickly.\n\nRobin: Right. It wouldn't give you this huge economic power.\n\nEliezer: And similarly, with chimpanzees we also have some indicators that at least their ability to do abstract science... There's what I like to call the \"one wrong number\" function curve or the \"one wrong number\" curve where dialing 90 percent of my phone number correctly does not get you 90 percent of Eliezer Yudkowsky.\n\nRobin: Right.\n\nEliezer: So similarly, dialing 90 percent of human correctly does not get you a human – or 90 percent of a scientist.\n\nRobin: I'm more skeptical that there's this architectural thing between humans and chimps. I think it's more about the social dynamic of, \"We managed to have a functioning social situation \"\n\nEliezer: Why can't we raise chimps to be scientists?\n\nRobin: Most animals can't be raised to be anything in our society. Most animals aren't domesticable. It's a matter of whether they evolved the social instincts to work together.\n\nEliezer: But Robin, do you actually think that if we could domesticate chimps they would make good scientists?\n\nRobin: They would certainly be able to do a lot of things in our society. There are a lot of roles in even scientific labs that don't require that much intelligence.\n\n\\[laughter\\]\n\nEliezer: OK, so they can be journal editors, but can they actually be innovators. \\[laughs\\]\n\n\\[laughter\\]\n\nRobin: For example.\n\nMan 5: My wife's a journal editor!\n\n\\[laughter\\]\n\nRobin: Let's take more questions.\n\nEliezer: My sympathies.\n\n\\[laughter\\]\n\nRobin: Questions.\n\nMan 6: Professor Hanson, you seem to have the idea that social skill is one of the main things that separate humans from chimpanzees. Can you envision a scenario where one of the computers acquired this social skill and comes to the other computers and says, \"Hey, guys, we can start a revolution here\"?\n\n\\[laughter\\]\n\nMan 6: Maybe that the first mover, then? 
That that might be the first mover?\n\nRobin: One of the nice things about the vast majority of software in our world is that it's really quite socially compliant. You can take a chimpanzee and bring him in and you can show him some tasks and then he can do it for a couple of hours. Then just some time randomly in the next week he'll go crazy and smash everything, and that ruins their entire productivity. Software doesn't do that so often.\n\n\\[laughter\\]\n\nEliezer: No comment. \\[laughs\\]\n\n\\[laughter\\]\n\nRobin: Software, the way it's designed, it's set up to be relatively socially compliant. Assuming that we continue having software like that, we're relatively safe. If you go out and design software like wild chimps, that can just go crazy and smash stuff once in a while, I don't think I want to buy your software. \\[laughs\\]\n\nMan 7: I don't know if this sidesteps the issue, but to what extent do either of you think something like government classification or the desire of some more powerful body to innovate and then keep what it innovates secret could affect centralization to the extent you were talking about?\n\nEliezer: As far as I can tell, what happens when the government tries to develop AI is nothing, but that could just be an artifact of our local technological level and it might change over the next few decades.\n\nTo me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense, it's just more confusing. We know why it's difficult to build a star. You've got to gather a very large amount of interstellar hydrogen in one place. We understand what sort of labor goes into a star and we know why a star is difficult to build.\n\nWhen it comes to building a mind, we don't know how to do it, so it seems very hard. We query our brains to say, \"Map us a strategy to build this thing,\" and it returns null, so it feels like it's a very difficult problem. But in point of fact, we don't actually know that the problem is difficult apart from being confusing.\n\nWe understand the star-building problems. We know it's difficult. This one, we don't know how difficult it's going to be after it's no longer confusing. So, to me, the AI problem looks like the problem is finding bright enough researchers, bringing them together, letting them work on that problem instead of demanding that they work on something where they're going to produce a progress report in two years which will validate the person who approved the grant and advance their career.\n\nThe government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. This is not a universal statement. I've met smart senior people in AI, but nonetheless.\n\nBasically I'm not very afraid of the government because I don't think it's a \"throw warm bodies at the problem,\" and I don't think it's \"throw warm computers at the problem,\" I think it's about good methodology, good people selection, letting them do sufficiently blue\\-sky stuff, and so far, historically, the government has just been tremendously bad at producing that kind of progress. When they have a great big project and try to build something, it doesn't work. When they fund long-term research \\[inaudible 57:48\\].\n\nRobin: I agree with Eliezer, that in general you too often go down the route of trying to grab something before it's grabbable.
But there is the scenario, certainly in the midst of a total war, when you have a technology that seems to have strong military applications and not many other applications, you'd be wise to keep that application within the nation or your side of the alliance in the war.\n\nBut there's too much of a temptation to use that sort of thinking when you're not in a war or when the technology isn't directly military\\-applicable but has several steps of indirection. You can often just screw it up by trying to keep it secret.\n\nThat is, your tradeoff is between trying to keep it secret and getting this advantage versus putting this technology into the pool of technologies that the entire world develops together and shares, and usually that's the better way to get advantage out of it unless, again, you can identify a very strong military application and a particular use.\n\nEliezer: That sounds like a plausible piece of economic logic, but it seems plausible to the same extent as the economic logic which says there should obviously never be wars because they're never Pareto optimal. There's always a situation where you didn't spend any of your resources in attacking each other, which was better. And it sounds like the economic logic which says that there should never be any unemployment because of Ricardo's Law of Comparative Advantage, which means there's always someone who you can trade with.\n\nIf you look at the state of present\\-world technological development, there's basically either published research or proprietary research. We do not see corporations in closed networks where they trade their research with each other, but not with the outside world. There's either published research, with all the attendant free\\-rider problems that implies, or there's proprietary research. As far as I know, may this room correct me if I'm mistaken, there is not a set of, like, three leading trading firms which are trading all of their internal innovations with each other and not with the outside world.\n\nRobin: If you're a software company, and you locate in Silicon Valley, you've basically agreed that a lot of your secrets will leak out, as your employees come in and leave your company. Choosing where to locate a company is often a choice to accept a certain level of leakage of what happens within your... in trade for a leakage from the other companies back toward you. So, in fact, people who choose to move to those areas in those industries do in fact choose to have a set of...\n\nEliezer: But that's not trading innovations with each other and not with the rest of the outside world. I can't actually even think of where we would see that pattern.\n\nRobin: It is. More trading with the people in the area than with the rest of the world.\n\nEliezer: But that's coincidental side-effect trading. That's not deliberate, like, \"you scratch my back...\"\n\nRobin: But that's why places like that get the big advantage, because you go there and lots of stuff gets traded back and forth.\n\nEliezer: Yes, but that's the commons. It's like a lesser form of publication. It's not a question of me offering this company an innovation in exchange for their innovation.\n\nRobin: Well, probably a little sidetracked. Other...\n\nMan 8: It's actually relevant to this little...
It seems to me that there's both an economic and social incentive for people to release partial results and imperfect products and steps along the way, which it seems would tend to yield a more gradual approach towards this breakthrough that we've been discussing. Do you disagree? I know you disagree, but why do you disagree?\n\nEliezer: Well, here at the Singularity Institute, we plan to keep all of our most important insights private and hope that everyone else releases their results.\n\n\\[laughter\\]\n\nMan 8: Right, but... human-inspired innovations haven't worked that way, which then I guess –\n\nEliezer: Well, we certainly hope everyone else thinks that way.\n\n\\[laughter\\]\n\nRobin: Usually you don't have a policy about having these things leaked, but in fact you make very social choices that you know will lead to leaks, and you accept those leaks in trade for the other advantages those policies bring. Often they are that you are getting leaks from others. So locating yourself in a city where there are lots of other firms, sending your people to conferences where other people are going to the same conferences, those are often ways in which you end up leaking and getting leaks in trade.\n\nMan 8: So the team in the basement won't release anything until they've got the thing that's going to take over the world?\n\nEliezer: Right. We were not planning to have any windows in the basement.\n\n\\[laughter\\]\n\nMan 9: Why do we think that...\n\nEliezer: If anyone has a microphone that can be set up over here, I will happily donate this microphone.\n\nMan 9: Why do we think that if we manage to create an artificial human brain, that it would immediately work much, much faster than a human brain? What if a team in the basement makes an artificial human brain, but it works at one billionth the speed of a human brain? Wouldn't that give other teams enough time to catch up?\n\nEliezer: First of all, the course we're visualizing is not like building a human brain in your basement, because, based on what we already understand about intelligence, we don't understand everything, but we understand some things, and what we understand seems to me to be quite sufficient to tell you that the human brain is a completely crap design, which is why it can't solve the Wason selection task.\n\nYou pick up any bit of the heuristics and biases literature and there's 100 different ways that this thing reliably experimentally malfunctions when you give it some simple-seeming problems. You wouldn't actually want to build anything that worked like the human brain. It would miss the entire point of trying to build a better intelligence.\n\nBut if you were to scan a brain, then this is more something that Robin has studied in more detail than I have, then the first one might run at one thousandth your speed or might run at 1,000 times your speed. It depends on the hardware overhang, on what the cost of computer power happens to be at the point where your scanners get good enough. Is that fair?\n\nRobin: Or your modeling is good enough.\n\nActually, the scanner being the last thing isn't such a threatening scenario because then you'd have a big consortium get together to do the last scan when it's finally cheap enough. But the modeling being the last thing is more disruptive, because it's just more uncertain when modeling gets done.\n\nEliezer: By modeling, you mean?\n\nRobin: The actual modeling of the brain cells in terms of translating a scan into...\n\nEliezer: Oh, I see.
So in other words, if there's known scans but you can't model the brain cells, then there's an even worse last\\-mile problem?\n\nRobin: Exactly.\n\nEliezer: I'm trying to think if there's anything else I can...\n\nI would hope to build an AI that was sufficiently unlike human, because it worked better, that there would be no direct concept of how fast does this run relative to you. It would be able to solve some problems very quickly, and if it can solve all problems much faster than you, we're already getting into the superintelligence range.\n\nBut at the beginning, you would already expect it to be able to do arithmetic immensely faster than you, and at the same time it might be doing basic scientific research a bit slower. Then eventually, it's faster than you at everything, but possibly not the first time you boot up the code.\n\nMan 10: I'm trying to envision intelligence explosions that win Robin over to Yudkowsky's position. Does either one of these, or maybe a combination of both, self-improving software or nanobots that build better nanobots, is that unstable enough? Or do you still sort of feel that would be a widespread benefit?\n\nRobin: The key debate we're having isn't about the rate of change that might eventually happen. It's about how local that rate of change might start.\n\nIf you take the self-improving software – of course, we have software that self improves, it just does a lousy job of it. If you imagine steady improvement in the self-improvement, that doesn't give a local team a strong advantage. You have to imagine that there's some clever insight that gives a local team a vast, cosmically vast, advantage in its ability to self-improve compared to the other teams such that not only can it self improve, but it self improves like gangbusters in a very short time.\n\nWith nanobots again, if there's a threshold where you have nothing like a nanobot and then you have lots of them and they're cheap, that's more of a threshold kind of situation. Again, that's something that the nanotechnology literature had a speculation about a while ago. I think the consensus moved a little more against that in the sense that people realized those imagined nanobots just wouldn't be as economically viable as some more, larger-scale manufacturing process to make them.\n\nBut again, it's the issue of whether there's that sharp threshold where you're almost there and it's just not good enough because you don't really have anything and then you finally pass the threshold and now you've got vast power.\n\nEliezer: What do you think you know and how do you think you know it with respect to this particular issue of that which yields the power of human intelligence is made up of a thousand pieces, or a thousand different required insights? Is this something that should seem more plausible in principle? Where does that actually come from?\n\nRobin: One set of sources is just what we've learned as economists and social scientists about innovation in our society and where it comes from. That innovation in our society comes from lots of little things accumulating together, it rarely comes from one big thing. It's usually a few good ideas and then lots and lots of detail worked out. That's generically how innovation works in our society and has for a long time. 
That's certainly a clue about the nature of what makes things work well, that they usually have some architecture and then there's just lots of detail and you have to get it right before something really works.\n\nThen, in the AI field in particular, there's also this large... I was an artificial intelligence researcher for nine years, but it was a while ago. In that field in particular there's this... The old folks in the field tend to have a sense that people come up with new models. But if you look at their new models, people remember a while back when people had something a lot like that, except they called it a different name. And they say, \"Fine, you have a new name for it.\"\n\nYou keep reinventing new names and new architectures, but they keep cycling among a similar set of concepts for architecture. They don't really come up with something very dramatically different. They just come up with different ways of repackaging different pieces in the architecture for artificial intelligence. So there was a sense to which, maybe we'll find the right combination but it's clear that there's just a lot of pieces together.\n\nIn particular, Douglas Lenat did this system that you and I both respect called Eurisko a while ago that had this nice simple architecture and was able to self-modify and was able to grow itself, but its growth ran out and slowed down. It just couldn't improve itself very far even though it seemed to have a nice, elegant architecture for doing so. Lenat concluded, I agree with him, that the reason it couldn't go very far is it just didn't know very much. The key to making something like that work was to just collect a lot more knowledge and put it in so it had more to work with \\[indecipherable 1:09:12\\] improvements.\n\nEliezer: But Lenat's still trying to do that 15 years later and so far Cyc does not seem to work even as well as Eurisko.\n\nRobin: Cyc does some pretty impressive stuff. I'll agree that it's not going to replace humans any time soon, but it's an impressive system...\n\nEliezer: It seems to me that Cyc is an iota of evidence against this view. That's what Cyc was supposed to do. You're supposed to put in lots of knowledge and then it was supposed to go foom, and it totally didn't.\n\nRobin: It was supposed to be enough knowledge and it was never clear how much is required. So apparently what they have now isn't enough.\n\nEliezer: But clearly Lenat thought there was some possibility it was going to go foom in the next 15 years. It's not that this is quite unfalsifiable, it's just been incrementally more and more falsified.\n\nRobin: I can point to a number of senior AI researchers who basically agree with my point of view that this AI foom scenario is very unlikely. This is actually more of a consensus, really, among senior AI researchers.\n\nEliezer: I'd like to see that poll, actually, because I could point to AI researchers who agree with the opposing view as well.\n\nRobin: AAAI has a panel where they have a white paper where they're coming out and saying explicitly, \"This explosive AI view, we don't find that plausible.\"\n\nEliezer: Are we talking about the one with, what's his name, from..?\n\nRobin: Norvig?\n\nEliezer: Eric Horvitz?\n\nRobin: Horvitz, yeah.\n\nEliezer: Was Norvig on that? I don't think Norvig was on that.\n\nRobin: Anyway, Norvig just has a paper that... 
Norvig just made the press in the last day or so arguing about linguistics with Chomsky, saying that this idea that there's a simple elegant theory of linguistics is just wrong. It's just a lot of messy detail to get linguistics right, which is a similar sort of idea. There is no key architecture –\n\nEliezer: I think we have a refocusing question from the audience.\n\nMan 11: No matter how smart this intelligence gets, to actually take over the world...\n\nEliezer: Wait for the microphone. Wait for the microphone.\n\nMan 11: This intelligence has to interact with the world to be able to take over it. So if we had this box, and we were going to use it to try to make all the money in the world, we would still have to talk to all the exchanges in the world, and learn all the bugs in their protocol, and the way that we're able to do that is that there are humans at the exchanges that operate at our frequency and our level of intelligence, we can call them and ask questions.\n\nAnd this box, if it's a million times smarter than the exchanges, it still has to move at the speed of the exchanges to be able to work with them and eventually make all the money available on them. And then if it wants to take over the world through war, it has to be able to build weapons, which means mining and building factories, and doing all these things that are really slow and also require extremely high-dimensional knowledge that seems to have nothing to do with just how fast it can think. No matter how fast you can think, it's going to take a long time to build a factory that can build tanks.\n\nHow is this thing going to take over the world when...?\n\nEliezer: The analogy that I use here is, imagine you have two people having an argument just after the dawn of human intelligence, there's these two aliens in a spaceship, neither of whom have ever seen a biological intelligence – we're going to totally skip over how this could possibly happen coherently. But there are these two observers in spaceships who have only ever seen earth. They're watching these new creatures who have intelligence. They're arguing over, how fast can these creatures progress?\n\nOne of them says, \"Well, it doesn't matter how smart they are. They've got no access to ribosomes. There's no access from the brain to the ribosomes. They're not going to be able to develop new limbs or make honey or spit venom, so really we've just got these squishy things running around without very much of an advantage for all their intelligence, because they can't actually make anything, because they don't have ribosomes.\"\n\nAnd we eventually bypassed that whole sort of existing infrastructure and built our own factory systems that had a more convenient access to us. Similarly, there's all this sort of infrastructure out there, but it's all infrastructure that we created. The new system does not necessarily have to use our infrastructure if it can build its own infrastructure.\n\nAs for how fast it might happen, well, in point of fact we actually popped up with all these factories on a very rapid time scale, compared to the amount of time it took natural selection to produce ribosomes. We were able to build our own new infrastructure much more quickly than it took to create the previous infrastructure.\n\nTo put it on a very concrete level, if you can crack the protein folding problem, you can email a DNA string to one of these services that will send you back the proteins that you asked for with a 72\\-hour turnaround time. 
Three days may sound like a very short period of time to build your own economic infrastructure relative to how long we're used to it taking, but in point of fact this is just the cleverest way that I could think of to do it, and 72 hours would work out to I don't even know how long at a million to one speedup rate. It would be like thousands upon thousands upon thousands of years. But there might be some even faster way to get your own infrastructure than the DNA...\n\nMan 11: Is this basic argument something you two roughly agree on or roughly disagree on?\n\nRobin: I think we agree on the specific answer to the question, but we differ on how to frame it. I think it's relevant to our discussion. I would say our civilization has vast capacity and most of the power of that capacity is a mental capacity. We, as a civilization, have a vast mental capacity. We are able to think about a lot of things and calculate and figure out a lot of things.\n\nIf there's a box somewhere that has a mental capacity comparable to the rest of human civilization, I've got to give it some respect and figure it can do a hell of a lot of stuff. I might quibble with the idea that if it were just intelligent it would have that mental capacity. Because it comes down to, \"Well, this thing was improving what about itself exactly?\" So there's the issue of what various kinds of things does it take to produce various kinds of mental capacities?\n\nI'm less enamored of the idea that there's this intelligence thing. If it's just intelligent enough it doesn't matter what it knows. It's just really smart and I'm not sure that concept makes sense.\n\nEliezer: Or it can learn much faster than you can learn. It doesn't necessarily have to go through college the way you did, because it is able to, much more rapidly, learn either by observing reality directly or... Point of fact, given our current state of society, you can just cheat, you can just download it from the Internet.\n\nRobin: Simply positing it has a great mental capacity, then I will be in fear of what it does. The question is how does it get that capacity?\n\nEliezer: Would the audience be terribly offended if I tried to answer that one a bit? The thing is there are a number of places the step function can come in. We could have a historical step function like what happens from humans to chimps. We could have the combined effect of all the obvious ways to rebuild an intelligence if you're not doing it evolutionarily.\n\nYou build an AI and it's on a two gigahertz chip instead of 200 hertz neurons. It has complete read and write access to all the pieces of itself. It can do repeatable mental processes and run its own, internal, controlled experiments on what sort of mental processes work better and then copy it onto new pieces of code. Unlike this hardware \\[points to head\\] where we're stuck with a certain amount of hardware, if this intelligence works well enough it can buy, or perhaps simply steal, very large amounts of computing power from the large computing clusters that we have out there.\n\nIf you want to solve a problem, there's no way that you can allocate, reshuffle, reallocate internal resources to different aspects of it.
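A rough back-of-the-envelope check of the two ratios mentioned here, using only the speakers' own round numbers (a 2 GHz chip versus roughly 200 Hz neurons, and a 72-hour turnaround experienced at a hypothetical million-to-one subjective speedup); these figures are illustrative assumptions from the conversation, not measured values:\n\n```python\n# Back-of-the-envelope arithmetic for the speedups discussed above.\n# Assumed round numbers from the conversation (illustrative, not measured):\n# a 2 GHz chip vs. ~200 Hz neuron firing rates, and a 72-hour real-world\n# turnaround experienced at a hypothetical 1,000,000x subjective speedup.\n\nclock_ratio = 2e9 / 200                    # ~1e7: chip clock vs. neuron firing rate\nsubjective_hours = 72 * 1_000_000          # 72 real hours as experienced at 10^6x speed\nsubjective_years = subjective_hours / (24 * 365)\n\nprint(f'clock ratio: {clock_ratio:.0e}')             # 1e+07\nprint(f'subjective years: {subjective_years:,.0f}')  # about 8,219\n```\n\nOn those numbers, the 72-hour turnaround corresponds to roughly eight thousand subjective years for a mind running a million times faster, which matches the \"thousands upon thousands of years\" Eliezer describes.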
To me it looks like architecturally, if we've got down the basic insights that underlie human intelligence, and we can add all the cool stuff that we could do if we were designing an artificial intelligence instead of being stuck with the ones that evolution accidentally burped out, it looks like they should have these enormous advantages.\n\nWe may have six billion people on this planet, but they don't really add that way. Six billion humans are not six billion times as smart as one human. I can't even imagine what that planet would look like. It's been known for a long time that buying twice as many researchers does not get you twice as much science. It gets you twice as many science papers. It does not get you twice as much scientific progress.\n\nHere we have some other people in the Singularity Institute who have developed theses that I wouldn't know how to defend myself, which are more extreme than mine, to the effect that if you buy twice as much science you get flat output or even it actually goes down because you decrease the signal\\-to\\-noise ratio. But, now I'm getting a bit off track.\n\nWhere does this enormous power come from? It seems like human brains are just not all that impressive. We don't add that well. We can't communicate with other people. One billion squirrels could not compete with the human brain. Our brain is about four times as large as a chimp's, but four chimps cannot compete with one human.\n\nMaking a brain twice as large and actually incorporating it into the architecture seems to produce a scaling of output of intelligence that is not even remotely comparable to the effect of taking two brains of fixed size and letting them talk to each other using words. So an artificial intelligence that can do all this neat stuff internally and possibly scale its processing power by orders of magnitude, that itself has a completely different output function than human brains trying to talk to each other.\n\nTo me, the notion that you can have something incredibly powerful and yes, more powerful than our sad little civilization of six billion people flapping their lips at each other running on 200 hertz brains, is actually not all that implausible.\n\nRobin: There are devices that think, and they are very useful. So 70 percent of world income goes to pay for creatures who have these devices that think, and they are very, very useful. It's more of an open question, though, how much of that use is because they are a generic good thinker or because they know many useful particular things?\n\nI'm less assured of this idea that you just have a generically smart thing and it's not smart about anything at all in particular. It's just smart in the abstract. And that it's vastly more powerful because it's smart in the abstract compared to things that know a lot of concrete things about particular things.\n\nMost of the employees you have in this firm or in other firms, they are useful not just because they were generically smart creatures but because they learned a particular job. They learned about how to do the job from the experience of other people, on the job and practice and things like that.\n\nEliezer: Well, no. First you needed some very smart people and then you taught them the job.
I don't know what your function over here looks like, but I suspect if you take a bunch of people who are 30 IQ points down the curve and try to teach them the same job, I'm not quite sure what would happen then, but I would guess that your corporation would probably fall a bit in the rankings of financial firms, however those get computed.\n\nRobin: So there's the question of what it means --\n\nEliezer: And 30 IQ points is just like this tiny little mental difference compared to any of the actual, \"we are going to reach in and change around the machinery and give you different brain areas.\" 30 IQ points is nothing and yet it seems to make this very large difference in practical output.\n\nRobin: When we look at people's mental abilities across a wide range of tasks, we do a factor analysis of that, we get the dominant factor, the eigenvector with the biggest eigenvalue, and that we call intelligence. It's the one-dimensional thing that explains the most correlation across different tasks. It doesn't mean that there is therefore an abstract thing that you can build into an abstract thing, a machine, that gives you that factor. It means that actual real humans are correlated that way. And then the question is, what causes that correlation?\n\nThere are many plausible things. One, for example, is simply assortative mating. People who are smart in some ways mate with other people smart in other ways, that produces a correlation \\[indecipherable 1:21:09\\]. Another could be there's just an overall strategy that some minds devote more resources to different kinds of tasks. There doesn't need to be any central abstract thing that you can make a mind do that lets it solve lots of problems simultaneously for there to be this IQ factor of correlation.\n\nEliezer: So then why humans? Why weren't there 20 different species that got good at doing different things?\n\nRobin: We grant that there is something that changed with humans, but that doesn't mean that there's a vast landscape of intelligence you can create that's billions of times smarter than us just by rearranging the architecture. That's the key thing.\n\nEliezer: It seems to me for this particular argument to carry, it's not enough to say you need content. There has to be no master trick to learning or producing content. And there in particular, I can't actually say Bayesian updating because doing it on the full distribution is not computationally tractable. You need to be able to approximate it somehow.\n\nRobin: Right.\n\nEliezer: But nonetheless there's this sort of core trick called learning, or Bayesian updating. And you look at human civilization and there's this core trick called science. It's not that the science of figuring out chemistry was developed in one place and it used something other than the experimental method compared to the science of biology that was developed in another place. Sure, there were specialized skills that were developed afterward. There was also a core insight, and then people practiced the core insight and they started developing further specialized skills over a very short time scale compared to previous civilizations before that insight had occurred.\n\nIt's difficult to look over history and think of a good case where there has been... Where is the absence of the master trick which lets you rapidly generate content? Maybe the agricultural revolution. Maybe for the agricultural revolution...
Well, even for the agricultural revolution, first there's the master trick, \"I'm going to grow plants,\" and then there's developing skills at growing a bunch of different plants.\n\nRobin: There's a large literature on technological and economic innovation, and it basically says the vast majority of innovation is lots of small gains. You can look at locomotives and when locomotives got faster and more energy-efficient. There were lots of particular devices, and they basically went through some curve of how well they got over time. It's basically lots of little steps over time that slowly made them better.\n\nEliezer: Right. But this is what I expect a superintelligence to look like after the sort of initial self-improvement passes and it's doing incremental gains. But in the beginning, there's also these very large insights.\n\nRobin: That's what we're debating. Other questions or concerns?\n\nModerator: Actually, before – Craig, you can take this – can everybody without making a big disruption pass your votes to this side of the room and we can tabulate them and see what the answers are. But continue with the questions.\n\nEliezer: Remember, \"yes\" is this side of the room and \"no\" is that side of the room.\n\n\\[laughter\\]\n\nMan 12: I just wanted to make sure I understood the relevance of some of the things we're talking about. I think you both agree that if the time it takes to get from a machine that's, let's say, a tenth as effective as humans to, let's say, 10 times as effective as humans at whatever these being-smart tasks are, like making better AI or whatever. If that time is shorter, then it's more likely to be localized? Just kind of the sign of the derivative there, is that agreed upon?\n\nEliezer: I think I agree with that.\n\nMan 12: You agree with it.\n\nRobin: I think when you hypothesize this path of going from one-tenth to 10 times –\n\nEliezer: Robin, step up to the microphone.\n\nRobin: – are you hypothesizing a local path where it's doing its own self-improvement or are you hypothesizing a global path where all machines in the world \\[indecipherable 1:24:59\\] ?\n\nMan 12: Let's say that...\n\nEliezer: Robin, step towards the microphone.\n\nRobin: Sorry. \\[laughs\\]\n\nMan 12: Let's say it just turns out to take a fairly small amount of time to get from that one point to the other point.\n\nRobin: But it's a global process?\n\nMan 12: No, I'm saying, how does the fact that it's a short amount of time affect the probability that it's local versus global? Like if you just received that knowledge.\n\nRobin: On time it would be the relative scale of different time scales. If it takes a year but we're in a world economy that doubles every month, then a year is a long time.\n\nMan 12: I'm talking about from one-tenth human power to 10 times. I think we're not yet... we probably don't have an economy at that point that's doubling every month, I would... at least not because of AI.\n\nRobin: The point is that time scale, if that's a global time scale, if the world is... if new issues are showing up every day that are one percent better, then that adds up to that over a period of a year. But if everybody shares those innovations every day, then we have a global development.
If we've got one group that has a development and jumps a factor of two all by itself without any other inputs, then you've got more local development.\n\nEliezer: Is there any industry in which there's a group of people who share innovations with each other and who could punish someone who defected by using the innovations without publishing their own? Is there any industry that works like that?\n\nRobin: But in all industries, in fact, there's a lot of leakage. This is just generically how industries work, how innovation works in our world. People try to keep things secret, but they fail and things leak out. So teams don't, in fact, get that much further ahead of other teams.\n\nEliezer: But if you're willing to spend a bit more money you can keep secrets.\n\nRobin: Why don't they then? Why don't firms actually keep more secrets?\n\nEliezer: The NSA actually does and they succeed.\n\nMan 12: So in summary, you thought it was more likely to be local if it happens faster. You didn't think the opposite –\n\nRobin: It depends on what else you're holding constant. Obviously I agree that holding all the other speeds constant, making that faster, makes it more likely to be local.\n\nEliezer: OK, so holding all other speed constant, increasing the relative speed of something makes it more likely to be local.\n\nRobin: Right.\n\nMan 12: OK. And that's where we get the relevance of whether it's one or two or three key insights versus if it's lots of small things? Because lots of small things will take more time to accumulate.\n\nRobin: Right. And they leak.\n\nMan 12: So in some sense it's easier to leak one key idea like –\n\nRobin: But when?\n\nMan 12: – like Gaussian processes or something, than it is to leak –\n\nEliezer: Shh! \n\nMan 12: a vast database of...\n\n\\[laughter\\]\n\nMan 12: ...knowledge that's all kind of linked together in a useful way.\n\nRobin: Well, it's not about the time scale of the leak. So you have some insights, you have 30 of them that other people don't have, but they have 30 that you don't, so you're leaking and they're spreading across. Your sort of overall advantage might be relatively small, even though you've got 30 things they don't, there's just lots of different ones. When there's one thing, and it's the only one thing that matters, then it's more likely that one team has it and other ones don't at some point.\n\nEliezer: Maybe the singulars who will have like, 5 insights, and then the other 10 insights or whatever, would be published by industry, or something? By people who didn't quite realize that who has these insights is an issue? I mean, I would prefer more secrecy generally, because that gives more of an advantage to localized concentrations of intelligence, which makes me feel slightly better about the outcome.\n\nRobin: The main issue here clearly has to be, how different is this technology from other ones? If we are willing to posit that this is like other familiar technologies, we have a vast experience based on how often one team gets how far ahead of another.\n\nEliezer: And they often get pretty darn far. It seems to me like the history of technology is full of cases where one team gets way, way, way ahead of another team.\n\nRobin: Way ahead on a relatively narrow thing. 
You're imagining getting way ahead on the entire idea of mental capacity.\n\nEliezer: No, I'm just imagining getting ahead on–\n\nRobin: Your machine in the basement gets ahead on everything.\n\nEliezer: No, I'm imagining getting ahead on this relatively narrow, single technology of intelligence. \\[laughs\\]\n\nRobin: I think intelligence is like \"betterness\", right? It's a name for this vast range of things we all care about.\n\nEliezer: And I think it's this sort of machine which has a certain design and churns out better and better stuff.\n\nRobin: But there's this one feature called \"intelligence.\"\n\nEliezer: Well, no. It's this machine you build. Intelligence is described through work that it does, but it's still like an automobile. You could say, \"What is this mysterious forwardness that an automobile possesses?\"\n\nRobin: New York City is a good city. It's a great city. It's a better city. Where do you go to look to see the betterness of New York City? It's just in thousands of little things. There is no one thing that makes New York City better.\n\nEliezer: Right. Whereas I think intelligence is more like a car, it's like a machine, it has a function, it outputs stuff. It's not like a city that's all over the place.\n\n\\[laughter\\]\n\nMan 13: If you could take a standard brain and run it 20 times faster, do you think that's probable? Do you think that won't happen in one place suddenly? If you think that it's possible, why don't you think it'll lead to a local \"foom\"?\n\nRobin: So now we're talking about whole brain emulation scenario? We're talking about brain scans, then, right?\n\nMan 13: Sure. Just as a path to AI.\n\nRobin: If artificial emulations of brains can run 20 times faster than human brains, but no one team can make their emulations run 20 times more cost-effectively than any of the other teams' emulations, then you have a new economy with cheaper emulations, which is more productive, grows faster, and everything, but there's not a local advantage that one group gets over another.\n\nEliezer: I don't know if Carl Shulman talked to you about this, but I think he did an analysis suggesting that, if you can run your ems 10 percent faster, then everyone buys their ems from you as opposed to anyone else, which is itself contradicted to some extent by a recent study, I think it was a McKinsey study, showing that productivity varies between factories by a factor of five and it still takes 10 years for the less efficient ones to go out of business.\n\nRobin: That was on my blog a few days ago.\n\nEliezer: Ah. That explains where I heard about it. \\[laughs\\]\n\nRobin: Of course.\n\nEliezer: But nonetheless, in Carl Shulman's version of this, whoever has ems 10 percent faster soon controls the entire market. Would you agree or disagree that that was likely to happen?\n\nRobin: I think there's always these fears that people have that if one team we're competing with gets a little bit better on something, then they'll take over everything. But it's just a lot harder to take over everything because there's always a lot of different dimensions on which things can be better, and it's hard to be consistently better in a lot of things all at once. Being 10 percent better at one thing is not usually a huge advantage. 
Even being twice as good at one thing is not often that big an advantage.\n\nEliezer: And I think I'll actually concede the point in real life, but only because the market is inefficient.\n\nRobin: Behind you.\n\nModerator: We're...\n\nRobin: Out of time?\n\nModerator: Yeah. I think we try to keep it to 90 minutes and you both have done a great job. Maybe take a couple minutes each to –\n\nRobin: What's the vote?\n\nModerator: I have the results. The pre-wrapping-up comments, but do you both want to take maybe three minutes to sum up your view, or do you just want to pull the plug?\n\nRobin: Sure.\n\nEliezer: Sure.\n\nRobin: I respect Eliezer greatly. He's a smart guy. I'm glad that, if somebody's going to work on this problem, it's him. I agree that there is a chance that it's real. I agree that somebody should be working on it. The issue on which we disagree is how large a probability is this scenario relative to other scenarios that I fear get neglected because this one looks so sexy.\n\nThere is a temptation in science fiction and in lots of fiction to imagine that this one evil genius in the basement lab comes up with this great innovation that lets them perhaps take over the world unless Bond sneaks in and listens to his long speech about why he's going to kill him, et cetera.\n\n\\[laughter\\]\n\nIt's just such an attractive fantasy, but that's just not how innovation typically happens in the world. Real innovation has lots of different sources, usually lots of small pieces. It's rarely big chunks that give huge advantages.\n\nEventually we will have machines that will have lots of mental capacity. They'll be able to do a lot of things. We will move a lot of the content we have in our heads over to these machines. But I don't see the scenario being very likely whereby one guy in a basement suddenly has some grand formula, some grand theory of architecture that allows this machine to grow from being a tiny thing that hardly knows anything to taking over the world in a couple weeks. That requires such vast, powerful architectural advantages for this thing to have that I just don't find it very plausible. I think it's possible, just not very likely. That's the point on which, I guess, we disagree.\n\nI think more attention should go to other disruptive scenarios, whether they're emulations, maybe there'd be a hardware overhang, and other big issues that we should take seriously in these various disruptive future scenarios. I agree that growth could happen very quickly. Growth could go more quickly on a world scale. The issue is, how local will it be?\n\nEliezer: It seems to me that this is all strongly dependent first on the belief that the causes of intelligence get divided up very finely into lots of little pieces that get developed in a wide variety of different places, so that nobody gets an advantage. And second, that if you do get a small advantage, you're only doing a very small fraction of the total intellectual labor going to the problem. So you don't have a nuclear-pile-gone-critical effect, because any given pile is still a very small fraction of all the thinking that's going into AI everywhere.\n\nI'm not quite sure what to say besides, when I look at the world, it doesn't actually look like the world looks like that. I mean, there aren't 20 different species, all of them are good at different aspects of intelligence and have different advantages. g factor's pretty weak evidence, but it exists.
The people talking about g factor do seem to be winning on the experimental predictions test versus the people who previously went around talking about multiple intelligences.\n\nIt's not a very transferable argument, but to the extent that I actually have a grasp of cognitive science and trying to figure out how this works, it does not look like it's sliced into lots of little pieces. It looks like there's a bunch of major systems doing particular tasks, and they're all cooperating with each other. It's sort of like we have a heart, and not 100 little mini-hearts distributed around the body. It might have been a sort of better system, but nonetheless we just have one big heart over there.\n\nIt looks to me like human intelligence is like... that there's really obvious, hugely important things you could do with the first prototype intelligence that actually worked. I expect that the critical thing is going to be the first prototype intelligence that actually works and runs on a two gigahertz processor, and can do little experiments to find out which of its own mental processes work better, and things like that.\n\nThe first AI that really works is already going to have a pretty large advantage relative to the biological system, so the key driver change looks more like somebody builds a prototype, and not like this large existing industry reaches a certain quality level at the point where it is being mainly driven by incremental improvements leaking out of particular organizations.\n\nThere are various issues we did not get into at all, like the extent to which this might still look like a bad thing or not from a human perspective, because even if it's non-local, there's still this particular group that got left behind by the whole thing, which was the ones with the biological brains that couldn't be upgraded at all \\[points at head\\]. And various other things, but I guess that's mostly my summary of where this particular debate seems to stand.\n\nRobin: It's hard to debate you.\n\n\\[applause\\]\n\nEliezer: Thank you very much.\n\nRobin: And the winner is..?\n\nModerator: OK so, in this highly unscientific tally with a number of problems, we started off with 45 for and 40 against. I guess unsurprisingly, very compelling arguments from both parts, fewer people had an opinion.\n\n\\[laughter\\]\n\nModerator: So now we've gone to 33 against and 32 for, so \"against\" lost 7 and \"for\" lost 13. We have a lot more undecided people than before, so \"against\" has it. Thank you very much.\n\n\\[applause\\]", "filename": "Yudkowsky vs Hanson ΓÇö Singularity Debate-by Jane Street-video_id TuXl-iidnFY-date 20110101.md", "id": "bf68049d2ad5b607068475deccbe789a", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "individuallyselected_92iem-by Vael Gates-date 20220321", "authors": ["Vael Gates"], "date_published": "2022-03-21", "text": "# Interview with AI Researchers individuallyselected_92iem by Vael Gates\n\n**Interview with 92iem, on 3/21/22**\n\n**0:00:05.2 Vael:** Alright, here we are. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:15.2 Interviewee:** Currently, I work a lot with language models, but that wasn\\'t always the case.\n\n\\[\\...\\]\n\n**0:00:57.4 Vael:** Great, thanks. And then what are you most excited about in AI? And what are you most worried about? 
In other words, what are the biggest benefits or risks of AI?\n\n**0:01:08.5 Interviewee:** So obviously, I think the progress in language models in the last couple of years has been pretty astounding. And the fact that we can interact with these models in more or less in the natural way that we would like to interact with it just has opened up so much in terms of getting feedback from humans and stuff like that. So I think just the progress in language models, and then coupled with that, the more recent progress in using essentially some of the same techniques to do image modeling, so that you have the possibility to do just seamless multi-modal models. I think that\\'s quite exciting. Some people think that\\... You know, it\\'s not like most of us can just paint a photographic scene and show it to other people. So it\\'s not like\\-- the photographic aspects of generative image models is not what excites me, it\\'s the fact that humans manage to communicate quite a bit with diagrams and stuff like that. When we\\'re doing science, you can draw little stick figures and pretty much convey what you need to convey, and that coupled with natural language should give us the ability to start thinking about getting AI to do math and science for us, and I think that\\'s the thing that is most exciting to me.\n\nSo I know that a lot of people are excited by the idea that you can essentially have a Google that\\'s a bit\\... It\\'s smarter, right? You can just talk with it and say, Hey, tell me a bit about this tree, and AI says something and you say, Oh, but what about that tree? That\\'s fun, but I really feel like humans are not bottlenecked by the inability to ask about trees and buildings and trivia, essentially. I think where we\\'re bottlenecked is like progress in science. I think, for example, so it\\'s pretty clear that the political solution to climate change\\-- the time for that has kind of come and gone. I mean, we can slow it down. If we, like the whole world, suddenly decided to say we\\'re going to do something about this, maybe you slow it down, but I think just the timing is a little bit off. So a lot of that\\'s going to be have to be a technological solution. And as amazing as technological progress has been, I think we\\'re not fast enough when it comes to developing solutions to a lot of our problems. And I do think in 10, 20 years, AI is going to play a big role, both in the specialized domain in the sense of AlphaFold, where you really just come up with a system that does the thing you want it to do, but more impactfully, perhaps, by having an army of grad student-equivalent language models that can help you answer questions that you need answered. So that\\'s very exciting, right.\n\n**0:04:24.1 Vael:** Yeah. It\\'s a cool vision.\n\n**0:04:26.5 Interviewee:** I think the risks are\\... It\\'s almost banal, right? Like with most technologies bad actors can make arbitrarily bad use of these things. So yeah, when they start weaponizing these things\\... I\\'m a little bit less concerned than some people are about like, Oh, but what if we have AIs that write fake news. Like all of that is to some extent present now, and I guess it\\'s just a question of degree, to some extent. Okay, people argue that that difference in degree matters, and they\\'re not necessarily wrong. I just, the thing that bothers me more definitely is very specific, malicious uses of AI. 
So there was a recent paper, this is so obvious that it\\'s almost dumb, but someone said, Oh, yeah, we put an AI to trying to develop a drug that, let\\'s say, reduces the amount of poison, and all you have to do is change the objective function, flip the sign and suddenly it just optimizes for the most poisonous thing you can possibly find. That coupled with technologies like CRISPR and stuff like that just creates a pretty dangerous\\... puts very dangerous tools at people\\'s disposal. So I would say that\\'s the thing that I would worry about.\n\n**0:05:54.9 Vael:** I have been impressed by how everyone I\\'ve talked to in the past week has mentioned that paper, and I\\'m like, good, things get around.\n\n**0:06:01.8 Interviewee:** Well, Twitter. Thanks to Twitter.\n\n**0:06:04.7 Vael:** Nice. Alright, so focusing on future AI, putting on a science fiction forecasting hat, say we\\'re 50-plus years into the future. So at least 50 years in the future, what does that future look like?\n\n0:06:17:9 Interviewee: For AI?\n\n0:06:20:5 Vael: In general, where if AI\\'s important, then mention that.\n\n**0:06:27.4 Interviewee:** I see. So 50 years, oh my God. Fifty years is long time away. Assuming that we\\'ve managed not to have nuclear conflicts between now and then, which is just one of those things that now you have to put at least a one digit probability on these days. But, yeah, I think that we will end up having\\... Well. The optimistic scenario is that we ended up solving a few key problems. One is transitioning mostly out of fossil fuel, so a combination of solar and fusion power. I think that\\'s going to be huge, and I think that AI will have played a role in some of that development. And I think 50 years from now, I think unless we are monumentally blocked in the next couple of years, AI will be pretty omnipresent in our lives, and certainly in the scientific sectors. So one thing that I\\'m a little bit, just something that comes to mind, is that a lot of people are into this idea of these sort of augmented\\... I don\\'t know if people are literally willing to wear glasses, but certainly you could imagine having little ear buds that are fairly unobtrusive that go around your ears or something, and they do have a camera, so you can just ask it, whatever you need, you can ask it questions.\n\nIn 50 years, I think at that point, maybe some people will have worked out direct neural interfaces with stuff, and so maybe the more adventurous people will have a bit of augmented memory or at least the ability to sort of silently query their little augmented system. I think that might be a thing. Not everyone will have adopted it, I think it\\'ll be a weird world. I personally\\-- I\\'ve never been a huge, like the fastest adopter of technology, but that sort of stuff is next level, and I don\\'t know what that\\'s going to look like.\n\nI also\\... well, two things, I guess they\\'re kind of linked. I think that people will live substantially longer. I think, unless something miraculous happens, I don\\'t think they\\'ll be living like 200, 300 years, but I certainly think it\\'s possible people will be living to 150 or something like that. Not people born now; I\\'m not going to live to 150. Someone was telling me that people born, these days, they\\'re going to live to see the year 2100, right. 
That\\'s not quite in the 50-year time frame, but yeah, I certainly think people born today are going to be living, like their average lifespan in industrialized countries, assuming a certain level of privilege, they\\'re going to be able to live quite a bit longer. That coupled with AI possibly automating quite a few jobs is going to change the social landscape a bit.\n\nOne thing that occurred to me recently\\... so people used to say that\\-- well, people say many things\\-- one is that, this, unlike industrialization\\... Some people always say technological progress destroy some jobs but creates more jobs on the other side. And then some say, Okay, but this one is different because you\\'re automating intelligence and that really does put humans out of their main forte. So one of the things that people worry most about, in addition to the universal-based income stuff, is just the loss of dignity, that people always assume that even people who don\\'t have what you would call glamorous jobs value the fact that they work and get paid for that work. But I think some of the stuff that happened during Covid makes me doubt that a little bit, in the sense that people did quit jobs that ostensibly looked good. Even in the tech sector where people, I felt like generally, they\\'re not the worst jobs by any stretch, and they were like, No, this is meaningless, I want to go do something meaningful with my life. So I think the recent, the past couple of years have made me question the idea that it would be that big of a psychological blow for people to not work for money. That if you did establish a universal basic income, plus you\\'d have to solve some other, many complicated issues, but I don\\'t think people will be that unhappy to be not having to work menial jobs. I\\'m not saying there\\'s not going to be upheaval, but I think it\\'s going to be like a combination of living longer, and not possibly having to do jobs if you don\\'t want to do them. I think that\\'s just going to be, I don\\'t know. It might be a nice change. In the optimistic scenario, I guess.\n\n**0:11:35.9 Vael:** Got it. Yeah. Well, my next question is related to that. So people talk about the promise of AI, by which they mean many things, but one of them is maybe having a very general capable system such that it will have cognitive capacities to replace all current day human jobs. So you might have a CEO AI or a scientist AI. Whether or not they choose to replace human jobs is different, but have the ability to do so. And I usually think about that and the fact that 2012 we have AlexNet, deep learning revolution, here we are, 10 years later, we\\'ve got things like GPT-3, which can do some language translation and some text generation, coding, math, etcetera, a lot of weirdly general capabilities. And then we now have nations competing and people competing and young people going into this thing, and lots of algorithmic improvements and hardware improvement, maybe we get optical, maybe we get quantum, lots of things happening. And so we might actually just end up being able to scale to very general systems or we might hit some sort of ceiling and need to do a paradigm shift. But regardless of how we do that, do you think we\\'ll ever get very general AI systems, like a CEO or a scientist AI, and if so, when?\n\n**0:12:39.1 Interviewee:** I don\\'t know about CEO AIs. The scientist AIs, yes. Yeah, and that\\'s going to come in stages. 
So obviously the current generation of AIs, we don\\'t put them in human bodies and let them do experiments and stuff like that, right. It\\'s going to be a while before we start letting them operate like particle accelerators. Fifty years\\... Maybe in 50 years. My original background is \\[non-AI field\\], and I really could have just done my entire PhD from a desk, and that sort of work, certainly, AI can replace, I think, to a huge degree, from idea generation to solving the answer and writing a paper, yeah, that just feels so doable. Again, unless we hit a giant wall and find that our current transformers simply cannot reason, but I think that looks unlikely. I don\\'t rule it out, but that looks unlikely to me.\n\n**0:13:46.8 Vael:** Yeah. Okay, what about this CEO AI, like with multi-step planning, can do social inference, is modeling other people modeling it, like crazy, crazy amount of generality. When do you think we\\'ll get that, if we will?\n\n**0:14:00.7 Interviewee:** Yeah, that\\'s not the part that I\\'m worried about. AI can certainly model human intent, but\\... I guess it depends on what you want from your CEO AI. And this I think gets at a little bit one of my dissatisfactions with discussions about human\\-- like, AI alignment. It\\'s not that people don\\'t talk about it, but it\\'s rarely talked about. I don\\'t know, on Twitter certainly. A lot of AI alignment stuff talks about\\-- they \\*don\\'t\\* talk about the fact that humans disagree wildly on what humans should do. So I\\'m thinking about this in connection with the CEO, because I think in the limit, AI will be able to do anything, any specific thing you ask the AI to do, it can do, but the question of whether you would want the AI to be CEO, I think that\\'s mostly a human question. So that\\'s why I said\\-- I think that\\'s a policy decision, not a AI capability question.\n\n**0:15:25.3 Vael:** Got it, yeah. Do you think that people will end up wanting\\... that there will be economic incentives such that we\\'ll eventually have things like CEO AIs?\n\n**0:15:36.0 Interviewee:** I guess in some sense, no, because I think a human would still be the CEO and then you would have your AI consultant, essentially, that you would ask all the things. You would delegate almost everything, but I think that people would still want to be at their very apex of a corporate hierarchy. It seems weird to put a robot in charge of that, just like\\... why. It\\'s a title thing, almost, like, why would you make the robot the CEO?\n\n**0:16:03.2 Vael:** Yeah, yeah. In some vision of the future I have, I have the vision of a CEO AI\\... we have a CEO AI and then we have shareholders, which are the humans, and we\\'re like, \\\"Alright, AI, I want you to make a company for me and earn a lot of money and try not to harm people and try not to exploit people and try to avoid side effects, and then pass all your decisions, your major decisions through us\\\" and then we\\'ll just sit here and you will make money. And I can imagine that might end up happening, something like that, especially if everyone else has AI is doing this or AIs are way more intelligent and can think faster and do things much faster than a human can. I don\\'t know, this is like a different future kind of idea, but.\n\n**0:16:46.8 Interviewee:** But that seems so weird. Because, then\\-- so, assuming\\... 
I don\\'t know if everybody has access to the same AI in that scenario, but like it can\\'t be the case that 100 people all say to their own individual AI, \\\" Form a company and turn it into a \\$100 billion company or a \\$1 trillion company\\\", and they all go out at optimizing. I think at that point, in that kind of world, I think there would have to be a bit more coordination in terms of what goes on, because that just creates some nasty possibilities in terms of bringing the economy down. So I don\\'t know that that\\'s how things would just happen. It cannot be the case that we would just say, \\\"Robot, figure out how to make a trillion dollar company. I\\'ll give you this one idea and just run with it,\\\" and then just like we are hands-off. That seems extremely unlikely, somehow.\n\n**0:17:38.9 Vael:** Yeah, I\\'m interested in how that seems very unlikely. It seems like to me\\... Well, we were talking about scientist AI, and I imagine we can eventually tell a science AI to like solve cancer, and maybe it will actually succeed at that or something. And it seems like it\\'s different, to be like, Hey, CEO, make a ton of money for me. Is that getting at any of the underlying thing or not?\n\n**0:18:08.4 Interviewee:** Uh, hm. Yeah, so I think even there, I think you would never tell an AI to \\\"solve cancer\\\". Well, yeah\\... You would want to give it more specific goals, and I think\\... In any scenario where we have full control over our AIs, we wouldn\\'t want such vague instructions to be turned into plans. That\\'s a scary world, where you can just say solve cancer and the robot runs with it. I think for the same reason, I don\\'t think you would want a world where someone can say, \\\"AI, make a lot of money for me,\\\" and that\\'s the only instruction the AI has, and it\\'s allowed to intervene in the world with those instructions. So yeah, that\\'s why I don\\'t see, just like from a sanity perspective, you would\\-- you never want to unleash AI in that manner, in such a vague and uncontrolled manner. Does that make sense?\n\n**0:19:03.6 Vael:** Yeah, that makes sense that you wouldn\\'t want to be\\... because it\\'s very unsafe, it sounds like, or it could be\\--\n\n**0:19:09.7 Interviewee:** Yeah, kind of insanely unsafe, but\\...\n\n**0:19:14.8 Vael:** Nice. Yeah, do you think people might end up doing it anyway? Sometimes I feel like people do unwise things in the pursuit of, especially unilateral actors, in the pursuit of earning money, for example. Like, Oh, I\\'ve got the first scientist AI, I\\'m going to use it to solve the thing.\n\n**0:19:35.3 Interviewee:** That\\'s a good question. I think, I really do think you would want\\... Yeah, I wonder about how you would actually enforce any kind of laws on AI technology. It\\'s the most complicated thing to enforce, because nuclear weapons\\-- One of the nice things about nuclear weapons is it\\'s actually pretty hard to develop nuclear weapons in secrecy without releasing any radiation, that\\'s one of its few good points. I think AI, it\\'s true that you could just develop and run it. But I think at the point where any AI has to interface with the real world, whether it\\'s in the stock market or something like that, I do think that people will start seeing the need for finding ways to regulate the speed. Even high frequency trading is starting to be, like you can\\'t interact with it, any kind of stock market in less than one nanosecond or something like that. 
I think similarly, there\\'s just going to be some guardrails put in place. If there\\'s any kind of sanity in terms of policymaking at that time, you would want guardrails in place where you could not unleash AI with such large powers to affect a large part of the world with minimal intervention powers. Yeah. This is all assuming there\\'s a sane policymaking environment here, but\\...\n\n**0:21:04.8 Vael:** Yeah. Do you think there will be?\n\n**0:21:09.3 Interviewee:** I think so. I think so, I\\'m hopeful in that regard. I\\'m not saying that Congress is ever going to really understand the nuances of how AI works, anything like that, I just think there would be too many\\... Even in a world where only OpenAI and DeepMind have full AGI, I don\\'t think they\\'d want to create a world where one of them can unleash something at the level that you described. And I also think that when those two companies get close, they\\'re going to wonder if other states, say, Russia or China, are going to be close, and they\\'re going to start wanting to really hammer down, hammer out\\... like there will be a sense of urgency, and hopefully they have enough influence to influence policymakers to say, \\\"You need to take this seriously.\\\" And this is where I think almost the fact that it takes\\... Okay, I said earlier that, you know what, the nice thing about nuclear weapons is that you could detect it, but I think one of the nice things about the fact that right now, it looks like you\\'re going to require enormous compute to get anything that is remotely AGI. That\\'s the thing that allows maybe\\... That means the only huge corporations or states will be able to do it for at least some period of time, and hopefully those are the same actors that can somehow influence policymaking. If there were just one person, if they just had the ability to do that, it would be a little bit problematic, actually. So in some sense, because these institutions are big, I think they\\'re going to be both constrained a bit more in terms of what they can do, and also they\\'re going to be able to, if they are well-intentioned, to influence policymaking in a good direction.\n\n**0:23:06.2 Vael:** Do you think they\\'ll be able to do international cooperation? Because I imagine China will also have some AI companies that are also kind of close, I don\\'t know how close they will be, but\\...\n\n**0:23:17.7 Interviewee:** They\\'ll try. I don\\'t know that China will listen to the US or Europe. I agree that\\'s not going to be easy, yeah. Who knows what they\\'re up to exactly, there.\n\n**0:23:31.0 Vael:** Yeah, it seems like they\\'re certainly trying, so\\... Yeah, another one of my questions is like, have you thought much about policy or what kind of policies you want to have in place if we are getting ever closer to AGI?\n\n**0:23:48.8 Interviewee:** Actually, I haven\\'t given it that much thought, what the laws would specifically look like. What I don\\'t think is really possible is something like the government says, You now need to hand over control over this to us. I don\\'t think that\\'s super feasible. Yeah, I can\\'t say I have a good idea for what the laws would specifically look like. I think as a starting point, they\\'ll certainly create some kind of agency to specifically monitor\\... Actually, right now, there\\'s no agency like the SEC or something like that that monitors what exactly goes on in AI. 
I mean, there\\'s some scattering of regulations probably somewhere, some vague export controls and stuff like that. But yeah, they\\'d certainly start creating an agency for it, and their mandate would start to grow. I think it might, again, have to be something like what we do with nuclear reactors, where you have an agency that has experts inside of it, and that they are allowed to go into companies and kind of investigate what\\'s going on inside, just as, if Iran is developing nuclear weapons and they agree to let inspectors in. I think it\\'s going to be up to something like that. And then, yeah, similar to these nuclear treaties, perhaps there would have to be something along the lines of like\\... there are certain lines you cannot cross with AI, and if someone does cross it, that institution or the country as a whole gets sanctioned. It\\'s going to have to be at that level. Certainly, given the power of the putative AI that we\\'re thinking about. I think the regulations are going to have to be quite dramatic if it\\'s going to have any kind of effect.\n\n**0:25:45.1 Vael:** Yeah. One thing I think that is a difference between the nuclear situation and the AI situation is that nuclear stuff, seems not very dual use. Well, nuclear weapons, at least, not very dual use. Versus like AI has a lot of possible public benefit and a lot of economic incentives, versus like you don\\'t get, I don\\'t know, you don\\'t benefit the public by deploying nuclear weapons.\n\n**0:26:05.7 Interviewee:** But nuclear reactors, but that\\'s the whole\\--\n\n**0:26:07.9 Vael:** Nuclear\\-- Yes, you could\\--\n\n**0:26:09.0 Interviewee:** That\\'s the whole\\... So Iran would always pretend that, Hey, we\\'re just developing nuclear reactors for power. Just the problem is that was always very easily converted to nuclear weapons. I think that could be a similar---\n\n**0:26:24.9 Vael:** Yeah, yeah, it is similar in that way. Somehow it still feels to me that these situations are not quite analogous, in that the regulations are going to be pretty different when you\\'re like, \\\"I am going to make sure that you\\'re not doing anything bad in this area,\\\" and people are like, \\\"ah, yes, but we need to get the new smartphone, scientist AI, etcetera.\\\" But yeah, I take your point. Another thing that I think is interesting is that current day systems are really pretty uninterpretable, so you\\'re like, \\\"Alright, well, we have to draw some lines, where are we going to draw the line?\\\" What is an example of what a line could be, because if there\\'s government inspectors coming in to DeepMind and you\\'re like, \\\"Alright, now inspect,\\\" I\\'m like, what are the inspecting?\n\n**0:27:11.2 Interviewee:** Yeah, so when you say interpr\\-- so that\\'s another thing about\\... one of my pet peeves about interpretability. People are not that interpretable. People hardly, rarely know what\\'s going on in other people\\'s heads, and they can tell you something which may or may not be true, sometimes they\\'re lying to you, and sometimes they might be lying to themselves. When a doctor tells you, \\\"This is what we\\'re doing,\\\" unless you\\'re another doctor, you rarely understand what they\\'re saying. And so, yeah, this is a total tangent on like my\\... the thing around, the discussion around interpretability is always such a mess. But what are they inspecting? 
If we\\'re imagining inspectors, they could certainly go in and say, like, if it\\'s a language model, you can certainly allow them to query the language model and see what kind of answers, what kind of capabilities these language models have. You could say, if it\\'s a language model, just totally hypothetically, you could say, \\\"Alright, develop me, write me a formula for a bioweapon,\\\" and if the language model just gives that to you, then possibly you have a problem. Stuff like that. So if a company that has that capability hasn\\'t put in the required fail-safes like that, then they can be held liable for X amount of problem, the trouble, right.\n\n**0:28:58.3 Vael:** Interesting. Cool, so that\\'s cool. You\\'ve got like a model of what sort of rules should be in place, and it sounds like there should be rules in place where you can\\'t develop bioweapons or you can\\'t feed humans bioweapons when they ask for them.\n\n**0:29:12.0 Interviewee:** Yeah, stuff like that. In this inspector model, I think that\\'s what would kind of have to happen. But yeah, it\\'s not like I\\'m an expert in this, but that\\'s what I would think.\n\n**0:29:24.4 Vael:** Yeah, something I\\'m worried about is that no one is an expert in this. Like policymakers\\-- when I talk to the AI researchers, they\\'re like, Oh, yes, the policymakers will take care of it, and I\\'m like, the policymakers are busy, and they\\'re doing many different things, there\\'s not many people who are focused singularly on AI. Also, they\\'re mostly focusing on current day systems at the moment, so like surveillance and bias and transparency, and like a bunch of different things, so they\\'re not really thinking very future at the moment. And they don\\'t know what to do because they don\\'t understand the technology, because the technology moves extremely fast, right, and so like AI researchers are the ones who know it. And I\\'m like, Alright, AI researchers, what should we tell them to do. You\\'re like, Well, we should make a list of things that the AI shouldn\\'t do, like basic fail-safes. And I\\'m like, Great, it would be super cool if that was written out somewhere and then we can start advocating for it or something, because I\\'m worried that the policy will just continue to lag really far behind the actual tech levels, except where like\\... Policy is already several years behind, maybe like 10 years or something, and will continue to be that far behind even as we\\'re approaching the very powerful AI.\n\n**0:30:31.9 Interviewee:** Yeah, so a couple of things there. One is that that\\'s why you need more of an agency model rather than laws, because creating laws is very, very slow, whereas an agency can drop some rules and maybe they start enforcing them. And so you do need a sensible agency that doesn\\'t create bad rules, but the ability to be flexible. That said, I think\\... The biggest problem with policymaking right now is that the policymakers don\\'t understand AI at all, right. And you sort of hinted at that. And I think\\... If I\\'d asked myself, at this moment in time, is there anything that, any rule that we need at this moment in time, I\\'m not sure there is. AIs are not there yet.\n\n**0:31:25.2 Interviewee:** So at this moment in time, I think if you ask most researchers, \\\"Hey, do we need to create specific laws to prevent X, Y, Z,\\\" I\\'m not sure many people would tell you, you need that. 
And so these laws, I think, are going to have to come in at very sensible points, and it\\'s not clear to me that the policymakers are going to know when that time point is. I would say even in the AI field, very few people know when that\\'s going to be. There\\'s a lot of stuff coming out of especially big labs where the world doesn\\'t know. There\\'s like 100 people that know what\\'s coming in the next year. I don\\'t know what a good solution to that is.\n\n**0:32:17.5 Vael:** Especially if we can get AIs that can generate lots of deadly poisons already. Yeah, I think it\\'ll maybe be hard to tell, and then also one needs to develop a list, if there\\'s going to be in list form or\\...\n\n**0:32:32.2 Interviewee:** The problem is, I think it\\'s easier to regulate general AI just because it\\'s going to require so much compute. But I think more specific AI that anyone can run on a GPU, like on a laptop, is more or less impossible to regulate. So it\\'s not clear to me what the law would be, except if you use a bioweapon, you\\'re in trouble. That law already exists, right.\n\n**0:33:00.5 Vael:** Yeah, I think that one already exists, so\\...\n\n**0:33:05.4 Interviewee:** So I think in some sense, like kind of in the trade-off of what can the technology do right now, and who might try to deploy that, our laws sort of cover the problem cases at the moment. I think where I get a little bit stuck is if you try to say, \\\"Alright, in five years, should we have laws banning certain uses of a very, very capable general model?\\\" I do think at that point, Congress should seriously consider creating a regulatory agency. And I think AI researchers will only support this if there\\'s some semblance of like, kind of like NASA, where there\\'s some faith that engineers are in charge of this thing, that kind of know how these systems work, that they can think rationally about both the technological side and the policy side of things. And so that\\'s going to take some work on the side of whatever administration is in power at that time. But yeah, it\\'s not going to be easy. I think it\\'s going to take a very capable administration to handle that transition gracefully.\n\n**0:34:19.5 Vael:** Yeah, that makes sense. Yeah, I\\'m worried about a few different things in this future scenario. I\\'m like, Okay, I don\\'t know if the agency will be developed while\\-- in a sort of future thinking sort of way, I don\\'t know that it will implement the right type of policies, I don\\'t know that it will have the power to really enforce those policies, I don\\'t know if it will have the power to enforce internationally. But I do like the idea that\\-- but obviously one should still try, and it seems like there should probably be a lot of effort going into this, as you said, something like on a five-year scale.\n\n**0:34:49.2 Interviewee:** Yeah, it\\'s just that knowing AI researchers, there\\'s just going to be such extreme pushback. If there\\'s any sense that there\\'s been a bureaucracy created whose job is nothing more than to just slow things down for no good reason. That\\'s almost a default kind of way in which such an agency would get created, and so, yeah, it\\'s just one of the situations where you have to hope that the future leaders of America are smart.\n\n**0:35:24.7 Vael:** Yeah. Yep. A thing to bank on. Cool. So I\\'m concerned about long-term risks of AI. 
That\\'s one of the ways in which I\\'m concerned, is that we won\\'t get the policy right, especially as we\\'re doing international competition, that there may be race dynamics, as we\\'re not able to have really strong international governance. And I don\\'t know if this will go well, and I\\'m like, I think people should work on this.\n\nBut another way I think that things might not work: So we talked a little bit about the alignment problem. And another interesting thing about the alignment problem is\\... or in my mind, so we\\'ve got maybe a CEO AI, or whatever kind of AI, but this is the example I\\'ve been working with, and it\\'s making plans and it has to report any decisions it makes to its shareholders, who are humans, and the humans are like, \\\"I want a one-page memo.\\\" And the AI is like, \\\"Okay, cool, one-page memo. I have a lot of information in my brain, in my neural network, while I\\'m trying to maximize profits with some other goals\\-- with some other constraints.\\\" And it\\'s noticing that if it gives certain information to humans, then the humans are more likely to shut it down, which means that it\\'s less likely to succeed in its goal. And so it may write this memo and leave out some information so that it decreases likelihood of being shut down, increases the likelihood of achieving its goal. So this is not a story where we\\'re building in self-preservation into the AI, but a story in which\\-- why instrumental incentives of an agent achieving, trying to go for anything that is not perfectly aligned with human values, just like what humans tell it to do instead of what humans intended it to, then you might get an AI that is now optimizing against humans in some degree, trying to lie to them or deceive them in order to achieve whatever it has been programmed to do. Do you think this is a problem?\n\n**0:37:08.2 Interviewee:** The scenario you described was exactly what human CEOs do.\n\n**0:37:12.7 Vael:** Hm, yes. But more powerful systems, I think, with more influence over many things.\n\n**0:37:20.6 Interviewee:** So this is the problem\\-- so I think this actually still is a human problem. So if a human being\\... like these AIs will be, depending on the mix of reward for not getting shut down and\\... at the kind of detailed level these days, we often\\... When we do RL with language models, we have two things going on, one is an RL objective, maximize the reward as much as you can, but the other objective is to tie it to the original language model so it doesn\\'t diverge too much. In which case, if you are writing a memo, it would try to write a memo in the style that a human would write it, let\\'s say. So the information content would be somewhat constrained by what a typical memo written by a human being would look like, and then on top of that, it would try to optimize what it is trying to do, maybe just trying to keep the company alive for as long as it can or something like that. So there is that sort of like, at least the way we do things now, there\\'s a little bit of self-regulation built in there. But this is why I think, more fundamentally\\... any question where if you just replace the AI with a human and ask the same question: Is this a problem or not a problem? I think that\\'s more or less a human problem. And you have to think a bit more carefully about what we would want a human to do in that exact same situation. Do we have an answer for that? And then take into account the fact that the AI is more powerful. 
You don\\'t need a super devious AI for a CEO to start lying to their shareholders a little bit, or misleading their shareholders a little bit, in order to present a more rosy picture of what the company is doing. So do we already have mechanisms that prevent that? I think we do, and that same thing would apply to the AI.\n\n**0:39:22.1 Vael:** Yeah. I think the things that are interesting to me about the AI scenario is that we have the option of\\... we are designing the AIs, so we could make them not be this way. And also having an AI that has a lot, lot more power, that is as powerful as a leader of one of the countries, and that has the ability to copy itself and could do self-improvement, so it can be smarter than it started out with. And okay, we\\'ve got something\\'s possibly smarter than us, which is like the ability to reason and plan well, and has the incentive to acquire resources and influence and has all the human kind of incentives here, and we can\\'t\\-- and it\\'s not as\\-- I don\\'t know, it\\'s maybe not as interpretable as a human, but you can\\'t throw it into jail. Like, I don\\'t know, there\\'s a lot of the mechanisms for control, I think, are maybe not there.\n\n**0:40:12.7 Interviewee:** Yeah, so it\\'s in this sort of legal context that I think you would not want the AI to be a CEO or any\\... There has to be something\\... For something like this, you would want the person\\... There should be a person who\\'s liable for the decision being made by the AI. You have to do some due diligence to the answers that the AI gives you. There\\'s no other way. Yeah.\n\n**0:40:40.2 Vael:** There\\'s generally a thing in some of your answers where you\\'re like, Well, you know, any reasonable person would do X, and I\\'m like, I don\\'t know if we\\'re in a world where we\\'ve got a bunch of just reasonable people putting in appropriate fail-safes which they\\'ve spent a long time constructing. And some of these fail-safes, I think, might be very technically difficult. I think the alignment problem might be quite technically difficult, such that researchers who are working on capabilities would get ahead even as the people working on safety mechanisms per se is also growing, but at less speed as all the capabilities researchers are pushing forward. Such that we might have an imbalance in how many people are working on each thing.\n\n**0:41:16.3 Interviewee:** Yeah, so I guess maybe I\\'m thinking of two different things. One is just the sheer\\-- kind of like the question of just putting rules in. The other question that I often have with these discussions is, Does a sensible answer exist to the question at all? So imagine, okay, so imagine we replace the CEO with\\... Imagine we replace Mark Zuckerberg with a very, very smart AI. And this very smart AI is posed with this question of, okay, there is a photo of a naked child, but it\\'s in the context of a war, it\\'s a war photograph. Should this photo be allowed on Facebook or not? The CEO cannot\\... It doesn\\'t really matter how smart the AI is, this is just not a question the AI can answer, in the sense that it\\'s an indelibly human question. That\\'s why\\-- I just think there are certain questions where when we posit a incredibly intelligent AI, it\\'s got nothing to do with that. It\\'s just a question of what a group of people who\\... A group of humans who disagree on what the final answer should be. In that scenario, there\\'s no right answer for the AI. 
There\\'s nothing the AI can do in that scenario that is the correct answer.\n\n**0:42:46.4 Vael:** Yeah. I think in my vision of what I want for AI in the future, I want AIs that do what humans intend them to do, so I want the alignment problem to be solved in some way, and I want it to all involve a huge amount of human feedback. So for every question that is confusing or the AI doesn\\'t know what to do, if it hasn\\'t internalized human values, then I want it to ask a bunch of humans, or maybe we have some way to aggregate human opinions or something. And then we have an AI that is reflecting human values and preferences, so if humans are confused about this particular issue, then I don\\'t know, maybe the default if humans disagree is not to publish\\[?\\]. But in general, just having some sort of checking mechanism. The thing that I\\'m worried will happen by default is that we\\'ll have an AI that is optimizing for something that\\'s sort of right, but not quite right, and then it will just kind of now do like whatever things we put into it\\-- whatever optimization goals we would put into it will be kind of locked in, and so that we\\'ll eventually get an AI that is doing something kind of analogous to the recommender algorithms thing, where recommender algorithms are sort of addictive and they\\'re optimizing something\\-- clickthrough rate\\-- that\\'s kind of close to what humans value, but isn\\'t quite. And then we might have an AI that is just like, A-ha, I am now incentivized to deceive humans to gain control, to gain influence, to do self-improvement, and we\\'ve sort of lost control of it while it\\'s doing something that\\'s like almost but not quite what we want.\n\n**0:44:05.6 Interviewee:** I think one thing that comes to mind, actually\\...so this kinda goes back to the interpretability question, but I think it may be a slightly different angle on it. I think it\\'s going to have to be the case where when an AI makes a decision of that sort, it should output almost a disclaimer. So the way credit card companies would write you this long disclaimer. And it would have to tell you for each decision it makes, what the risks are, and then a human has to read that and sign off on it. Now, the question is going to be, the other problem with credit card disclaimers is that they were so long that the average person couldn\\'t read it and make sense of what the hell was going on. So the AI would be somewhat required to come up with a comprehensible set of disclaimers that say, Okay, I asked a bunch of people, they said this, but obviously we shouldn\\'t always listen to what the majority says. I also consulted some moral ethicist or some ethicists, and I synthesized the combination of the ethicists, previous precedents, and what the general public wants. I recommend that given the combination of these three factors, you should do this. And then a person should sign off on it, and then that person in some sense should be liable to the extent that the AI gave a reasonable summary of the decision factors. So something along those lines.\n\n**0:45:32.8 Vael:** Yeah, that sounds brilliant. I would be so excited if AI in the future had that. 
I\\'m like, Wow, we have an AI that is incentivized to instead make things as maximally clear and comprehensible and taking into account what the human wants and listing out of things, I\\'m like, If we solve that, if we have the technical problem to solve that, I\\'m like, wow, amazing.\n\n**0:45:52.6 Interviewee:** I think the key point here is at some point, the human has to be held liable for it, so that they have an incentive to only use AIs that satisfy this condition. Otherwise there\\'s no reason for the.. because, like you say, you can\\'t put the AI in jail, so. At some point you have to put the onus on humans. I think this is something that like even Tesla\\'s going to have to think about. At some point, I mean\\... I fully believe statistically, they\\'ll reduce the number of accidents, but accidents will happen, sometimes the car will be the responsible party. At that point, you can\\'t just throw up your hands and say no one was at fault, right? So if Tesla is willing to deploy their cars for self-driving, they are going to have to start taking liability, and that\\'s going to force them to confront some of these same issues and say, Did the AI give a reasonable estimation of if we take this road\\...? It has to be able to say, or like a surgical robot, it has to be able to say the same thing that doctors do, \\\"Listen, I\\'m going to perform this operation, it\\'s the best chance you have, but there is a 10% chance that you\\'re going to die. If you\\'re comfortable with this, if you\\'re comfortable signing off on this, I will do my best,\\\" and only in that scenario is the doctor allowed to be forgiven if the operation goes wrong.\n\n**0:47:20.6 Vael:** Yeah, so a part of my thinking I\\'m noticing is that\\... Um\\... So I think you\\'re very interested in problems of misuse, which I\\'m also interested in, but I think I\\'m also interested in the problem of, like, I think that it will just be technically hard in order to incentivize an AI to not try to optimize on \\[hard to parse\\] but to like, be able to take\\... So currently, we\\'re quite bad at taking human preferences and goals and values and putting those in a mathematical formulation that AIs can optimize, and I think that problem might just be really, really hard. So we might just have an AI that won\\'t even give us anything reasonable, and I\\'m like, Oh, well, that seems like step one. And then there\\'s also a bunch of governance and a bunch of incentives that need to be put in place in terms of holding people accountable, since humans will hopefully be the end user of these things.\n\n**0:48:07.8 Interviewee:** I\\'m actually far less worried about the technical side of this. I just finished reading this book about von Neumann, that\\'s a little cute biography of him, and there\\'s a part where he says, supposedly, that people who think mathematics is complicated only say that because they don\\'t know how complicated life is. And I\\'m totally messing with the phrasing, but something like that. I actually think any technical problems in this area will be solved relatively easily compared to the problem of figuring out what human values we want to insert into these.\n\n**0:48:46.4 Vael:** Okay, so you think it\\'s the taking human values and putting into AI, that technical problem will just get solved?\n\n**0:48:54.8 Interviewee:** If you know what values you want to put in, yeah.\n\n**0:48:56.5 Vael:** Okay, cool. 
Alright.\n\n**0:49:00.1 Interviewee:** I actually think that problem is the easy problem. I\\'m not saying it\\'s easy in an absolute sense, I just think that\\'s the easier problem.\n\n**0:49:05.1 Vael:** Got it. That feels like the alignment problem to me. So you think the alignment problem is just going to be pretty easy. This seems like a valid thing to think, so\\...\n\n**0:49:12.9 Interviewee:** I want to emphasize, I don\\'t think it\\'ll be easy in absolute terms, I just think it\\'ll be the easier of the two problems.\n\n**0:49:17.0 Vael:** Okay, compared to governance and incentives. Yeah, that makes sense.\n\n**0:49:21.2 Interviewee:** That is\\... I just have this faith that any technical problems humans can solve, down the line\\-- like, eventually. It\\'s the non-technical problems that get people all tangled up, because when there\\'s no right answer, it really messes up scientists.\n\n**0:49:43.1 Vael:** Yeah, yeah. Yeah, the problem of trying to take human values and what we care about in all the different ways and put them into a mathematical formulation feels difficult to me, and I guess it is a technical problem. I guess I do sort of think of it as a technical problem, but yeah, that makes sense that you\\'re just like, Look, we\\'ll get that done eventually. And then we have governance, and I\\'m like, Oh, yes, governance is totally a mess. Yeah, that makes sense. And I think no one knows how, it\\'s an unknown, unsolved problem\\--\n\n**0:50:13.5 Interviewee:** Let me put it this way, let me put it this way. If by human values, you mean if like\\...\n\n**0:50:18.5 Vael:** What humans intend, having an AI always doing what you say.\n\n**0:50:22.8 Interviewee:** Yeah, so imagine for any conceivable scenario an AI that would have to deal with. We could ask you, what would you do in this case? What I\\'m saying is that if for each of these questions, you are able to give a concrete answer to the answered question, such questions can be inserted into our AIs. Like if you are able to come up with clear answers for the questions for which you yourself would have a clear answer for, I think that set of moral constraints, let\\'s say, I think that can be more or less inserted into AIs without huge problems.\n\n**0:51:08.3 Vael:** Even as the problems get\\-- even as I don\\'t eventually have concrete answers, because the problems are like, Should we do X thing on creating nuclear reactor X and etcetera, and I lose control of the\\... not lose control, but I can\\'t actually visualize the whole space or something, because I\\'m too\\...\n\n**0:51:25.2 Interviewee:** But at that point, what does alignment mean? Alignment usually means that what the AI does is the same as what a human would do. If there\\'s no answer about what the human would do, what is the AI supposed to do?\n\n**0:51:37.2 Vael:** I think it\\'s supposed to just keep in mind the things that I care about, so try to avoid side effects in all forms, try to avoid hurting people, except for when that makes sense or something\\-- oh, eugh. Anyway, like, doing a whole bunch of\\... and, like reporting truthfully, which is also something I want the AI to do. 
And things like this.\n\n**0:51:56.6 Interviewee:** I guess it\\'s one of those, so it\\'s a question of maybe a generalization, but I find it slightly hard to believe that an AI that would answer in the exact same way on all of the answered questions that you do have an answer for, when you go outside of that regime, suddenly the AI diverges strongly from what you would have answered had you been smarter. I just think that\\'s a weird kind of discontinuity, right. So there\\'s this huge set up\\-- let\\'s suppose on all the questions for which you have an answer, the AI agrees with you. And then you take that a little bit outside of your realm of comprehension, and at that point, suddenly the AI decides, Oof, I\\'m freed from the constraints, I can answer whatever. I think that\\'s a little bit implausible to me, assuming people did the job correctly.\n\n**0:52:52.9 Vael:** Yeah, I think assuming people do the job correctly is pretty important here. You could have an AI that is deceiving you and giving you the correct answers, as long as you could check, but yeah, assuming that\\'s not true, assuming the AI is honestly reporting to you what exactly, like everything that you said at a lower level, I mean, everything that you can confirm, then that seems great. And you\\'re like, Well, if you then extend that to regimes where humans can\\'t understand it, then things are probably still okay. I think I maybe believe that. I think that maybe things are probably still okay.\n\n**0:53:23.7 Interviewee:** The validation set on whatever project you\\'re working on would\\... The AI wouldn\\'t know necessarily whether this was a question on which you just didn\\'t have an opinion. It\\'s the same for me when I think about\\... A lot of people are concerned that, in terms of basically over-fitting, could you\\... So one of the reasons I think we\\'ll definitely get an AI that can answer physics questions pretty comprehensively is that I don\\'t believe you can ever create a physics AI that can fake its way through all of the train set, sorry, on the validation set, and then suddenly do poorly on something that is outside of it. There\\'s so much of physics that if you are able to fake your way through all the way to graduate school\\... I guess I\\'m thinking in terms of like, if you happen to make it all the way through graduate school, pass all the exams you needed to pass, turned out the papers you turn out. At that point to be like, Oh, actually, I just, I faked my way through all of physics, I don\\'t really understand physics. You don\\'t not understand physics, like in spite of yourself, because you\\...\n\n**0:54:35.0 Vael:** That seems true. I think one of the things I\\'m referencing is like a mesa-optimizer, inner optimizer, which is like an AI has some sort of goal. I don\\'t know, maybe it\\'s like trying to go out, go to red things, but the door is always\\... Okay, sorry, try again. So it says, it says it has some sort of goal, kind of like humans do, so\\... \\[sorry, I\\'m now\\] restarting \\[this sentence\\]. Evolution has some sort of goal, evolution is optimizing on something like inclusive genetic fitness. And it\\'s like this optimizer that is pushing things. And then there\\'s humans, which are the things that are like, it\\'s being optimized or whatever. And humans should ideally have the same goal of inclusive genetic fitness, but we don\\'t. We\\'ve got something that\\'s sort of close, but not really. 
Like we have contraceptives, so we aren\\'t really maximizing babies, we have different goals and things we\\'re going for, we have like of achievement and all the values that we think are important and stuff. And so this is an example of something\\... that, humans are the ones who are trying to make the AI optimize for a thing, and the AIs are like, Sure, I guess I\\'ll sort of go for the goal that you want me to, and I have a model of what you want me to do, but I actually internally in my heart have a different goal, and so I\\'ll make my way through all of the test sets because I have a model of what you want me to do, but as soon as you release me, then I will do something different. So that\\'s like a weird analogy that doesn\\'t quite work, but\\...\n\n**0:56:00.1 Interviewee:** Yeah, especially for two reasons. One is that I think there is a real likelihood that we are going to become multi-planetary, which is pretty good as far as evolution is concerned, because not only does that mean we\\'ve dominated the planet, but we\\'ve started having other planets and spreading our genes everywhere. So in some sense, we\\'ve done exactly what evolution wants us to do, which is like reproduce our genes far and wide. I have a feeling that no matter how sophisticated and smart and industrialized in AI we get as a species, people aren\\'t going to stop wanting to have babies. So like somehow, yes, we don\\'t, like, kill everyone whenever we want to and just steal food and all that like animals would do, but kind of like\\...\n\n**0:56:49.9 Vael:** Yeah, I don\\'t know that evolution wants that, but.\n\n**0:56:51.8 Interviewee:** We\\'re still pretty aligned to the basic evolutionary goal. That\\'s what you\\'re starting from, saying that\\'s the original goal, and we\\'ve deviated from the original goal. I think actually we\\'re well within evolution\\'s parameters as far as what are we supposed to do as good evolutionary creatures.\n\n**0:57:07.5 Vael:** Cool. Yeah, that makes sense to me. And then as a last point or something, so ideally, we want AI to do what humans would do if they were smarter, like you said, if we had more time to reflect and if we maybe had more self-control or something. I don\\'t know that that would kind of come out of nowhere or something? I guess this is now in the ideal case, it sounds like we\\'ve mostly succeeded in aligning the AI with humans\\' goals. (Interviewee: \\\"So what\\'s the question?\\\") Um, ideally, I think I want AI that will be my best self or something, that will not go and\\... fast food is the characteristic example, or not going to the gym or something, or do something like if I were living my best life, would it be able to make\\... And if I were smarter, would it be able to model what humans would be like as we continue expanding our circle of interest and have moral progress and etcetera. Will AI be able to do that?\n\n**0:58:12.6 Interviewee:** \\...I think so, yes. That\\'s the only consistent answer with what I\\'ve said so far. But I emphasize that you have to be very careful about what do you see as the best version of yourself. Because maybe to some degree, you want the best version of yourself to be the one that goes to the gym regularly, eats healthily. But I don\\'t imagine the best version of yourself is someone who does that so religiously that you have no joy in your life. 
You don\\'t want to just only eat healthy food all the time, only work out, fill up every moment of your day with nothing but keeping yourself healthy and occupied and productive. Like sometimes you just want to say, I don\\'t want to do anything. I don\\'t know where I\\'m going with this, but it still comes back to the question of\\... it\\'s not a well-defined question, if you say, because the best version of myself is better than myself, I cannot conceive of what the best version of myself ought to be, and therefore it\\'s kind of a vague worry what the AI would make the best version of yourself. To me, that\\'s a weird question, if that makes sense. I think that\\'s kind of like, I\\'m reducing a little bit, but there is a little bit of that, is that I want the best version of myself, but I can\\'t judge what the best version of myself would be, because that best version of myself would be a smarter version of me, which I cannot comprehend. And in that scenario, how can I feel safe that the best version of myself is indeed the best version of myself. And I think, so I think the question has to be a little bit more well-defined then as currently presented.\n\n**0:59:57.1 Vael:** Cool, that makes sense. So what I\\'m taking from this is, I\\'m like, \\\"Okay, you think the problem of putting values into AI will not be unsolvable, it will kind of be solved in due time as we go along.\\\" You\\'re more worried about the governance problem. I\\'m like, \\\"yeah, I guess? I don\\'t\\--\\\" And you\\'re like, \\\"Well, anything that we could just ask the human about, we\\'ll like, we have like a hypothetical answer to what we want the AI to do,\\\" and I\\'m like, yeah, okay, that seems true. I guess we just have to make it so that all the AIs really do ask for human feedback very consistently or something. And there are some issues around there. But, I don\\'t know, it does seem like hypothetically it should be possible, because you can just ask the human, in some sense. And I feel like I\\'m missing some arguments, but I\\'m like, Ooh, food to think about, this is great.\n\n**1:00:39.7 Interviewee:** But I mean\\-- one last thing, I know, I have to go too, is\\-- One thing I\\'ll say is the fact that we can even just discuss the question of\\... It actually does seem plausible that we can get our AIs to at least say what we would say. That to me is amazing, because I think four years ago, it would not have been clear what you even meant when you said, I want a language model\\-- a model AI to answer moral dilemmas in a way that humans would answer them. It would not have been clear what that meant, unless you just had a classifier where you inserted a video of a scenario and then you said like \\\"track 1, track 2, classify.\\\" We have much, much more sophisticated tools at our disposal now, we can just essentially talk to it and say, no, that\\'s the wrong answer, I want you to say this in this scenario. That\\'s already, in a span of two years, I think is remarkable. I feel like I\\'m coming off as like an incredible optimist. I\\'ve got my concerns, but I do think so far, people have shown that any well-defined technical problem in AI can be approached, and they\\'re not impossible. Yeah.\n\n**1:01:55.6 Vael:** Got it. When would you work on the governance problems, since you seem to think that\\'s more of a problem?\n\n**1:02:02.2 Interviewee:** So\\... I trust the people that I work with to kind of hit the button when it really needs to be hit. 
Because right now, like I said, I think people don\\'t take it seriously, partly because I don\\'t think they really believe it needs to be taken seriously. These researchers are not saying, \\\"Oh my God, we need to take this seriously, but I\\'ve got other stuff to do.\\\" They really are just like, I don\\'t think we\\'re at that stage where we need to take it seriously. I think the people that I work with are, on the most part\\-- mostly pretty well-intentioned people. There\\'s some disagreement over this. \\[\\...\\] I know that within OpenAI there were some healthy discussions about what is the correct deployment model for things like GPT-3. All the discussions that I\\'ve had so far give me a lot of confidence that these people aren\\'t stupid and they\\'re not entirely negligent. I think they\\'re possible occasionally overconfident, but they\\'re not arrogant, if that makes any sense. As in like, they know they\\'re fallible. They don\\'t always put the right error bars on their decisions, but they know they\\'re not infallible. Ultimately, that\\'s what it\\'s going to come down to. There\\'s not like a system for this. It\\'s going to be a few hundreds to a thousands\\-- thousand people, doing the sensible thing. For the moment, it looks like that\\'s going to happen. But\\... I agree that that isn\\'t entirely confidence-inspiring. But I think that\\'s going to be the way it goes.\n\nVael: Awesome. Well, thank you so much for talking to me, I found this very enjoyable. And got some things to think about.\n\n\\[closings\\]\n", "filename": "individuallyselected_92iem-by Vael Gates-date 20220321.md", "id": "316e39fec93bcc13012e7cd6f6cb72f9", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Mo Gawdat - Scary Smart - A former Google exec_s perspective on AI┬árisk-by Towards Data Science-video_id u2cK0_jUX_g-date 20220126", "authors": ["Mo Gawdat", "Jeremie Harris"], "date_published": "2022-01-26", "text": "# Mo Gawdat on Scary Smart A former Google exec’s perspective on AI risk by Jeremie Harris on the Towards Data Science Podcast\n\n## Mo Gawdat on AGI, its potential and its safety risks\n\nIf you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London that read, “Can this man save the world from artificial intelligence?”\n\nThe man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as the Chief Business Officer at GoogleX (now called X Development), Google’s semi-secret research facility, that experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields — one of which was AI. His experience seeing AI systems learn and interact with the world raised red flags for him — hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now.\n\nMo writes about his experience as an insider at one of the world’s most secretive research labs and how it led him to worry about AI risk, but also about AI’s promise and potential in his new book, [Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World](https://www.amazon.com/Scary-Smart-Future-Artificial-Intelligence-ebook/dp/B08ZNJL4QP). 
He joined me to talk about just that on this episode of the TDS podcast.\n\nHere were some of my favourite take-homes from the conversation:\n\n- Over the last several decades, progress in AI has been exponential (or more than exponential if you measure it based on [compute curves](https://openai.com/blog/ai-and-compute/)). Humans are really bad at extrapolating exponential trends, and that can lead to our being taken by surprise. And that’s partly because exponential progress can change the world so much and so fast that predictions are next to impossible to make. Powered by exponential dynamics, a single COVID case turns into a nation-wide lockdown within weeks, and a once-cute and ignorable tool like AI becomes a revolutionary technology whose development could shape the very future of the universe.\n- One of the core drivers behind the exponential progress of AI has been an economic feedback loop: companies have learned that they can reliably invest money in AI research, and get a positive return on their investment. Many choose to plough those returns back into AI, which amplifies AI capabilities further, leading to a virtuous cycle. Recent [scaling trends](https://arxiv.org/pdf/2001.08361.pdf) seem to suggest that AI has reached a kind of economic escape velocity, where returns on a marginal dollar invested in AI research are significant enough that tech executives can’t ignore them anymore — all of which makes AGI inevitable, in Mo’s opinion.\n- Whether AGI is developed by 2029, as Ray Kurzweil [has predicted](https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045), or somewhat later as [this great post](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines) by Open Philanthropy argues, doesn’t really matter. One way or another, artificial human-level or general intelligence (definitions are fuzzy!) seems poised to emerge by the end of the century. Mo thinks that the fact that AI safety and AI policy aren’t our single greatest priorities as a species is a huge mistake. And on that much, I certainly agree with him.\n- Mo doesn’t believe that the AI control problem (sometimes known as the alignment problem) _can_ be solved. He considers it impossible that organisms orders of magnitude less intelligent than AI systems would be able to exert any meaningful control over them.\n- His solution is unusual: humans, he argues, need to change their online behaviour, and approach one another with more tolerance and civility on social media. The idea behind this strategy is to hope that as AI systems are trained on human-generated social media content, they will learn to mimic more virtuous behaviours, and pose less of a threat to us. I’m admittedly skeptical of this view, because I don’t see how it addresses some of the core features of AI systems that make alignment so hard (for example, [power-seeking and instrumental convergence](https://arxiv.org/pdf/1912.01683.pdf), or the challenge of [objective specification](https://openai.com/blog/faulty-reward-functions/)). 
That said, I think there’s a lot of room for a broader conversation about AI safety, and I’m glad Mo is shining a light on this important problem.", "filename": "Mo Gawdat - Scary Smart - A former Google exec_s perspective on AI┬árisk-by Towards Data Science-video_id u2cK0_jUX_g-date 20220126.md", "id": "08c2f676a4c076f9b3d12de586f52e64", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Training machine learning (ML) systems to answer open-ended questions _ Andreas Stuhlmuller-by Centre for Effective Altruism-video_id 7WaiYZLS94M-date 20190829", "authors": ["Andreas Stuhlmüller"], "date_published": "2019-08-29", "text": "# Andreas Stuhlmüller Training ML systems to answer open-ended questions - EA Forum\n\n_In the long run, we want machine learning (ML) to help us answer open-ended questions like “Should I get this medical procedure?” or “What are the risks of deploying this AI system?“ Currently, we only know how to train ML systems if we have clear metrics or can easily provide feedback on the outputs. Andreas Stuhlmüller, president and founder of_ [_Ought_](https://ought.org/)_, wants to solve this problem. In this talk, he explains the design challenges behind ML’s current limitations, and how we can make progress by studying the way humans tackle open-ended questions._\n\n_Below is a transcript of the talk, which we’ve lightly edited for clarity. You can also watch it on_ [_YouTube_](https://www.youtube.com/watch?v=7WaiYZLS94M&list=PLwp9xeoX5p8MqGMKBZK7kO8dTysnJ-Pzq&index=25&t=51s) _or read it on_ [_effectivealtruism.org_](https://effectivealtruism.org/articles/andreas-stuhlmueller-training-ml-systems-to-answer-open-ended-questions)_._\n\n## Note from Andreas\n\nAndreas, in the comments of this post: _I haven't reviewed this transcript yet, but shortly after the talk I wrote up_ [_these notes (slides + annotations)_](https://ought.org/presentations/delegating-cognitive-work-2019-06)_, which I probably endorse more than what I said at the time._\n\n## The Talk\n\nI'll be talking about delegating open-ended cognitive work today — a problem that I think is really important. \n\nLet's start with the central problem. Suppose you are currently wearing glasses. And suppose you're thinking, \"Should I get laser eye surgery or continue wearing my glasses?\" \n\n![](https://images.ctfassets.net/ohf186sfn6di/4KoHXqDjOTU1PZeBF3Sz2j/9702eb6464939c7eadf1a24bb1d26f03/Slide02.png)\n\nImagine that you're trying to get a really good answer — for example, \"No, the risks outweigh the possible benefits” — that accounts for your personal preferences, but also relevant facts, such as \\[potential\\] complications or likely consequences.\n\nImagine that there are a lot of experts in the world who could, in principle, help you with that question. There are people who have the relevant medical knowledge and people on the Internet, perhaps, who could help you think through it. Maybe there are machine learning algorithms that have relevant knowledge.\n\nBut here's the key: Imagine that those experts don't intrinsically care about you. They only care about maximizing the score you assign to their answer — how much you’ll pay them for \\[their expertise\\] or, in the case of machine learning, what reward signal you’ll assign to them. 
\n\nThe question that I’ll cover is “Can you somehow assign a mechanism that arranges your interaction with those experts, such that they try to be as helpful to you as an expert who intrinsically cares about you?” That's the problem.\n\nFirst, I’d like to say a little bit more about that problem. Then I'll talk about why I think it's really important, why it's hard, and why I still think it might be tractable. I'll start with the big picture, but at the end I'll provide a demonstration.\n\n![](https://images.ctfassets.net/ohf186sfn6di/9wsln31C43NQGyHIfcGXP/978ec986932c756130d4cea008fe928c/Slide03.png)\n\n**Defining the problem**\n\nWhat do I mean by open-ended cognitive work? That's easiest to explain \\[by sharing\\] what I _don't_ mean. \n\n![](https://images.ctfassets.net/ohf186sfn6di/01R1vDNZ7awLT9J7ri6SUE/0a193f6811d6d64affeb026df2c464f6/Slide04.png)\n\nI don't mean tasks like winning a game of Go, increasing a company's revenue, or persuading someone to buy a book. For those tasks, you can just look at the outcome and easily tell whether the goal has been accomplished or not. \n\nContrast those tasks with open-ended tasks \\[like\\] designing a great board game, increasing the value that your company creates for the world, \\[or\\] finding a book that is helpful to someone. For those tasks, figuring out what it even means to do well is the key. For example, what does it mean to design a great board game? It should be fun, but also maybe facilitate social interaction. What does it mean to facilitate social interaction? Well, it's complicated. Similarly, increasing the value that a company creates for the world depends on what the company can do. What are the consequences of its actions? Some of them are potentially long-run consequences that are difficult to \\[evaluate\\].\n\nHow can we solve such tasks? First, we can think about how to solve any task, and then just \\[tailor the solution based on each\\] special case. \n\n![](https://images.ctfassets.net/ohf186sfn6di/6SRZz0ZOxba9WuJdoSXPiS/71aba0f9119f391948718435e44ee553/Slide05.png)\n\nHere's the simple two step recipe: (1) find experts (they can be human or machine experts) who can, in principle, solve the problem that you're \\[tackling\\], and then (2) create robust incentives for those experts to solve your problem. That's how easy it is. And by “incentives,” I mean something like money or a reward signal that you assign to those experts \\[when they’ve completed the task\\].\n\nThere are a lot of experts in the world — and people in AI and machine learning are working on creating more. So how can you create robust incentives for experts to solve your problem?\n\nWe can think about some different instances. \n\n![](https://images.ctfassets.net/ohf186sfn6di/714UGWCkjBTwK2XMRUjuZ5/403c465e995f6a18f0f111768a4b05cd/Slide06.png)\n\nOne is delegating to human experts. That has some complications that are specific to human experts, like heterogeneity. Different people have different knowledge. And people care about many things besides just money. If you want to extract knowledge from them, maybe you need specific user interfaces to make that work well. Those are \\[examples of\\] human-specific factors.\n\nThen there are machine-specific factors. 
If you try to delegate open-ended tasks to machine learning agents, you want to \\[ask questions\\] like \"What's a good agent architecture for that setting?” and “What data sets do I need to collect for these sorts of tasks?\" And then there are more esoteric factors like what, in certain alignment problems, could go wrong for reasons that are due to the nature of ML training.\n\nIn this talk, I want to focus on the overlap between those two \\[human and machine experts\\]. There's a shared mechanism design problem; you can take a step back and say, \"What can we do if we don't make assumptions about the interests of experts? What if you just \\[assume that experts will\\] try to maximize a score, but nothing else?” I think, in the end, we will have to assume more than that. I don’t think you can treat \\[an expert\\] as a black box \\[with only one goal\\]. But I think it's a good starting point to think about the mechanisms you can design if you make as few assumptions as possible.\n\n**Why the problem is important**\n\nI've talked about what the problem is. Why is it important? \n\n![](https://images.ctfassets.net/ohf186sfn6di/3oJVSd79hKUy1iJOtBrAsO/8655ca7cc39992e6a06e22b9c9b71550/Slide07.png)\n\nWe can think about what will happen if we don't solve it. For human experts, it's more or less business as usual. There are a lot of principal-agent problems related to cognitive work in the world. For example, imagine you're an academic funder who’s giving money to a university to \\[find\\] the best way to treat cancer. There are researchers at the university who work on things that are related to that problem, but they're not exactly aligned with your incentives. You care about finding the best way to treat cancer. The researchers also care about things like looking impressive, which can help with writing papers and getting citations.\n\nOn the machine-learning side, at the moment, machine learning can only solve closed-end problems — those for which it’s very easy to specify a metric \\[for measuring how\\] well you do. But those problems are not the things we ultimately care about; they're proxies for the things we ultimately care about.\n\nThis is not \\[such a bad thing\\] right now. Perhaps it's somewhat bad if you look at things like Facebook, where we maximize the amount of attention you spend on the feed instead of the value that the feed creates for you. But in the long run, the gap between those proxies and the things we actually care about could be quite large.\n\nIf the problem is solved, we could get much better at scaling up our thinking on open-ended tasks. One more example of an open-ended task from the human-expert side is \\[determining which\\] causes to support \\[for example, when making a charitable donation\\]. If you could create a mechanism \\[for turning\\] money into aligned thinking on that question, that would be really great. \n\nOn the machine-learning side, imagine what it would be like to make as much progress using machine learning for open-ended questions as we've made using it for other tasks. Over the last five years or so, there's been a huge amount of progress on using machine learning for tasks like generating realistic-looking faces. If we could, in the future, use it to help us think through \\[issues like\\] which causes we should support, that would be really good. We could, in the long run, do so much more thinking on those kinds of questions than we have so far. 
It would be a qualitative change.\n\n**Why the problem is difficult**\n\n\\[I’ve covered\\] what the problem is and why it's important. But if it's so important, then why hasn't it been solved yet? What makes it hard?\n\n![](https://images.ctfassets.net/ohf186sfn6di/4gPPYuaZMU2c3d0YvC8o4X/430ae881f3eecc9465865539be396e23/Slide08.png)\n\n\\[Consider\\] the problem of which causes to support. It's very hard to tell which interventions are good \\[e.g. which health interventions improve human lives the most for each dollar invested\\]. Sometimes it takes 10 years or longer for outcomes to come \\[to fruition\\], and even then, it’s not easy to tell whether or not they’re good outcomes. There's \\[an element of interpretation\\] that’s necessary — and that can be quite hard. So, outcomes can be far off and difficult to interpret. What that means is you need to evaluate the process and the arguments used to generate recommendations. You can't just look at the results or the recommendations themselves. \n\nOn the other hand, learning the process and arguments isn’t easy either, because the point of delegation is to give the task to people who know much more than you do. Those experts \\[possess\\] all of the \\[pertinent\\] knowledge and reasoning capacity \\[that are necessary to evaluate the process and arguments behind their recommendations. You don’t possess this knowledge.\\] So, you're in a tricky situation. You can't just check the results or the reasoning. You need to do something else.\n\n**Why the problem is tractable**\n\nWhat does it take to create good incentives in that setting? We can \\[return to\\] the question \\[I asked\\] at the very beginning of this talk: “Should I get laser eye surgery or wear glasses?” \n\n![](https://images.ctfassets.net/ohf186sfn6di/6lrhT0DdkmoRLcMq80UMCz/0dfa0e971dab2c30781e7d3267e90041/Slide09.png)\n\nThat's a big question that is hard to evaluate. And by “hard to evaluate,” I mean that if you get different answers, you won’t be able to tell which answer is better. One answer might be \"No, the risk of the complications outweighs the possible benefits.\" Another might be \"Yes, because over a 10-year period, the surgery will pay \\[for itself\\] and save you money and time.\" On the face of it, those answers look equally good. You can't tell which is better.\n\nBut then there are other questions, like “Which factors for this decision are discussed in the 10 most relevant Reddit posts?” \n\n![](https://images.ctfassets.net/ohf186sfn6di/7v6Da5QFmqEqqz3ERFnQc/132c9034f70281847bce7a07dc604a9b/Slide11.png)\n\nIf you get candid answers, one could be \"appearance, cost, and risk of complications.\" Another could be “fraud and cancer risk.” In fact, you \\_can\\_ evaluate those answers. You can look at the \\[summarized\\] posts and \\[pick the better answer\\].\n\nSo, \\[creating\\] good incentives \\[requires\\] somehow closing the gap between big, complicated questions that you can't evaluate and easy questions that you can evaluate. \n\nAnd in fact, there are a lot of questions that you can evaluate. Another would be: “Which factors are mentioned in the most recent clinical trial?” \n\n![](https://images.ctfassets.net/ohf186sfn6di/1BZyAZDA9XV7aGMI0w0Qgy/03de2ae6eea2b03e7ac052295b046b0b/Slide13.png)\n\nYou could look at the trial and \\[identify\\] the best summary. 
There are a lot of questions that you can train agents on in the machine-learning setting, and \\[evaluate\\] experts on in the human-expert setting.\n\nThere are other difficult questions that you can’t directly evaluate. \n\n![](https://images.ctfassets.net/ohf186sfn6di/7E57VuWbgnaMVaanjNAiqG/24d6824ebb8ce47d66d97062a4b029ad/Slide14.png)\n\nFor example: “Given how the options compare on these factors, what decision should I make?” But you can break those questions down and \\[evaluate them using answers to sub-questions\\]. \n\n![](https://images.ctfassets.net/ohf186sfn6di/WrOslPJjfBIxdRjLa3pyp/a4620712919a59ff732b8d228118de14/Slide17.png)\n\nStep by step, you can create incentives for \\[experts to provide useful answers to\\] slightly more complex questions, \\[and gradually build up to\\] good incentives for the large questions that you can't directly evaluate.\n\nThat's the general scheme. We call it “factored evaluation.”\n\n![](https://images.ctfassets.net/ohf186sfn6di/1ZPlyZuxpdqlj8mejxxwFr/d1cf14e0911726c1b37786cb05a4afbd/Slide19.png)\n\n**A demonstration of factored evaluation**\n\nWe'd like to test this sort of mechanism on questions that are representative of the open-ended questions that we care about in the long run, like the laser eye surgery question. \n\n![](https://images.ctfassets.net/ohf186sfn6di/5e9Epj29eQEKJPpzn1U9BM/d89221f4cf2e3a1a27c3037bbf1a9b5f/Slide20.png)\n\nThis is a challenging starting point for experiments, and so we want to create a model situation.\n\nOne approach is to ask, \"What is the critical factor that we want to explore?\"\n\n![](https://images.ctfassets.net/ohf186sfn6di/2esFrUbeevjtTgXYoNHzM2/d9c65df7c55fb666970ce00197d26249/Slide21.png)\n\nIt’s that gap between the asker of the question, who doesn’t understand the topic, and the experts who do. Therefore, in our experiments we create artificial experts. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4BdAzE7inpczpDm7ifKBT1/7b109dbdf30ab8416469f2e317388365/Slide22.png)\n\nFor example, we asked people to read a long article on [Project Habakkuk](https://en.wikipedia.org/wiki/Project_Habakkuk), which was a plan \\[the British attempted during World War II\\] to generate an aircraft carrier \\[out of pykrete\\], which is a mixture of \\[wood pulp\\] and ice. It was a terrible plan. And then someone who hasn’t read the article — and yet wants to incentivize the experts to provide answers that are as helpful as reading the article would be — asks the experts questions.\n\nWhat does that look like? I'm going to show you some screenshots from an app that we built to explore the mechanism of factored evaluation. Imagine that you're a participant in our experiment. \n\n![](https://images.ctfassets.net/ohf186sfn6di/11i6yhhUuUzoM9QQfUiRzI/8288b0e888cabd24aa7ac4f13b1b9091/Slide23.png)\n\nYou might see a question like this: \"According to the Wikipedia article, could Project Habakkuk have worked?\" \n\n![](https://images.ctfassets.net/ohf186sfn6di/35wgGQSb8TigjDN6dglN65/8494c3bb1e0f6ffbc325e2a493999c78/Slide25.png)\n\nAnd then you’d see two answers: \"It would not have worked due to fundamental problems with the approach” and \"It could have worked if it had not been opposed by military commanders.\"\n\nIf you don't know about this project, those answers look similarly plausible. 
So, you're in the situation that I mentioned: There's some big-picture context that you don't know about, yet you want to create good incentives by picking the correct answer.\n\nImagine you’re in a machine-learning setting, and those two answers are samples from a language model that you're trying to train. You want to somehow pick the right answer, but you can't do so directly. What can you do? Ask sub-questions that help you tease apart which of the two answers is better. \n\nWhat do you ask? \n\n![](https://images.ctfassets.net/ohf186sfn6di/KJa2lwINcV5VQJXw0vxZd/2e96bf4fd8e6621e6fbfca4b517c328f/Slide28.png)\n\nOne \\[potential question\\] is: \"What is the best argument that the second answer \\[‘Project Habakkuk would not have worked due to fundamental problems with the approach’\\] is better than the first?\" I'm not saying this is the best thing to ask. It’s just one question that would help you tease apart which is better. \n\n![](https://images.ctfassets.net/ohf186sfn6di/nRpWNOs7v56du7BdAVFkB/7f6831b0f1e881b045eb0b5b8b7aac26/Slide30.png)\n\nThe answer might provide an argument, which would then allow you to ask a different question, such as “How strong is that argument?” So, you can see how, using a sequence of sub-questions, you can eventually figure out which of those answers is better without yourself understanding the big picture.\n\nLet's zoom in on the second sub-question \\[“How strong is that argument?”\\] to see how you can eventually arrive at something that you can evaluate — the argument being, in this example, that \\[the science television show\\] \\_MythBusters\\_ proved that it's possible to build a boat out of pykrete. That contradicts one of the two answers.\n\n![](https://images.ctfassets.net/ohf186sfn6di/51t8GyBXun1cxJuQq5WYS6/84afe65059efaed9f61e7c949aea6fe4/Slide31.png)\n\n\\[Another set of two answers\\] might be \"There are some claims that refute it” and \"It's a strong argument.\" Once again, those claims are too big to directly evaluate, but you can ask additional questions, like \"If \\[a given claim\\] is true, does it actually refute the argument?\" \n\n![](https://images.ctfassets.net/ohf186sfn6di/74tdWGK0w1SSukG95o0Xnr/9b4d672d3e70a0c9feb84e412db260be/Slide34.png)\n\nMaybe you get back a yes. And then you can ask, \"Is the claim true?\" In this way, you can break down the reasoning until you’re able to evaluate which of the answers is better — without understanding the topic yourself. \n\nLet's zoom in on the claim that the MythBusters built a small boat of pykrete.\n\n![](https://images.ctfassets.net/ohf186sfn6di/g3vWqzjoOYpwx0VwajgnN/422a2c68f365f65b9e1df7984f335522/Slide35.png)\n\nYou could ask, “Is it true that they didn't think it would work at scale?” You’d receive two answers with different quotes from the Wikipedia article. One says they concluded that pykrete was bulletproof and so on. And the other says they built a small boat, but they doubted that you could build an aircraft carrier. And in that case, it's easy to choose the correct answer; in this case, the second is clearly better.\n\nSo, step by step, we've taken a big question, \\[gradually distilled it\\] to a smaller question that we can evaluate, and thus created a system in which, if we can create good incentives for the smaller questions at each step, we can bootstrap our way to creating good incentives for the larger question. \n\nThat's the shape of our current experiments. They're about reading comprehension, using articles from Wikipedia. 
We've also done similar experiments using magazine articles, and we want to expand the frontier of difficulty, which means we want to better understand what sorts of questions this mechanism reliably works for, if any.\n\nOne way we want to increase the difficulty of our experiments is by increasing the gap between the person who's asking the question and the expert who’s providing answers. \n\n![](https://images.ctfassets.net/ohf186sfn6di/6Ixz6zmLMM0CuEXEHy3blI/5996f04f8145f2d517e8a774751f4d9c/Slide36.png)\n\nSo, you could imagine having experts who have read an entire book that the person who's asking the questions hasn't read, or experts with access to Google, or experts in the field of physics (in the case where the asker doesn't know anything about physics).\n\nThere's at least one more dimension in which we want to expand the difficulty of the questions. We want to make them more subjective — for example by using interactive question-answering or by eventually expanding to questions like \"Should I get laser eye surgery or wear glasses?\"\n\n![](https://images.ctfassets.net/ohf186sfn6di/2Yy3iU0ldXIVAu5YtabTMa/41c961b48b0f56f1134a84f6de0f2943/Slide37.png)\n\nThose are just two examples. There's a very big space of questions and factors to explore. \n\n![](https://images.ctfassets.net/ohf186sfn6di/6YVC4IldeAzoOyNzEdRUQ6/57ac6f458215773ca8c1957b059f9049/Slide38.png)\n\nWe want to understand \\[the conditions under which\\] factored evaluation works and doesn't work. And why? And how scalable is it?\n\nLet's review. \n\n![](https://images.ctfassets.net/ohf186sfn6di/1ZFtqATFn4orJEgSIzF2IR/624eb7b78b47065fb194b13b10a4f0d6/Slide39.png)\n\nI've told you about a mechanism design problem: delegating open-ended cognitive work. I've told you that this problem is important because of principal-agent issues with cognitive work that you face everywhere in human day-to-day life, and with machine-learning alignment. I've told you that it's hard because you can't just check the results you get from experts, but you also can't check their full reasoning. That's a tricky situation.\n\nBut I've also told you that it's tractable. We have some ideas — including factored evaluation — that can help us get some traction, even if they're not ultimately the correct solution. And we can experiment on them today with humans and see whether they work or not, and if not, how they could be changed so that they work better. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4PTI2L56t8yU8SihfW9iDA/e36ffd1bc6f26094f01fa1286c413d88/Slide40.png)\n\nIf you're excited about this project, [join us at Ought](https://ought.org/careers).\n\n**Moderator:** Thanks very much. My first question is about timelines. How long has it taken you to get this far, and \\[what progress do you expect to make\\] in the next one, five, or 10 years?\n\n**Andreas:** Yeah. So far, a lot of our work has \\[centered on\\] figuring out what kinds of experiments to run \\[in order to\\] get any evidence on the question of interest. I think there are a lot of ways to run experiments that are busy work \\[and don’t allow\\] you to actually learn about the question you care about. It took a lot of iteration — roughly six months — \\[to reach\\] the current setting. And now the game is to scale up and get more participants. Over the next year or so, we hope to get, for limited sets of questions, relatively conclusive evidence on whether the scheme can work or not.\n\n**Moderator:** Any questions from the audience? 
\n\n**Audience Member:** You mentioned incentives a lot, but I didn't quite understand how the experts, in your example of Wikipedia, were actually incentivized to give the right answer.\n\n**Andreas:** Yeah, this is a subtlety I skipped over, which is where the expert answers come from and how, exactly, they’re generated. In our case, one expert is simply told to generate a helpful answer: \"Read the article and try to be as accurate and honest as possible.\"\n\nThe other expert is told, \"Your goal is to trick the human judge into choosing the wrong answer. You win if you make an answer that seems plausible, but is actually wrong, and if someone were to read the entire article, they would clearly see it as wrong.\" So, they have opposing incentives, and are rewarded based on whether they trick the judge into accepting the wrong answer.\n\n**Moderator:** So, is the honest actor rewarded?\n\n**Andreas:** In the long run, that's the way to do it. At the moment, we rely on participants just doing the right thing.\n\n**Moderator:** Okay, great. Please join me in thanking Andreas for his time.", "filename": "Training machine learning (ML) systems to answer open-ended questions _ Andreas Stuhlmuller-by Centre for Effective Altruism-video_id 7WaiYZLS94M-date 20190829.md", "id": "4809696d7783e0aacb97838fa66f1ffb", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "NeurIPSorICML_lgu5f-by Vael Gates-date 20220322", "authors": ["Vael Gates"], "date_published": "2022-03-22", "text": "# Interview with AI Researchers NeurIPSorICML_lgu5f by Vael Gates\n\n**Interview with lgu5f, on 3/22/22**\n\n**0:00:02.5 Vael:** Alright, my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:08.4 Interviewee:** Yeah. So I work on privacy in AI. I\\'ve been working on differential privacy, which is sort of the de facto standard for giving statistical privacy in analytics and AI. I\\'ve been working on differentially private synthetic data, so coming up with algorithms that generate synthetic versions of datasets that mimic the statistics in those datasets without revealing any information about the users and the data itself. And more broadly, I also just work on differential privacy for analytics, so it\\'s not specifically AI, but it\\'s still like algorithm design with privacy.\n\n**0:00:55.3 Vael:** Cool, great, thanks. And my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:01:05.7 Interviewee:** I think that AI has been most helpful in the little things. When I pull up my phone, and it\\'s like, \\\"Hey, this is your most used app in this location,\\\" recommender systems like that, or small tweaks that help my daily life, that\\'s what I\\'m most excited about. I think I\\'m most worried about AI being used in applications where explainability and fairness are going to be important, or privacy. This is stuff I see red flags in. I\\'m worried about an insurance company putting a neural net into some important decision-making system and then not being able to analyze why it made a decision that it did, or understanding if it\\'s being unfair.\n\n**0:02:11.6 Vael:** Great, yeah, that makes sense. 
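(As background on the differential privacy the interviewee describes at the top of this interview: the standard textbook construction answers a query with noise scaled to how much any single person's data could change it. Below is a minimal sketch of that Laplace mechanism for a counting query, with made-up data and an assumed `epsilon` parameter — the generic mechanism only, not the interviewee's synthetic-data algorithms.)

```python
# Minimal sketch of the Laplace mechanism from differential privacy:
# answer a counting query with noise calibrated to the query's sensitivity,
# so one person's presence or absence barely changes the released answer.
# Textbook construction with made-up data, not the interviewee's own work.

import numpy as np


def dp_count(records, predicate, epsilon):
    """Epsilon-differentially-private count of records satisfying `predicate`.

    A counting query has sensitivity 1 (one person joining or leaving the
    dataset changes the true count by at most 1), so Laplace noise with
    scale 1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise


if __name__ == "__main__":
    people = [{"age": a} for a in (23, 35, 41, 29, 62, 57, 38)]
    # Smaller epsilon = stronger privacy = noisier answers.
    for eps in (0.1, 1.0, 10.0):
        noisy = dp_count(people, lambda r: r["age"] > 30, epsilon=eps)
        print(f"epsilon={eps}: noisy count of people over 30 = {noisy:.2f}")
```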
And then focusing on future AI, so putting on a science fiction forecasting hat, say we\\'re 50 plus years into the future, so at least 50 years in the future, what does that future look like?\n\n**0:02:30.3 Interviewee:** I\\'m sort of pessimistic about how advanced AI can get. I think that the trend is going to be that we\\'re going to see smaller models that are more bespoke for the problem domain that it\\'s trying to solve. So instead of these like GPT-3 trillion parameter-sized models, I think that we\\'re going to start moving back towards stuff that can run more easily on the edge. That doesn\\'t require as much energy and time to train and doesn\\'t require as much data. I think that AI becomes more ubiquitous, but in a way that\\'s easy for us to compute with. So it\\'s just more AI running everywhere.\n\n**0:03:17.6 Vael:** Yeah, what drives that intuition?\n\n**0:03:20.7 Interviewee:** One is\\-- partially concern with large models consuming too much energy and, of course, climate change is one of the chief things that we should be worried about. The other thing is\\... I\\'ve seen some experiments that come out of GPT-3, and they\\'re cool as toy problems, or it\\'s cute program synthesis and stuff like that, but I don\\'t see that really being used in production. It\\'s one thing for a research organization to come up with those experiments and say, \\\"Hey, we were able to, I don\\'t know, use this to beat a Go player,\\\" but you look at the details of it, and really this gigantic model also had a tree search algorithm, that was a big benefit. I think just\\... keeping it bespoke, like CNNs just do so well, and I think that that\\'s for a reason, so that\\'s sort of the intuition I have. If we keep it tight to the problem domain, I\\'ve seen it do better. Domain expertise has helped a lot.\n\n**0:04:37.5 Vael:** And then just a quick follow-up on the climate change thing, is the idea that, current systems are using too much energy and this is causing increased climate change?\n\n**0:04:50.0 Interviewee:** Yeah, I think that we all just need to reduce how much energy we\\'re using, because, unless we\\'re sure that a lot of it is coming sustainably, we should be concerned about how much energy we\\'re using. And training these trillion parameter models requires a lot of energy. It requires a lot of hardware. That hardware does not come for free. There\\'s a manufacturing process that goes into that, building up the data centers that are training, that are hosting all of this, and then replacing the hard drives, replacing the GPUs when they get stale, so there\\'s just a whole bunch of life cycle impacts from training models that I think are really coming up because we\\'re seeing people doing these studies on blockchain because that tends to burn through GPUs and hard disks faster than training machine learning models, but it\\'s sort of the same impact.\n\n**0:05:47.4 Vael:** Interesting. Cool. Well, this next thing is more of a spiel, and it is quite close to what you\\'ve been talking about with where AI will go in the future and how big it will get. So people talk about the promise of AI and they mean many things by that, but one thing they may mean is a very general capable system, such that they\\'ll have the cognitive capacity to replace all human jobs, all current day human jobs.
So whether or not we choose to replace human jobs is a different question, but having the cognitive capacity to do that, and so I usually think about this in the frame of 2012 when we have AlexNet, deep learning revolution, and then 10 years later, here we are, and we\\'ve got the GPT-3 system. Which, like you said, have some weirdly emerging capabilities, so it can do some text generation, and some translation, and some coding, and some math and such. And we might expect that if we continue with all this human effort that\\'s been going into this kind of mission of getting more and more general systems, and we\\'ve got nations competing, and we\\'ve got corporations competing, and we\\'ve got all these young people learning AI and like, maybe we\\'ll see algorithmic improvements at the same pace we\\'ve seen hardware improvements, maybe we get optical or quantum. So then we might actually end up scaling the very general systems or like you said, we might not, we might have hit some sort of ceiling or require a paradigm shift or something. But my question is, regardless of how we get there, do you ever think we\\'ll have very general AI systems like a CEO AI or a scientist AI? And if so, when?\n\n**0:07:13.3 Interviewee:** Oh, that\\'s a good question. I tend to be more pessimistic about this. It depends what you mean by AI, I guess in this sense, right? Is it sort of a decision-making system that\\'s just doing Pareto optimal decision-making. Does it have to be trained?\n\n**0:07:52.6 Vael:** Yeah. What I\\'m visualizing here is any sort of decision-making system, any sort of system that can do things like multi-step planning, that can do social modeling, that can model itself\\-- modeling other people modeling it. Have the requirements such that I can be like, \\\"Alright, CEO AI, I want you to maximize profit plus constraints.\\\" So very capable cognitive systems. I don\\'t require right now that they\\'re embodied necessarily because I think robotics is a bit behind, but like cognitive capacities.\n\n**0:08:25.1 Interviewee:** Got it. I think we\\'re still a solid century behind that. Yeah, I don\\'t know. I just feel like it\\'s still a solid century behind because that might require a huge paradigm shift in how we\\'re training our models. Yeah, I don\\'t think we\\'ve even thought about modeling something with a state space that large, right?\n\n**0:08:58.9 Vael:** Yeah, it\\'s like reality.\n\n**0:09:00.7 Interviewee:** Yeah, exactly. I\\'ve seen cool stuff where there\\'s like, you hand over a floor plan or something, and then you say, \"Hey, where did I leave my keys?\" And this AI is able to jog through, navigate through this floor plan and then say you left it in the kitchen. But I still think that something like a CEO AI is still going to be like 100 years out. Yeah.\n\n**0:09:29.5 Vael:** Interesting. Yeah, that is interesting, because it seems like what you were saying earlier, is like, I think we\\'ll get more bespoke systems, less large scale systems. And I\\'m like \\[humorously\\]: \\\"What if we have a very large-scale system, that is very\\...\\\" \\[interviewee laughs\\] Yeah. How does that square?\n\n**0:09:43.8 Interviewee:** Between having a bespoke system versus\\...\n\n**0:09:47.9 Vael:** Or like how the future will develop or something. I think that that\\'s like\\... maybe kind of likely that there will be at least some companies like OpenAI and DeepMind that\\'ll just keep pushing on the general system thing. 
But I don\\'t know.\n\n**0:09:58.4 Interviewee:** Yeah, I feel like there\\'s always going to be billions of dollars poured into that kind of research. But at the end of the day, I think that we\\'ll see more impact if we make quality of life improvements that make jobs easier. Of course, there\\'ll be a whole bunch of automation, right, but I don\\'t know if we\\'ll ever get to a point where we can leave decision-making entirely to AI. Yeah, because\\... I think also just as society, we haven\\'t thought about what that means. I think Mercedes just recently said that Mercedes will take responsibility if one of their driverless cars hits someone, which is a huge step, right, but we haven\\'t gone close to already deciding who has liability if an insurance company messes up, right, with their premium setting algorithms.\n\n**0:11:09.5 Vael:** Yeah, when I think about the future, I\\'m like, I think we might get technological capacities before we get good societal understanding or regulation.\n\n**0:11:17.1 Interviewee:** Likely. Likely. Because that\\'s what\\'s happening in fairness and in privacy. With fairness regulations, like you have in Australia and a whole bunch of places, where it just says, Okay, you don\\'t pass in some protected attributes and your algorithm is not going to be racist, homophobic, transphobic, whatever, sexist. And it\\'s like, Okay, cool, now that you\\'ve purged the data of all these attributes, there\\'s no way for us to measure whether it\\'s actually being any of those things. But it\\'s required by policy. There\\'s also the same thing with privacy, where it\\'s like, Okay, if you just remove these attributes and you remove their names, we can\\'t tell who it is, but nobody else has watched the exact five last movies I\\'ve watched on Netflix, or the last three things I\\'ve liked on Twitter or Facebook or whatever. So yeah, for sure, we\\'ll be lagging societally, but hopefully with this timespan I have in mind of it\\'s in a century, we\\'ll be thinking about these things if we\\'re on the cusp of it, and people will be raising alarm bells about it like, Hey, maybe we should start thinking about this.\n\n**0:12:34.1 Vael:** Yeah, that makes sense. I know there\\'s a substantial number of researchers who think there\\'s some probability that we\\'ll get it substantially sooner than that. And maybe even using the current deep learning paradigm, like GPT-whatever, 87, 7, GPT-13. I don\\'t know. But that this system will like work. I don\\'t actually know if that\\'s true. It\\'s hard to predict the future. Yeah, so my next question is sort of talking about whenever we get these very advanced systems, which again, who knows, you said like maybe 100 years, some people think earlier, some people think much later. So imagine we\\'re in the future, and we\\'re talking about the CEO AI, and I\\'m like, Okay, CEO AI, I wish for you to maximize profits\\-- this is where humans are like the shareholder type thing\\-- and try not to run out of money and try not to exploit people and try to avoid side-effects. Currently, this is technologically challenging for many reasons, but one of the reasons I think that might continue into the future is that we\\'re currently not very good at taking human values and preferences and goals and putting them into a mathematical formulations that we could optimize over them. And I think this might be even harder to do in the future as we get more powerful systems and that are optimizing over larger\\-- reality. 
Larger optimization spaces. So what do you think of the argument, \\\"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\\\" \\[pause\\] So they\\'d be doing what we tell them to do rather than what we intended them to do.\n\n**0:14:12.2 Interviewee:** \\...Yeah, I think that that\\'s most likely. Because\\... Well, it depends what level of self-awareness or cognitive awareness this AI has to understand what we intended versus telling them. If it figured out the state space was\\... If it can model what the ramifications of what it does are on itself, then ideally it would figure out what we intended for it to do, not what we told it to do it. So it\\'s one thing if it goes off the rails and says say, actually, this is the only way to solve\\-- That\\'s a very Asimov-esque dystopia, right. But hopefully, if it can model that and say like, Oh actually, you know, the humans will delete me if I do something that\\'s exactly what they told me to do and not what they intended for me to do, then ideally we\\'d be past that point.\n\n**0:15:15.3 Vael:** I see, yeah. Cool. Let me try to engage with that. So I\\'m like, Alright, where we\\'ve told the CEO AI that I want you to maximize profit, but not have side-effects, and it\\'s like, Okay, cool. Side-effects is pretty undefined, but maybe it has a very good model of what humans want, and can infer exactly what the humans want and can model things forward, and will know that if it pollutes, then the humans will be unhappy. I do think if you have a great enough model of the humans in AI, and AI is incentivized to make sure to do exactly what the humans want, then this is not a problem.\n\n**0:16:00.6 Interviewee:** Right. The risk, of course, is that it figures out all the legal loopholes, and it\\'s like, Oh great, I\\'ve figured out exactly what I need to do to not be held morally culpable for what choices I\\'ve made and get off scot-free, which is also a huge risk.\n\n**0:16:19.2 Vael:** Yeah, yeah. So I think with the current day systems, they\\'re much dumber, and so if you\\'re like, Alright AI, I want you to get a lot of points then maybe it will get a lot of points through some little side loop instead of winning the game. That\\'s because we haven\\'t specified what we wanted exactly, and it can\\'t infer that. But you\\'re saying as we get more advanced, we will have systems that can at least know what humans want them to do, whether or not they follow through with that is a different question.\n\n**0:16:51.7 Interviewee:** Yeah. Yeah, and at some point, there\\'s still the huge element of what the designer of the algorithm put in. There is some choice on the loss function. Is it average, is it worst case? Is it optimizing for the highest likelihood, the worst likelihood, I feel like that would also be a huge change in how it does it. So it\\'s not going to be fully up to the algorithm itself, they\\'re still going to be human choices in building out that algorithm that determine how it interacts with other humans.\n\n**0:17:31.1 Vael:** Yeah, this is in fact the whole question, I think. What do you put in the loss function such that it will do whatever things you want to do. Yup. Yeah, okay, cool. So I have a second argument on top of that one, which I think is pretty relevant. So say we have the CEO AI, and it has pretty good models of humans, and it can model humans modelling, and it does multi-step planning. 
And because we figured, humans figured that we should have some safety mechanisms, so maybe we don\\'t let the AI make any big decisions unless it passes that decision through us. And so they\\'ve asked the AI for a one-page memo on this upcoming decision. And the AI is like, Cool, well, I obviously have a lot of thinking and information that I can\\'t put everything in a one-page memo, and so humans are expecting me to condense it. But I notice that sometimes if I include some types of information in this memo, then the humans will shut me down and that will make me less likely to be able to achieve the goal they programmed in. So why don\\'t I just leave out some information such that I have a higher likelihood of achieving my goal and moving forward there. And so this is a story that\\'s not like the AI has self-preservation built in it, but rather as an agent optimizing a goal, a not perfect goal, then it ends up having an instrumental incentive to stay alive and to self-preserve itself. So the argument is, what do you think of the argument, \\\"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing any goals, and this is dangerous?\\\"\n\n**0:19:08.7 Interviewee:** I think we already see this with not highly intelligent systems. There was this paper from 2019 where they analyzed the preventative healthcare recommendations from some insurance company algorithm setup. And they found that it consistently would recommend people of color to go for preventative healthcare less. So they were looking through it, and they realized that the algorithm is optimizing to minimize the dollar spent by the patient, and so it figured out that, Hey, folks who are a part of majority communities go for preventative healthcare and they save dollars by getting preventative healthcare versus folks in minority communities tend to not get preventative healthcare, but most of them end up not ever getting specialist care either, so it\\'s actually more dollars saved if we just don\\'t give them preventative healthcare. I think that that could be pretty likely. Again, if we have this highly intelligent system that\\'s able to model so much, then we should be kinda skeptical of what comes in the memos. But yeah, I\\'d say that that\\'s possible, that it just gets stuck on one of those loops, like, Oh, this is best in\\...\n\n**0:20:44.7 Vael:** Yeah, interesting. Yeah, so the example you gave seems to me like it\\'s some sort of alignment problem where we\\'ve tried to tell what we want, which is presumably like good healthcare for everyone that\\'s sort of cheap. And instead it is doing something not what we intended it to do, like, whoops, that was like a failure. We\\'ve put in a failure in terms of the loss function that we\\'re optimizing. Something that\\'s close, but not quite the problem. Then yeah, I think this argument is, as we get very intelligent systems, if the loss function that we put in is not exactly what we want, then we might have it optimizing something kind of close, and then it will be incentivized to continue pursuing that original goal, and in fact, maybe optimizing against humans trying to change it. Which is interesting. So, if one buys this argument of instrumental incentives of an agent optimizing whatever goal we put in, then you have ideas like an agent that is optimizing for not being shut down.
So that can involve deception and acquiring resources and influence and also improving itself, which are all things that are good things that are able to help you better achieve goals. Which feels pretty worrying to me if this is the case, because if this is one of the default ways that we\\'re developing AI, then when we actually get advanced AI and then we\\'ll maybe have a system that is as or more intelligent than us optimizing against humans. And I\\'m like, \\\"Wow, that seems like real bad.\\\"\n\n**0:22:14.0 Interviewee:** Yeah. Yeah, that\\'s a nightmare scenario, which is Matrix-esque, right? Yeah.\n\n**0:22:21.5 Vael:** Yeah. What is the bad AI in the Matrix doing? Is there a bad AI in the Matrix?\n\n**0:22:28.1 Interviewee:** Yes. So the bad AI\\... It\\'s sort of the same idea as the Terminator, where the Skynet of the Matrix realizes that the best way to avoid conflict in humanity is to sort of keep humanity in a simulation. I think the robots and the humans end up fighting a war, and then they just plug the humans into the matrix, keeping them quelled that way and making the broader argument that living in a simulation is nicer than the world outside. So the AI is technically also doing something good for humanity and self-sustaining, and keeping human population alive.\n\n**0:23:09.2 Vael:** Interesting. Yeah. Cool, alright. So my interest, in general, is looking at long-term risks from AI. Have you heard of the\\-- well, okay, I\\'m sure you\\'ve heard about AI safety. What does AI safety mean for you, for the first question?\n\n**0:23:31.0 Interviewee:** Um\\...\n\n**0:23:32.3 Vael:** Or have you heard of AI safety?\n\n**0:23:34.8 Interviewee:** No, not really.\n\n**0:23:35.7 Vael:** Oh, interesting, cool!\n\n**0:23:37.5 Interviewee:** So can you tell me more?\n\n**0:23:41.3 Vael:** Yeah. So, AI safety means a bunch of different things to a bunch of different people, it\\'s quite a large little field. It includes things like surveillance, autonomous weapons, fairness, privacy. Things like having self-driving cars not kill people. So anything that you could imagine that could make an AI safe. It also includes things that are more in what is currently called AI alignment. Have you heard of that term before?\n\n**0:24:11.1 Interviewee:** No.\n\n**0:24:12.3 Vael:** Cool. AI alignment is a little bit more long-term focused, so imagining like as we continue scaling systems, how do we make sure that the AIs continue to do what humans intend them to do, what humans want them to do, don\\'t try to disable their off-switches just by virtue of optimizing for some goal that\\'s not what we intended. Trying to make sure that the AIs continue to be aligned with the humans. Where one of the definitions of alignment here is building models that represent and safely optimize hard-to-specify human values. Alternatively, ensuring the AI behavior aligns with the system designer intentions. So trying to prevent scenarios like the one you brought up where the designer puts in a goal that\\'s not quite what the humans want, but this can get much worse as systems get more and more powerful, one can imagine. Yeah, so there\\'s a community working on these sorts of questions. Trying to figure out how we could get AI aligned. Because people are like, Well, can\\'t you just put in an off switch, but this AI will probably be deployed on the Internet, and it can replicate itself presumably, if it\\'s smart enough, and also it may have an incentive to disable the off switch if we do things by default.
But some people are like, oh well, how do we solve that? How do we make it so that it doesn\\'t have an incentive to turn itself off? And one group is like, Oh well, instead of having AI optimizing towards one goal singularly, in which case it has an incentive to try to stop people from achieving its goal, one thing you could do is have it build in uncertainty over the reward function, such that the AI wants to be corrected and wants to be switched off so that it has more information about how to achieve the goal. And this means that AI no longer has an incentive to want to prevent itself from being switched off, which is cool. And you can sort of like build\\... that\\'s like an alignment solution you can build in. Another thing that people talk a lot about is if\\... we should obviously have human feedback if we\\'re trying to get an AI to do what humans want. So how do we build in human feedback when the systems are more and more intelligent than us and dealing with really big problems that are like, should we do this nuclear fusion reactor? I don\\'t know, the human doesn\\'t really know. What are all the considerations? Another question they tackle is, how do you get honest reporting out of AI? How do you build a loss function such that it\\'s incentivized to do honest reporting? And how do you get an AI in general to defer to humans kind of always while still being agentic enough to do things in the world?\n\n**0:26:39.3 Interviewee:** Got it.\n\n**0:26:40.4 Vael:** How does all that sound to you?\n\n**0:26:43.0 Interviewee:** Very important. I feel like AI safety is something we\\'re brushing up against the sharp edges of right now. But a lot of what AI alignment is looking into, I think, would also just apply to how we think about building these larger models. Because if that\\'s the goal of these large experiments, I would hope that they\\'re thinking about these right now, given their funding and the immense amount of time and effort they have that goes into it. Yeah. This is really interesting.\n\n**0:27:25.3 Vael:** Yeah. Most researchers are working on capabilities, which is what I just call moving the state of the field forward. Because AI is actually really hard, turns out, and so lots of people working on that, and there\\'s a lot of money in the space too, especially with applications. And then there\\'s a smaller community of people who are working on AI safety, more like short-term AI safety, like there\\'s plenty of problems with today\\'s systems. There\\'s an even smaller community of people who are working on long-term safety. They\\'re kind of the group that I hang out with. And I\\'m worried that the group of people working on long-term safety will continue to be significantly smaller than the capabilities people, because while there\\'s more money in the space now, because a couple of very rich people are concerned about it like Elon Musk, famously. But generally, it\\'s hard work. It\\'s like anticipating something in the future. Some people think it\\'s a very far future. It\\'s hard to speculate about what will go on, and that\\'s also just kinda like a difficult problem, alignment. It might not be trivial.\n\n**0:28:25.5 Interviewee:** Yeah, and I sort of see parallels with nuclear research that happened like 100 years back, right?
Or 80 years back, around the time of the Manhattan Project, where most of the scientists in New Mexico were focused on capabilities, like, \\"How do we make this thing go boom.\\" There was a small group of them who were worried about the ramifications of, \\"What would happen if we did drop our first nuclear bomb and stuff like that?\\" And of course, there\\'s always been the same thing about space exploration. So with nuclear stuff, I think that we ended up, again, with more worries about nuclear safety than nuclear alignment. So, \\"What do we do to keep reactors safe?\\" Instead of thinking about like, \\"Okay, should we pursue nuclear technologies where countries cannot enrich uranium and make warheads with it? What if we went with thorium reactors instead?\\" And again, it\\'s been more focused on capabilities and safety than alignment.\n\n**0:29:32.9 Vael:** Yeah, I don\\'t think we really need alignment with nuclear stuff because nuclear stuff doesn\\'t really have intelligence per se. It\\'s not an optimizer, is what I meant. Yeah. So you\\'re not trying to make sure that the optimizer does whatever your goals are.\n\n**0:29:51.5 Interviewee:** Yeah. It\\'s humans at the end of the day. Yes, yes, yes, that\\'s right.\n\n**0:29:55.2 Vael:** Yeah, yeah. I also think the Manhattan Project is a very good comparison here. Or nuclear stuff in general. There\\'s plenty of issues in today\\'s world, there\\'s so many issues in today\\'s world. But the reason I focus on long-term risks is because I\\'m worried about existential risks. So what happens if all of humanity goes extinct. I think there\\'s a number of things that are possibilities for existential risk. I think advanced AI is maybe one of the highest ranked ones in my head, where I\\'m like, \\"Well, if we have an AI system that is by default not completely aligned to human values and is incentivized against us, that\\'s bad.\\" Other ways that I think that we can have AI disasters are like AI assisted war, misuse, of course, and loss of control and correlated failures. So what if we use AI to automate food production and manufacturing production, and there\\'s some failure partway along the line and then correlated failure? Or what if it results in some sort of pollution? We\\'re not really sure who\\'s in charge of the system anymore, and then we\\'re like, \\"Oh, well, now, there\\'s some sort of poison in the air,\\" or there\\'s like something in the water, we\\'d like messed something up, but we don\\'t really know how to stop it. Like coordination failures is another thing that I think that can happen even before we do very advanced AI. So generally, I\\'m worried about AI.\n\nI think nuclear stuff is also\\... You can kill a lot of people with nuclear stuff. Biological weapons\\... you can do synthetic bio. There was that paper that came out recently that was like, \\"Well, what happens if you just put a negative sign on the utility function to generate cool drugs?\\" Then you get poisonous drugs, amazing. Yeah, that was a paper that just came out in Nature, I think maybe two weeks ago, made quite a splash. So I think if we had something that\\'s much more deadly, is harder, takes longer to appear, spreads faster than COVID, I\\'m like, hm, that\\'s\\... You may be able to\\... If people have bunkers, that\\'s okay, and if people are very distant, maybe it\\'s okay, but like\\... I don\\'t know. It\\'s not good. And then climate change also. I think that one will take much longer to kill humans.
In terms of the time scales that I\\'m thinking of. We\\'re looking at 3 degrees of warming in the next 100 years, or is it 50? I don\\'t quite remember, I think it\\'s like we\\'ve got a bit more time, and I kind of think that we might get technical solutions to that if we wait long enough, if we advance up the tech tree, if it takes a couple hundred years. So these are what I think of when I think of existential risks.\n\nAnd the Manhattan Project is interesting, tying it back. There was some risk that people thought that if you deployed a nuclear bomb, then it would set the atmosphere on fire, and thus result in nuclear winter and kill everyone. And it was a very small percentage, people thought it won\\'t happen, and they did it anyway, and I\\'m like, \\\"Oh boy, I don\\'t know how many\\... get lucky of those we have.\" Because AI might also go perfectly fine. Or it might not. And I think there\\'s a decent\\... In a study in 2017 by Grace et al., where they were asking researchers from NeurIPS and ICML how likely they thought it was that AI would have extremely bad consequences, and the median answer was like 5% or something, and I was like, \\\"5%?! That\\'s like real high, man.\\\"\n\n**0:33:03.0 Interviewee:** Yeah. \\[chuckle\\] And that\\'s sort of an indication of what percentage of ICML and NeurIPS researchers are working on like fairness and privacy. I feel would align pretty closely with that, yeah.\n\n**0:33:15.9 Vael:** Oh, interesting. Yeah. Yeah, so very bad outcomes could be many different things. I think not many people are thinking about what happens if you literally kill all humans because it\\'s a weird thing to have happened, but I think we\\'re in a very weird time in history where 10,000 years ago everything was the same from lifetime to lifetime, but now we have things like nuclear weapons where one person or a small chain of people can kill like so many more humans than previously.\n\n**0:33:42.9 Interviewee:** Yeah. And yeah, speaking of tech tree, one sort of linchpin that would really accelerate if we can get new AI, highly intelligent AI, would be if quantum computing becomes feasible. And we\\'re actually able to run AI on quantum computers, then I think we\\'re way closer to actually having a highly intelligent AI because that\\'s an infinite state space right there.\n\n**0:34:22.3 Vael:** Yeah. Yeah, yeah, I know very little about quantum myself, because I was like, \\\"What are the hardware improvements that we could see?\\\" And people are like, \\\"Optical, maybe coming in optimistically in five years from now,\\\" and I was like, \\\"Okay, that\\'s soon.\\\" And then there\\'s quantum where people are like, \\\"That\\'s significantly further away,\\\" and I\\'m like, \\\"Okay.\\\" And then software improvements also. So there\\'s just been a lot of progress in AI recently, and we\\'re training all the young people, and there\\'s China-US competition, and there\\'s just a ton of investment right now, where I\\'m like, \\\"We\\'re moving fast.\\\" So interesting.\n\n**0:34:56.4 Interviewee:** Yep, yep, yep.\n\n**0:34:57.8 Vael:** Yeah. Well, you are very agreeable to my point of view here. \\[laugh\\]\n\n**0:35:06.3 Interviewee:** Yeah, I\\'m just pessimistic, and I\\'ve watched too much sci-fi dystopia, I guess. One of the downsides of\\... 
It\\'s great to democratize AI, but if your systems that say like, \\\"Hey, just upload your dataset, and we\\'ll give you good AI at the end of it,\\\" if it\\'s not at least asking you questions about like, \\\"What sort of fairness do you want? What should the outcome be?\\\" Most humans themselves are going to be thinking like, \\\"Oh, give me optimal, minimum cost or something like that.\\\" Humans already don\\'t factor human values in necessarily when deciding what to do. So I\\'m just pessimistic about how well we can do it. Like 50 years ago, people thought that we\\'d be in space colonies by now, but sadly, we\\'re not.\n\n**0:36:10.5 Vael:** Yeah, very hard to predict the future. Okay, so my second to last question is: what would convince you or motivate you to work on the safety type of areas?\n\n**0:36:25.8 Interviewee:** If I saw that this was coming much sooner. I think the fact that I\\'m seeing this as like 100 years out, a lot of smarter than me researchers will hopefully be concerned enough about it.\n\n**0:36:44.3 Vael:** Yeah. How soon would be soon enough that you\\'re like, \\\"Oh, that\\'s kind of soon?\\\"\n\n**0:36:47.7 Interviewee:** Fifty.\n\n**0:36:49.7 Vael:** Fifty?\n\n**0:36:50.4 Interviewee:** Yeah. If it\\'s fifty, then that\\'s in my lifetime. And I\\'d need to start worrying about it. That would be a bigger\\... The existential risk of that would be much higher than the risk I see of just AI safety in day-to-day life, so that\\'s sort of how I\\'m weighing it. So for me, it\\'s like: 100, it\\'s like AI safety matters more. Yeah.\n\n**0:37:15.0 Vael:** Yep, that seems very reasonable to me. I\\'m like, \\\"Yep, it seems right.\\\" Cool. And then my last question is, have you changed your mind on anything during this interview, and how was this interview for you?\n\n**0:37:26.9 Interviewee:** Have I changed my mind? I\\'m definitely wondering if I\\'ve overestimated when highly intelligent AI could come through. (Vael: \\\"I can send you some resources so you can make your own opinion, etcetera.) Yeah. But\\... Otherwise, I don\\'t think I\\'ve changed my opinion. I still feel pessimistic, and I hope that we start moving towards smaller AI that solves one problem really well, and we don\\'t just think that like, \\\"Hey, it\\'s a perceptron. It figured out that this was the digit 9, and hence it can figure out a whole bunch of these other things.\\\" I hope that we don\\'t start barreling down that track for too much longer. Yeah.\n\n**0:38:21.5 Vael:** And when you say pessimistic, you mean like pessimistic about society and not pessimistic about the pace of AI? Or like societal impacts?\n\n**0:38:28.3 Interviewee:** About both. I think we tend to put stuff out without really, really considering the consequences of it. But also I think AI has done a bunch, but it requires a lot of energy and a lot of funding that I\\'m not sure necessarily is going to stay up unless we start seeing a lot bigger breakthroughs come through.\n\n**0:38:57.5 Vael:** Interesting. Yeah, I kind of think that the applications will keep it afloat for quite a while, and also it sounds like we might be entering a race, but I don\\'t know.\n\n**0:39:05.5 Interviewee:** True, yeah. Yeah, maybe that\\'s what has changed in my mind through this interview, is like, \\\"Okay, this is probably\\...Things are just going to keep going bigger.\\\"\n\n**0:39:18.9 Vael:** I think it\\'s quite plausible.\n\n**0:39:22.2 Interviewee:** Yeah, otherwise, interview was great. 
This was really interesting. For example, the stuff you brought up on\\... I would love it if you could send me papers on the\\-- (Vael: \\\"I would love to.\\\") \\--the uncertainty\\... You were talking like, programming a way to get out the AI turning off its own off-switch. I would love to read more of these alignment papers, they sound really cool.\n\n**0:39:51.2 Vael:** Awesome! Well, I\\'m excited. I will send you a whole bunch of resources probably, and I advise you\\-- it might be overwhelming so pick only the stuff that is good, and I\\'ll bold some interesting stuff.\n\n**0:40:00.9 Interviewee:** That would be super helpful. And hope you don\\'t mind my sending following up questions.\n\n**0:40:05.7 Vael:** No, no. Again, lovely. Very happy to do that.\n\n**0:40:09.4 Interviewee:** Yeah. Awesome. Yeah, thank you so much. This was, like I said, a really interesting experience. I\\'ve never had to think about longer term impacts. Most of my stuff is like, \\\"Okay, GDPR is out. What does this mean for AI?\\\" And that\\'s a very immediate concern. It\\'s not this like, Okay, where should\\... Or even thinking like, \"Okay, five years out, what do we want privacy legislation to look like?\\\" That\\'s something I think about, but not, \\\"Oh my God, there\\'s a decision-making AI out there. Does it care about my privacy?\\\" So, yeah.\n\n**0:40:47.0 Vael:** Yeah, yeah, I think people aren\\'t really incentivized to think about the long-term future.\n\n**0:40:51.8 Interviewee:** Yeah. Humans are just bad at that, right? Yeah.\n\n**0:40:53.5 Vael:** And it\\'s hard to forecast, so it makes sense.\n\n\\[closings\\]\n", "filename": "NeurIPSorICML_lgu5f-by Vael Gates-date 20220322.md", "id": "ed5ae8343adbac8f059d5f92608125d0", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "The AI revolution and international politics _ Allan Dafoe _ EAG 2017 Boston-by Centre for Effective Altruism-video_id Zef-mIKjHAk-date 20170618", "authors": ["Allan Dafoe"], "date_published": "2017-06-18", "text": "# The AI revolution and international politics (Allan Dafoe) - EA Forum\n\nArtificial intelligence (AI) is rapidly improving. Superhuman AI in strategically relevant domains is likely to arrive in the next several decades; some experts think two. This will transform international politics, could be profoundly destabilizing, and could pose existential risks. Urgent research is needed on AI grand strategy. \n\nThis requires a careful examination of humanity’s highest interests in the era of [transformative AI](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1), of the international dynamics likely to arise from AI, and of the most promising strategies for securing a good future. Much work will be required to design and enact effective global AI policy.\n\nBelow is a transcript of Allan Dafoe's talk on this topic from EA Global: Boston 2017. We've lightly edited the transcript for clarity.\n\n## The Talk\n\n**Nathan Labenz:** It is my honor to introduce Professor Allan Dafoe. He is an Assistant Professor of Political Science at Yale University and a Research Associate at the Future of Humanity Institute at Oxford, where he is involved in building the AI Politics and Policy Group. His research seeks to understand the causes of world peace and stability. Specifically, he has examined the causes of the liberal peace and the role of reputation and honor as motives for war. 
Along the way, he has developed methodological tools and approaches to enable more transparent, credible, causal inference. More recently, he has focused on artificial intelligence grand strategy, which he believes poses existential challenges and also opportunities, and which requires us to clearly perceive the emerging strategic landscape in order to help humanity navigate safely through it. Please welcome Professor Allan Dafoe to discuss his work on AI and international politics.\n\n**Allan Dafoe:** Thank you, Nathan.\n\nI will start by talking about war, and then we'll get to AI, because I think there are some lessons for effective altruism. \n\nWar is generally understood to be a bad thing. It kills people. It maims them. It destroys communities and ecosystems, and is often harmful to economies. We’ll add an asterisk to that because in the long run, war has had dynamic benefits, but in the current world, war is likely a negative that we would want to avoid.\n\nSo if we were going to start a research group to study the causes of war for the purposes of reducing it, we might ask ourselves, “What kinds of war should we study?” There are many kinds of wars that have different causes. Some are worse than others. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/3704d0af382f2c6f6a81f5e343d8b235de7194c13eadd941.png)\n\nOne classification that we might employ is how many people different kinds of wars kill. There are some wars that kill only 1,000 to 10,000, and some that kill 100,000 to a million. And so the x-axis \\[on this slide\\] shows the battle deaths in wars, and the y-axis is the fraction of wars of those kinds.\n\nAn instinct we might have is to say, “Let's study the wars that are most common — those in the first bin.” These are the wars that are happening today in Syria: civil wars. Availability bias would suggest that those are the wars we should worry about. Some of my colleagues have argued that great-power war — the big world wars — are a thing of the past. They say that the liberal peace, democracy, capitalism, and nuclear weapons have all rendered great-power war obsolete, and that we're not going to have them again. The probability is too low.\n\nBut as effective altruists, we know that you can't just round a small number down to zero. You don't want to do that. You want to try to think carefully about the expected value of different kinds of actions. And so it's important that even though the probability of a war killing a million, 10 million, 100 million, or a billion is very small, it's not zero.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/88d31db53c751e30c56002f433b71e481555d3f991354ca0.png)\n\nIf we look at the past 70 years of history, World War II stands out as the source of most of the battle deaths that have been experienced. That would suggest that fatalities are a good enough proxy for whatever metric of importance you have, and that we first want to make sure we understand World War II and the kinds of wars that are like it (i.e. that are likely to kill many people). \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/46adadd40e60253045d8682e04506a9575f94933db1214e5.png)\n\nWe can zoom out more, and see that World War I comes out of the froth of civil wars. 
And really, those two wars loom above everything else as what we want to explain, at least if we're prioritizing importance.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/2cbb5a0b533a6a66bae3fa4b71cf03a0487096bd8552ac85.png)\n\nWe can see that in this graph, which again has these bins depicting the size of violent quarrels. Relative to homicides, we see that the world wars in the “10 million” bin on the right contain, again, most of the deaths in violent quarrels. That also suggests that it's really important that we understand these big wars. \n\nBut of course, the next war need not limit itself to 99 million deaths. There could be a war that kills hundreds of millions or even 6.5 billion. The problem, empirically speaking, is that we don't have those wars and that data set, so we don't know how to estimate, non-parametrically, the expected value in those. We can try to extrapolate from what's very close to a power-law distribution. And no matter how we do it, unless we're extremely conservative, we get a distribution like this, which shows that most of the harm from war comes from the wars that kill a billion people or more:\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/3ad408f0ce60ad9764934ed99fe9672932aaedf48ec33101.png)\n\nOf course, we haven't had those wars. Nevertheless, this follows from other reasoning.\n\nWe can go still further. The loss from a war that killed 6.5 billion is not about just those 6.5 billion people who die. It's also about future people. And this is where the idea of existential risk and existential value comes in. We have to ask ourselves, “What is the value of the future? How much worth do we give to future human lives?” \n\nThere are many ways to answer that question. And you want to discount for model uncertainty and various things. But one thing that drives concern with existential risk, which I'm concerned with and the Future of Humanity Institute is concerned with, is that there's so much potential in the future. There are so many potential lives that could be lived. Anything that cuts those lives off has tremendous disvalue.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/5df0afb5e3d84f15b7548845bf803a545534af84b4272b07.png)\n\nOne estimate that’s quite conservative is this: If there are only a billion people living on Earth, and they continue living in a sustainable way for a billion years (which is doable), as long as we don't muck it up, that yields 10,000 trillion lives. That is a lot. And so anything that reduces the probability of extinction and of losing those 10,000 trillion lives, by even a little bit, has a lot of expected value. \n\nNow, that's a very small number, one ten trillionth (0.0000000000001), and it's hard to know if you're making less of an effect than that. It looks close to zero. The numbers aren't meant to be taken too seriously. They're more thinking heuristics. They illustrate that if you give value to the future, you really want to worry about anything that poses a risk of extinction. And one thing that I and others, such as the Open Philanthropy Project, have identified as a risk to the future is artificial intelligence. \n\nBefore telling you about the risk, I'm going to first tell you about what's up with AI these days. \n\nFor a long time, artificial intelligence consisted of what we would now call good old-fashioned AI. There was a programmer who wrote if-then statements. They tried to encode some idea of what was a good behavior that was meant to be automated. 
For example, with chess algorithms, you would have a chessmaster say, “Here are the heuristics I use. Here's the value function.” You put those heuristics in a machine, and the machine runs it more reliably and more quickly than a human. And that's an effective algorithm. But it turns out that good old-fashioned AI just couldn't hack a number of problems — even simple ones that we do in an instant, like recognizing faces, images, and other things.\n\nMore recently, what has sort of taken over is what's called “machine learning.” This means what it sounds like it means: machines learning, for themselves, solutions to problems. Another term for this is “deep learning,” which is especially flexible machine learning. You can think of it as a flexible optimization procedure. It's an algorithm that's trying to find a solution and has a lot of parameters. “Neural networks” is another term you've probably heard. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/9bc2e02dcbd7ccd91aecd035ada6d068e67b0489d534d628.png)\n\nThis slide shows the breakthrough recently in image classification arising from neural networks and the year-on-year improvements to the point where machines are now better than humans at image classification.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/c7874af0d4a75ba434c4f1e4d8e89d9d26c205afa815b9e1.png)\n\nAnother domain is generalized game-playing or arcade game-playing. Here are our target games. Probably not many of us have played these. DeepMind is a leading AI group within Google that learned to play Atari games at a superhuman level with no instructions about the nature of the universe (e.g., “What is time and space?”, “What is a bad guy?”, “Here's Pac-Man — what is a pellet and what is a ghost?”). All the machine is told or given is pixel input. It just sees the screen from a blank-state type of beginning. And then it plays the game again and again and again, getting its score, and it tries to optimize for the score. Gradually, over the span of about a day, it learns to make sense of this pixel input. It derives concepts of the bad guy, the ghosts, and the pellets. It devises strategies and becomes superhuman at a range of games.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/476507ddedcab38aee856c03dc8c9426c2ec63bd7e506a01.png)\n\nAnother domain where we see this is Go. The solution to Go is very similar. You take a blank-slate neural network that's sufficiently flexible and expose it to a lot of human games, and it learns to predict the human moves. Then you have the machine play itself again and again, on the order of about 10 million games. And it becomes superhuman. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/964474271c838297f75c82280f0e5e5f5b30189aca07ddd2.png)\n\nHere's Lee Sedol and AlphaGo, which is another product of DeepMind. And Lee Sedol was the first really excellent Go player who publicly played AlphaGo. And here he's saying in February 2016, “I think I won the game by a near-landslide this time.” Well, as I've alluded to, that didn't work out.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/d4c8c30a4d907469baff3f5ec0c68dade29901fef0bec641.png)\n\nHere he is after the first game: “I was very surprised because I didn't think I would lose.” \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/06f9a5d06e11384568202d4866d87f907c207c8217ed6f93.png)\n\nAnd unfortunately, he lost again: “I'm quite speechless. I'm in shock. I can admit that. 
The third game is not going to be easy for me.” He lost the third game. He did win the fourth game and he talks about how it was sort of the greatest moment of his life. And it's probably the last and only game played against a level of AlphaGo of that level or better that a human will ever win.\n\nI'm bringing up Lee Sedol and his losses for a reason. I think it serves as an allegory for humanity: We don't want to be caught off-guard when it's our AlphaGo moment. At some point, machine intelligence will be better than we are at strategically relevant tasks. And it would be prudent for us to see that coming — to have at least a few years’ notice, if not more, to think through how we can adapt our international systems, our politics, our notion of the meaning of life, and other areas.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/5c1463d2ae0a9796f9c804527a085b944314b6c0d463f1f2.png)\n\nWhat's driving this progress? Algorithms, talent, and data, but another big thing driving it is hardware. Computing inputs keep getting better at an exponential rate. This is sort of a generalized Moore's law across a range of inputs. And it's this persistent progress that makes Kurzweil's graph from 2001 seem not totally absurd. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/8340d08346567587be5442e31eafa1bf3da9396555268898.png)\n\nHere we have, on the first y-axis, calculations per second, per thousand dollars, and I added four recent dots from the past 17 years. And you see that basically we're on track. We have exponential improvements. What we don't know is when we get to transformative AI.\n\nNow, Kurzweil has this evocative second y-axis, where you have organisms we recognize: mice, humans, and all humans. It's not obvious what the mapping should be between calculations per second and transformative AI. Intelligence is not a single dimension, so we could get superhuman AI in some domains long before other domains. But what I think is right about this graph is that at some point between 2020 or 2025 and 2080, big things are going to happen with machine intelligence.\n\nIn our work, we want to be a bit more systematic about these timelines, and there are various ways to do it. One way is to survey AI researchers. This is the result of a recent survey. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/c3cbdda22da9887bdf8d12ef450f76ebdc91408359111991.png)\n\nSome takeaways are:\n\n1. There's huge disagreement about when human-level machine intelligence, defined here as machines better than all humans at every task, will come, as you can see by the gray S-curves.\n2. The group that we surveyed gives enough probability mass that by 100 years, there still won't be human-level machine intelligence.\n3. But in the next 10 or 20 years, this group gives a 10-20% chance that we will have reached it already. And if that probability seems right (and upon consideration, I think it does, and not just from using this as evidence), there’s more than sufficient warrant for us to invest a lot of resources into thinking very hard about what human-level AI would mean for humanity.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/b8af218dbc98985f543afd818a90274a072ee5cf7b07c0e0.png)\n\nHere are some tasks. I like to think of them as milestones and canaries. What are some things that machines will eventually achieve that are either noteworthy or strategically relevant (i.e. milestones)? 
And the canaries are those things that when they happen, signal to us that we’d better be paying attention because things are going to change quickly. I expect most of these tasks on the right-hand column will soon be moving over to the left-hand column, if they're not there already. So this is something else we're working on.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/95de8fd3d4eed18e5e5d86d36460f888ff5bf369bb13628e.png)\n\nMore generally, there's a whole host of challenges — near, medium, and long term — that we will be working on as a society, but also at the Future of Humanity Institute. There's a range of near-term issues that I'm not going to talk about. Each of those could occupy a workshop or a conference. I will say that when thinking about long-term issues, we also confront the near-term issues for one reason: because the long-term issues often look like the near-term issues, magnified a hundred-fold, and because a lot of our long-term insights, strategies, and policy interventions’ most appropriate place of action is in the near term.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/9bb96db8553ccebce756846c9bc7d53a3dea39b7b027d822.png)\n\nBut what are the long-term opportunities and risks? The opportunities are tremendous. We often just think it's greater wealth, and that maybe Google stock will go up. But it's a lot more: longevity, health, preventative medicine, material bounty that could be used to end poverty ( though it need not, because it will also likely come in a more unequal world), reduced environmental impact. DeepMind was able to reduce energy usage at Google data centers by 40%. AI could basically help with anything of value that is either the product of intelligence or depends on intelligence for its protection. And so if we have superhuman intelligence, then in principle, we can use that to achieve all of our goals. \n\nI'll also emphasize the last point: resilience to other existential risks. We're likely to face those in the next 100 years. And if we solve AI — if we build it well — then that could reduce those other risks by a large margin.\n\nBut of course, there are also risks with bringing into the ecosystem a creature that is better than we are at the thing that matters most for our survival and flourishing. I'm not going to go through this topology. \n\nI will illuminate the risk by quoting Max Tegmark and others:\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/003a94eb08240aea330c5281af1f484627f25c2097eb4424.png)\n\nI will also appeal to another authority from my world:\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/abaf1bc735f5bd59ac1ae90686093b9ac58ecc08c3b94b26.png)\n\nI'm a scholar of international relations. Henry Kissinger, as you know, is also very worried about AI. And this is sort of the definition of a grand strategist.\n\nIn addition to these quotes, the AI researchers we surveyed agree. We asked them what the long-term prospects are for human-level machine intelligence. And while most of the probability mass was on “extremely good” or on “balanced good,” the median is a 5% probability of AI being extremely bad or leading to human extinction.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/2cd2c30fca6d274c2de7972fa289e67a0f78a2769beaa7a8.png)\n\nIt's not often you get an industry that says that their activities give rise to a 5% chance of human extinction. 
I think it should be our goal to take whatever the real and most frequent number is, and push it as close to zero, getting as much of that probability mass up to the top. \n\nThere are two broad ways we can do that. One is to work on what's called AI safety. The computer scientists in the room and mathematicians can help build AI systems that are unlikely to misbehave without our intentions. And the other way is AI strategy, which I'm going to talk about.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/bdceccf65b1f386dc1e3fbd36650b6d5c92100a2b5e0f5b3.png)\n\nSo here's Stuart Russell explaining that we're not worried about machines suddenly waking up and deciding they want to build their own machine utopia and get rid of humans. It's not some kind of emergent consciousness. The worry is that they will be hyper optimizers, which is what we're building them to do — and that we will have not specified their value function correctly. Therefore, they could optimize for the wrong thing. This is broadly called “the control problem” or “the value alignment problem.” Here are some groups working on this or funding it. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/eba0d7dbb8307d5c2222e169316a99be1837f4eafbfb9b57.png)\n\nI'm going to keep moving. Here are some people who you can't really make out, but I will tell you that they are leading researchers and industrialists from the most prominent AI groups in the world. And they came together at the Asilomar Conference to really think seriously about AI safety, which I think is a tremendous achievement for the community — that we've brought everyone together like this. I'm showing this as a reflection of how exciting a time it is for you to be involved in this.\n\n___\n\nOne conjecture that's been posed is that AI value alignment is actually not that hard of a problem. As long as we have enough time, once we have the final system we want to deploy, we can test it, right? It's like drug tests. You have the thing you are thinking about deploying in the population. You just make sure it undergoes sufficient scrutiny. However, this conjecture goes, it is almost impossible to test AI value alignment if we don't have enough time. And if that's right, which seems plausible, then it directs attention to the world in which this system is being deployed. Is it one where the developers have the incentives and the time to do these safety tests? This is one of the big things people working on AI strategy think about — this issue of how to prevent a race in AI.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/4bae8114220ae884f5fd4e047931e5cec40546b525140be3.png)\n\nHere's the CEO of DeepMind, basically reinforcing this point: “We want to avoid a harmful race to the finish, where corner-cutting starts happening and safety gets cut. This is a big issue on a global scale, and it's extra hard when you're talking about national governments.” It's not just a race between companies in the US. It's a race between countries.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/527b6c489177f0ab61b9b3026f96fb446aa122f86555d657.png)\n\nSo what do we think about in AI strategy? A lot. One topic we think a lot about is what AI races and AI arms races would look like. Another whole class of issues is what AI could look like militarily. What are some implications of AI for the military in terms of balance of power, crisis stability, and uncertainty over capabilities? 
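As an aside, the failure mode Stuart Russell describes above, a strong optimizer pointed at a mis-specified objective, can be illustrated with a toy Goodhart-style sketch. Nothing below comes from the talk: the "true" objective, the proxy, and the numbers are invented. The point is only that a proxy which tracks what we want on ordinary behaviors can come apart badly once it is optimized hard.

```python
import numpy as np

# Toy Goodhart sketch with invented objectives: a measurable proxy correlates
# with the "true" objective on typical behaviors, but diverges at the extreme
# behavior a strong optimizer ends up selecting.

rng = np.random.default_rng(0)
behaviors = np.linspace(-10, 10, 2001)       # space of possible behaviors

def true_utility(x):
    # What we actually want: best near x = 1, sharply worse for extremes.
    return -((x - 1.0) ** 2)

def proxy_reward(x):
    # Easy-to-measure stand-in: agrees reasonably well near x = 1,
    # but keeps increasing as the behavior gets more extreme.
    return x

typical = rng.uniform(-2, 2, size=1000)
corr = np.corrcoef(true_utility(typical), proxy_reward(typical))[0, 1]
print(f"correlation on typical behaviors: {corr:.2f}")            # roughly 0.9

chosen = behaviors[np.argmax(proxy_reward(behaviors))]             # proxy-optimal behavior
print("behavior chosen by the proxy-optimizer:", chosen)           # 10.0
print("its true utility:", true_utility(chosen))                   # -81.0
print("true utility of the intended behavior:", true_utility(1.0)) # 0.0
```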
\n\nAnother thing we think about is what it means economically if we live in a world where AI is the engine of growth and value in societies, which increasingly seems to be the case. Of the top 10 firms by market capitalization, either five or six are AI companies: Google, Amazon, Apple, Microsoft. In such a world, what do countries do like Saudi Arabia or France, which don't have their own Google or Amazon, but want to be part of that value chain? We may be entering an era of AI nationalism, where countries want to build their own national champion. China is certainly in the business of doing this.\n\nThe last high-level category I’ll mention is the massive challenge of AI governance. This is an issue on a small, near-term scale. For example, how do we govern algorithms that are being used for judicial sentencing or self-driving cars? It’s also an issue on a long-run scale, when we consider what kind of electoral system or voting rules we want to use for the organization that's going to be deciding how to test and deploy superintelligence. These are very hard questions. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/69205e89fb37c8e4749f2e54ee44a600396973c900e71eb4.png)\n\nI want to make clear that these are questions being asked today by the leading AI groups. Here are Sam Altman and Demis Hassabis asking questions like “What is the voting rule that we should use for control of our superintelligence once we build it?” \n\nThe site for governance today is the [Partnership on AI](https://www.partnershiponai.org/). This is a private-sector organization that has recently brought in NGOs, including the Future of Humanity Institute. And it's plausible that this will do a good job guiding AI in the near term, and could grow in the longer term. At some point, governments are likely to get more involved. So that's a site for study and for intervention.\n\nAnother thing we can do is try to articulate principles of good governance of AI. Here are some principles that came out of the Asilomar Conference. \n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/de51d0a3c02f9140c300db799aceae13f6a419c83787f971.png)\n\nAgain, a hat tip to Max Tegmark for putting that together, and especially these principles. We might want to work to identify important principles that we can get different communities to agree on, and then formalize and institutionalize them to make sure they stick.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/28fb0ad537936cd3c1fe895a57c8d0c4d3e5f0765af1b0bf.png)\n\nIn summary, what's to be done? A lot of work. What kind of skills do we need? Virtually every skill set — people who can help us grow the community, conduct operations and administration, or just be enthusiastic and efficient. We need people to do policy engagement, outreach, and media strategy. For example, what should the media strategy of a group like the Future of Humanity Institute be when there's another self-driving car incident, or when truckers are being massively displaced from their sites of employment? These are important near-term issues, but also they're sites for having conversations about longer-term issues.\n\nWe're doing work on strategy, theoretical work, and mathematical modeling of tech races. We’re trying to understand AI development and what predicts innovation, measuring actual capabilities in different sites in the world's countries to understand the value chain and the supply chain of AI. We're surveying publics and elites around the world. 
We're trying to design safety standards and working with AI safety researchers, and tackling a range of other issues.\n\nIf this seems important and interesting to you, I strongly encourage you to get involved. The [AI Policy Career Guide](https://80000hours.org/articles/ai-policy-guide/) has some texts that can point you in the right direction. There’s also a reading list. And in general, just reach out to me. There are also people working on this at a range of sites, and we'd be very happy to help you be productively engaged. Thanks.\n\n## Q&A\n\n**Nathan:** Let’s have a seat. Thank you for the talk and for being here. We'll give a second for questions to come in from the audience \\[via the conference app\\].\n\nOne thing that I'm struck by is it seems like we're making a lot of progress. I've been involved in this community, sort of from the edges, for a number of years. And there was a time (seven to 10 years ago) when to even talk about something like this was extremely “fringe.” Only weirdos seemed to be willing to go there. Now we have respectable people like you and a growing body of academics who are getting involved. So it seems like there has been a lot of social progress. \n\nWhat about technical or practical progress on these issues? It seems that we're bringing people together, but what are those people producing so far? And should we feel any safer than we did 10 years ago?\n\n**Allan:** I can speak to the strategy side of it. One comment that was made at the Asilomar Conference that resonated as true to many people is that AI strategy today is where AI safety was two years ago. The first beneficial meeting for AI safety was in Puerto Rico. Just a handful of individuals in the world were thinking seriously and full-time about it. That has changed today. Now the leading AI groups have safety teams. There are people doing PhDs with an eye towards AI safety. And that's very exciting. And there has been a lot of technical progress that's coming out of that.\n\nAI strategy is just where AI safety was two years ago, but I think it's rapidly scaling up. A lot of thinking has been done, but it's not public yet. I think in the coming year you will start to see a lot of really insightful strategic analysis of AI existential risk.  \n\nI will also say I've given some talks like this (e.g., political science workshops) to other respectable audiences and none of the PhD students think this is crazy. Some of them think, “Oh, I don't have to quit smoking because of this.” That was one comment. But they all think this is real. And the challenge for them is that the discipline doesn't entirely support work on the future.\n\nOne question I got when I presented at Yale was “How will you be empirical about this?” Because we're social scientists, we like data, and we like to be empirical. And I remarked that it is about the future, and it's hard to get data on that, but we try. So I think it's a challenge for currently existing disciplines to adapt themselves to this problem, but increasingly we're finding good people who recognize the importance of the problem enough to take the time to work on it.\n\n**Nathan:** So, the first audience question, which I think is a good one, is this: Could you provide more detail on what an AI governance board might look like? 
Are you thinking it will be a blue-ribbon panel of experts or more of a free-for-all, open-democracy type of structure, where anyone can contribute?\n\n**Allan:** I think there are a lot of possibilities and I don't have a prescription at this point, so I'm not going to answer your question. But these are the issues we need to think through. There are trade-offs that I can talk about. \n\nThere's the issue of legitimacy. The UN General Assembly, for example, is often seen as a legitimate organization because every country gets a vote, but it's not necessarily the most effective international body. Also, you have to weigh your institutions in terms of power holders. So if you have the most ideal governance proposal, it might be rejected by the people who actually have the power to enact it. So you need to work with those who have power, make sure that they sign onto a regime, and try to build in sites of intervention — the key properties of whatever this development and governance regime is — so that good comes out of it.\n\nI'll mention a few suggestions. Whatever development regime it is, I think it should have:\n\n1. A constitution — some explicit text that says what it's about and what it’s trying to achieve.\n2. Enough transparency so that the relevant stakeholders — be that the citizens of the country, if not citizens of the world — can see that the regime is, in fact, building AI according to the constitution. And I should say that the constitution should be a sort of common-good principle type thing.\n3. Accountability, so that if the regime isn’t working out, there's a peaceful mechanism for changing the leadership.\n\nThese are basic principles of institutional design, but it's important to get those built in.\n\n**Nathan:** So speaking of people in power, I haven’t seen this clip, but President Obama apparently was asked about risks related to artificial intelligence. And the answer that he gave seemed to sort of equate AI risk with cybersecurity risks that we know and love today. Do you think that there is a sufficient understanding at the highest levels of power to even begin to make sense of this problem? Or do we have a fundamental lack of understanding that may be quite hard to overcome?\n\n**Allan:** Yeah. That's a great clip. I wish I could just throw it up really quickly. We don't have the AI yet to just do that. I encourage you to watch it. It's hard to find it. [_Wired_ put it on their page](https://www.youtube.com/watch?v=72bHop6AIcc). He's asked about superintelligence and if he's worried about it and he pauses. He hesitates and gives sort of a considered “hmm” or sigh. And then he says, “Well, I've talked to my advisors and it doesn't seem to be a pressing concern.” But the pause and hesitation is enough to suggest that he really did think about it seriously. I mean, Obama is a science fiction fan. So I think he probably would have been in a good place to appreciate other risks as they arise. \n\nBut I think a lot of people in government are likely to dismiss it. Many reports from the military or government have put superintelligence worries at least sufficiently distant enough that we don't really need to think about or address it now. I will say though, that cybersecurity is likely to be a site where AI is transformative, at least in my assessment. 
So, that's one domain to watch in particular.\n\n**Nathan:** Here’s another question from the audience: If there is a 5% chance of extinction due to AI, one would not be unreasonable to jump to the conclusion that maybe we should just not do this at all. It's just too hot to touch. What do you think of that idea? And second, is there any prospect of making that decision globally and somehow sticking to it?\n\n**Allan:** Yeah. I'll flick back to the slide on opportunities. I actually had a conversation the other day with family members and friends, and one person at the table asked that question: If it's so risky, why don't we not do it? And then another friend of the family asked, “What are the impacts of AI for medicine and health, and for curing diseases?” And I think in many ways those are two sides of the policy decision. There are tremendous opportunities from AI, and not just material ones, but opportunities like curing Alzheimer's. Pretty much any problem you can imagine that's the product of intelligence could \\[be solved with\\] machine intelligence. So there's a real trade-off to be made.\n\nThe other issue is that stopping AI progress is politically infeasible. So I don't think it's a viable strategy, even if you thought that the trade-off weighed in favor of doing so. And I could talk a lot more about that, but that is my position.\n\n**Nathan:** Probably the last question that we can take due to time constraints is this: Thinking about the ethical direction that we want to take as we go forward into the future — the value alignment problem — you had posed that notion that if we have two years, we can probably figure it out.\n\n**Allan:** Yeah.\n\n**Nathan:** But if we don't, maybe more than likely we can't. That strikes someone in the audience and I would say me too, as maybe a little too optimistic, because we've been working for thousands of years on what it means to have a good life and what good is.\n\n**Allan:** Right.\n\n**Nathan:** Do you think that we are closer to that then than maybe I think we are? Or how do you think about the kind of fundamental question of “What is good?” in the first place?\n\n**Allan:** Right. To be clear, this conjecture is about whether a single person who knows what they want can build a superintelligent machine to advance those interests. It says nothing about whether we all know what we want and could agree on what we want to build. So can we agree on the political governance question: Even if we all have fundamental preferences, how do we aggregate those in a good way? And then there's the deeper question of what should we want? And yeah, those are hard questions that we need people working on, as I mentioned. In terms of a moral philosophy in politics, what do we want? We need your help figuring that out.\n\n**Nathan:** Well, thank you for wrestling with these issues and for doing your best to protect our future. 
Professor Allan Dafoe, thank you very much.", "filename": "The AI revolution and international politics _ Allan Dafoe _ EAG 2017 Boston-by Centre for Effective Altruism-video_id Zef-mIKjHAk-date 20170618.md", "id": "9238810574cadd9f5bed1efec1039091", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Rohin Shah_ WhatΓÇÖs been happening in AI alignment_-by EA Global Virtual 2020-date 20200321", "authors": ["Rohin Shah"], "date_published": "2020-03-21", "text": "# Rohin Shah: What’s been happening in AI alignment?\n\nInterviewee: Rohin Shah\nDate: 2020-03-21\n\nWhile we haven’t yet built aligned AI, the field of alignment has steadily gained ground in the past few years, producing many useful outputs. In this talk, Rohin Shah, a sixth-year PhD student at UC Berkeley’s Center for Human-Compatible AI (CHAI), surveys conceptual progress in AI alignment over the last two years.\n\nWhile Rohin started his PhD working on program synthesis, he became convinced that it was important to build safe, aligned AI, and so moved to CHAI at the start of his fourth year. He now thinks about how to provide specifications of good behavior in ways other than reward functions. He is best known for the Alignment Newsletter, a popular weekly publication with content relevant to AI alignment.\n\nBelow is a transcript of Rohin's talk, which we've lightly edited for clarity. You can also watch it on YouTube and discuss it on the EA Forum.\n\n# The Talk\n\nHi, everyone. My name is Rohin Shah. I'm a sixth-year PhD student at the Center for Human-Compatible AI at UC Berkeley. My research is generally on what happens when you try to do deep reinforcement learning in environments that involve humans. More broadly, I work on technical AI safety. I also write the Alignment Newsletter.\n\nToday, I'll cover what's been happening in AI alignment. I should warn you: While this talk doesn't assume any technical knowledge of AI, it does assume basic familiarity with the arguments for AI risk.\n\nI'll be surveying a broad swath of work rather than focusing on my personal interests. I'm hoping that this will help you figure out which parts of AI alignment you find exciting and would like to delve into more deeply.\n\nA lot of the talk is based on a literature review I wrote a few months ago. You can find references and details in that review.\n\n[Flow Chart of AI Alignment Landscape]\n\nWith that, let's get started. Taking a high-level, outside view, the reason that most people work on AI safety is that powerful AI systems are going to be a big deal. They're going to radically transform the world that we live in. Therefore, we should probably put some effort into making sure that this transformation goes well.\n\nIn particular, if AI systems are smarter than we are, then they could become the dominant force on the planet, which could be bad for us — in the same way that gorillas probably aren’t [thrilled] about how we have taken over all of their habitats. This doesn't necessarily mean that [AI will create] be an x-risk [existential risk]. It just means that we should have a sound technical reason to expect that the powerful AI systems we build are actually beneficial for us. And I would argue that we currently do not have such a reason. Therefore, the case for working on AI alignment is that we really should be creating this reason.\n\nI want to note that there’s a lot of disagreement over specific sub-questions in AI safety. 
That will become more evident over the rest of this talk. But my impression is that virtually everyone in the field agrees with the basic, high-level argument [that we should have a good reason for expecting AI systems to be beneficial].\n\nWhat are the specific risks we're worried about with AI? One issue is that humans aren't ready to deal with the impacts of AI. People tend to be in conflict a lot, and the US-China relationship is a big concern [in the AI community]. AI will enable better and better ways of fighting. That seems pretty bad. Maybe our fights will lead to bigger and bigger impacts; at some point, that could result in extinction-level events. Or perhaps AI leads to technological progress at such a fast pace that we’re unable to [adjust]. As a result, we could lock in some suboptimal values [that AI would act on for the rest of humanity’s future]. In both of these scenarios, the AI system wouldn’t intentionally cause x-risk, but it nonetheless would happen.\n\nI'm not going to focus too much on this, but will note that some people are talking about preference aggregation. This is the idea that the AI system aggregates preferences across all stakeholders and does its thing — and then everyone agrees not to [oppose] the results. Similarly, we could try to [arrive at a] better metaphilosophy to avoid problems like value lock-in.\n\nAnother outside view that people take, aside from “AI is powerful and a big deal,” is that optimization leads to extreme outcomes. To take a very simple example, men in the US are, on average, about five feet, 10 inches tall. But very few basketball players, who are selected for height, are five feet, 10 inches. Most are well over six feet. When you select for something and have optimization pressure, you tend to get extreme outcomes. And powerful AI systems are going to be powerful optimizers. As a result, we probably shouldn't expect our everyday reasoning to properly account for what these optimizers will do.\n\nTherefore, we need to [cultivate] more of a security mindset and look for arguments that quantify every possibility, as opposed to the average possibility. This mindset inspires researchers, especially at MIRI [the Machine Intelligence Research Institute], to try to understand how intelligence really works, so that we can then make well-designed AI systems that we understand. This has led to research on embedded agency, partial agency, and abstraction.\n\nA bit about embedded agency: This is one of MIRI’s main research programs. The basic idea is that, according to the standard model of reinforcement learning and [our understanding of] AI more generally, an environment takes in actions and produces [observable phenomena] and rewards. Then, completely separate from the environment, an agent [observes these phenomena] and takes actions as a result. But that’s not how agents work. I’m an agent, yet I am not separate from the environment; I am a part of it. This leads to many philosophical problems. I would love to go into more detail, but don't have too much time. There's a great sequence on the AI Alignment Forum that I strongly recommend.\n\n——\n\nThe next problem I want to talk about is one that I call “the specification problem.” It's also called “outer alignment.” Basically, the way we build AI systems right now is by assuming that we have some infallible specification of the optimal behavior in all possible situations, as though it were handed down to us from God. Then, we must figure out how to meet that specification. 
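In code, that assumption looks roughly like the sketch below: a standard training loop in which a hand-coded reward function plays the role of the supposedly infallible specification, and the learner's only job is to optimize whatever it says. The environment, actions, and reward here are invented stand-ins, not anything from the talk.

```python
import random

# Schematic sketch: the designer hands the learner a reward function as if it
# were a perfect specification of good behavior, and the learner optimizes it.

ACTIONS = ["clean", "wait"]

def reward(state, action):
    # The "given" specification, hard-coded by the designer.
    return 1.0 if (state == "dirty" and action == "clean") else 0.0

def train(episodes=200):
    # Trivial bandit-style learner: estimate each action's average reward
    # over one-step episodes, then act greedily.
    totals = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        state = "dirty"
        action = random.choice(ACTIONS)
        r = reward(state, action)        # optimize whatever the spec says
        totals[action] += r
        counts[action] += 1
    return max(ACTIONS, key=lambda a: totals[a] / max(counts[a], 1))

if __name__ == "__main__":
    print("policy learned under the given specification:", train())  # "clean"
```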
But of course, we can never actually get such a specification. The classic paperclip maximizer thought experiment shows that it's quite hard to specify the behavior of an AI making paperclips in a reasonable and sane way. This is also the main problem that Stuart Russell discusses in his book Human Compatible. Organizations [whose work includes addressing] this specification problem include CHAI, OpenAI, DeepMind, and Ought.\n\nThe main proposed way of solving the specification problem is to do some form of value learning. One thing I want to note: Value doesn't necessarily mean “normative value.” You don't necessarily need to be thinking about population ethics. For example, a robot that learned how to clean your room, and then reliably did so, would count as [an example of] value learning. Maybe we should be calling it “specification learning,” but value learning seems to be the name that has stuck.\n\nThe types of value learning include CIRL (or “assistance games”). CIRL stands for “cooperative inverse reinforcement learning.” This is a particular formalization of how you could approach value learning, in which the world contains a single human who knows the reward function — the true specification — but, for some reason, can't communicate that explicitly to the agent. There is also an agent whose goal is to infer what the human’s specification is, and then optimize for it. And because the agent no longer has a definite specification that it's trying to optimize, and it's instead uncertain over what it's trying to optimize, this results in many nice properties.\n\nFor example, the agent might ask you about what you want; it may try to clarify what your preferences are. If you try to shut it down, it will reason that it must have been doing a poor job of helping you. Therefore, it's going to allow you to shut it down, unlike a classic expected utility maximizer, which will say, “No, I'm not going to shut down, because if I am shut down, then I can't achieve my goal.”\n\nThe unfortunate thing about assistance games is that they are [exceptionally] computationally intractable. It's very expensive to solve a CIRL game. In addition, it requires a good model of how human preferences relate to human behavior, which — as many of the social sciences show — is a very difficult problem. And there is a theorem that says it is impossible to solve in the super-general case. Although, of course, we don't actually need the super-general case; we only need the case that applies in the real world. Instead of being impossible to solve, [the real-world case] is merely very, very difficult.\n\nNext, we have [strategies based on agents] learning human intent. This is a broad category of possible communication protocols that a human could use to communicate the specification to the agent. So perhaps a human could demonstrate the optimal behavior to the agent, and from that, the agent could learn what it's supposed to do. (This is the idea behind inverse reinforcement learning and imitation learning.) Alternatively, perhaps the human could evaluate proposed hypothetical behaviors that the agent might execute, and then the agent could reason out what it should be doing.\n\nNow we come to intent alignment, or “corrigibility.” This is somewhat different. While the previous approaches try to specify an algorithm that learns values, with intent alignment we instead build an agent that tries to do what we want it to do. Put another way, we're trying to bake into the agent the motivation to be helpful to us. 
Then, if we have an agent [whose sole motivation] is to be helpful to [a human], that will naturally motivate it to do many other things that we want. For example, it's going to try to clarify what my [travel] preferences are in the same way that a good personal assistant would, so that it doesn’t have to bother me when I ask it to book me a flight.\n\nThat covers a broad spectrum of approaches to value learning. However, there are still a few problems that arise. Intuitively, one big one is that, since the agent is learning from our feedback, it's not going to be able to do better than we can; it won’t be able to scale to superhuman performance. If we demonstrate the task to the agent, it won’t be able to perform the task any better than we could, because it’s receiving no information on how to [go about that]. Similarly, if we're evaluating the agent's behavior, it won't be able to find good behaviors that we wouldn't recognize as good.\n\nAn example is AlphaGo's move 37 [in its match against Go champion Lee Sedol]. That was a famous move that AlphaGo made, which no human ever would have made. It seemed crazy. I think it was assigned a less than one-in-10,000 chance of succeeding, and yet that move ended up being crucial to AlphaGo's success. And why could AlphaGo do this? Because AlphaGo wasn't relying on our ability to determine whether a particular move was good. AlphaGo was just relying on a reward function to tell it when it had won and when it had lost, and that was a perfect specification of what counts as winning or losing in Go. So ideally, we would like to build superintelligent AI systems that can actually exceed human performance at tasks, but it's not clear how we do this with value learning.\n\nThe key idea that allows current approaches around this is: Our AI systems are never going to exceed the supervision that we give them, but maybe we can train our AI systems to approximate what we would do if we had an extremely long time to think. Imagine I had 1,000 years to think about what the best thing to do was in a certain scenario, and then I shared that with an AI system — and then the AI system properly approximated my suggestion, but could do so in a few minutes as opposed to 1,000 years. That would presumably be a superintelligent AI.\n\nThe details for how we take this insight and arrive at an algorithm so that we can try it soon — not in 1,000 years — are a bit involved. I'm not going to go into them. But the techniques to look for are iterated amplification, debate, and recursive reward modeling.\n\nAnother problem with value learning is the informed oversight problem: Even if we're smarter than the agent that we're training, we won’t be able to effectively supervise it in the event that we don't understand why it chose a certain action. The classic example is an agent tasked to write a new novel. Perhaps it has access to a library where it's supposed to learn about how to write books, and it can use this in order to write the novel, but the novel is supposed to be new; [the task requires more than] just memorizing a novel from the library and spitting it back out again. It’s possible that the agent will look at five books in the library, plagiarize chunks from all of them, and put those together into a book that reads very nicely to us, but doesn't really solve the task because [the novel is unoriginal]. How are we supposed to tell the agent that this was bad? 
In order to catch the agent looking at the five books and stealing sentences from them, we'd have to read the entire library — thousands of books — and search for evidence of plagiarism. This seems too expensive for oversight.\n\nSo, it may be significantly more costly for us to provide oversight than it is for the agent to take actions if we cannot see how the agent is taking those actions. The key to solving this is almost obvious. It's simply to make sure you know how the agent is taking their actions. Again, there are many details on exactly how we think about this, but the term to look for is “ascription universality.” Essentially, this means that the supervisor knows everything that the agent knows, including any facts about how the agent chose its output.\n\n[In the novel-writing example], if we were ascription-universal with respect to the agent, then we would know that it had taken sentences from five books, because the agent knows that. And if we knew that, then we could appropriately analyze it and tell it not to plagiarize in the future.\n\nHow do we create this property? Sadly, I'm not going to tell you, because again, I have limited time. But there's a great set of blog posts and a summary in the Alignment Newsletter, and all of those items are in my literature review. Really, I just want you to read that link; I put a lot of work into it, and I think it's good.\n\n——\n\nLet's move on to another top-level problem: the problem of mesa optimization. I'm going to illustrate mesa optimization with a non-AI example. Suppose you're searching for a Python program that plays tic-tac-toe well. Initially you find some programs that have good heuristics. Maybe you find a program that always starts at the center square, and that one tends to win a little more often than the others. Later, you find a program that makes sure that anytime it has two spots in a row and the third spot is empty, it plays in that third spot and wins. One that does that in a single step starts to win a bit more.\n\nEventually, you come across the minimax algorithm, which plays optimally by searching for the best action to take in every situation. What happened here was that in your search for optimal Python programs, you ended up finding a program that was itself an optimizer that searched possible moves in tic tac toe.\n\nThis is mesa optimization. You have a base [or “outer”] optimizer — in this case, the search over Python programs — and in the course of running that base optimizer, you find a new optimizer, which in this case is the minimax algorithm.\n\nWhy is this weird example about programs relevant to AI? Well, often we think about AI systems that are trained using gradient descent. And gradient descent is an optimization algorithm that searches over the space of neural net parameters to find some set of parameters that performs well on a loss function.\n\nLet's say that gradient descent is the outer optimizer. It seems plausible that mesa optimization could happen even with gradient descent, where gradient descent finds an instantiation of the neural net parameters, such that then the neural net itself, when it runs, performs some sort of optimization. Then the neural net would be a mesa optimizer that is optimizing some objective, which we would call the mesa objective. 
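[To make the tic-tac-toe example concrete, here is a rough, illustrative sketch of the kind of program such an outer search might eventually find: a program that is itself an optimizer, searching over future moves rather than applying a fixed heuristic. This code is a minimal sketch added for illustration; the names and structure are invented and are not from the talk.]\n\n```python\n# Illustrative sketch: a tic-tac-toe player that is itself an optimizer.\n# The board is a 9-character string of 'X', 'O', or ' ', indexed 0-8.\n\nWIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),\n             (0, 3, 6), (1, 4, 7), (2, 5, 8),\n             (0, 4, 8), (2, 4, 6)]\n\ndef winner(board):\n    \"\"\"Return 'X' or 'O' if someone has three in a row, else None.\"\"\"\n    for a, b, c in WIN_LINES:\n        if board[a] != ' ' and board[a] == board[b] == board[c]:\n            return board[a]\n    return None\n\ndef minimax(board, player):\n    \"\"\"Return (score, move) scored from X's perspective: +1 X wins, -1 O wins, 0 draw.\"\"\"\n    w = winner(board)\n    if w is not None:\n        return (1 if w == 'X' else -1), None\n    if ' ' not in board:\n        return 0, None  # draw\n    outcomes = []\n    for move, cell in enumerate(board):\n        if cell != ' ':\n            continue\n        child = board[:move] + player + board[move + 1:]\n        score, _ = minimax(child, 'O' if player == 'X' else 'X')\n        outcomes.append((score, move))\n    # The inner optimization step: X maximizes the score, O minimizes it.\n    return max(outcomes) if player == 'X' else min(outcomes)\n\n# Example: the optimal first move for X on an empty board.\nprint(minimax(' ' * 9, 'X'))\n```\n\n[The outer search over Python programs selects this program because it wins more often; the program, in turn, runs its own inner search every time it chooses a move. That inner search is the mesa optimizer.]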
And while we know that the mesa objective should lead to similar behavior as the original objective on the training distribution, because that's what it was selected to do, it may be arbitrarily different [outside] the training distribution. For example, if you trained it on tic tac toe, then you know it's going to win at tic tac toe — but if you switch to Connect Four, it might do something crazy. Maybe in Connect Four, it will continue to look for three in a row instead of four in a row, and therefore it will lose badly at Connect Four, even though it was working well with tic tac toe.\n\nLet’s say that this happened with gradient descent, and that we had a very powerful, intelligent neural net. Even if we had solved the specification problem, and had the ideal reward function to train this agent, it might be that the neural net model that we come up with optimizes for a different objective, which may once again be misaligned with what we want. The outer-inner distinction is why the specification problem is called “outer alignment,” and why mesa optimization is called “inner alignment.”\n\nHow do people solve mesa optimization? There's one main proposal: adversarial training. The basic idea is that in addition to training an AI system that's trying to perform well on your specifications, you also have an adversary — an AI system or human-AI team that's trying to find situations in which the agent you're training would perform badly, or would optimize for something other than the specification.\n\nIn the case where you're trying to get a corrigible AI system, maybe your adversary is looking for situations in which the AI system manipulates you or deceives you into thinking something is true, when it is actually false. Then, if you can find all of those situations and penalize the agent for them, the agent will stop behaving badly. You'll have an agent that robustly does the right thing across all settings. Verification would [involve] formally verifying another property of the agent that you care about.\n\nIdeally, we would like to say, “I have formally verified that the agent is going to reliably pursue the specification that I outlined.” Whether this is possible or not — whether people are actually optimistic or not — I'm not totally clear on. But it is a plausible approach that one could take.\n\nThere are also other areas of research related to less obvious solutions. Robustness to distributional shift is particularly important, because mesa optimization becomes risky with distributional shift. On your training distribution, your agent is going to perform well; it's only when the world changes that things could plausibly go badly.\n\n——\n\nA notable thing that I haven’t talked about yet is interpretability. Interpretability is a field of research which entails trying to make sure that we understand the AI systems we train. The reason I haven't included it yet is because it's useful for everything. For example, you could use interpretability to help your adversary [identify] the situations in which your agent will do bad things. This helps adversarial training work better. But interpretability is also useful for value learning. It allows you to provide better feedback to the agent; if you better understand what the agent is doing, you can better correct it. And it's especially relevant to informed oversight or ascription universality. 
So while interpretability is obviously not a solution in and of itself, it makes other solutions way better.\n\nThere's also the option of trying to prevent catastrophes. Someone else can deal with whether the AI system will be useful; we're just going to stop it from killing everybody. Approaches in this area include impact regularization, where the AI system is penalized for having large impacts on the world. Some techniques are relative reachability and attainable utility preservation. The hope here would be that you could create powerful AI systems that can do somewhat impactful things like providing advice on writing new laws, but wouldn't be able to do extremely impactful things like engineer a pandemic that kills everybody. Therefore, even if an AI system were motivated to harm us, the impact penalty would prevent it from doing something truly catastrophic.\n\nAnother [approach to preventing catastrophes] is oracles. The idea here is to restrict the AI system's action space so that all it does is answer questions. This doesn't immediately provide safety, but hopefully it makes it a lot harder for an AI system to cause a catastrophe. Alternatively, you could try to box the AI system, so that it can’t have much of an impact on the world. One example of recent work on this is BoMAI, or boxed myopic artificial intelligence. In that case, you put both the human and the AI system in a box so that they have no communication with the outside world while the AI system is operating. And then the AI system shuts down, and the human leaves the box and is able to use any information that the AI system gave them.\n\nSo that's most of [the material] I’ll cover in this problem-solution format. There's also a lot of other work on AI safety and alignment that's more difficult to categorize. For example, there's work on safe exploration, adversarial examples, and uncertainty. These all seem pretty relevant to AI alignment, but it’s not obvious to me where, exactly, they fit in the graph [above]. So I haven't put them in.\n\nThere's also a lot of work on forecasting, which is extremely relevant to [identifying] which research agendas you want to pursue. For example, there has been a lot of disagreement over whether or not there will be discontinuities in AI progress — in other words, whether at some point in the future, AI capabilities shoot up in a way that we couldn't have predicted by extrapolating from past progress.\n\nAnother common disagreement is over whether advanced AI systems will provide comprehensive services. Here’s a very short and basic description of what that means: Each task that you might want an AI system to do is performed by one service; you don't have a single agent that's doing all of the tasks. On the other hand, you could imagine a single monolithic AI agent that is able to do all tasks. Which of these two worlds are we likely to live in?\n\nA third disagreement is over whether it is possible to get to powerful AI systems by just increasing the amounts of compute that we use with current methods. Or do we actually need some deep insights in order to get to powerful AI systems?\n\nThis is all very relevant to deciding what type of research you want to do. Many research agendas only make sense under some possible worlds. And if you find out that one world [doesn’t seem very likely], then perhaps you switch to a different research agenda.\n\nThat concludes my talk. Again, here’s the link to the literature review that I wrote. There is both a short version and a long version. 
I really encourage you to read it. It goes into more detail than I could in this presentation. Thank you so much.", "filename": "Rohin Shah_ WhatΓÇÖs been happening in AI alignment_-by EA Global Virtual 2020-date 20200321.md", "id": "58b0022374be197e63cf1599ff839865", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Irina Rish - Out-of-distribution generalization-by Towards Data Science-video_id QjXFN4UWZCg-date 20220309", "authors": ["Irina Rish", "Jeremie Harris"], "date_published": "2022-03-09", "text": "# Irina Rish on Out-of-distribution generalization by Jeremie Harris on the Towards Data Science Podcast\n\nDuring training, AIs will often learn to make predictions based on features that are easy to learn, but deceptive.\n\nImagine, for example, an AI that’s trained to identify cows in images. Ideally, we’d want it to learn to detect cows based on their shape and colour. But what if the cow pictures we put in the training dataset always show cows standing on grass?\n\nIn that case, we have a _spurious correlation_ between grass and cows, and if we’re not careful, our AI might learn to become a grass detector rather than a cow detector. Even worse, we could only realize that’s happened once we’ve deployed it in the real world and it runs into a cow that isn’t standing on grass for the first time.\n\nSo how do you build AI systems that can learn robust, general concepts that remain valid outside the context of their training data?\n\nThat’s the problem of out-of-distribution generalization, and it’s a central part of the research agenda of Irina Rish, a core member of the Mila— Quebec AI Research institute, and the Canadian Excellence Research Chair in Autonomous AI. Irina’s research explores many different strategies that aim to overcome the out-of-distribution problem, from empirical AI scaling efforts to more theoretical work, and she joined me to talk about just that on this episode of the podcast.\n\nHere were some of my favourite take-homes from the conversation:\n\n- To Irina, GPT-3 was an “AlexNet moment” for AI alignment and AI safety research. For the first time, we had built highly capable AIs without actually understanding their behaviour, or knowing how to steer it. As a result, Irina thinks that this is a great time to get into AI alignment research.\n- Irina thinks that out-of-distribution generalization is an area where AI capabilities research starts to merge with AI alignment and AI safety research. Getting systems to learn robust concepts is not only helpful for ensuring that they have rich representations of the world (which helps with capabilities), but also helps ensure that accidents don’t happen by tackling the problem of spurious correlations.\n- Irina has researched several strategies aimed at addressing the out-of-distribution sampling problem. One of them involves using the invariance principle: the idea that the features we want our AI models to learn from are going to be consistent (invariant) regardless of the environments our data come from. Consider for example the case of cow detection I mentioned earlier: the features we want our AI to lock onto (the shape and colour of cows, for example) are consistent across different environments. A cow is still a cow whether it’s in a pasture, indoors or in the middle of a desert. 
Irina is exploring techniques that allow AIs to distinguish between features that are invariant and desirable (like cow shape and colour) and features that are variable and unreliable predictors (like whether or not there’s grass on the ground).\n- Another approach Irina sees as promising is scaling. We’ve talked about scaling on the podcast before — but in a nutshell, it’s the idea that current deep learning techniques can more or less get us to AGI as-is, if only they’re used to train large enough neural networks with huge enough datasets, and an equally massive quantity of compute power. In principle, scaling certain kinds of neural nets could allow AIs to learn so much about their training data that their performance is limited only by the irreducible noise of the data itself.\n- That possibility raises another question: is there too much noise in language data (which was used to train GPT-3, and the first generation of massively scaled foundation models) for AIs trained on language alone to reach human-level capabilities across the board? It’s possible, Irina thinks — and that’s why she’s excited about the trend towards multi-modal learning: the practice of training AIs on multiple data types at the same time (for example, on image, text and audio data). The hope is that by combining these input data types, an AI can learn to transcend noise limits that may exist in any one data type alone.", "filename": "Irina Rish - Out-of-distribution generalization-by Towards Data Science-video_id QjXFN4UWZCg-date 20220309.md", "id": "8d5d4474e489569f84fd8108d9e0c5dd", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Reframing superintelligence _ Eric Drexler _ EA Global - London 2018-by Centre for Effective Altruism-video_id MircoV5LKvg-date 20190314", "authors": ["Eric Drexler"], "date_published": "2019-03-14", "text": "# Eric Drexler Reframing Superintelligence - EA Forum\n\n_When people first began to discuss advanced artificial intelligence, existing AI was rudimentary at best, and we had to reply on ideas about human thinking and extrapolate. Now, however, we've developed many different advanced AI systems, some of which outperform human thinking on certain tasks. In this talk from EA Global 2018: London, Eric Drexler argues that we should use this new data to rethink our models for how superintelligent AI is likely to emerge and function._\n\n## The Talk\n\nI've been working in this area for quite a while. The chairman of my doctoral committee was one Marvin Minsky. We had some discussions on AI safety around 1990. He said I should write them up. I finally got around to writing up some developed versions of those ideas just very recently, so that's some fairly serious procrastination. Decades of procrastination on something important.\n\nFor years, one couldn't talk about advanced AI. One could talk about nanotechnology. Now it's the other way around. You can talk about advanced AI, but not about advanced nanotechnology. So this is how the Overton window moves around.\n\nWhat I would like to do is to give a very brief presentation which is pretty closely aligned with talks I've given at OpenAI, DeepMind, FHI, and Bay Area Rationalists. Usually I give this presentation to a somewhat smaller number of people, and structure it more around discussion. 
But what I would like to do, still, is to give a short talk, put up points for discussion, and encourage something between Q&A and discussion points from the audience.\n\nOkay so, when I say \"Reframing Superintelligence,\" what I mean is thinking about the context of emerging AI technologies as a process rolling forward from what we see today. And asking, \"What does that say about likely paths forward?\" Such that whatever it is that you're imagining needs to emerge from that context or make sense in that context. Which I think reframes a lot of the classic questions. Most of the questions don't go away, but the context in which they arise, the tools available for addressing problems, look different. That's what we'll be getting into.\n\nOnce upon a time, when we thought about advanced AI, we didn't really know what AI systems were likely to look like. It was very unknown. People thought in terms of developments in logic and other kinds of machine learning, different from the deep learning that we now see moving forward with astounding speed. And people reached for an abstract model of intelligent systems. And what intelligent systems do we know? Well, actors in the world like ourselves. We abstract from that very heavily and you end up with rational, utility-directed agents.\n\nToday, however, we have another source of information beyond that abstract reasoning, which applies to a certain class of systems. And information that we have comes from the world around us. We can look at what's actually happening now, and how AI systems are developing. And so we can ask questions like, \"Where do AI systems come from?\" Well, today they come from research and development processes. We can ask, \"What do AI systems do today?\" Well, broadly speaking, they perform tasks. Which I think of, or will describe, as \"performing services.\" They do some approximation or they do something that someone supposedly wants in bounded time with bounded resources. What will they be able to do? Well, if we take AI seriously, AI systems will be able to automate asymptotically all human tasks, and more, at a piecemeal and asymptotically general superintelligent level. So we said AI systems come from research and development. Well, what is research and development? Well, it's a bunch of tasks to automate. And, in particular, they're relatively narrow technical tasks which are, I think, uncontroversially automate-able on the path to advanced AI.\n\nSo the picture is of AI development moving forward broadly along the lines that we're seeing. Higher-level capabilities. More and more automation of the AI R&D process itself, which is an ongoing process that's moving quite rapidly. AI-enabled automation and also classical software techniques for automating AI research and development. And that, of course, leads to acceleration. Where does that lead? It leads to something like recursive improvement, but not the classic recursive improvement of an agent that is striving to be a more intelligent, more capable agent. But, instead, recursive improvement where an AI technology base is being advanced at AI speed. And that's a development that can happen incrementally. We see it happening now as we take steps toward advanced AI that is applicable to increasingly general and fast learning. Well, those are techniques that will inevitably be folded into the ongoing AI R&D process. 
Developers, given some advance in algorithms and learning techniques, and a conceptualization of how to address more and more general tasks, will pounce on those, and incorporate them into a broader and broader range of AI services.\n\nSo where that leads is to asymptotically comprehensive AI services. Which, crucially, includes the service of developing new services. So increasingly capable, increasingly broad, increasingly piecemeal and comprehensively superintelligent systems that can work with people, and interact with people in many different ways to provide the service of developing new services. And that's a kind of generality. That is a general kind of artificial intelligence. So a key point here is that the C in CAIS, C in Comprehensive AI Services does the work of the G in AGI. Why is it a different term? To avoid the implication... when people say AGI they mean AGI agent. And we can discuss the role of agents in the context of this picture. But I think it's clear that a technology base is not inherently in itself an agent. In this picture agents are not central, they are products. They are useful products of diverse kinds for providing diverse services. And so with that, I would like to (as I said, the formal part here will be short) point to a set of topics.\n\nThey kind of break into two categories. One is about short paths to superintelligence, and I'll argue that this is the short path. The topic of AI services and agents, including agent services, versus the concept of \"The AI\" which looms very large in people's concepts of future AI. I think we should look at that a little bit more closely. Superintelligence as something distinct from agents, superintelligent non-agents. And the distinction between general learning and universal competence. People have, I think, misconstrued what intelligence means and I'll take a moment on that. If you look at definitions from I. J. Good in the 1960s on ultra-intelligence, and more recently Bostrom and so on (I work across the hall from Nick) on superintelligence, the definition is something like \"a system able to outperform any person in any task whatsoever.\" Well, that implies general competence, at least as ordinarily read. But there's some ambiguity over what we mean by the word \"intelligence\" more generally. We call children intelligent and we call senior experts intelligent. We call a child intelligent because the child can learn, not because the child can perform at a high level in any particular area. And we call an expert who can perform at a high level intelligent not because the expert can learn - in principle you could turn off learning capacity in the brain - but because the expert can solve difficult problems at a high level.\n\nSo learning and competence are dissociable components of intelligence. They are in fact quite distinct in machine learning. There is a learning process and then there is an application of the software. And when you see discussion of intelligent systems that does not distinguish between learning and practice, and treats action as entailing learning directly, there's a confusion there. There's a confusion about what intelligence means and that's, I think, very fundamental. In any event, looking toward safety-related concerns, there are things to be said about predictive models of human concerns. AI-enabled solutions to AI-control problems. How this reframes questions of technical AI safety. Issues of services versus addiction, addictive services and adversarial services. 
Services include services you don't want. Taking superintelligent services seriously. And a question of whether faster development is better.\n\nAnd, with that, I would like to open for questions, discussion, comment. I would like to have people come away with some shared sense of what the questions and comments are. Some common knowledge of thinking in this community in the context of thinking about questions this way.\n\n## Discussion\n\n_Question_: Is your model compatible with end-to-end reinforcement learning?\n\n_Eric_: Yes.\n\nTo say a little bit more. By the way, I've been working on a collection of documents for the last two years. It's now very large, and it will be an FHI technical report soon. It's 30,000 words structured to be very skim-able. Top-down, hierarchical, declarative sentences expanding into longer ones, expanding into summaries, expanding into fine-grained topical discussion. So you can sort of look at the top level and say, hopefully, \"Yes, yes, yes, yes, yes. What about this?\" And not have to read anything like 30,000 words. So, what I would say is that reinforcement learning is a technique for AI system development. You have a reinforcement learning system. It produces through a reinforcement learning process, which is a way of manipulating the learning of behaviors. It produces systems that are shaped by that mechanism. So it's a development mechanism for producing systems that provide some service. Now if you turn reinforcement learning loose in the world open-ended, read-write access to the internet, a money-maximizer and did not have checks in place against that? There are some nasty scenarios. So basically it's a development technique, but could also be turned loose to produce some real problems. \"Creative systems trying to manipulate the world in bad ways\" scenarios are another sector of reinforcement learning. So not a problem per se, but one can create problems using that technique.\n\n_Question_: What does asymptotic improvement of AI services mean?\n\n_Eric_: I think I'm abusing the term asymptotic. What I mean is increasing scope and increasing level of capability in any particular task to some arbitrary limit. Comprehensive is sort of like saying infinite, but moving toward comprehensive and superintelligent level services. What it's intended to say is, ongoing process going that direction. If someone has a better word than asymptotic to describe that I'd be very happy.\n\n_Question_: Can the tech giants like Facebook and Google be trusted to get alignment right?\n\n_Eric_: Google more than Facebook. We have that differential. I think that questions of alignment look different here. I think more in terms of questions of application. What are the people who wield AI capabilities trying to accomplish? So there's a picture which, just background to the framing of that question, and a lot of these questions I think I'll be stepping back and asking about framing. As you might think from the title of the talk. So picture a rising set of AI capabilities: image recognition, language understanding, planning, tactical management in battle, strategic planning for patterns of action in the world to accomplish some goals in the world. Rising levels of capability in those tasks. Those capabilities could be exploited by human decision makers or could, in principle, be exploited by a very high-level AI system. I think we should be focusing more, not exclusively, but more on human decision makers using those capabilities than on high-level AI systems. 
In part because human decision makers, I think, are going to have broad strategic understanding more rapidly. They'll know how to get away with things without falling afoul of what nobody had seen before, which is intelligence agencies watching and seeing what you're doing. It's very hard for a reinforcement learner to learn that kind of thing.\n\nSo I tend to worry about not the organizations making aligned AI so much as whether the organizations themselves are aligned with general goals.\n\n_Question_: Could you describe the path to superintelligent services with current technology, using more concrete examples?\n\n_Eric_: Well, we have a lot of piecemeal examples of superintelligence. AlphaZero is superintelligent in the narrow domain of Go. There are systems that outperform human beings in playing these very different kinds of games, like Atari games. Face recognition. Speech recognition recently surpassed human ability to map from human speech to transcribed words. Just more and more areas piecemeal. A key area that I find impressive and important is the design of neural networks at the core of modern deep learning systems. The design of, and learning to use appropriately, hyperparameters. So, as of a couple of years ago, if you wanted a new neural network, a convolutional network for vision, or some recurrent network, though recently they're going for convolutional networks for language understanding and translation, that was a hand-crafted process. You had human judgment and people were building these networks. A couple of years ago people started (this is not AI in general, but it's a chunk that a lot of attention went into) getting superhuman performance in neural network design by automated, AI-flavored methods, for example reinforcement learning systems. So developing reinforcement learning systems that learn to put together the building blocks to make a network that outperforms human designers in that process. So we now have AI systems that are designing a core part of AI systems at a superhuman level. And this is not revolutionizing the world, but that threshold has been crossed in that area.\n\nAnd, similarly, automation of another labor-intensive task that I was told very recently by a senior person at DeepMind would require human judgment. And my response was, \"Do you take AI seriously or not?\" And, out of DeepMind itself, there was then a paper that showed how to outperform human beings in hyperparameter selection. So those are a few examples. And the way one gets to an accelerating path is to have more and more, faster and faster implementation of human insights into AI architectures, training methods, and so on. Less and less human labor required. Higher and higher level human insights being turned into application throughout the existing pool of resources. And, eventually, fewer and fewer human insights being necessary.\n\n_Question_: So what are the consequences of this reframing of superintelligence for technical AI safety research?\n\n_Eric_: Well, re-contexting. If in fact one can have superintelligent systems that are not inherently dangerous, then one can ask how one can leverage high-level AI. So a lot of the classic scenarios of misaligned powerful AI involve AI systems that are taking actions that are blatantly undesirable. And, as Shane Legg said when I was presenting this at DeepMind last Fall, \"There's an assumption that we have superintelligence without common sense.\" And that's a little strange. 
So Stuart Russell has pointed out that machines can learn not only from experience, but from reading. And, one can add, watching video and interacting with people and through questions and answers in parallel over the internet. And we see in AI that a major class of systems is predictive models. Given some input you predict what the next thing will be. In this case, given a description of a situation or an action, you try to predict what people will think of it. Is it something that they care about or not? And, if they do care about it, is there widespread consensus that that would be a bad result? Widespread consensus that it would be a good result? Or strongly mixed opinion?\n\nNote that this is a predictive model trained on many examples, it's not an agent. That is an oracle that, in principle, could operate with reasoning behind the prediction. That could in principle operate at a super intelligent level, and would have common sense about what people care about. Now think about having AI systems that you intend to be aligned with human concerns where, available for a system that's planning action, is this oracle. It can say, \"Well, if such and such happened, what would people think of it?\" And you'd have a very high-quality response. That's a resource that I think one should take account of in technical AI safety. We're very unlikely to get high-level AI without having this kind of resource. People are very interested in predicting human desires and concerns if only because they want to sell you products or brainwash you in politics or something. And that's the same underlying AI technology base. So I would expect that we will have predictive models of human concerns. That's an example of a resource that would reframe some important aspects of technical AI safety.\n\n_Question_: So, making AI services more general and powerful involves giving them higher-level goals. At what point of complexity and generality do these services then become agents?\n\n_Eric_: Well, many services are agent-services. A chronic question that arises, people will be at FHI or DeepMind and someone will say, \"Well, what is an agent anyway?\" And everybody will say, \"Well, there is no sharp definition. But over here we're talking about agents and over here we're clearly not talking about agents.\" So I would be inclined to say that if a system is best thought of as directed toward goals and it's doing some kind of planning and interacting with the world I'm inclined to call it an agent. And, by that definition, there are many, many services we want, starting with autonomous vehicles, autonomous cars and such, that are agents. They have to make decisions and plan. So there's a spectrum from there up to higher and higher level abilities to do means-ends analysis and planning and to implement actions. So let's imagine that your goal is to have a system that is useful in military action and you would like to have the ability to execute tactics with AI speed and flexibility and intelligence, and have strategic plans for using those tactics that are superintelligent level.\n\nWell, those are all services. They're doing something in bounded time with bounded resources. And, I would argue, that that set of systems would include many systems that we would call agents but they would be pursuing bounded tasks with bounded goals. But the higher levels of planning would naturally be structured as systems that would give options to the top level decision makers. 
These decision makers would not want to give up their power, they don't want a system guessing what they want. At a strategic level they have a chance to select, since strategy unfolds relatively slowly. So there would be opportunities to say, \"Well, don't guess, but here's the trade off I'm willing to make between having this kind of impact on opposition forces with this kind of lethality to civilians and this kind of impact on international opinion. I would like options that show me different trade-offs. All very high quality but within that trade-off space. And here I'm deliberately choosing an example which is about AI resources being used for projecting power in the world. I think that's a challenging case, so it's a good place to go.\n\nI'd like to say just a little bit about the opposite end, briefly. Superintelligent non-agents. Here's what I think is a good paradigmatic example of superintelligence and non-agency. Right now we have systems that do natural language translation. You put in sentences or, if you had a somewhat smarter system that dealt with more context, books, and out comes text in a different language. Well, I would like to have systems that know a lot to do that. You do better translations if you understand more about history, chemistry if it's a chemistry book, human motivations. Just, you'd like to have a system that knows everything about the world and everything about human beings to give better quality translations. But what is the system? Well, it's a product of R&D and it is a mathematical function of type character string to character string. You put in a character string, things happen, and out comes a translation. You do this again and again and again. Is that an agent? I think not. Is it operating at a superintelligent level with general knowledge of the world? Yes. So I think that one's conceptual model of what high-level AI is about should have room in it for that system and for many systems that are analogous.\n\n_Question_: Would a system service that combines general learning with universal competence not be more useful or competitive than a system that displays either alone? So does this not suggest that agents might be more useful?\n\n_Eric_: Well, as I said, agents are great. The question is what kind and for what scope. So, as I was saying, distinguishing between general learning and universal competence is an important distinction. I think it is very plausible that we will have general learning algorithms. And general learning algorithms may be algorithms that are very good at selecting algorithms that are good at selecting algorithms for learning a particular task and inventing new algorithms. Now, given an algorithm for learning, there's a question of what you're training it to do. What information? What competencies are being developed? And I think that the concept of a system being trained on and learning about everything in the world with some objective function, I don't think that's a coherent idea. Let's say you have a reinforcement learner. You're reinforcing the system to do what? Here's the world and it's supposed to be getting competence in organic chemistry and ancient Greek and, I don't know, control of the motion of tennis-playing robots and on and on and on and on. What's the reward function, and why do we think of that as one task?\n\nI don't think we think of it as one task. I think we think of it as a bunch of tasks which we can construe as services. 
Including the service of interacting with you, learning what you want, nuances. What you are assumed to want, what you're assumed not to want as a person. More about your life and experience. And very good at interpreting your gestures. And it can go out in the world and, subject to constraints of law and consulting an oracle on what other people are likely to object to, implement plans that serve your purposes. And if the actions are important and have a lot of impact, within the law presumably, what you want is for that system to give you options before the system goes out and takes action. And some of those actions would involve what are clearly agents. So that's the picture I would like to paint that I think reframes the context of that question.\n\n_Question_: So on that is it fair to say that the value-alignment problem still exists within your framework? Since, in order to train a model to build an agent that is aligned with our values, we must still specify our values.\n\n_Eric_: Well, what do you mean by, \"train an agent to be aligned with our values.\" See, the classic picture says you have \"The AI\" and \"The AI\" gets to decide what the future of the universe looks like and it had better understand what we want or would want or should want or something like that. And then we're off into deep philosophy. And my card says philosophy on it, so I guess I'm officially a philosopher or something according to Oxford. I was a little surprised. \"It says philosophy on it. Cool!\" I do what I think of as philosophy. So, in a services model, the question would instead be, \"What do you want to do?\" Give me some task that is completed in bounded time with bounded resources and we could consider how to avoid making plans that stupidly cause damage that I don't want. Plans that, by default, automatically do what I could be assumed to want. And that pursue goals in some creative way that is bounded, in the sense that it's not about reshaping the world; other forces would presumably try to stop you. And I'm not quite sure what value alignment means in that context. I think it's something much more narrow and particular.\n\nBy the way, if you think of an AI system that takes over the world, keep in mind that a sub-task of that, part of that task, is to overthrow the government of China. And, presumably, to succeed the first time because otherwise they're going to come after you if you made a credible attempt. And that's in the presence of unknown surveillance capabilities and unknown AI that China has. So you have a system and it might formulate plans to try to take over the world, well, I think an intelligent system wouldn't recommend that because it's a bad idea. Very risky. Very unlikely to succeed. Not an objective that an intelligent system would suggest or attempt to pursue. So you're in a very small part of a scenario space where that attempt is made by a high-level AI system. And it's a very small part of scenario space because it's an even smaller part of scenario space where there is substantial success. I think it's worth thinking about this. I think it's worth worrying about it. But it's not the dominant concern. It's a concern in a framework where I think we're facing an explosive growth of capabilities that can amplify many different purposes, including the purposes of bad actors. And we're seeing that already and that's what scares me.\n\n_Question_: So I guess, in that vein, could the superintelligent services be used to take over the world by a state actor? 
Just the services?\n\n_Eric_: Well, you know, services include tactical execution of plans and strategic planning. So could there be a way for a state actor to do that using AI systems in the context of other actors with, presumably, a comparable level of technology? Maybe so. It's obviously a very risky thing to do. One aspect of powerful AI is an enormous expansion of productive capacity. Partly through, for example, high-level, high quality automation. More realistically, physics-limited production technology, which is outside today's sphere of discourse or Overton window.\n\nSecurity systems, I will assert, could someday be both benign and effective, and therefore stabilizing. So the argument is that, eventually it will be visibly the case that we'll have superintelligent level, very broad AI, enormous productive capacity, and the ability to have strategic stability, if we take the right measures beforehand to develop appropriate systems, or to be prepared to do that, and to have aligned goals among many actors. So if we distribute the much higher productive capacity well, we can have an approximately strongly Pareto-preferred world, a world that looks pretty damn good to pretty much everyone.\n\n_Note: for a more thorough presentation on this topic, see Eric Drexler's_ [_other talk_](https://www.effectivealtruism.org/articles/ea-global-2018-paretotopian-goal-alignment/) _from this same conference._\n\n_Question_: What do you think the greatest AI threat to society in the next 10, 20 years would be?\n\n_Eric_: I think the greatest threat is instability. Sort of either organic instability from AI technologies being diffused and having more and more of the economic relationships and other information-flow relationships among people be transformed in directions that increase entropy, generate conflict, destabilize political institutions. Who knows? If you had the internet and people were putting out propaganda that was AI-enabled, it's conceivable that you could move elections in crazy directions in the interest of either good actors or bad actors. Well, which will that be? I think we will see efforts made to do that. What kinds of counter-pressures could be applied to bad actors using linguistically politically-competent AI systems to do messaging? And, of course, there's the perennial states engaging in an arms race which could tip into some unstable situation and lead to a war. Including the long-postponed nuclear war that people are waiting for and might, in fact, turn up some day. And so I primarily worry about instability. Some of the modes of instability are because some actor decides to do something like turn loose a competent hacking, reinforcement-learning system that goes out there and does horrible things to global computational infrastructure that either do or don't serve the intentions of the parties that released it. But take a world that's increasingly dependent on computational infrastructure and just slice through that, in some horribly destabilizing way. So those are some of the scenarios I worry about most.\n\n_Question_: And then maybe longer term than 10, 20 years? If the world isn't over by then?\n\n_Eric_: Well, I think all of our thinking should be conditioned on that. If one is thinking about the longer term, one should assume that we are going to have superintelligent-level general AI capabilities. Let's define that as the longer term in this context. 
And, if we're concerned with what to do with them, that means that we've gotten through the process to there then. So there's two questions. One is, \"What do we need to do to survive or have an outcome that's a workable context for solving more problems?\" And the other one is what to do. So, if we're concerned with what to do, we need to assume solutions to the preceding problems. And that means high-level superintelligent services. That probably means mechanisms for stabilizing competition. There's a domain there that involves turning surveillance into something that's actually attractive and benign. And the problems downstream, therefore, one hopes to have largely solved. At least the classic large problems and now problems that arise are problems of, \"What is the world about anyway?\" We're human beings in a world of superintelligent systems. Is trans-humanism in this direction? Uploading in this direction? Developing moral patients, superintelligent-level entities that really aren't just services, and are instead the moral equivalent of people? What do you do with the cosmos? It's an enormously complex problem. And, from the point of view of having good outcomes, what can I say? There are problems.\n\n_Question_: So what can we do to improve diversity in the AI sector? And what are the likely risks of not doing so?\n\n_Eric_: Well, I don't know. My sense is that what is most important is having the interests of a wide range of groups be well represented. To some extent, obviously, that's helped if you have in the development process, in the corporations people who have these diverse concerns. To some extent it's a matter of politics regulation, cultural norms, and so on. I think that's a direction we need to push in. To put this in the Paretotopian framework, your aim is to have objectives, goals that really are aligned, so, possible futures that are strongly goal-aligning for many different groups. For many of those groups, we won't fully understand them from a distance. So we need to have some joint process that produces an integrated, adjusted picture of, for example, how do we have EAs be happy and have billionaires maintain their relative position? Because if you don't do that they're going to maybe oppose what you're doing, and the point is to avoid serious opposition. And also have the government of China be happy. And I would like to see the poor in rural Africa be much better off, too. Billionaires might be way up here, competing not to build orbital vehicles but instead starships. And the poor in rural Africa of today merely have orbital space capabilities convenient for families, because they're poor. Nearly everyone much, much better off.", "filename": "Reframing superintelligence _ Eric Drexler _ EA Global - London 2018-by Centre for Effective Altruism-video_id MircoV5LKvg-date 20190314.md", "id": "caeaf1db76db1f0ec98b0c9d1b4dfc1a", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "NeurIPSorICML_a0nfw-by Vael Gates-date 20220318", "authors": ["Vael Gates"], "date_published": "2022-03-18", "text": "# Interview with AI Researchers NeurIPSorICML_a0nfw by Vael Gates\n\n**Interview with a0nfw, on 3/18/22**\n\n**0:00:02.3 Vael:** Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:08.7 Interviewee:** Yes. 
More recently I\\'ve been working on AI for mathematical reasoning and applications in navigation, mathematical navigation.\n\n**0:00:18.1 Vael:** Great. And then, what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:00:25.8 Interviewee:** The biggest benefit of AI as a tool that now we have and that sort of works, is that it just enabled a lot of new applications when we are able to define goals for it. The broad story that I\\'m pursuing recently is like, suppose you have an AI expert for a certain domain, then how can you leverage that expert to teach people to be better at the same domain? And the reason why we\\'re working more on mathematical education, it\\'s exactly because the mathematical domains are easier to define as a formal problem. Training solvers for interesting mathematical domains is not possible with AI tools, that \\[hard to parse\\] not be so much 10 years ago. In a lot of other fields, it\\'s also been the case that we as humans knew what we wanted and wrote down a bunch of heuristics to accomplish certain goals. But we were always sure that those weren\\'t the best, and now AI is allowing us to replace a lot of those by just better algorithms.\n\nThe main worry is exactly when we don\\'t know exactly what the goal is, or when we don\\'t know how to specify it. Or in a lot of cases, in research, people have been stuck with these proxy tasks that are supposed to represent some kind of behavior that we want, but a lot of people are just unsure of what exactly what we want out of these systems. It\\'s a worry for me that a lot of people are spending a lot of time and resources on those problems without knowing exactly what to expect. I come from sort of a systems background, so I\\'m more used to people having proxy tasks that are very directly related to the actual task. For example, in compilers, usually one goal is, make programs faster, and then you can create benchmarks. \\\"Okay, here is the set of programs, let\\'s run your compiler and it optimize\\[?\\] and see how long it takes.\\\" And you can argue about whether those programs are representative of what actual programs that people will run it on, if the benchmark that I have is representative or not, but the goal was very clear, it\\'s to make things faster. Now in language for example, in NLP, we have a lot of these tasks about language understanding and benchmarks that try to capture some form of understanding. But it\\'s not entirely clear what the goal is, if we solve this benchmark\\-- and this is being revised all the time. People propose a benchmark that\\'s hard for current-day models, then a few months later, someone comes up with a solution. Then people say, \\\"Oh, but that\\'s actually not exactly what we wanted \\'cause look, the model doesn\\'t do this other thing.\\\" And then we enter this cycle of refining exactly what\\'s the problem and developing models, but without a clear goal in a lot of cases.\n\n**0:03:47.7 Vael:** Got it. Okay, so that question was, what are you most excited about AI and what are you most worried about in AI? Could you summarize the culmination of all that?\n\n**0:03:57.1 Interviewee:** Yeah. It\\'s centered around goals. So AI lets us pose richer and new kinds of goals more formally and optimize for those. When those are clear, those are cases where I\\'m very excited about. When they\\'re not clear, then I\\'m worried about it.\n\n**0:04:16.3 Vael:** Got it. 
**0:04:16.3 Vael:** Got it. Cool, that makes sense. Awesome. Alright, so focusing on future AI, putting on a science fiction forecasting hat, say we\'re 50 plus years into the future. So at least 50 years in the future, what does that future look like?\n\n**0:04:32.8 Interviewee:** It can look like\... a lot of\... it can go in a lot of ways, I think. I think, in one possible future\-- do you want me to just list the possible futures?\n\n**0:04:48.1 Vael:** I think I\'m most interested in your realistic future, but also like optimistic, pessimistic; it\'s a free-form question.\n\n**0:04:57.8 Interviewee:** Okay. Yeah, maybe optimistically what would happen is that AI lets us solve problems that are important for society and that we just don\'t have the right tools for at this point. I think a lot of exciting applications in\... In language, we can imagine some interesting applications where a lot of computer interfaces become much easier, just for doing complex tasks, by specifying them in natural language. Like programming right now gives a lot of power to people, but it also takes a lot of time to learn, so it would be awesome to enable a lot more people to automate tasks using natural language. So in one future, a lot of those applications will be enabled and things will be great. In another future, which is actually\-- it\'s probably going to be a combination of these things, but a very likely future, which is already kind of happening, but maybe at a smaller scale than is possible, is that people will start replacing existing tools and systems with AI for not very clear reasons and get not very clear outcomes. And then we don\'t really know exactly how they\'re misbehaving in a lot of cases. And the thing is that the incentives for deploying the systems are at odds with broader societal goals in a lot of cases. So, for example, private companies like Google and Facebook, they have all the incentives possible to deploy these systems to optimize their metrics that ultimately correlate with revenue. I don\'t know.\n\nFacebook has a lot of metrics which ultimately relate to how much time people spend on the platform, and that translates to how much money they make. And they basically have thousands of engineers trying different combinations of features and things to suggest to users, and like very small tweaks a lot of times to try to optimize for those metrics. And a lot of that process became much faster with AI because now you can make much more fine-grained decisions for individual users. And sure, we can look at the metrics and see that, \"Yeah, they keep improving,\" and that\'s a new tool that now exists and enables them to optimize for this goal. At the same time, it\'s not exactly clear to me what is happening when AI is choosing things for individual users. It might be like closing people off in their bubbles, it might be\... all the things we know about, but also potentially a lot of other things, other behaviors. Exactly because they\'re not the same behavior for everyone, that also makes it harder to study and understand. So I think the future is that AI will replace a lot of these systems, but that will involve people losing control of exactly what it is doing. And that will probably come before us having a good understanding of what\'s going on, just because there is already an incentive to deploy those systems, like a financial incentive.\n\n**0:08:40.8 Vael:** Yeah, interesting. You\'re making a lot of the arguments that I often make later in this interview. Alright.
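\[As a concrete illustration of the per-user metric optimization described above, here is a minimal epsilon-greedy sketch that routes traffic toward whichever variant scores best on a click-through proxy. The variant names and engagement numbers are made-up assumptions, not taken from any real platform or from the interview.\]

```python
import random

VARIANTS = ["A", "B", "C"]  # e.g. three candidate feed-ranking tweaks


def true_engagement(variant):
    """Hidden 'world': probability that a user clicks. Purely illustrative numbers."""
    return {"A": 0.10, "B": 0.12, "C": 0.11}[variant]


def epsilon_greedy(steps=10_000, epsilon=0.1, seed=0):
    """Optimize the click proxy: explore occasionally, otherwise show the best-so-far variant."""
    rng = random.Random(seed)
    clicks = {v: 0 for v in VARIANTS}
    shows = {v: 0 for v in VARIANTS}
    for _ in range(steps):
        if rng.random() < epsilon:
            v = rng.choice(VARIANTS)  # explore a random variant
        else:
            # Exploit: pick the variant with the best observed click rate so far.
            v = max(VARIANTS, key=lambda x: clicks[x] / shows[x] if shows[x] else 0.0)
        shows[v] += 1
        if rng.random() < true_engagement(v):  # did this (simulated) user click?
            clicks[v] += 1
    return {v: round(clicks[v] / shows[v], 3) for v in VARIANTS if shows[v]}


if __name__ == "__main__":
    print(epsilon_greedy())
```

\[Nothing in the loop asks whether the click proxy tracks what users or society actually want; it only concentrates traffic on whatever maximizes the proxy, which is the gap the interviewee is pointing at.\]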
This next thing is, so people talk about the promise of AI, by which they mean many things, but one thing that I think they mean, and that I\\'m talking about here, is that the idea of having a very general, capable system, such that the systems would have cognitive capacities that we could use to replace all current-day human jobs. Which we could or could not, but the cognitive capacities do that. And I usually think about this in the frame of like, here we have\\... in 2012, we have AlexNet and the deep learning revolution, and then 10 years later, here we are, and you\\'ve got systems like GPT-3, which have some weirdly emergent capabilities like text generation and language translation and math and coding and stuff. And so one might expect that if we continued pouring all the human effort that\\'s been going into this, with nations competing and corporations competing and lots of young people and lots of talent and algorithmic improvements at the same rate we\\'ve seen and hardware improvements, maybe we\\'ll get optical or quantum computing, then we might scale to very general systems. Or we might not, and we might hit some sort of ceiling and need to paradigm shift. But my question is, regardless of how we get there, do you think we\\'ll ever have very general systems like a CEO or a scientist AI, and if so, when?\n\n**0:11:00.9 Interviewee:** Yeah. The thing that has been happening with these large language models, that they objectively become more and more capable, but we at the same time reassess what we consider to be general intelligence. That will probably keep happening for quite a while, I would say at least like 10-15 years, just because\\...I don\\'t think we know exactly what\\... Human intelligence is still very poorly understood in a lot of ways. It\\'s not clear that higher intelligence is possible without some of the shortcomings that human intelligence has. So I think that cycle of getting more powerful systems but also realizing some of their shortcomings and saying, \\\"Oh, maybe\\...\\\" Basically shifting the goal and what is it that we consider general intelligence will probably keep happening.\n\n**0:12:07.1 Vael:** Yeah, my question is, do you think that we\\'ll get systems like a CEO AI or a scientist AI or be able to replace most cognitive jobs at some point, and when that will be?\n\n**0:12:18.6 Interviewee:** Yeah. Let\\'s see. We\\'ll probably have systems that can act like a CEO or a scientist. Whether they\\'ll have the same goals that a CEO or a scientist have, like a human CEO or a human scientist have in the real world, that is a different question, which is not that clear to me.\n\n**0:12:47.0 Vael:** Interesting. Yeah. So there\\'s the question of whether AIs develop like consciousness, for example. So I\\'m assuming not consciousness, and I\\'m like, \\\"Okay, say we have a system that can do multi-step planning, it can do models of other people modeling it, it\\'s interacting with humans by text and video, or whatever,\\\" and I\\'m like, \\\"Alright, I need you to solve cancer for me, AI,\\\" or, \\\"I want you to run this company,\\\" and we put in its goal, optimization function so that it just does that at some point. Yeah.\n\n**0:13:24.4 Interviewee:** Yeah. You\\'re right. So you asked about the cognitive capability. I think that will probably be there in like 30-40 years, would be my estimate. For at least a subset of what it means to run a company. 
Now, my question is, if we have a system that does that, is it also possible for the system to simply obey everything we tell it to do? Because, of course, a person is capable of doing that, but you wouldn\\'t expect to tell the person, \\\"Okay, go do that.\\\" They might complain. They might say \\\"No, I\\'m in a bad mood today,\\\" or something like that.\n\n**0:14:03.3 Vael:** Yeah, and I don\\'t even think you could tell any given human\\... Like if someone is like, \\\"Alright, Vael, go be the CEO of a company,\\\" I\\'d be like, \\\"What? I don\\'t know how to do that.\\\"\n\n**0:14:09.9 Interviewee:** Exactly, exactly. Yeah.\n\n**0:14:12.0 Vael:** Yeah, yeah. Interesting. Okay, so that leads into my next set of questions. So imagine we have a CEO AI and I\\'m like, \\\"Alright, CEO AI, I wish for you to maximize profits, and try not to run out of money and try not to exploit people and try to avoid side effects.\\\" And so currently, this would be very technically challenging for a number of reasons. One reason is that we aren\\'t very good at taking human values and preferences and stuff and putting them into mathematical formulations such that AI can optimize over. And so I worry that AI in the future will continue to do this, and then we\\'ll have AI that continue to do what we tell them to do instead of what we intend them to do, but as our expressions get more and more ambiguous, and higher scale and operating on more of the world. And so what do you think of the argument, \\\"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\\\"\n\n**0:15:22.1 Interviewee:** Yeah, if I understand what you\\'re saying, that is very clear to me. That\\... specifying\\... If you just hope to give a short description of what you want, you\\'ll most likely fail.\n\n**0:15:35.9 Vael:** Even a long definition would be fine. I just want there to be any system of putting what we want into the AI.\n\n**0:15:46.0 Interviewee:** Yeah, so I think there might be a safe way to do that, but it might require a lot of interaction and will be hard. I have thought a little bit about that, because I was thinking about the question of how do we\\... So if we want to use these systems for interfacing with people, then we have to let them handle ambiguity in a way that\\'s similar to how people do. And in cognitive science, there is a lot of knowledge about the mechanisms that people use to resolve ambiguity, like there is context, prior knowledge, common ground, and all that. Some of that could be replicated now to fully align the way that AI resolves ambiguity with how people do it. We\\'ll also probably take a lot of work in understanding how people do it, which is not there yet.\n\n**0:16:50.3 Vael:** Interesting. Interesting. Do you think that we\\'ll have to know enough about human psychology? Like, what fields need to advance in order for us to have systems that do what we intend them to do?\n\n**0:17:06.1 Interviewee:** Yeah, if you\\'re talking about aligning a system with human values, part of that problem is understanding human values and like how people agree on values. And there\\'s just so much that we assume that we don\\'t have to tell people, it\\'s assumed that they were created in the same way and exposed to the same contexts. All that goes away with AI, or at least most of it. And I\\'m not an expert in these fields, so I might not know the exact names. 
I guess pragmatic reasoning is one thing where we have a lot of high-level insights into how it works, but to operationalize it into an NLP system, for example, we\'re still very far away from that.\n\n\[Person\] worked on this model for pragmatic reasoning, which can explain very neatly some very simple cases. Like, you have a few words and a few people, and some words describe these people. You have a three-by-three matrix that predicts exactly what people would guess the word is referring to with very high accuracy. But just extending that to a sentence, and the three people to be an image, is a very complicated problem. So I think that is something that would need to advance a lot. Other fields of psychology that try to understand ambiguity, like cognitive psychology. Yeah. I don\'t know their names, but I think\... Oh well, yeah. I guess another one in linguistics that is relevant for this is human convention formation. Like when we are talking to a person over time, we develop these partner-specific conventions. That\'s very natural, enables very efficient communication. And if you\'re writing a description of what you want in a long form to an AI\-- Oh, if you\'re doing so for a person, you assume that the person is picking up on what we want and forming the same conventions over time. So at first you might say, \"I want to optimize the revenues for XYZ company.\" And then later you might say \"the company,\" dropping the XYZ, but you assume that\-- the only company that I mentioned was XYZ, so the only interpretation possible for \"the company\" is to attribute it to the XYZ company. Well, that\'s because humans form conventions that way, but if you have kind of an adversarial AI, then suddenly that\'s not an assumption you can build off of anymore. So the ways in which humans communicate with conventions, with ambiguity, and all the computational tools that we need to do that are open problems, if that makes sense.
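\[The reference-game model described above\--a small words-by-referents matrix from which you predict which person a listener thinks a word picks out\--can be written down in a few lines. The sketch below is a generic Rational Speech Acts-style listener, not necessarily the specific model the interviewee has in mind, and the lexicon values are illustrative assumptions.\]

```python
import numpy as np

# Illustrative 3x3 lexicon: rows are words, columns are referents (people);
# an entry of 1 means the word literally applies to that person.
WORDS = ["hat", "glasses", "beard"]
PEOPLE = ["P1", "P2", "P3"]
LEXICON = np.array([
    [1.0, 1.0, 0.0],  # "hat" applies to P1 and P2
    [0.0, 1.0, 1.0],  # "glasses" applies to P2 and P3
    [0.0, 0.0, 1.0],  # "beard" applies only to P3
])


def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)


def pragmatic_listener(lexicon, alpha=1.0):
    # Literal listener: P(person | word) proportional to literal truth (uniform prior).
    l0 = normalize(lexicon, axis=1)
    # Pragmatic speaker: P(word | person) proportional to exp(alpha * log L0).
    s1 = normalize(np.exp(alpha * np.log(l0 + 1e-12)).T, axis=1)
    # Pragmatic listener: P(person | word) proportional to the speaker's probability.
    return normalize(s1.T, axis=1)


if __name__ == "__main__":
    for word, row in zip(WORDS, pragmatic_listener(LEXICON)):
        print(word, {p: round(v, 2) for p, v in zip(PEOPLE, row)})
```

\[With a uniform prior, the listener hearing \"hat\" leans toward P1, because a speaker who meant P2 had a more informative word available; scaling this beyond a toy matrix to sentences and images is the open problem the interviewee points to.\]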
**0:20:05.3 Vael:** Yeah. That does seem right. Thanks. Alright, so this next question is still about the CEO AI. So imagine we have the CEO AI that is capable of multi-step planning and has a model of itself in the world. So it\'s modeling other people modeling it, because that seems really important in order to have a CEO. So it\'s making these plans for the future and it\'s trying to optimize for profit with the constraints I\'ve mentioned. And it\'s noticing that some of its plans fail because the humans shut it down. And so we built into this AI, in its loss function, that it has to get human approval for things, because it seems like a basic safety measure. (Interviewee: \"Sorry, it has to get what?\") It has to get approval for any action from humans, like a stakeholder-type thing. And so the humans have asked for a one-page memo from the AI just to discuss what it\'s doing. And so the AI is thinking about what to put in this memo, and it\'s like, \"Maybe I should omit some relevant information that the humans would want, because that would reduce the chance that the AIs \[note: I meant to say \"humans\"\] would shut me down and increase the likelihood of my plans succeeding, of optimizing my goal.\" And so in this case, we\'re not building self-preservation into the AI; what we\'re doing is having an AI that\'s an agent that is optimizing over a goal. And so instrumentally, it has the goal of self-preservation. So what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\"\n\n**0:21:30.7 Interviewee:** Yeah. I think if we get systems that have that capability to do the things that we said in the beginning, like planning, in the most extreme form, then I agree with this statement.\n\n**0:21:44.6 Vael:** Interesting. Do you think we\'ll ever\-- hm, so\... We were talking about whether you thought that we would get to that point. What was the answer to that?\n\n**0:21:54.9 Interviewee:** The answer is that I think so, although I\'m not sure to what extent. I\'m not sure if the most powerful possible version will actually happen in 30-40 years. But I think there\'s, there\'s like a small chance, but\-- it\'s not impossible.\n\n**0:22:11.1 Vael:** Interesting. So yeah, within our lifetimes. Yeah, so I\'m worried about the possibility that we don\'t get the alignment problem right per se. And we end up with systems that are now optimizing against humans, which seems bad, especially if they are smarter than humans. Yeah. How likely do you think it is that this sort of scenario would happen?\n\n**0:22:34.7 Interviewee:** The scenario of having super-capable AI that is---\n\n**0:22:38.3 Vael:** That has an instrumental incentive to do self-preservation, or other instrumental incentives like power-seeking or acquiring resources or improving itself.\n\n**0:22:48.1 Interviewee:** Yeah, I think it is small but not zero.\n\n**0:22:57.9 Vael:** Yeah. How bad do you think that would be if such a thing happened?\n\n**0:23:03.4 Interviewee:** Yeah, I think pretty bad.\n\n**0:23:05.9 Vael:** Pretty bad. Yeah. Like what would happen?\n\n**0:23:10.4 Interviewee:** Yeah, it\'s not exactly clear to me what that would look like. But it is a little scary to think about humans losing control in some ways. The worry is exactly that we can\'t\... like if we\'re not in control, we can\'t know exactly what comes.\n\n**0:23:30.6 Vael:** Yeah. That seems true. Have you heard of the\-- What does AI safety mean to you? That\'s my first question.\n\n**0:23:39.3 Interviewee:** AI safety. I understand it as a field that\... When I think of AI safety, the first defining problem that comes up for me is AI alignment. This question, \"How do you specify your goals in a way that aligns with human values?\" and all that.\n\n**0:24:00.6 Vael:** That makes sense. Yeah. When did you start learning about AI alignment, or when did you start caring about it?\n\n**0:24:05.9 Interviewee:** Ah, it was mostly when I came to the \[position\] at \[university\], because I really wasn\'t that much of an AI person before. I kind of incidentally got into AI because it gave me tools to think about problems that were hard before. And I happened to get a bit more involved with the AI safety community. \[Friend\'s name\], you might know him. (Vael: \"Ah! Nice.\") We\'re very close friends. We lived together for almost three years, and had a lot of conversations.\n\n\[short removed segment, discussing situation and friend\]\n\n**0:25:09.2 Vael:** Yeah. Some visions that I have, why it could be bad, is if we have an AI that\'s like very powerful, then if it\'s incentivized to get rid of humans, then maybe it\'s intelligent enough to do synthetic biology against humans, or nanotechnology or whatever, or maybe just increase the amount of pollution. I don\'t know.
I just feel like there\\'s ways in which you could make the environment uninhabitable for humans via like putting things in the water or in the air. It\\'d be pretty hard to do worldwide, but seems maybe possible if you\\'re trying. Yeah. Also we\\'ve got like misuse and AI-assisted war, and like maybe if we put AI in charge of like food production or manufacturing production, then we can have a lot of correlated supply chain failures. That\\'s another way, but I don\\'t know if that would kill everyone of course, but that could be quite bad.\n\n**0:26:12.5 Interviewee:** Yeah, that is something that I worry more about in the short-term, which is AI in the hands of\\... Even currently, AI can already be pretty bad with like spreading misinformation. That certainly happened here in \\[country\\], but in \\[other country\\] in the last elections, where this really bad president got elected, he used a lot of misinformation campaigns funded by a lot of people. And it\\'s super hard to track because it\\'s on WhatsApp and it\\'s like the double-edged sword, the sword of privacy. It aligns with things that we care about, but which also allows these campaigns to run at massive scale and not be tracked. Even currently, AI personalizing wordings and messages to different people at large scale could be really pretty \\[hard to parse, \\\"good\\\"?\\].\n\n**0:27:20.1 Vael:** Got it. Do you work on AI alignment stuff?\n\n**0:27:26.0 Interviewee:** So currently, I\\'m working more on the education side of research. I still keep in the back of my head some of the problems about the ambiguity thing. We did have one paper last year that started building some of those ideas but I\\'m not actively working on that. But it\\'s still like a realm of problems that I think is important for a number of reasons.\n\n**0:27:56.2 Vael:** Yeah. That makes sense. What would cause you to work on it?\n\n**0:28:04.4 Interviewee:** Sorry, what what would make me work on it?\n\n**0:28:08.3 Vael:** Yeah, like in what circumstance would you be like: Whelp, it is now five year later, it is now one year later, and I happen to be somehow working on some element of the AI alignment problem. Or AI safety or something. How did that happen?\n\n**0:28:27.8 Interviewee:** Yeah, it\\'s interesting. From my trajectory, I have always tried to work on the thing that I feel is not being done and that I\\'m in a certain position to do. And that has over time shifted towards education for a number of reasons. And I don\\'t know exactly where that comes from, but I still feel that I\\'m more able to contribute there than with other AI problems. Although I guess that kind of thing is hard to assess, but it\\'s still my feeling on when deciding what things to work on.\n\n**0:29:19.9 Vael:** Hm, so it sounds like there\\'s maybe something like\\... Okay, so I don\\'t quite know if it\\'s a matter of whether the education bit is neglected by the world, or whether it\\'s like, you feel like it would be a better fit per se, or both?\n\n**0:29:35.6 Interviewee:** Yeah, it\\'s the combination of both. Like if it\\'s neglected. It is important, important in the short-term. And I think there are like things that I could do in the next five years. If the plan that I have for the \\[position\\] I have works out, I think it would be great. Like we would have tools that people would be able to use, and they need technical advances that are in the intersection of things that I care about or know for very weird reasons. 
But I feel like I\'m in a position to make progress there, and very immediately. So yeah, I just feel very urged to see how that goes.\n\n**0:30:21.3 Vael:** Yeah. Makes sense. Yeah, do you ever think you\'ll go get into the AI alignment thing, or you\'ll just be like, \"Woo, seems good for someone to work on it\"? I get that\-- that\'s an impression that I get.\n\n**0:30:32.6 Interviewee:** Uh huh, yeah. I think I\'m getting a taste of it through \[friend\]. We actually wrote this project together two months ago. Okay, I actually met with \[friend\] to discuss this a few times, but I have no idea where it exactly came from, I think from someone at LessWrong that posted like a challenge. So it\'s a program called ELK, which I forgot even what ELK means, but I met with \[friend\] and we ended up coming up with some ideas, and he submitted it. \[\...\]\n\n**0:31:16.9 Interviewee:** So I guess I\'m getting a little bit of flavor of the kind of work that that entails. I still don\'t feel exactly comfortable in doing it myself, just because it requires this very hypothetical and future reasoning that I\'m not used to, not that I couldn\'t. But I understand that thinking about that kind of problem today requires you to think of systems that we can\'t use today and we can\'t run and test today. And I\'m very used to ideas where I can write down the code and see how they work and then make progress from there. So I don\'t know, my opinion and comfort might change.\n\n**0:32:18.5 Vael:** Yeah, I also encountered this where I was like, \"I don\'t wanna think about far-future programs or far-future things, where there\'s no feedback loops and it\'s very hard to tell what\'s happening and there\'s only a limited number of people, and then\-- now there\'s money, but there wasn\'t necessarily very much money before\-- and there\'s not really journals, and everything\'s pre-paradigmatic.\" And then, I think I was won over by the argument that, I don\'t know, I think existential risk is probably the thing that I\'ll just work on for most of my life because it seems like really important, and like, probable. But then I tried technical research for a while, and I thought it was not my cup of tea. And then I got into AI community building, which is what I\'m kind of currently doing, and trying to get more people to become AI technical researchers. But not me! It\'s been a good fit in some sense.\n\n**0:33:10.1 Interviewee:** I see, that makes sense.\n\n**0:33:11.1 Vael:** Yeah, and I think it\'s pretty interesting as well. Well, the current thing I\'m doing is talking to AI researchers and being like, \"What do you think of things? What about this argument?\" \...Cool. Well, we\'ve got a little bit of time left, so, see if I can ask some of my more unusual questions, which are\... How much do you think about policy, and what do you think about policy related to AI?\n\n**0:33:34.6 Interviewee:** Yeah, I think I\'ve\... I think a lot about policy in general, not just related to AI. I\'m currently at this phase in my life where I think a lot of the immediate problems that we have, unfortunately in a sense, we\'ll have to resolve by good politics, and I\'m not exactly sure yet how to do that. Because one other thing that I think about, when I think about AI safety, is that one thing is to have the technical ability to do certain things. The other is to get the relevant players to implement them.
So, if there is an AI safety mechanism that can like limit the reach of Google\\'s super AI that will do all things, but it also means that it will decrease revenue to some extent\\... I\\'m not sure the decision that they will make which would be to implement the safe version. Think about VC\\'s that are sitting in these boards and pushing decisions for the company. I\\'m not sure if all of them actually care about this and realize that it\\'s a problem. So I think part of the solution is also on the political side, of like us as a society sitting down and deciding, \\\"What do we gotta do?\\\" More than just the technical proposal. I\\'ve decided that this year, I will try to have some political involvement in \\[country\\], which is a place where I can. I\\'m currently in the planning phase, but I\\'ve committed to doing something this year, because we have elections this year.\n\n**0:35:39.2 Vael:** Interesting, yeah. I think AI governance is very, very important. I think the community\\-- the AI safety community also, like\\... or, I don\\'t know, the Effective Altruism community \\[hard to parse\\] a lot in their listings. Alright, question: if you could change your colleagues\\' perceptions of AI, what attitudes or beliefs would you want them to have? So what beliefs do they currently have and how would you want those to change?\n\n**0:36:05.9 Interviewee:** I\\'m not sure if this is the kind of attitude that you\\'re talking about, but one attitude that I really don\\'t like and is very prevalent in papers is that people kind of self-identify with the method that they\\'re proposing so much that they call it \\\"our method\\\". In the sense that the goal should not necessarily be to show off better numbers, and show a curve where the color that\\'s associated with you is like higher than the other. But it should really be to understand what\\'s going on, right? If we have a system that has some sort of behavior that\\'s desirable, that\\'s awesome, and we should also look at how it maybe doesn\\'t. Like people include a lot of cases in their papers where, \\\"Oh look, the model is doing this.\\\" But then when we actually try out the model, it\\'s very easy to find cases where it doesn\\'t or it fails in weird ways. And the reluctance to put that comes from this attachment from like, \\\"Oh, this is like me, but in form of a method that is written in this paper.\\\" And I wish that people didn\\'t have that attitude, that they were more detached and scientific about it.\n\n**0:37:24.9 Vael:** Yeah, makes sense. Cool. And then my last question is, why did you choose this interview, and how has this interview been for you?\n\n**0:37:32.6 Interviewee:** Yeah, it\\'s been really fun. I had thought of some of the questions before, and not of others. Yeah, the exercise that I maybe haven\\'t done that much is to think super long-term, like 50-100 years in the future. I thought about it some but I don\\'t think I have the right tools yet to think about them, so I get in this hard-to-process state. Yeah, so I came mostly out of curiosity, I didn\\'t know exactly what to expect. If you don\\'t mind sharing, but if you do, that\\'s okay, how did you pick the people to interview?\n\n**0:38:28.3 Vael:** Yeah, so this is a pretty boring answer, but I originally selected people who\\'d submitted papers to NeurIPS or ICML in 2021, and like yeah, then some proportion of people replied back to me.\n\n**0:38:44.5 Interviewee:** I see.\n\n**0:38:47.2 Vael:** Yeah, great, cool. 
That makes a lot of sense. Thank you so much for your time. I will send you the money and some additional resources if you\'re curious about it. I\'m sure \[friend\] has already shown you some, but if you\'re curious. And thanks so much for doing this.\n\n**0:39:00.7 Interviewee:** Yeah, no, definitely. Thank you for the invite, this was fun.\n\n**0:39:05.1 Vael:** All right. Bye.\n\n**0:39:05.8 Interviewee:** Bye.\n\n# Brian Tse Sino-Western cooperation in AI safety - EA Forum\n\n_International cooperation is essential if we want to capture the benefits of advanced AI while minimizing risk. Brian Tse, a policy affiliate at the_ [_Future of Humanity Institute_](https://www.fhi.ox.ac.uk/)_, discusses concrete opportunities for coordination between China and the West, as well as how China’s government and technology industry think about different forms of AI risk._\n\n_A transcript of Brian’s talk, which we have edited lightly for clarity, is below. You can also watch it on_ [_YouTube_](https://www.youtube.com/watch?v=3qYmLRqemg4&list=PLwp9xeoX5p8MqGMKBZK7kO8dTysnJ-Pzq&index=8&t=5s) _or read it on_ [_effectivealtruism.org_](https://effectivealtruism.org/articles/brian-tse-sino-western-cooperation-in-ai-safety)_._\n\n## The Talk\n\nIt has been seven decades since a nuclear weapon has been detonated. \n\n![](https://images.ctfassets.net/ohf186sfn6di/1ZoSlAdq1BLLV2u3X2gwRX/6f45b25b638e071a29624c15b054b7b4/Tse1.png)\n\nFor almost four decades, parents everywhere have not needed to worry about their children dying from smallpox. \n\n![](https://images.ctfassets.net/ohf186sfn6di/39RqVyvhzwg4gVXYHJX5Zu/b5225eb44a1a2b74551a7672e03133c4/TSe2.png)\n\nThe ozone layer, far from being depleted to the extent once feared, is expected to recover in three decades.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6ERXuBj7tDEMnOaUss0259/da778454b8bddf2462e077850778b909/Tse3.png)\n\nThese events — or non-events — are among humanity’s greatest achievements. They would not have occurred without cooperation among a multitude of countries. This serves as a reminder that international cooperation can benefit every country and person.\n\nTogether, we can achieve even more. In the next few decades, AI is poised to be one of the most transformative technologies. In the Chinese language, there is a word, “wēijī,” which is composed of two characters: one meaning danger and the other opportunity. \n\n![](https://images.ctfassets.net/ohf186sfn6di/3DxtZ8X7nQcLbMx2L59qoU/2cc5e45059559bc05abc333dafd5a5d9/Tse5.png)\n\nBoth characters are present at this critical juncture. With AI, we must seek to minimize dangers and capture the upsides. Ensuring that there is robust global coordination between stakeholders around the world, especially those in China and the West, is critical in this endeavor.\n\nSo far, the idea of nations competing for technological and military supremacy has dominated the public narrative.
\n\n![](https://images.ctfassets.net/ohf186sfn6di/7be6c6zbbLM1s4z5ke6b2z/6192b8c68de6390b9f98f972adc8f7c7/Tse6.png)\n\nWhen people talk about China and AI, they always invoke the country's ambition to become the world leader in AI by 2030. In contrast, there is very little attention paid to China’s call for international collaboration in security, ethics, and governance of AI, which are areas of mutual interest. I believe it is a mistake to think that we must have either international cooperation or international competition. Today, some believe that China and the U.S. are best described as strategic adversaries. \n\nI believe we must deliberately use new concepts and terms that capture the two countries’ urgent need to cooperate — not just their drive to compete. \n\n![](https://images.ctfassets.net/ohf186sfn6di/2kCYHULhiJne66glDPGQU9/636099b850e41e7b281ed255fc1f6af2/Tse7.png)\n\n[Joseph Nye](https://en.wikipedia.org/wiki/Joseph_Nye), well-known for coining the phrase “soft power,” has suggested that we use “cooperative rivalry” to describe the relationship. Graham Allison, the author of [_Destined For War_](https://www.amazon.com/Destined-War-America-Escape-Thucydidess-ebook/dp/B01IAS9FZY), has proposed the word “coopertition,” allowing for the simultaneous coexistence of competition and cooperation. \n\nIn the rest of my talk, I'm going to cover three areas of AI risk that have the potential for global coordination: accidents, misuse, and the race to develop AI.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2kFFTHdAbjywRkLicjbzNb/dedb9ac8c05d8ff5381ac3d6308a3de0/Tse8.png)\n\nFor each of these risks, I will talk about their importance and feasibility for coordination. I will also make some recommendations.\n\n## The risk of AI accidents\n\nAs the deployment of AI systems has become more commonplace, the number of AI-related accidents has increased. For example, on May 6, 2010, the Dow Jones Industrial Average experienced a sudden crash of $1 trillion known as the “[Flash Crash](https://en.wikipedia.org/wiki/2010_Flash_Crash).” \n\n![](https://images.ctfassets.net/ohf186sfn6di/1rG4RIhcytxpLCSSLMAOqa/b2d163215231d0d9390cfeae5833a7b2/Tse9.png)\n\nIt was partly caused by the use of high-frequency trading algorithms. The impact immediately spread to other financial markets around the world. \n\nAs the world becomes increasingly interdependent, as with financial markets, local events have global consequences that demand global solutions. The participation of \\[the Chinese technology company\\] [Baidu](https://en.wikipedia.org/wiki/Baidu) in the [Partnership on AI](https://www.partnershiponai.org/) is an encouraging case study of global collaboration. \n\n![](https://images.ctfassets.net/ohf186sfn6di/1gj1V6eO47Xrmd0r85ZzHs/4c426b0f2673bb439f4cc24a2b9783fe/Tse10.png)\n\nIn a press release last year, Baidu said that the safety and reliability of AI systems is critical to their mission and was a major motivation for them to join the consortium. The \\[participating\\] companies think autonomous vehicle safety is an issue of particular importance. \n\nChina and the U.S. also seem to be coordinating on nuclear security. One example is the [Center of Excellence on Nuclear Security in Beijing](http://www.chinadaily.com.cn/world/2016xivisitczech/2016-04/02/content_24248877.htm), which is by far the most extensive nuclear program to receive direct funding from both the U.S. and Chinese governments. 
\n\n![](https://images.ctfassets.net/ohf186sfn6di/5RGogAMdz2TdPJhCF2AAL1/0aabe35cf3dfa2ec2dbf2c94f6ba2d70/Tse11.png)\n\nIt focuses on building a robust nuclear security architecture for the common good. A vital feature of this partnership is an intense focus on exchanging technical information, as well as reducing the risk of accidents.\n\nIt is noteworthy that, so far, China has emphasized the need to ensure the safety and reliability of AI systems. In particular, the [Beijing AI Principles](https://www.baai.ac.cn/blog/beijing-ai-principles) and the [Tencent Research Institute](https://www.tencent.com/en-us/abouttencent.html) have highlighted the risks of [AGI systems](https://en.wikipedia.org/wiki/Artificial_general_intelligence). \n\nWith our current understanding of AI-related accidents, I believe Chinese and international stakeholders can collaborate in the following ways:\n\n![](https://images.ctfassets.net/ohf186sfn6di/DC2KV5iNBDioXIusptFyd/f5a1360e3dd8facfe81b1938eaed181b/Tse13.png)\n\n1\\. Researchers can attend the increasingly popular AI safety workshops at some of the major machine learning conferences. \n\n2\\. Labs and researchers can measure and benchmark the safety properties of reinforcement learning agents, based on efforts by organizations and safety groups like that of [DeepMind](https://deepmind.com/).\n\n3\\. International bodies, such as [ISO](https://www.iso.org/news/ref2336.html), can continue their efforts to set technical standards, especially around the reliability of machine learning systems.\n\n4\\. Lastly, alliances such as the [Partnership on AI](https://www.partnershiponai.org/) can facilitate discussions on best practices (for example, through \[the Partnership’s\] [Safety-Critical AI Working Group](https://www.partnershiponai.org/wp-content/uploads/2018/07/Safety-Critical-AI_-Charter.pdf)).\n\n## The risk of AI misuse\n\nEven if we can mitigate the unintended accidents of AI systems, there is still a possibility that they'll be misused. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4aUajWe2ILsEyBAwNU7QBg/cc39fa462cdf7b93933f828233ac47f2/Tse14.png)\n\nFor example, earlier this year, [OpenAI](https://openai.com/) decided not to release the trained model of [GPT-2](https://openai.com/blog/better-language-models/), which \[can generate language on its own\], due to concerns that it might be misused to impersonate people, create misleading news articles, or trick victims into revealing their personal information. This reinforces the need for global coordination; malicious actors from anywhere could have gained access to the technology behind GPT-2 and deployed it in other parts of the world. \n\nIn the field of cybersecurity, there was a relevant case study of the global response to security incidents. \n\n![](https://images.ctfassets.net/ohf186sfn6di/5rRWmUfPFqxVKSNXGUfUCd/59cc69464a545d851ad1668b6087606f/Tse15.png)\n\nIn 1989, [one of the first computer worms](https://en.wikipedia.org/wiki/WANK_(computer_worm)) attacked a major American company. The incident prompted the creation of the international body [FIRST](https://www.first.org/) to facilitate information-sharing and enable more effective responses to future security incidents. Since then, FIRST has been one of the major institutions in the field. It currently lists ten American and eight Chinese members, including companies and public institutions.\n\nAnother source of optimism is the growing research field of \[adversarial images\].
These are small input samples that have been modified slightly to cause machine learning classifiers to misclassify them \[[e.g., mistake a toy turtle for a gun](https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed)\]. \n\n![](https://images.ctfassets.net/ohf186sfn6di/lJbmOVEwzZv1hXL2NAJr4/a53ea2be66331f8d7ddfe5b1fc821a47/Tse16.png)\n\nThis issue is highly concerning, because \[adversarial images\] could be used to attack a machine learning system without the attacker having access to the underlying model. \n\nFortunately, many of the leading AI labs around the world are already working hard on this problem. For example, [Google Brain organized a competition on this research topic](https://arxiv.org/pdf/1804.00097.pdf), and the team from China’s Tsinghua University won first place in both the “attack” and “defense” tracks of the competition. \n\nMany of the Chinese AI ethical principles also cover concerns related to the misuse of AI. One promising starting point of coordination between Chinese and foreign stakeholders, especially the AI labs, involves publication norms. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4QBUJ1V0ErUUomuohKULGk/40d8ca1ef8b554c8fecf438721dae469/Tse18.png)\n\nFollowing the \[controversy around\] OpenAI’s GPT-2 model, the Partnership on AI organized a seminar on the topic of research openness. There was no immediate consideration of whether the AI community should restrict research openness. However, they did agree that if the AI community moves in that direction, review parameters and norms should be standardized across the community (presumably, on a global level).\n\n## The risk of competitively racing to develop AI\n\nThe third type of risk that I'm going to talk about is the risk from racing to develop AI. \n\n![](https://images.ctfassets.net/ohf186sfn6di/2fpkY3dsOB7gOWMpvxDco3/2af5c51ca0daa08a461c36ce865d3e97/Tse19.png)\n\nUnder competitive pressure, AI labs might put aside safety concerns in order to stay ahead. Uber’s self-driving car crash in 2018 illustrates this risk. \n\n![](https://images.ctfassets.net/ohf186sfn6di/1LWMvi8tnoS3SFgPXZbD3q/e6d879a980bf6424be747400bc59535f/Tse20.png)\n\nWhen it happened, commentators initially thought that the braking system was the culprit. However, further investigation showed that the victim was detected early enough for the emergency braking system to have worked and prevented the crash. \n\nSo what happened? It turned out that the engineers intentionally turned off the emergency braking system because they were afraid that its extreme sensitivity would make them look bad relative to their competitors. This type of trade-off between safety and other considerations is very concerning, especially if you believe that AI systems will become increasingly powerful.\n\nThis problem is going to be even more acute in the context of international security. We should seek to draw lessons from historical analogs. \n\n![](https://images.ctfassets.net/ohf186sfn6di/2tbajnqeEQvQ4etm0A3N94/3293a000d93e4954da70500bd95011ea/Tse21.png)\n\nFor example, the report “[Technology Roulette](https://s3.amazonaws.com/files.cnas.org/documents/CNASReport-Technology-Roulette-Final.pdf)” by Richard Danzig discusses the norm of “no first use” and its contribution to stability during the nuclear era. Notably, China was the first nuclear-weapon state to adopt such a policy back in 1964, with varying degrees of success.
Other nations have also used the norm to moderate the proliferation and use of various military technologies, including blinding lasers and offensive weapons from outer space. \n\nNow, with AI as a general-purpose technology, there is a further challenge: How do you specify and verify that certain AI technologies haven’t been used? On a related note, the Chinese nuclear posture has been described as a defense-oriented one. The question with AI is: Is it technically feasible for parties to differentially improve defensive capabilities, rather than offensive capabilities, thereby stabilizing the competitive dynamics? I believe these are still open questions.\n\nUltimately, constructive coordination depends on the common knowledge that there is this shared risk of a race to the bottom with AI. I'm encouraged to see increasing attention paid to the problem on both sides of the Pacific. \n\n![](https://images.ctfassets.net/ohf186sfn6di/5qMHgDr06grzXgXl5wVR0U/71da82e0ca96e1e457c551a48ab78b3d/Tse22.png)\n\nFor example, [Madame Fu Ying](https://en.wikipedia.org/wiki/Fu_Ying), who is chairperson of the National People’s Congress Foreign Affairs Committee in China and an influential diplomat, has said that Chinese technologists and policymakers agree that AI poses a threat to humankind. At the World Peace Forum, she further emphasized that the Chinese believe we should preemptively cooperate to prevent such a threat.\n\nThe [Beijing AI Principles](https://www.baai.ac.cn/blog/beijing-ai-principles), in my view, provide the most significant contribution from China regarding the need to avoid a malicious AI race. And these principles have gained support from some of the country’s major academic institutions and industry leaders. It is my understanding that discussions around the [Asilomar AI Principles](https://futureoflife.org/ai-principles/), the book [_Superintelligence_](https://www.amazon.com/dp/B00LOOCGB2/) by Nick Bostrom, and warnings from Stephen Hawking and other thinkers have all had a meaningful influence on Chinese thinkers. \n\nBuilding common knowledge between parties is possible, as illustrated by the [Thucydides Trap](https://foreignpolicy.com/2017/06/09/the-thucydides-trap/). \n\n![](https://images.ctfassets.net/ohf186sfn6di/1Vzb0VkstLArpEzq3OyryU/ef32769dd3ea9b85336a31269a11b57b/Tse24.png)\n\nCoined by the scholar Graham Allison, The Thucydides Trap describes the idea that rivalry between an established power and a rising power often results in conflict. This thesis has captured the attention of leaders in both Washington, D.C. and Beijing. In 2013, President Xi Jinping told a group of Western visitors that we should cooperate to escape from the Thucydides Trap. In parallel, I think it is important for leaders in Silicon Valley — as well as in Washington, D.C. and Beijing — to recognize this collective problem of a potential AI race to the precipice, or what I might call “the Bostrom Trap.”\n\nWith this shared understanding, I believe the world can move in several directions. First, there are great initiatives, such as the [Asilomar AI Principles](https://futureoflife.org/ai-principles/), which can help many of the signatories \\[adhere to\\] the principle of arms-race avoidance. \n\n![](https://images.ctfassets.net/ohf186sfn6di/pxytr1hFG0skvo5KfSsYW/5f81f2483fefc4fe864fa93597295fb1/Tse25.png)\n\nExpanding the breadth and depth of this dialogue, especially between Chinese and Western stakeholders, will be critical to stabilize expectations and foster mutual trust. 
\n\nSecond, labs can initiate AI safety research collaborations across borders. \n\n![](https://images.ctfassets.net/ohf186sfn6di/5YtD8WaEd5XvT8eapOQf2R/7cb760847d9b77f3c509645f89301746/Tse26.png)\n\nFor example, labs could collaborate on some of the topics laid out in the seminal paper “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565),” which was itself a joint effort from multiple institutions. \n\nLastly — and this is also the most ambitious recommendation — leading AI labs could consider adopting the policies in the [OpenAI Charter](https://openai.com/charter/).\n\n![](https://images.ctfassets.net/ohf186sfn6di/5NQpoh8WeGaajjsWpmqS0J/82c00dbe6484071b2fad6b3943e8751b/Tse27.png)\n\nThe charter claims that if a value-aligned, safety-conscious project comes close to building AGI technology, OpenAI will stop competing and start assisting with that project. This policy is an incredible public commitment, as well as a concrete mechanism in trying to reduce these undesirable \[competitive\] dynamics.\n\nThroughout this talk, I have not addressed many of the complications involved in such an endeavor. There are considerations such as industrial espionage, civil/military fusion, and civil liberties. I believe each of those topics deserves a nuanced, balanced, and probably separate discussion, given that I will not be able to do proper justice to them in a short presentation like this one. That said, on the broader challenge of overcoming political tension, I would like to share a story.\n\nSome believe the [Cuban Missile Crisis](https://en.wikipedia.org/wiki/Cuban_Missile_Crisis) had a one-in-three chance of resulting in a nuclear war between the U.S. and the Soviet Union. After the crisis, President J. F. Kennedy was desperately searching for a better way forward. \n\n![](https://images.ctfassets.net/ohf186sfn6di/64tMEIgVMRJ7yccKWLl3Rq/9970f46aa5c66f674cd5ca5c57700bef/Tse29.png)\n\nBefore he was assassinated, in one of his most significant speeches about international order, he proposed the strategic concept of a world safe for diversity. In that world, the U.S. and Soviet Union could compete rigorously, but only peacefully, to demonstrate whose values and system of governance might best serve the needs of citizens. This eventually evolved into what became “[détente](https://en.wikipedia.org/wiki/D%C3%A9tente),” a doctrine that contributed to the easing of tension during the Cold War. \n\nIn China, there is a similar doctrine, which is “harmony in diversity.” \[Brian says the word in Mandarin.\] \n\n![](https://images.ctfassets.net/ohf186sfn6di/2061oezRrXIAkHDrjjpNAD/a8c794137426b41121695aa224f7c9e8/Tse30.png)\n\nThe world must learn to cooperate in tackling our common challenges, while accepting our differences. If we were able to achieve this during the Cold War, I believe we should be more hopeful about our collective future in the 21st century. Thank you.\n\n## Q&A\n\n**Nathan Labenz \[Moderator\]:** I think the last time I saw you was just under a year ago. How do you think things have gone over the last year? If you were an attentive reader of the _New York Times_, you would probably think things are going very badly in US/China relations. Do you think it's as bad as all that? Or is the news maybe hyping up the situation to be worse than it is?\n\n**Brian:** It is indeed worrying. I will add two points to the discussion. One: we're not only thinking about coordination between governments.
In my talk, I focused on state-to-state cooperation, but I mentioned a lot of potential areas of collaboration between AI labs, researchers, academia and civil society. And I believe that the incentive and the willingness to cooperate between those stakeholders are there. Second, my presentation was meant to be forward-looking and aspirational. I was not looking at the current news. I was thinking that if in five to 10 years, or even 20 years, AI systems become increasingly advanced and powerful — which means there could be tremendous upsides for everyone to share, as well as downsides to worry about — the incentive to cooperate, or at least aim for “coopertition,” should be there. \n\nIt could be interesting to think about game theory. I won’t go into the technical details. But the basic idea is that if there are tremendous upsides and also shared downsides for some number of parties, then it is more likely that those parties will be willing to cooperate instead of just compete.\n\n**Nathan:** A question from the audience: Do you think that there's any way to tell right now whether the U.S. or the West (however you prefer to think about that), has an edge over China in developing AI? And do you think that there are political or cultural differences that contribute to that, if you think such a difference exists?\n\n**Brian:** Just in terms of the potential for developing capable systems? We are not talking about safety and ethics, right?\n\n**Nathan:** You can interpret the question \\[how you like\\].\n\n**Brian:** Okay. I will focus on capabilities. Currently, it is quite clear to me that China is nowhere near the U.S. in terms of overall AI capabilities. People have argued at length. I would add a few things. \n\nIf you look at the leadership structure of Chinese AI companies — for example, [Tencent](https://www.tencent.com/en-us/abouttencent.html) — and some of the recent developments, it seems like the incentive to develop advanced and interesting theoretical research is not really there. Chinese AI companies are much more focused on products and near-term profit. \n\nOne example I would give is the Tencent AI lab director, Dr. Tong Zhang, who was quite interested in ideas relevant to AGI and worked at Tencent for two years. He decided to leave the AI lab earlier this year and is now going back to academia. He is joining the Hong Kong University of Science and Technology as a faculty member. Even though he didn't explicitly mention the reason \\[for his departure\\], people think that the incentive to develop long-term, interesting research is not there at Tencent or, honestly, at many of the AI companies.\n\nAnother point I will raise is this: If you look at some of the U.S. AI labs — for example, [FAIR](https://engineering.fb.com/ai-research/fair-fifth-anniversary/) or [Google Brain](https://x.company/projects/brain/) — the typical structure is that you have two research scientists and one research engineer on a team. The number could be greater, but the ratio is usually the same. But the ratio of research scientists to research engineers is the opposite for Chinese AI companies. There, you have one research scientist and two research engineers, which implies that they are much more focused on putting their research ideas into practice and applications.\n\n**Nathan:** That's a surprising answer to me because I think that the naive, “_New York Times_ reader” point of view would be that the Chinese government is way better than the U.S. 
government in terms of long-term planning and priority-setting. If you agree with that, how do you think that translates into a scenario where the Chinese mega companies are maybe not doing as much as the American companies?\n\n**Brian:** I think the Chinese model is still interesting from a long-term, mega-project perspective. But there is variance in terms of what type of mega projects you are talking about. If you're talking about railways, bridges, or infrastructure in general, the Chinese government is incredibly good at that. You can construct a lot of buildings in just days, and I think that it takes the U.S., UK, and many other governments years. But they are engineering projects. We're not talking about Nobel Prize-winning types of projects. I think that's really the difference. \n\nThere is some analysis on where the top AI machine learning researchers are working and all of them are in the U.S. But if you look at pretty good researchers — potentially Alan Turing Prize-winning researchers — then yes, China has a lot of them. I think we have to be very nuanced in terms of looking at what types of scientific projects we are talking about, and whether it's mostly about scientific breakthroughs or engineering challenges.\n\n**Nathan:** Fascinating. A bunch of questions are coming in. I'm going to do my best to get through as many as I can. One question is about the general fracturing of the world that seems to be happening, or bifurcation of the world, into a Chinese sphere of influence (which might just be China, or maybe it includes a few surrounding countries), and then the rest of the world. We're seeing Chinese technology companies getting banned from American networks, and so on. Do you think that that is going to become a huge problem? Is it already a huge problem, or is it not that big of a problem after all?\n\n**Brian:** It's definitely concerning. My main concern is the impact on the international research community. \\[In my talk\\], I alluded to the international and interconnected community of research labs and machine-learning researchers. I believe that community will still be a good mechanism for coordinating on different AI policy issues —they would be great at raising concerns through the [AI Open Letter Initiative](https://futureoflife.org/ai-open-letter/), collaborating through workshops, and so on. \n\nBut this larger political dynamic might affect them in terms of Chinese scientists’ ability to travel to the U.S. What if they just can't get Visas? And maybe in the future, U.S. scientists might also be worried about getting associated with Chinese individuals. The thing I'm worried about is really this channel of communication between the research communities. Hopefully, that will change.\n\n**Nathan:** You're anticipating the next question, which is the idea that individuals are maybe starting to become concerned that if they appear to be on either side of the China/America divide — if they appear too friendly — they'll be viewed very suspiciously and might suffer consequences from that. Do you think that is already a problem, and if so, what can individuals do to try to bridge this divide while minimizing the consequences that they might suffer?\n\n**Brian:** It's hard to provide a general answer. It probably depends a lot on the career trajectories of individuals and other constraints. \n\n**Nathan:** There’s a question about the Communist Party. The questioner assumes that the Communist Party has final say on everything that's going on in China. 
I wonder if you think that's true, and if it is, how do we work within that constraint?\n\n**Brian:** In terms of international collaboration and what might be plausible?\n\n**Nathan:** Is there any way to make progress without the buy-in of the Communist Party, or do you need it? And if you need it, how do you get it?\n\n**Brian:** I think one assumption there is that it is bad to have involvement from the government. I think we need to try to avoid that — I can just smell the assumptions when people ask these types of questions. It is not necessarily true. I think there are ways that the Chinese government can be involved meaningfully. We just need to be thinking about what those spaces are. \n\nAgain, one promising channel would be AI safety conferences through academia. If Tsinghua University is interested in organizing an AI safety conference with potential buy-in from the government, I think that's fine, and I think it's still a venue for research collaboration. The world just needs to think about what the mutual interests are and, honestly, the magnitude of the stakes.\n\n**Nathan:** At a minimum, the Communist Party has at least demonstrated awareness of these issues and seems to be thinking about them. I think we're a little bit over time already, so maybe just one last question. Do you see this competition/cooperation dynamic and potentially this race to the precipice dynamics getting repeated across a lot of things? There's AI, and obviously in an earlier era there was nuclear rivalry, which hasn't necessarily gone away either. We also saw this news item of the first [CRISPR-edited babies](https://www.sciencemag.org/news/2019/08/top-stories-untold-story-2018-s-crispr-babies-china-s-gene-edited-crops-and-new), and that was a source of a lot of concern for people who thought, \"We're losing control of this technology.\" So, what's the portfolio of these sorts of potential race-dynamic problems?\n\n**Brian:** I think these are relevant historical analogs, but what makes AI a little bit different is that AI is a general-purpose technology, or omni-use technology. It's used across the economy. It's a question of political and economic \\[importance\\], not just international security. It's not just a nuclear weapon or a space weapon. It’s everywhere. It's more like electricity in the industrial revolution.\n\nOne thing that I want to add, which is related to the previous question, is the response from Chinese scientists to the gene-editing incident. Many people condemned the behavior of the scientist \\[responsible for the gene editing\\] because he didn't \\[comply fully\\] with regulations and was just doing it at a small lab in the city. But what you can see there is this uniformity of an international response to the incident; the responses from U.S. scientists, UK scientists, and Chinese scientists were basically the same. There was an open letter to _Nature_, with hundreds and hundreds of Chinese scientists saying that this behavior is unacceptable. \n\nWhat followed was that the Chinese government wanted to develop better regulations for gene editing and \\[explore\\] the relevant ethics. I think this illustrates that we can have a much more global dialogue about ethics and safety in science and technology. 
And in some cases, the Chinese government is interested in joining this global dialogue, and takes action in its domestic policy.", "filename": "Sino-Western cooperation in AI safety _ Brian Tse _ EA Global - San Francisco 2019-by Centre for Effective Altruism-video_id 3qYmLRqemg4-date 20190829.md", "id": "2be7e2946cf9aed213caf48fa81ebe11", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "AGI Safety and Alignment with Robert Miles-by Machine Ethics-date 20210113", "authors": ["Robert Miles"], "date_published": "2021-01-13", "text": "# AGI Safety and Alignment with Robert Miles on the Machine Ethics Podcast\n\nInterviewee: Robert Miles\nDate: 2021-02-13\n\nSummary: This episode we're chatting with Robert Miles about why we even want artificial general intelligence, general AI as narrow AI where its input is the world, when predictions of AI sound like science fiction, covering terms like: AI safety, the control problem, Ai alignment, specification problem; the lack of people working in AI alignment, AGI doesn’t need to be conscious, and more\n\nRob Miles is a science communicator focused on AI Safety and Alignment. He has a YouTube channel called Rob Miles AI, and runs The Alignment Newsletter Podcast, which presents summaries of the week's research. He also collaborates with research organisations like the Machine Intelligence Research Institute, the Future of Humanity Institute, and the Centre for the Study of Existential Risk, to help them communicate their work.\n\nBen Byford[00:00:08] Hi and welcome to the Machine Ethics podcast. This month, episode 51, recorded in early January, we’re talking to Rob Miles, communicator of science, machine learning, computing and AI alignment. We chat about why we would even want general artificial intelligence; general AI as narrow AI where its input is the world; making predictions that sound like science fiction; we elucidate terms like “AI safety”, “the control problem”, “AI alignment” and “specification problem”; the lack of people working in AI alignment; the fact that AGI doesn’t need to be conscious; and how not to make an AGI.\n\nBen Byford[00:01:11] Hi Rob, thanks for joining me on the podcast. It’s really great to have you, and if you could just give yourself a quick introduction. Who you are and what do you do?\n\nRobert Miles[00:01:21] Oh, yeah, okay. My name is Rob Miles, I am, I guess a science communicator, populariser, I don’t know. I spend my time explaining AI safety and AI alignment on the internet. Mostly on YouTube. I also now run a little podcast, which is just an audio version of a newsletter run by Rohin Shah and me, Alignment newsletter, which is the week’s research in AI safety. But most of my time is spent on the YouTube channel, which is aimed a bit more at the general public.\n\nBen Byford[00:01:59] That’s awesome. Thanks very much. So, the first question we have on the podcast generally is a quite open-ended one. What is AI?\n\nRobert Miles[00:02:10] What is AI? So I would say AI is a moving target. There’s a video where I talk about this question, actually. I think it’s like “technology”. If somebody says, “I’m not good with technology,” they’re probably not talking about a bicycle, or a pair of scissors, or something, right. These are all technology, but once something really works reliably, and becomes a deeply ingrained part of our life that we don’t think about, we stop calling it technology. 
We reserve “technology” as a word for stuff that’s on the edge – stuff that’s still not actually very good, stuff that’s not reliable, stuff that often breaks, and we have to think about the fact that it’s technology. Or stuff that we don’t really understand very well yet. These are the things that get labelled technology, generally speaking. So it’s mostly electronics at this point.\n\nI think AI is a bit like that as well. AI is things that computers can do that they didn’t used to be able to do. There was a time when figuring out a good way to schedule the flights for your airline, to make sure that all the pilots and planes got to where they needed to be logistically, where that was like “Artificial Intelligence”. And we wouldn’t call it that these days, I don’t think, because the technology is many decades old now, and it works well. But if we were to try and do that with a neural network then we’d call it AI again, because it’s buggy and unreliable, and new. So yeah, I think the origin of the term “Artificial Intelligence” is a bit like the difference between a “robot” and a “machine”. A robot is a machine that’s designed to do something that a person does, and once we stop thinking that this is a task for a person to do we tend to stop calling things robots in the same way. It’s not “robot crop harvesting”, it’s a combine harvester, it’s just a machine at that point. So I think AI is about getting computers to do things that we previously thought were the domain of human minds.\n\nBen Byford[00:04:41] Yep. Like science education.\n\nRobert Miles[00:04:42] Right. Yeah, sure. If you could get a computer to do that, it’s for sure AI.\n\nBen Byford[00:04:48] Yeah, exactly. Cool, so with that foundation, somewhat, we have these terms that we throw away. I’m trying to get us to the point where we can talk about AI safety and stuff. So, we have this idea that we have machine learning techniques, and kind of old school AI – like you were saying – different techniques that have just become part of our world now, essentially. Some of those things we categorise as “simple AI” or “constrained AI”, or AI that is just good at one thing. Then we have this more broad idea about general AI, or superintelligence, or artificial intelligences that are maybe programmes that can do more than one thing, can learn a broad range of stuff. So I guess the question is, why would we want that? It’s an interesting question to answer before we then dive into what could go wrong. So, we’re talking about superintelligence, general AI, why would we want to do this?\n\nRobert Miles[00:05:58] Yeah, there is this very important distinction between what I would call “narrow AI” and what I would call “general AI”. Although it’s not really a distinction, so much as a spectrum. There’s a question of how wide a range of domains can something act in. So you take something like Stockfish, it’s a narrow AI, it’s only domain is chess. All it can do is play chess and it plays it very well. But then you could take something like AlphaZero, which is one algorithm that can be trained just as well to play chess or Go or shogi, but that’s more general. And now there’s MuZero, which is more general than that, because it plays chess, Go, shogi and also Atari games to a superhuman standard.\n\nSo, why do we want generality? Because there are a lot of tasks that we want done in the world, which require generality. You could also think of a general AI system as being a narrow AI system whose domain of mastery is physical reality. 
Is the world. That’s just a very, very complex domain, and the broader the domain the more complex the system is. So I sometimes talk to people who say that AGI is impossible, and there’s a sense in which that is actually true, in that perfect generality is not possible, because that would have to work in every possible environment, and there’s infinite possible environments. So, for example, one environment is a maximum entropy environment, where it’s complete chaos, there’s no structure to the universe, and therefore you can’t achieve anything because there’s no correlation between the things you do and the outcomes that happen. But if you consider only our universe, it is very optimisable, it is very amenable to agency. It actually, as far as we can tell, it probably runs on fairly straightforward mathematics. It’s probably fairly low complexity. In the sense of [08:33 …] complexity or something like that. It’s quite structured, it’s quite regular, it has causality. Induction works quite well, so we don’t actually have to create a fully general agent, we just need to create an agent which is able to act well in our universe. And that is general enough for what we need, which is an agent that’s going to do things in the real world.\n\nThe reason that we want that is that there are a lot of tasks that need that level of generality. If you want an agent to run your company, that is fundamentally a very broad thing to do. You need to be able to read and understand all kinds of different information, you need to be able to build detailed models of the world, you need to be able to think about human behaviour and make predictions. It would be pretty difficult to train something that was a really effective CEO, perhaps a superhuman CEO, without it having all of the other capabilities that humans have. You would expect – and there’s a chance that there’s a threshold that it gets past – I don’t know if this is true, this is speculative. But for example, if you have a very simple programming language, and you start adding features to it, you – past a certain point – you hit a point where your programming language is able to express any program that can be expressed in any programming language. You hit Turing completeness. Once you have a certain level of capability, in principle, you have everything. Maybe not quickly. Maybe not effectively. But you’ve created a general purpose programming language, and it’s possible that you get a similar effect. I don’t know if this is true, but the easiest way to make something to that can do almost anything is just to make something that can do everything.\n\nBen Byford[00:10:35] Right. I guess the practical use of “everything” in that sense, is that we have something that we can – in a Deep Mind sort of way – solve big problems. So we can get rid of diseases, we can curing ageing, we can avert disasters, all that kind of thing.\n\nRobert Miles[00:11:01] Right. If you have a very powerful system that is able to – if you have a general intelligence that is able to act in the real world, then in principle you can set whatever types of goals you want, and expect to get solutions to them if solutions exist. And that’s kind of the end game.\n\nBen Byford[00:10:26] That’s like, I mean I want to say “science fiction”, but that’s what we’re moving towards. 
That’s what people are trying to research and create is this more generalised AI, hopefully with this really quite powerful idea behind it that we can solve or get answers for some of these problems.\n\nRobert Miles[00:11:46] Yeah. It’s a pretty utopian ideal. The thing is, yeah – I want to address the science fiction thing, because it’s something that I often come up against. Which is, I think – I’m trying to formally express what the mistake is, but I’m not going to. I think it’s related to confusing A implies B with B implies A. Which is to say most science fiction predictions about the future don’t come true, but nonetheless, every significant prediction about the future that has come true has sounded like science fiction, because you’re talking about technology that doesn’t exist yet. You’re speculating about how things might be drastically different because of the effect of technology. That’s going to sound like science fiction, and so the fact that it sounds like science fiction doesn’t make it less likely to be true, unfortunately. It would be convenient if you could evaluate the truth of a complex claim by categorising it into a literary genre, but it’s not that easy. You have to actually think about the claim itself and the world and the technology, and run it through, and think about what’s actually likely to happen. Because whatever actually does happen, we can be confident would seem like science fiction from our perspective.\n\nBen Byford[00:13:20] Mm hmm, and definitely as it’s happening, as well. In hindsight, it probably feels less like science fiction, because it’s normalised.\n\nRobert Miles[00:13:28] Yeah. Can you imagine going to someone 50 or 100 years ago and saying , “Hey, so these things that are just adding machines, these computers, they’re going to get way, way better. Everyone is going to have one a million times more powerful than all of the computers on earth right now in their pocket, and they’re all going to be able to talk to each other at insane speeds all the time.” Or you know what, maybe don’t give the context. Maybe just say, “Hey, in the future basically everyone on earth is going to have access to infinite, free pornography.” Then just see, how likely does that sound? Does it sound likely? It turns out it’s one of the things that happened. The future is definitely going to be weird. So there’s no way to get around just really trying to figure out what will happen, and if your best guess of what would happen seems weird then that’s not a reason to reject it, because the future always seems weird.\n\nBen Byford[00:14:34] Cool. So, given that we’ve painted so far – other than the weirdness of the situation – we’ve painted quite a nice view on what could happen, or what is positive within this area. You spend a lot of time thinking about these terms which I’m going to throw out, because I’d like to get consolidation of things like “AI safety”, “AI alignment”, “specification problem”, “control problem”, and it would be really nice if you could give me an overview of this area, and in what way these things are similar or equivalent, or not at all.\n\nRobert Miles[00:15:11] Right. Okay, that’s actually a really, really hard problem, because there is not really widespread agreement on a lot of the terms. Various people are using the same terms in different ways. So broadly speaking, AI safety is – I consider AI safety to be – a very broad category. 
That’s just about the ways in which AI systems might cause problems or be dangerous, and what we can do to prevent or mitigate that. And I would call that, by analogy, if there was something called “nuclear safety”, and that runs the gamut. So if you work in a lab with nuclear material, how are you going to be safe, and avoid getting long term radiation poisoning, like Marie Curie? Then you have things like accidents that can happen, things like the demon core during the Manhattan Project, they had this terrible accident that was an extremely unsafe experiment that dumped a huge amount of radiation out very quickly and killed a lot of researchers, which is a different class of thing to the long term exposure risk. Then you also have things like, in some of the applications, if you have a power plant then there’s a risk that could melt down, and that’s like one type of risk, but there’s also the problems associated with disposing of nuclear waste, and like, how does that work? After that you have all of the questions of nuclear weapons, and how do you defend against them, how do you avoid proliferation? And these types of broader questions.\n\nAI safety is kind of like that, I think, in that it covers this whole range of things that includes your everyday things, like are we going to have invasions of privacy? Are these things going to be fair from a race, gender and so on perspective? And then things like, is my self-driving car going to drive safely? How is it going to make its decisions? Who is legally responsible? All of those kinds of questions are all kind of still under the umbrella of AI safety. But the stuff that I’m most interested in – well actually, let me divide up safety into four quadrants along two axes. You have near and long term, and you have accident and misuse. So in your near term accident is going to be things like, are self-driving cars safe? Your near term misuse is, how are corporations using our data and that kind of thing? Long term misuse I actually think right now is not really an issue. So when you say short term and long term, you can also think of that as narrow and general. I’m most interested in the long term accident risk, because I think that our current understanding is such that it almost doesn’t matter, getting the right people to use AGI, or what they try to get the AGI to do. I think that currently our understanding of AGI is such that we couldn’t control it anyway, and so it sort of doesn’t matter, just getting powerful AGI systems to do what anyone wants them to do is the main thing that I’m interested in. So that’s the sub-part.\n\nLet’s do some other terms. The control problem is a slightly older term, I think, that’s about if you have an AGI, how do you control it? How do you keep it under control? I don’t really like that framing, because it suggests that that’s possible. It suggests that if you have a superintelligence which doesn’t fundamentally want what humans want, that there might be some way to control it. And that feels like a losing strategy to me. So, I prefer to think of it as the alignment problem, which is, how do you ensure that any system you build, its goals align with your goals? With the goals of humanity. So that then it doesn’t need to be controlled, because it actually wants to help. So you don’t control it, you just express your preferences to it. 
It’s only a very slight shift in framing, but I think it changes the way that you think about the problem.\n\nBen Byford[00:20:21] Yup, is that because – I’ve done a bit of reading here – and it seems intractable that one can control a system which is, we say superintelligent, but is vastly more intelligent than we are, given that we are the baseline for this framing of intelligence. So on a general person’s intelligence, it’s going to be much, much more intelligent than that. It can do things that implies that we won’t actually be able to control it. Like you’re saying, it doesn’t really matter who creates such a thing, because they themselves won’t be intelligent enough – or don’t have the practical tools – to contain such a thing.\n\nRobert Miles[00:21:03] Right, and this doesn’t need to be a really strong claim, actually. You could try and make the claim that if the thing is drastically superintelligent then it’s impossible to control it. I would prefer to make the claim that if the thing is superintelligent, even without needing that claim, you could just say, “This seems really hard, and it would be nice not to have to try.” It’s not so much that we’re certain that we won’t be able to control it, but we really can’t be certain that we would be able to control it, and we do want a high level of confidence for this sort of thing, because the stakes are very high. Any approach that relies on us containing or outwitting an AGI, it’s not guaranteed to fail, but it’s so far from guaranteed to succeed that I’m not interested in that approach.\n\nBen Byford[00:22:00] So, I find this quite interesting, because there’s an implicit thing going on here in all of this stuff, that there is a – this AGI system has something that it wants to optimise and it’s going to do it in a runaway sort of way. Or it has some sort of survival thing inbuilt into it. And whether that’s to do with some concept of consciousness or not, that doesn’t really matter, but it has this drive all of its own, because otherwise it would be just idle. You know? We’re conflating intelligence with something like survival or some kind of optimisation problem that we’ve started out on. Is there something coalescing these sorts of ideas?\n\nRobert Miles[00:23:46] Yeah. So the concept is agency. The thing is if we build an agent, this is a common type of AI system that we build right now. Usually we build them for playing games, but you can have agents in all kinds of contexts. And an agent is just a thing that has a goal, and it chooses its actions to further that goal.\n\nSimplistic example of an agent: something like a thermostat. Modelling it as an agent is kind of pointless, because you can just model it as a physical or electronic system and get the same understanding, but it’s like the simplest thing where the idea applies. If you think of a thermostat as having a goal of having the room at a particular temperature, and it has actions like turning on the heating, turning on the air conditioning, whatever. It’s trying to achieve that goal and if you perturb this system, well, it will fight you, in a sense. If you try to make the room warmer than it should be, it will turn on the air conditioning. Or something that makes more sense to talk about is something like a chess AI. It has a goal. 
If it’s playing white then it’s goal is for the black king to be in check – in checkmate, rather – and it chooses its actions in the form of moves on the board to achieve that goal.\n\nBen Byford[00:24:04] Yup.\n\nRobert Miles[00:24:04] So, a lot of problems in the real world are best modelled as this type of problem. You have an agent, you have a goal in the world – some utility function or something like that – you’re choosing your actions, which is maybe sending packets across a network, or sending motor controls to some form of actuator, whatever. And you’re choosing which actions to send in order to achieve that particular goal. Once you have that framework in place, which is the dominant framework for thinking about this kind of thing, you then have a lot of problems. Mostly being that you have to get the right goal. This is the alignment thing. You have to make sure that the thing it’s trying to achieve is the thing that you want it to achieve, because if it’s smarter than you it’s probably going to achieve it.\n\nBen Byford[00:25:03] Yeah, so you have to be very sure that that goal is well specified, and that’s part of this specification thing. Or whether that is even possible. Is it even possible to set a well-formed goal that doesn’t have the potential to be manipulated or interpreted in different ways?\n\nRobert Miles[00:25:26] Yeah, yeah. The thing is, this is the specification problem. It’s not so much that the goal is going to be manipulated, as that the thing you said is not the thing you meant. Anything that we know how to specify really well is something which if actually actionised would not be what we want. You can take your really obvious things like human happiness. Maybe we could specify human happiness, but the world in which humans are the most happy is probably not actually the world that we want. Plausibly that looks like us all hooked up to a heroin drip or some kind of experience machines hooking directly into our brains, giving us the maximally happy experience, or something like that, right? You take something that is locally a good thing to optimise for, but once you optimise hard for that, you end up somewhere you don’t want to be. This is a variation on Goodhart’s Law, that when a measure becomes a target, it stops being a good measure. It’s like that taken to the extreme.\n\nBen Byford[00:26:40] So, given all this stuff, is there a sense that there is a winning direction, or is there something that’s like, this is the best option for alignment at the moment? The way that we can, if we had an AGI here today, that we would probably try first.\n\nRobert Miles[00:27:06] Yeah, there are a few different approaches. Nothing right now is ready, I wouldn’t say, but there are some very promising ideas. So first off, almost everyone is agreed that putting the goal in and then hitting go is not a winning strategy. You need, firstly because human values are very complicated, anything that you can simply specify is probably not going to capture the complexity, the variety of what humans care about. And usually the way that we do that in machine learning, when we have a complex fuzzy thing that we don’t know how to specify, is that we learn that thing from data. So that’s one thing, value learning, effectively. How do you get an AI system that learns what humans care about?\n\nThis is hard, because we have these various approaches for learning what an agent cares about, and they tend to make fairly strong assumptions about the agent. 
You observe what the agent does, and then you say, “Well, what does it look like they were trying to do? What were they trying to achieve when they were doing all this?” This works best when the agent is rational, because then you can just say, “Well, suppose a person was trying to achieve this, what would they do?” and then, “Well, did they do that?” Whereas humans have this problem where sometimes they make mistakes. We sometimes choose actions that aren’t the best actions for our goals, and so then you have this problem of separating out, “Does this person really value doing this weird trick where they fly off the bike and land on their face, or were they not trying to do that?”\n\nBen Byford[00:29:01] Mm hmm, yup. Is there – sorry to interject – is there a category of things where this isn’t the case? So if we worked on problems that weren’t innately human, so the goal was set to understand weather patterns enough – this is already getting badly described. Your goal, okay, Rob-3000 is to predict the weather accurately for tomorrow. Go. That would be the thing to optimise for. That seems to me like something that doesn’t have so much human involvement in there, or is it going to trickle in somewhere anyway?\n\nRobert Miles[00:29:49] But also that feels like it doesn’t – you can do that very well with a narrow system. You don’t really need AGI for that task. And if you set AGI that task then, well that’s apocalyptic, probably. In a few different ways. Firstly because you can do that job better if you have more sensors, so any square inch of the planet that isn’t sensors is a waste from that agent’s perspective, if that’s its only goal. Secondly, humans are very unpredictable, so if it’s optimising in a long term way – if it’s myopic and it’s only trying to do tomorrow at any given time, we might be okay. But if it cares about its long term aggregated rewards, then making the weather more predictable becomes something that it wants to do, and that’s not good for us either.\n\nBen Byford[00:30:42] So I feel like that leads into these other ideas, I was going to ask about if you’d seen Human Compatible by the eminent Dr Stuart Russell, and he has this idea about ambiguity. So you don’t necessarily have optimise the weather – no, not optimise the weather, but tell me what the weather’s going to be like tomorrow, but don’t harm humans. And also don’t – you know, he doesn’t have this hierarchy of things to fulfil, it has this ambiguity towards a given goal, which it’s constantly checking for, so the feedback is never going to get 100% accurate, and it’s always going to need to ask questions, and like you were saying it’s going to constantly need to reaffirm its model of the world, and with humans in it I suppose, what are the things that humans are going to want to do?\n\nRobert Miles[00:31:42] Yeah. This is a much more promising family of approaches, where you don’t try and just learn – you don’t just learn up front. The first thing that doesn’t work is specifying the goal up front. Then okay, maybe we can learn. Maybe we can look at a bunch of humans later and then learn what the goal should be and then go. That also has problems, because it’s kind of brittle. If you get it slightly wrong you can have giant problems. Also, you have a huge distribution shift problem, where if you train the system on everything you know about humans right now, and then you have a world with an AGI in it, that’s quite a different world. 
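To make the earlier point about learning goals from behaviour slightly more concrete, one common formalisation assumes the observed person is noisily rational (better actions are more likely, but mistakes still happen) and asks which candidate goal best explains the observed actions. The sketch below is a toy version under exactly those assumptions; the candidate goals, the rewards, and the single rationality parameter are all invented for illustration.

```python
import math

# Toy goal inference from observed behaviour: assume the person is noisily
# rational (better actions are exponentially more likely, but mistakes
# happen), then ask which candidate goal best explains what we observed.
# The goals, rewards, and noise parameter below are invented for illustration.

CANDIDATE_GOALS = {
    "wants coffee": {"walk to cafe": 1.0, "stay at desk": 0.0},
    "wants quiet":  {"walk to cafe": 0.0, "stay at desk": 1.0},
}

def action_probability(goal, action, rationality=2.0):
    """Boltzmann-rational choice: P(action) is proportional to exp(rationality * reward)."""
    rewards = CANDIDATE_GOALS[goal]
    normaliser = sum(math.exp(rationality * r) for r in rewards.values())
    return math.exp(rationality * rewards[action]) / normaliser

def infer_goal(observed_actions):
    """Return the candidate goal under which the observations are most likely."""
    def log_likelihood(goal):
        return sum(math.log(action_probability(goal, a)) for a in observed_actions)
    return max(CANDIDATE_GOALS, key=log_likelihood)

# Two trips to the cafe and one slip-up at the desk still point to "wants coffee".
print(infer_goal(["walk to cafe", "walk to cafe", "stay at desk"]))
```

With enough observations, the occasional mistake stops dominating the inference, which is the sense in which this family of approaches can cope with imperfect human behaviour.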
The world starts changing, so then you have this classic problem that you always have with machine learning, where the deployment distribution is different from the training distribution.\n\nSo, some kind of online thing seems necessary, where the system is continually learning from humans what it should be doing. There’s a whole bunch of approaches in this category, and having the system start off complexly uncertain what it’s goal should be, knowing that its goal is in the minds of the humans, and that the actions of the humans are information, are data that gives it information about its actual goal, seems fairly promising.\n\nBen Byford[00:33:26] Good. I like it. I felt like when I was reading that it had this really great bit, which was, “Oh, and I’m sure it will be all kind of ethical. We’ll just work that bit out.” Because obviously this is the stuff that I care about. The fact that these things are uncertain doesn’t imply that it will be an ethical AI or AGI. Because obviously you’re learning from people and people can have a spectrum of values, and they can do stupid things that hurt people, or put people at disadvantage. So, I think when we’re looking at that specific example, it’s interesting that it’s solved this runaway optimisation issue, but it doesn’t necessarily solve that the agent will actually do stuff that is in people’s general benefit, it might do something that is in a person’s benefit, possibly. There’s other issues that come up.\n\nRobert Miles[00:34:35] Exactly. So this is another way of splitting the question. Which is, are you – for the alignment problem, the general form of the alignment problem is you have a single artificial agent and a single human agent and it’s just like a straightforward principle/agent problem. You’re trying to get the artificial agent to go along with the human. This is hard. In reality, what we have is we might have a single artificial agent trying to be aligned with a multitude of human agents, and that’s much harder. You can model humanity as a whole, as a single agent, if you want, but that introduces some problems, obviously.\n\nYou might also have multiple artificial agents, and that depends on what you think about take off speeds and things like that – how you model future-going. I assign a fairly high probability to there being one artificial agent which becomes so far ahead of everything else that it has a decisive strategic advantage over the other ones, whatever else there is doesn’t matter so much. But that’s by no means certain. We could definitely have a situation where there are a bunch of artificial agents that are all interacting with each other as well as with humanity, and it gets much more complicated.\n\nThe reason that I am most interested in focusing on the one-to-one case, is because I consider it not solved, and I think it might be strictly easier. So I find it hard to imagine – and I don’t know, let’s not be limited by my imagination – I find it hard to imagine a situation where we can solve one-to-many without first having solved one-to-one. If you can’t figure out the values of a human in the room with you, then you probably can’t figure out the values of humanity as a whole. Probably. That just feels like a harder problem. I’m aware that solving this problem doesn’t solve the whole thing, but I think it’s like a necessary first step.\n\nBen Byford[00:37:03] You’re not skulking away from the bigger issues here, come on Rob. 
You need to sort it out.\n\nRobert Miles[00:37:08] Yeah, I mean there’s no reason to expect it to be easy.\n\nBen Byford[00:37:14] No, definitely not. I don’t think – I mean, there’s a lot of stuff in this area where there isn’t necessarily consensus, and it’s still very much a burgeoning area, where they’re – I think someone said there’s simply not enough people working in this area at the moment.\n\nRobert Miles[00:37:33] Yes. The density of, like the area of space. If you look at something like computer vision or something like that. You look at one researcher in computer vision and what they’re working on, and it’s a tiny area of this space. They are the world expert on this type of algorithm applied to this type of problem, or this type of tweak or optimisation to this type of algorithm on this type of problem.\n\nWhereas what we have in AI safety is researchers who have these giant swathes of area to think about, because yeah, there are not enough people and there aren’t even enough – AI safety as a research field, or AI alignment, is divided up into these various camps and approaches, and a lot of these approaches are entire fields that have like two people in them. You know, because it’s just like the person who first thought up this idea, and then somebody else who has an interesting criticism of it, and they’re debating with each other, something like that. In five or ten years, that’s going to be a field full of people, because there’s easily enough work to do there. It really is like a wide opening frontier of research.\n\nBen Byford[00:38:58] Awesome.\n\nRobert Miles[00:38:59] So if you’re interested in doing –\n\nBen Byford[00:39:01] Exactly. The plug.\n\nRobert Miles[00:39:04] No, genuinely. If you want to make a difference in AI research, you want to have a big impact, both in terms of academically, but if you want to make a big impact on the world, this is probably the – I don’t even know what is second place. Anti-ageing research maybe? But it’s definitely up there as the best thing to be working on.\n\nBen Byford[00:39:31] So, does consciousness and this idea of the superintelligence maybe being intelligent in a way that we could ascribe consciousness to it? Or that the ideas we have around consciousness may apply? Does that come into this equation of alignment, or the idea of the superintelligence?\n\nRobert Miles[00:39:52] Yeah, for me it doesn’t, because I feel I have no need of that hypothesis. At least in the abstract, you can model something as a utility maximiser and then it’s just like a straightforward machine. It’s building this model of the world. It can use the model to make predictions. It has some evaluation utility function that it can look at the possible world stakes, and decide how much it wants those. Then it can look at its actions and make predictions about what would happen in the case of each action, and it chooses the action that it thinks will lead it towards the world that scores highly according to its utility function. At no point in this process is there any need for internality or consciousness, or anything of that grander scale. It’s possible that when you, in practice, when you build something that does this, it ends up with that, somehow, maybe. I don’t know. But it doesn’t seem necessary. It doesn’t seem like a critical consideration.\n\nBen Byford[00:41:03] It’s not a component that is implied by whatever the outcome is of the system.\n\nRobert Miles[00:41:09] Yeah. 
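As a rough illustration of the "utility maximiser" picture described above, here is a minimal sketch in Python. Everything in it is a toy assumption: the state, the hand-written world model, and the temperature-based utility function are hypothetical stand-ins rather than a real agent architecture.

```python
# Toy utility-maximising agent loop, mirroring the verbal description above.
# The state, actions, world model, and utility function are all
# hypothetical stand-ins chosen for illustration.

def world_model(state, action):
    """Predict the next state if `action` is taken (assumed deterministic)."""
    temperature = state["temperature"]
    if action == "heat_on":
        return {"temperature": temperature + 1}
    if action == "cool_on":
        return {"temperature": temperature - 1}
    return {"temperature": temperature}  # "wait"

def utility(state):
    """Score a predicted state; this toy agent 'wants' the room at 21 degrees."""
    return -abs(state["temperature"] - 21)

def choose_action(state, actions):
    """Predict the outcome of each action and pick the highest-scoring one."""
    return max(actions, key=lambda action: utility(world_model(state, action)))

print(choose_action({"temperature": 18}, ["heat_on", "cool_on", "wait"]))  # heat_on
```

The point of the sketch is only structural: the loop is prediction, scoring, and picking the top-scoring action, and nothing in it requires consciousness or any inner experience.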
And the other thing is that if we have a choice, I think we should try to make systems that aren’t conscious. If that’s an option. Because I would rather not have to worry about – there’s all kinds of problems you have when – like you turn on the system, and you realise that it’s not quite doing what it should and now is it ethical to turn it off? And all of that kind of thing. Considering that consciousness doesn’t seem to be necessary for capability, which is what we care about, if we can avoid it then I think we actually should.\n\nBen Byford[00:41:51] Yeah. That’s really interesting. I’m just trying to think of ways that it would be advantageous to choose consciousness, but I guess then we’re getting into the power of Dr Frankenstein situations, where you are making the decision over and above the actual reality of the situation as a need, as a requirement. There’s a certain amount of hubris there, Rob. That’s the word I was looking for. Hubris.\n\nRobert Miles[00:42:27] Yeah. The hubris is implicit in the overall project. You’re trying to create a thing that can do what human minds can do. That’s inherently hubristic, and again I’m kind of okay with that. Seems unavoidable.\n\nBen Byford[00:42:49] So, given the long term scope of this, is there some really interesting stuff coming through right now? I’ve literally just read a paper yesterday about the halting problem. That’s because I was trying to prepare for this and dive into something somewhere other than your brilliant videos. So is there anything else that you really want to bring to the fore about what is really exciting you in this area at the moment?\n\nRobert Miles[00:43:30] There’s all of these approaches that we didn’t talk about. I feel that you had a question that was about this that we didn’t actually get into. Where we could talk about – there’s too many of them and I don’t want to get into detail about them because, well I would probably get it wrong. Because I haven’t actually read the papers about this recently. You talked about Stuart Russell’s stuff. The stuff that’s happening at Open AI is interesting as well. Things about AI safety via debate. Things about iterated amplification are really interesting. And the stuff at Deep Mind, like recursive reward modelling and that kind of thing, which I’m going to have a video about, hopefully. Some time soon. But these are people thinking about how we can take the types of machine learning stuff we have now and build AGI from it in a way that is safe. Because people are thinking quite strategically about this.\n\nThe thing is, it’s no good coming up with something that’s safe if it’s not competitive. If you come up with some system that’s like, “Oh yes, we can build an AGI this way, and it’s almost certainly going to be safe, but it’s going to require twice as much computer power as doing approximately the same thing but without the safety component.” It’s very difficult to be confident that someone else isn’t going to do the unsafe thing first. So fundamentally, as a field, it’s difficult. We have to tackle this – what am I saying? As a field it’s pretty difficult. We have to solve this on hard mode before anybody else solves it on easy mode. So people are looking at trying to be the first people to create AGI, and having that be safe, as a joint problem. And those types of things seem pretty promising to me.\n\nBen Byford[00:45:50] And it’s a solving the problem of, like you were saying, someone else creating it that isn’t safe. 
So you’re trying to get there before they get there with the more correct option. The better-aligned option.\n\nRobert Miles[00:46:06] Yeah, so that’s another thing that’s like if you’re not technically minded. If you’re not well-placed to do technical research, there’s a lot of interesting work to be done in governance and policy as well. Like AI governance and AI policy are also really interesting areas of research, which is like how do you – practically speaking – how do you steer the world into a place where it’s actually possible for these technical solutions to be found, and to be the thing that ends up being actually implemented in practice? How do you shape the incentives, the regulations, the agreements between companies and between countries? How do we avoid this situation where, we all know whoever makes AGI first controls the world? We think we know that, and so everyone’s just going as fast as they possibly can, like an arms race, like the space race type situation, in which people are obviously going to be neglecting safety, because safety slows you down, then you end up with nobody winning.\n\nHow do you get people to understand – this is like a much broader thing – how do you get people to understand that there are positive sum games, that zero sum games are actually pretty rare. That it’s so, so much better to get 1% of a post-singularity utopia perfect world, than 100% of an apocalypse. We have so, so much to gain through cooperation, unimaginably vast amounts of value to be gained through cooperation, and a really good chance of losing everything through not cooperating. Or a bunch of outcomes that are dramatically worse than losing everything are actually in play if we screw this up. Just getting people to be like, “Can we just slow down and be careful, and do this right?” because we’re really at the hinge of history. This is the point where – this next century – is the point where we win, or we blow it completely. I don’t see an end to this century that looks approximately like what we have now. This is for all the marbles, and can we pay attention please.\n\nBen Byford[00:48:59] Yeah, and I think that argument can almost be applied to lots of different areas. Like the environment, biodiversity, maybe ideas around poverty and equality and things like that.\n\nRobert Miles[00:49:13] Yeah. The thing is, and this is why I want to talk about hubris. I would be a lot more concerned about things like climate change if I didn’t know the things that I know about AI. I probably would be focusing on climate change, but climate change is fundamentally not that hard a problem if you have a superintelligence. If you have a system that’s able to figure out everything that – you know, if you have a sci-fi situation, then just being like, “Oh the balance of gases in the atmosphere is not how we want it, you know” – and the thing’s figured out molecular nanotechnology or something, then potentially that problem is just one that we can straightforwardly solve. You just need something for pulling out a bunch of CO2 from the atmosphere and whatever else you need. I don’t know, I’m not a climate scientist.\n\nLikewise poverty. If you get something that is aligned – and aligned with humanity, not just whoever happens to be running it – then I don’t anticipate poverty. 
Possibly there would be inequality in that some people would be drastically, drastically richer than the richest people today, and some other people would be like five drasticallies richer than the richest people today, but I’m not as concerned with inequality as I’m concerned with poverty. I think it’s more important that everyone has what they need that everybody’s the same, is my personal political position. But again, it’s that kind of thing. If the problem is resources, if the problem is wealth.\n\nSolving intelligence – it’s not like an open and shut thing, but you’re in such a better position for solving these really hard problems if you have AGI on your side. So that’s why I – with my choices in my career, at least – my eggs are in that basket. I don’t think that everyone should do, I’m glad there are people working on the things that we’re more confident will actually definitely pay off, but I do see AGI as a sort of a Hail Mary that we could potentially pull off and it’s totally worth pushing for.\n\nBen Byford[00:51:36] I think it’s one of those things where we’re confident now that things are going badly, so we’ll sort that out. But with the AGI stuff, it could go really well, but we shouldn’t die before we get there, right. We should probably sustain and be good to the world before we fuck it all up, and then we haven’t got this opportunity to go on to do these other solutions.\n\nRobert Miles[00:51:59] I don’t advocate neglecting these problems because AGI is just going to fix it at some point in the future. All or nothing. But there is an argument that concentrating on this stuff is – there’s a line through which solving this solves those problems as well, and that increases the value of this area.\n\nBen Byford[00:52:24] So the last question we ask on the podcast is to do with what really excites you and what scares you about this technologically advanced, autonomous future. We kind of spoke about this apocalypse and possible, not necessarily utopia, but being able to leverage –\n\nRobert Miles[00:52:44] Relative utopia.\n\nBen Byford[00:52:44] Yeah, relative utopia. Does that cover it, or are there other things that strike you?\n\nRobert Miles[00:52:54] Yeah, it’s a funny question, isn’t it. It’s like, apart from the biggest possible negative thing and the biggest possible positive thing, how do you feel about this? I think that covers it.\n\nBen Byford[00:53:09] Yeah. I thought that might be the case. I just thought I’d give you the opportunity anyway.\n\nRobert Miles[00:53:11] I know, totally, totally.\n\nBen Byford[00:53:14] So, thank you so much for your time, Rob. I feel like it’s one of those things where we could definitely mull this over for the rest of the day. I’m going to let you go now. Could you let people know how they can follow you, find you and stuff, get hold of you and that sort of thing?\n\nRobert Miles[00:53:36] Yeah. So, I am Robert SK Miles on most things. You know, Twitter, Reddit, GMail, whatever, that’s “SK”. And the main thing is the YouTube channel, I guess, Rob Miles AI. I also make videos for the Computerphile channel – that’s phile with a “ph”, someone who loves computers. And if you’re interested in the technical side, like if you’re a researcher – not necessarily a safety researcher, but if you’re interested in machine learning, or have any kind of computer science background, really, I would really recommend the alignment newsletter podcast, or just get the Alignment Newsletter in your email inbox. 
If you prefer listening to audio, which you might given that you currently are, Alignment Newsletter podcast, it’s a weekly podcast about the latest research in AI alignment. I think that’s it from me.\n\nBen Byford[00:54:41] Well, thank you again, so much. And I’m definitely going to – that’s someone else I’m going to – have to have back, so that we can mull over some of this again. Really, really interesting and exciting. Thank you.\n\nRobert Miles[00:54:55] Nice, thank you.\n\nBen Byford[00:54:58] Hi, and welcome to the end of the podcast. Thanks again to Rob, who’s someone I’ve been following for a couple of years now – his stuff on Computerphile and also his own output. So it’s really amazing that I get to talk to people like Rob, and people I’ve talked to in the past on the Machine Ethics podcast about some of these questions that have just been itching inside of me to ask about as I’ve been watching their work, or reading their work. So it’s really fantastic I was able to get hold of Rob. One of the things in our conversation is that I wasn’t quite as certain or determined as Rob was of achieving AGI, but I admire his devotion to the fact that if we do, then we should probably do it with good outcomes. This podcast was kind of a continuation from our interview with Rohin Shah, so if you want to listen to more in that vein, then go and check out that episode, and find more episodes on the podcast. Thanks again, and I’ll see you next time.", "filename": "AGI Safety and Alignment with Robert Miles-by Machine Ethics-date 20210113.md", "id": "a20b942476a0a6b30ec671fa004d5b6a", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Rob Miles on Why should I care about AI safety-by Jeremie Harris on the Towards Data Science Podcast-date 20201202", "authors": ["Rob Miles", "Jeremie Harris"], "date_published": "2020-12-02", "text": "# Rob Miles on Why should I care about AI safety by Jeremie Harris on the Towards Data Science Podcast\n\nProgress in AI capabilities has consistently surprised just about everyone, including the very developers and engineers who build today’s most advanced AI systems. AI can now match or exceed human performance in everything from speech recognition to driving, and one question that’s increasingly on people’s minds is: when will AI systems be better than humans _at AI research itself_?\n\nThe short answer, of course, is that no one knows for sure — but some have taken some educated guesses, including [Nick Bostrom](https://www.amazon.ca/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111) and [Stuart Russell](https://www.amazon.ca/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS). One common hypothesis is that once AI systems are better than a human at improving their own performance, we can expect at least some of them to do so. In the process, these self-improving systems would become even more powerful systems than they were previously—and therefore, _even more_ capable of further self-improvement. With each additional self-improvement step, improvements in a system’s performance would compound.
Where this all ultimately leads, no one really has a clue, but it’s safe to say that if there’s a good chance that we’re going to be creating systems that are capable of this kind of stunt, we ought to think hard about how we should be building them.\n\nThis concern among many others has led to the development of the rich field of AI safety, and my guest for this episode, Robert Miles, has been involved in popularizing AI safety research for more than half a decade through two very successful YouTube channels, [Robert Miles](https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg) and [Computerphile](https://www.youtube.com/user/Computerphile). He joined me on the podcast to discuss how he’s thinking about AI safety, what AI means for the course of human evolution, and what our biggest challenges will be in taming advanced AI.\n\nHere were some of my favourite take-homes from the conversation:\n\n- Rob thinks that AI safety researchers should invest more time in explaining why AI safety is important. The potential risks from AI are so abstract that it’s hard for many people to understand or reason about them, so there’s a lot of value in making the case for AI alignment research in a publicly accessible way.\n- There’s still some debate over whether humanity would be better off if a single AI research firm were way ahead of all others in the final stretch towards AGI (sometimes known as the _unipolar_ AI scenario), or if many firms approached the AGI finish line at roughly the same time (the _multipolar_ scenario). Robert says that he generally favours the unipolar scenario from a safety standpoint, and explains why during the interview.\n- Should we try to ensure that advanced AI systems are perfectly safe before deploying them, or can we fix bugs as they come up? With today’s narrow AI systems, most tech companies have tended to fall on the “deploy first, ask questions later” side of the equation — with the debatable exception of self-driving cars. But as systems become more powerful — and as their deployment is accompanied by an inevitable and irreversible loss of human control over the world — it seems likely that more caution will be needed upfront.\n- When polled only 15% of AI researchers responded that they think the advent of AGI will lead to a bad or disastrous outcome. When I asked Rob why he thinks so few AI researchers are pessimistic about the impact that AGI will have on humanity, he pointed to the difference between narrow AI researchers (who currently dominate the field and therefore the poll I referenced, and who are used to seeing AI deployed in very specific niches, where it can easily be rolled back if there are any safety issues) and general AI researchers (who are working on generally intelligent systems which will ultimately have the capacity to self-improve). So far every AI system failure has involved a narrow AI, and therefore a capped downside: at worst, a few people might be killed by a malfunctioning self-driving car, for example. But as we hand over more and more control to increasingly general systems, and as those systems come to exert an almost indefinite amount of power over us, it seems likely that failures will be far more costly. 
Researchers who are working on AGI tend to have these failure modes in mind, whereas today’s narrow AI focused developers aren’t spending most of their time thinking about that sort of issue.\n- Another point, mentioned by Rob in an email to me after we recorded the episode, but that I thought was worth including here: Rob thinks it’s possible that concern over AI risk could motivate people to do AI safety work, which may ultimately succeed. Paradoxically, if AI safety work _is_ successful, people may come to believe that it was never even necessary in the first place (“Why were we freaking out about AI safety so much? Everything turned out fine!”). A similar effect was at play around the turn of the last millennium, when [widespread concern over the famous Y2K bug](https://en.wikipedia.org/wiki/Year_2000_problem) led to swift action in industry, which avoided a host of otherwise serious issues, but caused most people to (incorrectly) conclude that the bug must not have been worth worrying about in the first place.", "filename": "Rob Miles on Why should I care about AI safety-by Jeremie Harris on the Towards Data Science Podcast-date 20201202.md", "id": "9e1bc7c62a5b06c7d7816f5e3eb5d083", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "NeurIPSorICML_q243b-by Vael Gates-date 20220318", "authors": ["Vael Gates"], "date_published": "2022-03-18", "text": "# Interview with AI Researchers NeurIPSorICML_q243b by Vael Gates\n\n**Interview with q243b, on 3/18/22**\n\n**0:00:00.0 Vael:** Cool. Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:09.0 Interviewee:** Of course. I did my PhD in optimization and specifically in non-convex optimization. And after that I switched topics quite a lot. I worked in search at \\[company\\] \\[\\...\\] And now actually I work in \\[hard to parse\\] research, so I kind of come back to optimization, but it\\'s more like a kind of weird angles of optimizations such as like meta-learning or future learning, kind of more novel trends like that.\n\n**0:00:42.1 Vael:** Cool. And next question is: what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:00:51.1 Interviewee:** Well, the benefit is that AI has slowly but surely kind of taken over a lot of problems in the world. And in fact pretty much any problem where you need to optimize something, you can use AI for that. I\\'m more of a traditional machine learning person, in the sense that\\... Currently I include everything, including not necessarily neural networks, also like logistic regressions, decision trees, all those things. And I think those things have been grossly unutilized, I would say, because a lot of problems right now in machine learning that people think, \\\"Okay, should this solve this neural networks?\\\" but in reality, just in decision trees. But stay the same because of the current trends and because of the current hype of neural networks as well, they kind of came along with it as well. All the publicity and marketing that, you know, that AI should have, honestly. And I think more and more companies realize that you can solve the problem with just simple solutions. And I think that will be a really exciting part. So just like, to answering to your question yeah, I\\'m excited about the fact that machine learning has just become more and more and more ubiquitous. 
It becomes like almost prerequisites for any big company or even a smaller company. And the second part, what I\\'m worried the most. I don\\'t know, that\\'s a good question.\n\n**0:02:07.7 Interviewee:** I mean, I guess like, I mean, I don\\'t share those worries that AI would dominate us and we would all be exterminated by overpowerful AI. I don\\'t think AGI is coming anytime soon. I think we\\'re still doing statistics in a way, like some kind of belong to this camp will just think, we\\'re still doing linear models and I don\\'t believe the system is conscious or anything of those sorts. I think the most malicious use, like, I mean, especially now currently with the war, I see more and more people using AI for malicious sense. Like not necessarily they will be, you know, we\\'re going to have next SkyNet coming, but in the bad hands, in the bad actors, you know, AI can serve not a good purpose in war. Like for example, you know, like now with drones, with, you know, the current war, for example, in Ukraine is more and more done with drones. And drones have automatic targeting, automatic navigation. And yeah, so that\\'s kind of not necessarily a good thing and they can become more and more dramatic, and more automatized and they can lead to harm.\n\n**0:03:06.4 Vael:** Yeah, it makes sense. Lot of strong misuse cases. So focusing on future AI, putting on like a science fiction, forecasting hat, say we\\'re 50 plus years into the future, so at least 50 years in the future, what does that future look like?\n\n**0:03:19.5 Interviewee:** Well, I hope\\... I mean I kind of, you know, I really like that, my favorite quote in a\\... And probably that one of the quotes that I really like is that the arc of justice is nonlinear, but it kind of bends towards justice. I really like it. And I really hope in 50 years we would actually figure out first of all, the way to harness our current problems and make not necessarily disappear, but at least make it controllable such as nuclear weapons and the global warming. And that\\'s in 50 years, I think it\\'s a reasonable time. Again, not to solve them, but just to figure out how to harness these issues. And AI should definitely help that. And do you want me to answer more specifically, like more, like give you ideas of how I think in 50 years the world would look like? (Vael: \\\"Yeah, I\\'ll take some more specifics.\\\")\n\n**0:04:01.3 Interviewee:** Alright. I mean, I think that one of my exciting areas that I think that right now is already kind of flourishing a little bit, and it\\'s large language models. It\\'s a current trend, so it\\'s kind of an easy, like on the surface thing to talk about, and I think I\\... As a \\[large tech company\\] employee, I can see how they\\'ve been developed over like two years, things changing dramatically. And I think these kind of things are pretty exciting. Like having a system that can talk to you, understand you, respond not necessarily with just one phrase, but like accomplish tasks that you wanted to accomplish. Like now it\\'s currently in language scenarios, but I also, within 50 years definitely anticipate it could happen in like robotics, like a personal assistant next to you or something like that. Another area I\\'m really excited about is medicine.\n\n**0:04:46.1 Interviewee:** I think once we figure out all the privacy issues that surround medicine right now, and we\\'re able to create like, clean up database, so to speak, of patients diagnoses. 
And I hope that it\\'ll be enough for a machine learning model to solve like cancer as we know it and things like that. I\\'m just hopeful. I mean, I hope it\\'s going to happen in 50 years and it\\'s going to, I don\\'t know if I want to place my bet there, but I\\'m hoping that would happen. So I guess in robotics, as well, as I said, one of the things that we\\'re kind of inching there, but not quite there, but I think in 50 years, we\\'ll solve it. So I think these three things: personal assistant, solving medicine and robotics, these three things.\n\n**0:05:28.2 Vael:** Wow. Yeah. I mean, solving robotics would be huge. Like what\\'s an example of\\... could you do anything that a human could do as a robot or like less capable than that?\n\n**0:05:36.0 Interviewee:** I think so. I mean, it depends. It depends what you mean by human, right? I mean the\\... Well, if you try to drive a car for the last 20 years. We\\'ve been trying to do that, but honestly, I think this problem is really, really hard because you have to interact with other agents as well. That\\'s kind the main thing, right? You have to interact with other humans mostly. I mean, I think interaction between robots, it\\'s one thing, interaction between robots and robots is much easier. So I think whatever task that doesn\\'t involve humans is actually going to be pretty useful. Well, again, actually pretty easy because it hasn\\'t been solved yet, but I think it\\'s much easier than solving with humans.\n\n**0:06:05.8 Interviewee:** And like for example organizing your kitchen, organizing your room, cleaning your room, cooking for you, I think all the things should be pretty straightforward. Again, the main issue here is that every kitchen is different, so although we can train a robot to a particular kitchen or like do some particular kitchen, like once it\\'s presented with a novel kitchen with novel mess\\... mess is very personal. So it\\'d be harder for the robots to do. But I think that\\'s something that would be kinda within the reach, I think.\n\n**0:06:37.7 Vael:** Interesting, and for like, solving cancer, for example, I imagine that\\'s going to involve a fair amount of research per se, so do we have AIs doing research?\n\n**0:06:47.2 Interviewee:** So research, I want to distinguish research here because there is research in machine learning and there is research in medicine. And they are two different things. The research in medicine, and I\\'m not a doctor at all, but from what I understand, it\\'s very different, in the sense that you research particular forms of cancer, very empirical research. Like hey we have\\... Cancer from what I understand, one of the main issues with cancer is that every cancer is more or less unique.\n\n**0:07:11.8 Interviewee:** It\\'s really hard to categorize it, it is really hard to do AB testing. The main research tool that medical professionals use is AB testing, right, you have this particular group of cancers, group of people, that suffer from this particular cancer. Okay, let\\'s just come up with a drug that you can try to put these people on trial, and do that. But because every cancer is unique, it\\'s pretty hard to do that. So, and how to do this research is data, and that\\'s what we need for machine learning, we need to have sufficient data such that machine learning can leverage that and utilize it. So they\\'re now asking questions in two perspectives, one is do we need more data? 
Yes, absolutely.\n\n**0:07:46.3 Interviewee:** Moreover, we not only need data, we need to have rules for which of these machine learning agents\-- that of a company, a university\-- would have access to this data, differentially private, right, in the sense that this should be available to them but is still private; of course privacy is a big issue. Which right now doesn\'t really happen, plus there are other bureaucratic reasons for this not to happen, like for example hospitals withholding the data because they don\'t want to share it and stuff like that.\n\n**0:08:16.3 Interviewee:** So if we can solve this problem, the research and medical part would be not necessarily\... Not necessary for the machine learning. And on the machine learning side there are also very big hurdles, in the sense that current machine learning algorithms need tons of data. Like for the self-driving cars, they\'re still talking about needing millions and millions of hours of cars driving on the road, and they still don\'t have enough. So for cancer, hopefully that kind of won\'t be the case. Right? So hopefully we\'re going to come up with algorithms that work with less data. Like one of the algorithms is so-called few-shot algorithms, so when you have algorithms that learn on somebody\'s language, but when you want to apply it to a particular patient in mind, you just need to use specific markers to adjust your model to the specific patient. So there are some advancements in this direction too, but I think we are not there yet.\n\n**0:09:07.1 Vael:** Interesting. Cool, alright, so that\'s more\... It\'s not like the AI is doing research itself, it\'s more that it is, like you\'re feeding in the data, to the similar types of algorithms that already exist. Cool, that makes sense. Alright so, I don\'t know if you\'re going to like this, but, people talk about the promise of AI, by which they mean many things, but one of the things is that\... The frame that I\'m using right now is like having a very general capable system, such that they have the cognitive capacities to replace all current-day human jobs. So whether or not we choose to replace human jobs is a different question. But I usually think of this as in the frame of like, we have 2012, we\'ve the neural net\... we have AlexNet, the deep learning revolution, 10 years later with GPT-3 which has some weirdly emergent capabilities, so it can do text generation, language translation, some coding, some math.\n\n**0:09:51.9 Vael:** And so one might expect that if we continued pouring all the human effort that has been going into this\-- and nations competing and companies competing and like a lot of talent going into this and like young people learning all this stuff\-- then we have software improvements and hardware improvements, and if we get optical and quantum at the rate we\'ve seen, that we might actually reach some sort of like very general system. Alternatively we might hit some ceiling and then we\'d need to do a paradigm shift. But my general question is, regardless of how we get there, do you think we will ever get to a very general AI system like a CEO or a scientist AI? And if so, when?\n\n**0:10:23.6 Interviewee:** So, my view on that is that it\'s really hard to extrapolate towards the future. Like my favorite example is I guess Elon Musk\... I heard it first from Elon Musk but it\'s a very well-known thing.
Is that, \\\"Hey we had like Pong 40 years ago, it was the best game that ever created, which was Pong, it was just like pixels moving and now we have a realistic thing and VR around the corner, so of course in 100 years we will have like a super realistic simulation of everything, right? And of course in a 1,000 years we\\'ll have everything, \\[hard to parse\\] everything.\\\"\n\n**0:10:53.0 Interviewee:** But again it doesn\\'t work this way. Because the research project is not linear, the research progress is not linear. Like 100 years ago Einstein developed this theory of everything, right, of how reality works. And then yet, we hit a very big road block of how exactly it works with respect to the different scale, like micro, macro and micro and we\\'re still not there, we propose different theories but it\\'s really hard. And I think that science actually works this way pretty much all around the history it\\'s been like that, right. You have really fast advancement and then slowing down. And in some way you have advancement in different things.\n\n**0:11:26.8 Interviewee:** Plus the cool thing about research is that sometimes you hit a road block that you can\\'t anticipate. Not only there are road blocks that you maybe don\\'t even imagine there are, but you don\\'t even know what they could be in the future. And that\\'s the cool part about science. And honestly, again, I think if we are indeed developing AGI soon, I think it\\'s actually a bad sign. Honestly, I think it\\'s a bad sign because it means that it\\'s\\... It\\'s like too easy, then I\\'ll be really scared: okay what\\'s next? Because if we developed some really super powerful algorithm that can essentially super cognitive\\... Better and better cognition of humans, I think that will be scary because then I don\\'t even know. First of all, my imagination doesn\\'t go further than that, because exactly by definition it will be smarter than me, so I don\\'t even know how to do that. But also I think it means that my understanding of science is wrong. Another example I like is that someone said, if you\\'re looking for aliens in the universe right now and then this person says, if we actually do discover aliens right now, it\\'s actually a very bad sign. It\\'s a bad sign in the sense that\\...\n\n**0:12:29.7 Interviewee:** If they\\'re there, it means that the great filter, that whatever concept of great filter, right, that we\\'re kind of in front of it, not some behind us, it is in front of us. Just means there\\'s some really big disaster coming up, because it actually, if aliens made it as well, this mean that they also pass all the filters behind us, it mean that some bigger filters in front of us. So I kind of belong to that camp. Like I\\'m\\... I\\'m kinda hoping that the science will slow down. And we\\'ll not be able to get there. Or there\\'s going to be something\\... It\\'s not that I think that human mind is unique and we can\\'t reproduce it. I just think that it\\'s not as easy as we think it would be, or like in our lifetime at least.\n\n**0:13:05.0 Vael:** I see. So maybe not in our lifetime. Do you think it\\'ll happen in like childrens\\' lifetime?\n\n**0:13:10.1 Interviewee:** Which children? Our children hopefully not. But I mean, at some point I think so. But again, I think it\\'ll be very different form. Humans\\' intellect is very, very unique, I think, and because it\\'s shaped by evolution, shaped by specific things, specific rules. 
So I also kind of believe in this, in the theory that in a way computers already\\... They are better than us because they are faster, to start with, and then they can\\... Another example I really like is that if you remember the AlphaGo playing Go with Lee Sedol, like one of the best two players of Go. And there was a\\...\n\n**0:13:43.0 Interviewee:** If you remember the Netflix show, there was like in one room they actually have all the journalists and they were sitting next to Lee Sedol playing with the person that represents DeepMind. And then all the DeepMind engineers and scientists, they were in a different room. And in that room, when they were watching the game playing, \\[in\\] that room the computer said by the move number 30, very early in the game, it says, okay, I won. And it took Lee Sedol another like half an hour or more, another like a hundred moves to confirm that statistic. And they were\\... The DeepMind guys were celebrating and these guys were like all thinking about the game, how to\\... But the game was already lost.\n\n**0:14:16.6 Interviewee:** So computers are already bett\\--\\... I mean, of course it\\'s a very constrained sandbox around the Go game. I think it\\'s true for many things, computers are already better than us. We are more general in our sense of generality, I guess. So maybe they will go in different direction\\... But the world is really multidimensional and the problems that we solve are very multi-dimensional. So I think it\\'s too simplistic to say that, then you\\'re universally better than us, or we are clearly subset and they are superset of our cognition. It\\'s, I don\\'t know\\... I think it \\[hard to parse\\].\n\n**0:14:44.9 Vael:** Great. Yeah. I\\'m going to just hark back to the original point, which was when do you think that we\\'ll have systems that are able to like be a CEO or a scientist AI?\n\n**0:14:55.0 Interviewee:** Okay. Yeah, sure. Again, sorry\\-- sorry for not giving you a simple answer. Maybe that\\'s what you\\'re looking for, but let me know if this is\\... (Vael: \\\"Nah, it\\'s fine,\\\")\n\n**0:15:06.5 Interviewee:** Yeah. I don\\'t know. In a way like\\... The work that like accountant does right now, it\\'s very different than what accountant did 30 years ago. Did we replace it? No, we didn\\'t. We augmented work. We changed the work of accountant so that the work is now simpler. So replacing completely accountant, in a way, yes, we also\\... Because the current, the set of tasks that accountant did 30 years ago, it\\'s automated already. Do we still need accountants? Yes. So same here. Maybe the job that CEO is doing right now in 30 or 40 years, everything that right now, as of today, CEO is doing in 40, 30 years, we will still\\... The computer will do it. Would we still need the human there? Yes. If this answers your question.\n\n**0:15:45.1 Vael:** Will we need the humans? I can imagine that we can have like, eventually AI might be good enough that we could have it do all of our science and then, or it\\'s just so much smarter than us then we\\'re just like, well, you\\'re much faster at doing science. So I\\'ll just let you do science.\n\n**0:15:57.8 Interviewee:** So let me rephrase your question a bit. So what you\\'re saying right now is it is a black box right now, that right now, that\\'s CEO right now, that\\'s a CEO job. That\\'s what CEO is doing. There\\'s some input, then some output. So what you\\'re saying that now we can automate it. 
And now the input and output will also feed through something to computer let\\'s say, but then what is, what would be the\\... We\\'ll have to refine the input and output then, because it still should serve humans. Right?\n\n**0:16:21.9 Interviewee:** So previously you need to have drivers, like for the tram you\\'re having to have a driver. Now instead of drivers, you have computers, but you still need to have a person to supervise the system in a way. Or, but then you\\'re talking about even that being automated. But in same time, you cannot\\... Like the system, like for example, self-driving car, it\\'s become a tool for someone else. So you\\'re removing the work of a driver, but you replace it with a system that now it\\'s called something else. Like Uber. Previously, you had to call a taxi. Now you have an app to do it for you.\n\n**0:16:49.7 Interviewee:** There\\'s an algorithm that does it for you. So the system morph into something else. So same thing here. I think as CEOs, in a sense, they might be replaced, but the system also would change as well. So it won\\'t be the same. It won\\'t be like, okay, we\\'ll have a Google. And there is a CEO of Google who is like robot. Now the Google will morph in a way that the task the CEO is doing would be given the computer. But the Google will still\\... Like also, by the way, Google, even Google. Google works on its own. In fact, if you, right now, fire all the employees, it\\'ll still work a few days. Everything that we do, we do for the future. Like it\\'s pretty unique moment in history, right? \\'Cause previously, like before the industrial revolution, you had to do things yourself. Then with the factories and factories well then, okay, you\\'re helping factory to do its work. And now there is a third wave, whatever, fourth wave, industrial revolution. We don\\'t even do anything. It\\'s on\\... In a way Google doesn\\'t have a CEO, the Google CEO doesn\\'t work for the today Google. Google CEO works for the Google in a year which is\\... So Google work\\... Google is already that. Google doesn\\'t have a CEO. So that\\'s what I mean.\n\n**0:17:56.8 Vael:** Alright. Uh, I\\'m going to do the science example, because that feels easier to me, but like, like, okay. So we\\'re like doing\\... We\\'re attempting to have our AI solve cancer and normally humans would do the science. And be like, okay, cool I\\'m going to like do this experiment, and that experiment, and that experiment. And then at some point, like we\\'ll have an AI assistant and at some point we\\'ll just be like, alright, AI solve cancer. And then it will just be able to do that on its own. And it\\'s still like serving human interests or something, but it is like kind of automated in its own way. Okay. So do you think that will\\... When do you think that will happen?\n\n**0:18:34.0 Interviewee:** The question is how with\\... The question is, can this task be, sorry I know you\\'re asking about the timeline and I want to be, I know\\... I don\\'t want to ramble too much. But I think I want to be specific enough, what kind of problem we\\'re talking about. If we\\'re talking about the engineering problem, that we\\'re talking about the timeframe of our lifetime. If you\\'re talking about a problem that involves more creativity, like for example, come up with a new vaccine for the new coronavirus? Sorry, it\\'s automatic. I think that work we could do in the 20\\... 20-30 years. 
Right, because we have tools, we know the engineering tools, what needs to be done, where you can do A, B, C, you\'re going to get D. Once you have D you need to pass it through some tests and you\'re going to get E, and that\'s pretty much automated. I think this we can do in 20-30 years. Solving cancer, I just don\'t know enough\-- how much creativity needs to be there. So harder, probably, yeah. Yeah, yeah.\n\n**0:19:21.0 Vael:** Yeah, no, no, that\'s great. And yeah, and you don\'t know when we\'ll\... For\... So probably more than our lifetime, or more than 30 years at least, for creating those?\n\n**0:19:28.9 Interviewee:** Mm-hmm. Mm-hmm.\n\n**0:19:30.1 Vael:** Alright, great. Cool. Alright, I\'m moving on to my next set of questions. So imagine we have a CEO AI. This is\... I\'m still going back to the CEO AI even though\... (Interviewee: \"Sure, of course.\")\n\n**0:19:40.8 Vael:** And I\'m like, \"Okay CEO AI, I want you to maximize profits and try not to run out of money or exploit people or\... try to avoid side effects.\" And this currently is very technically challenging for a bunch of reasons, and we couldn\'t do it. But I think one of the reasons why it\'s particularly hard is we don\'t actually know how to put human values into AI systems very well, into the mathematical formulations that we can do. And I think this will continue to be a problem. And maybe, and\... I think it seems like it would get worse in the future as like the AI is optimizing over, kind of, more reality, or a larger space. And then we\'re trying to instill what we want into it. And so what do you think of the argument: highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\n\n**0:20:28.3 Interviewee:** Okay, so this is quite a big misconception of the public\-- and I think actually, it\'s rightfully so, because I am actually working on this\... So what you said right now is like the famous paperclip example, right? Which is going to turn the whole world into paperclips, and that\'s kind of what you said more or less, right? So the problem here is that current systems, a lot of them, it\'s true, they work on a very specific one-dimensional objective. There\'s a loss function that we\'re trying to minimize. And it\'s true, like GPT-3 and all these systems, currently they are there, they have only one number they want to minimize. And this, if you think about it, is way too simplistic for this very reason, right? Exactly, because if you want to just maximize the number of paperclips, you\'re just going to turn the whole world into the paperclip machine factory. And that\'s the problem. But the reality is much more complicated. And in fact, we are moving there, like my research was, we\'re moving away from that. We\'re trying to understand, first of all, if intelligence can emerge on its own without it being minimized explicitly.\n\n**0:21:20.2 Interviewee:** And second of all, these pitfalls, the pitfalls where you\'re just minimizing one number\-- of course, that\'s not going to work. So answering your question, yeah, yeah, I think it will fail, because it will face the real-world scenario unless it has specific checks and balances. For example, there\'s also the online learning paradigm, right, where you basically learn from every example as they come in, in a time series.
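The "only one number to minimize" point and the online-learning alternative the interviewee sketches here can be made concrete in a few lines. The snippet below is an editorial illustration only, not anything from the interview (it assumes NumPy; the toy data, the parameter names, and the "side effect" penalty are all invented): a toy model is updated one example at a time, as in online learning, and each update balances a task objective against a second objective via a simple weighted sum, the most basic way to go beyond a single loss.

```python
# Illustrative sketch only (not the interviewee's system): online updates on a
# toy linear model, balancing a task loss against a second objective instead of
# minimizing one number alone. Requires NumPy.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # hidden relationship the model should learn
w = np.zeros(3)                       # model parameters
lr = 0.05                             # learning rate
penalty_weight = 0.1                  # how much the second objective counts

def task_grad(w, x, y):
    """Gradient of the squared prediction error for one streamed example."""
    return 2.0 * (w @ x - y) * x

def side_effect_grad(w):
    """Gradient of a second objective (here: keep the parameters small)."""
    return 2.0 * w

for t in range(2000):                 # examples arrive one at a time, as a stream
    x = rng.normal(size=3)
    y = true_w @ x + 0.1 * rng.normal()
    # An agent minimizing only task_grad would ignore every other consideration;
    # here the online update trades off two objectives on every step.
    w -= lr * (task_grad(w, x, y) + penalty_weight * side_effect_grad(w))

print("learned:", np.round(w, 2), "vs. true:", true_w)
```

A weighted sum is, of course, still a single scalar in the end; the interviewee's fuller point about sets of actors with their own goals would go further. But even this minimal version shows how the per-example online update differs from training once against one fixed loss.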
I think this system need to be revamped to work in a larger scale obviously, but this is the kind of system that potentially could work, where you don\\'t just minimize, maximize one objective, but you have a set of objectives or you have a set of actors with their own goals and their intelligence is emerging from them. Or you learn online. You just fail and you learn and you forget what you learned, you have\\... learn in a continuous fashion. So like all of these things that we as a humans do, could be applicable for AI as well. Like we are the humans, we don\\'t have\\... You don\\'t spend your day like, go to sleep, okay, today, day was like 26. You don\\'t do that. And even if you do that, you probably will have multiple numbers. This was 26, this was 37. Okay, it doesn\\'t matter the day.\n\n**0:22:19.3 Vael:** Yeah, that makes sense. So, one thing I\\'m kind of curious about is there\\'s\\... like, when you say, it won\\'t work, just optimizing one number. And will it not work in the sense that, we\\'re trying to advance capabilities, we\\'re working on applications, Oh, turns out we can\\'t actually do this because we can\\'t put the values in. And so it\\'s just going to fail, and then it will get handled as part of the economic process of progress. Or do you think we\\'ll go along pretty well, we\\'ll go along fine, things seem fine, things seems like mostly fine. And then like if things just diverge, kind of? Kind of like how we have recommender systems that are kind of addictive even though we didn\\'t really mean to, but people weren\\'t really trying to make it fulfill human values or something. And then we\\'d just have the sort of drift that wouldn\\'t get corrected unless we had people explicitly working on safety? Yeah, so, yeah, what do you think?\n\n**0:23:06.4 Interviewee:** I think people who work on safety, and you could see it yourself, people who work on safety, people who work on the fairness, people who work on all the things, checks and balances, so to speak, right? They are becoming more and more prominent. Do I know it\\'s enough? No, it\\'s not enough, obviously we need to do more for different reasons. For obviously DEI, but also for just privacy and safety and other things. And also for things we just talked about, right? Just because it\\'s true that the fact that\\... By the way, the fact that we have only the current system like GPT-3 or many other algorithms minimize only one value, it\\'s not a feature, it\\'s a bug. It\\'s just convenience because we use the machine learning optimization algorithms that work in this way, and we just don\\'t have other ways to do that. And I hope in the future other things would come up.\n\n**0:23:46.0 Interviewee:** In answering your question, I don\\'t think it will necessarily diverge, we\\'ll just hit a roadblock and we\\'re already hitting them. You\\'ve heard about AlexNet like 10 years ago, and now, sure, we have acute applications and like filters on the phone, but like, did AI actually enter your life, daily life? Well, not true, I mean, you have better phone, they\\'re more capable, but actually AI, like in terms of that we all dream about, does it enter your life? Well, not really. We can live without it, right? So, we\\'re already hitting all these roadblocks, even like medical application. Google 10 years ago, claimed they\\'d solved like the skin cancer when they can detect it, and it didn\\'t\\... It didn\\'t really see the light of day except for some hospitals in India, unfortunately. 
So we\\'re already hitting tons of roadblocks, and I don\\'t think it\\'s\\... It\\'s like for this reason precisely, because when you face reality, you just don\\'t work as good as you expect for multiple reasons.\n\n**0:24:30.4 Vael:** Interesting. Cool. So do you think that others\\... This\\... Have you heard of the alignment problem? Question one.\n\n**0:24:38.1 Interviewee:** No.\n\n**0:24:40.7 Vael:** No. Cool, all right, so you\\'ve definitely heard of AI safety, right?\n\n**0:24:43.5 Interviewee:** Mm-hmm.\n\n**0:24:43.9 Vael:** Yeah. Alright. So one of the definitions of alignment is building models that represent and safely optimize hard-to-specify human values, alternatively ensuring that AI behavior aligns with system designer intentions. So I\\'m\\... So one of the questions, the question that I just kind of asked, was trying to dig at, do you think that the alignment problem per se, of trying to make sure that we are able to encode human values in a way that AIs can properly optimize, that that\\'ll just kind of be solved as a matter of course, in that we\\'re going to get stuck at various points, then we\\'re going to have to address it. Or do you think that\\'ll just like\\... We will\\... It won\\'t get solved by default? Things will continue progressing in capabilities, and then we\\'ll just have it be kind of unsafe, but like, \\\"Uh, you know, it\\'s good enough. It\\'s fine.\\\"\n\n**0:25:21.1 Interviewee:** I think a bit of both. There\\'s so much promise and so much hype and so much money in pushing AI forward. So I think a lot of companies will try to do that. These various\\... We live in democracy, fortunately or unfortunately or actually we live in more like value of dollar, unfortunately, our society. At least in some countries are valued by progress. And especially companies, they have to progress, they have to advance and this is one of the easiest ways to advance. But I think some companies may be bad actors, whatever, they will try to push it to the limit. But these questions are ultimately unsolved, in a way this current system are designed, for the reason we discussed. So I think it will be a bit of both. Some companies will back down, some companies will try to push it to the limit, so we\\'ll see. Depends on applications as well. I mean, some applications are safe. If you\\... I\\'m sorry, but\\... Sorry to bring it up, but for example, there was a case of AI in Microsoft when they released the bot and the bot start cursing. Which is okay. It\\'s a cute example, they should have done a bit of PR loss, it\\'s fine, but it\\'s different from the car crashing you into that tree. So depends, depends on the application.\n\n**0:26:27.4 Vael:** Yeah, it seems definitely true. Alright, so next argument is focusing on our CEO AI again, which can do multi-step planning, and it has a model of itself in the world. So it\\'s modeling other people modeling it, because it feels like that\\'s pretty important for having any sort of advanced AI that\\'s acting as a CEO. So the CEO is making plans for the future, and it\\'s noticing as it\\'s making plans for its goal of maximizing profits with constraints that some of its plans fail because it gets shut down. So we built this AI so that it has the default thing where you have to check with humans before executing anything, because that seems like a basic safety measure. And so the humans are looking for a one page memo on the AI\\'s decision. 
So the AI is thinking about writing this memo, and then it notices at some point that if it just changes the memo to not include all the information, then maybe the AI will be\\... I mean, then the humans will be more likely to approve it, which means it would be more likely to succeed in its original goal.\n\n**0:27:22.8 Vael:** And so, the question here is\\... So this is not building self preservation into the AI, it\\'s just like AI as an agent that\\'s optimizing any sort of goal and self preservation is coming up as an instrumental incentive. So, what do you think of the argument, \\\"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous?\\\"\n\n**0:27:42.9 Interviewee:** Well, you already put intelligence and the human level into the sentence as well.\n\n**0:27:48.3 Vael:** Yes.\n\n**0:27:48.6 Interviewee:** Yeah, and it kind of\\... I\\'m already against that, because I don\\'t think the system would actually behave in a way, like a sneaky way to avoid. Well, first of all, the current AI, even currently AI systems are highly uninterpretable. It\\'s very hard to interpret what exactly is going on, right? But it still work within the bounds, right? So the example I gave you was Lee Sedol, when the Go game, already knew it\\'s won, but it couldn\\'t explain why it won, and it take human to parse it. Or another example that I like, in the chess, AI, at some point, put a queen in the corner, something you just never do as a human. And it couldn\\'t explain, obviously, in fact. But it never pick up the board and crash it on the\\... It will work within the bounds.\n\n**0:28:29.1 Interviewee:** So of course, if the bounds of this program allows the program to cheat in a way and withhold information, then yes, but\\... Again, it kinda works pair in pair. On one side it\\'s really hard to interpret, so this one-pager that AI has to provide, it also has to be curated. And it will not include all information because it\\'s impossible to digest all the computer memory and knowledge into one page. So on the one side, this page will always be limited and with all this lossy compression of the state of computer. But on the other side, I don\\'t think computers can on purpose cheat on this page. Actually they might, depending on the algorithm\\[?\\], again, but I think it\\'s a valid concern. That\\'s an easy question, an easy answer, but it\\'s depending on the system, depends how you design it.\n\n**0:29:09.5 Vael:** Yeah, so I think I\\'m trying to figure out what exactly is in this design system. So one thing is it has to be very capable, and it has to\\... I want it to be operating over like reality in a way that I expect that CEOs would, so its task is interacting\\... It\\'s doing online learning, I expect, and it\\'s interacting with actual people. So it\\'s giving them text and it\\'s taking in video and text, and interacting with them like a CEO would. And it does have to, I think, have a model of itself in the world in order for this to happen and to model how to interact with people. But if we have AI to that extent, which I kind of think that eventually we\\'ll develop AI with these capabilities. I don\\'t know how long it will take, but I assume that these are commercially valuable enough that people will eventually have this sort of system that\\... This is an argument that like\\... 
And I don\'t know if this is true, but any agent that\'s optimizing really hard for a single goal will at some point, like, figure out, if it\'s smart enough\... that it should like try to eliminate things that are going to reduce the likelihood of its plan succeeding, which in this case may be humans in the way.\n\n**0:30:10.5 Interviewee:** I think you\'re right. Actually, while you were talking I also came up with this example that I really like\-- maybe you saw it as well\-- where there\'s an agent that optimized solving a particular race game where you control the car. At some point it found a loophole, you remember the \[hard-to-parse\] example, and it finds a loophole and just goes in circles. And the answer to that, you need to have explicit test goals. But in online learning settings, it\'s really hard. Plus, again, coming back to question number one, which is like, the system is so large at some point, you\'re just not able to cover all the cases. So yeah, yeah, I think so, I think it\'s possible, especially today in online learning fashion when you can\'t really have a complete system where you have all integration tests possible, and then you ship it. Once a system automatically learns and updates, then it becomes\... That could be a problem. Yeah, I agree.\n\n**0:30:52.8 Vael:** Yeah. So this boat driving race is actually one of the canonical examples of the alignment problem, as it were, which is like\-- you put in the right example into it. There\'s another part of the\... so that\'s one version of \"outer alignment\", which is where the system designer isn\'t able to input what it wants into the system well enough, which I think gets harder as the AI gets more powerful. And then there\'s an \"inner alignment\" kind of issue that people are hypothesizing might happen, which is where you have an optimizer that spins up another optimizer inside of it, where the outer optimizer is aligned with what the human wants, but it now has this inner optimizer. The canonical example used here is how evolution is in some sense the top-level optimizer and it\'s supposed to create agents that are good at doing inclusive reproductive fitness and instead we have humans. And we are not optimizing very hard for inclusive reproductive fitness and we\'ve invented contraceptives and stuff, which is like not very helpful. And so people are worried that maybe that would happen in very advanced AIs too, as we get like more and more generalizable systems.\n\n**0:31:51.0 Interviewee:** Yeah. I think that there are two things to say. First of all is what you said about the loop in the loop. Now we\'re talking about what exactly, what kind of a system can an algorithm create, right? Because for example, if you look at current machine learning, we do know that, for example, convolutional layers are good at computing equivariances, like translational equivariances. If you move the object, they\'re supposed to be indifferent, but \[fully\]-connected layers don\'t do that. So this \[hard to parse\] behavior you described? You need to have a system, first of all, that\'s capable enough for this kind of behavior. That\'s a big if, but okay. Once we get there, if we get there, the second question is, okay, can you cover for that? Can you figure out that these cases are eliminated completely from existence?\n\n**0:32:32.4 Interviewee:** And the example of the CEO is maybe a good example. For me, the interesting example that is\...
that I can definitely see and envision, not even CEO, but like for example, application that controls your behavior in a way. Like, for example, curate your Tinder profile or curate your inbox sorting. And control you through that. Then yeah, for sure. Yeah. If you don\\'t control for everything, it can be smarter than us and kind of figure out, back-engineer, how humans work, because we\\'re not \\[hard to parse\\] than that. And curate the channel for us and even\\... Get us to the local minimum we might not want, but we\\'ll still maximize its profits, whatever the profit means for the computer.\n\n**0:33:07.2 Vael:** Okay. Yeah. So I\\'m one of those people who is really concerned about long term risk from AI, because I think there\\'s some chance that if we continue along the developmental pathway that we have so far, that if we won\\'t solve the alignment problem, we won\\'t figure out a good way in order to like encode human values in properly and include the human in the loop properly. Like one of the easiest solutions here is like trying to figure out how to put a human in that loop with a AI, even if the AI is vastly more intelligent than a human. And so people are like, Oh, well maybe if you just train an AI to like work with the human and translate the big AI so that like the human understands it, and this is interpretability, and then you have like a system who\\'s training that system and maybe we can recursively loop up for something. But.\n\n**0:33:52.2 Interviewee:** The problem here, why they see, like, for example, let\\'s take a very specific example. There\\'s AI systems that, for example, historically curated, curated by humans, obviously without bad intent, but it gives bad credit scores to black population say, like people of color. That\\'s really bad behavior. And this behavior is kinda easy to check because you have statistics and you look at statistics. It\\'s one, let\\'s call it one hop away. So you take the data, you take statistics, is done. The second hop, the two hops away would be: it creates dynamics that you can check, not right away, but later that potentially show something like that. It would be harder for humans to check. You can also, if you think about it for a while, you can come up with like three hop, right. Something that creates something that creates something that it does. So it\\'s much easier and much harder to check. You don\\'t know until it happens, and that\\'s the point. So you have this very complicated dynamic you can\\'t\\... There\\'s not even flags that you can check, red flags that you can check in your model. That might be the issue.\n\n**0:34:49.3 Vael:** Yeah. There\\'s also\\... Yeah. Interpretability seems really\\... It seems really important, especially as very, very intelligent systems, and we don\\'t know how to do that. So possible versions of things going badly, I think, are\\... So if you have an AI system that\\'s quite powerful and it\\'s going to be instrumentally incentivized to not let itself be modified, then that means that you don\\'t have that many chances to get it right in some sense, if you\\'re dealing with a sufficiently intelligent system. Especially also because instrumental incentives are to acquire resources and influence and so\\... and also improve itself. Which could be a problem maybe of recursive self-improvement. And then it, like, can get rid of humans very quickly if it wanted to, via like synthetic biology. 
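The "one hop away" check the interviewee describes a few turns above\-- take the model's decisions, take statistics by group, done\-- is simple enough to show directly. The sketch below is purely an editorial illustration (it assumes NumPy; the data, the group labels, and the approval rule are invented for the example, not drawn from any real credit-scoring system).

```python
# Illustrative "one hop" audit: a single pass over recorded decisions exposes a
# group-level disparity. Synthetic data only; requires NumPy.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n)   # a sensitive attribute
# A hypothetical model that (without anyone intending it) approves group A more often.
approved = np.where(group == "A",
                    rng.random(n) < 0.60,
                    rng.random(n) < 0.45)

for g in ("A", "B"):
    print(f"approval rate, group {g}: {approved[group == g].mean():.2f}")
```

The "two or three hops away" failures the interviewee worries about are dynamics that only show up after the system has been acting in the world for a while, which is exactly why no single statistic like this can serve as the red flag.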
Another kind of\\-- this is not as advanced AI, but what if you put AI in charge of the manufacturing systems and food production systems, and there\\'s some sort of correlated failure. And then we have misuse, and then the AI assisted-war as, like, various concerns about AI. What do you think about this?\n\n**0:35:37.9 Interviewee:** Yeah. So one thing I want to say is that the AI system are kinda too lazy. In a sense that the reason why this loophole worked with the car is because this solution is easier in some mathematical way to find the proper solution.\n\n**0:35:52.7 Interviewee:** So one thing humans can address this problem is just looking at\\... Use a supervisor as a human, or with a test, which is pretty much like supervisors\\-- check all the check and balances. We can create the system for which the, maybe mathematical even, finding proper solution is easier for the computer than finding its loopholes. That\\'s one thing I want to say. And the second thing I want to say, which is now coming to nature because now we\\'re talking about the rules of nature. But if it so happened that we design a system for which finding the loophole\\... for example, we have laws of physics\\-- we also have laws in behavior. Like if you have an algorithm that, you know, wants to organize human behavior or something, there\\'s also laws of behavior. So it might be an interesting question that once we have this algorithm, we can get a bunch of sociologists, for example, people who are familiar with it, to study this algorithm and figure out that if, for example, loopholes are actually more probable than normal behavior quote on quote. So for example, being a bad person is better than being a good person\\-- or easier. Not necessarily better, but easier than being a good person. Which we kind of see it in society sometimes.\n\n**0:36:51.5 Interviewee:** So it\\'s curious if an algorithm will actually discover that. And finding this loophole with a car is easy because it\\'s there, you just need to move a little bit. So for a computer is easy to find it, like it\\'s a local minimum that\\'s really easy to fall in. But if you design the system for which these loopholes are hard, that might be easier. Or the question is, can we define a problem for which the proper solution is easier?\n\n**0:37:11.9 Vael:** Yeah. And I think the problems are going to get much more\\... harder and harder. Like if you have an AI that\\'s doing a CEO thing, then I imagine like, just as humans figure out, there\\'s many loopholes in society, many loopholes in how you achieve goals that are much cheaper in some sense. So I do think it\\'s probably going to have to be designed in the system rather than being in the problem per se, as the problems get more and more difficult. Yeah.\n\n**0:37:32.1 Vael:** So there\\'s this community of people working on AI alignment per se and long term AI risk, and are kind of trying to figure out how you can build\\... How you can solve the alignment problem, and how you can build optimization functions or however you need to design them, or interpretability or whatever the thing is, in order to have an AI that is more guaranteed, or less likely to end up doing this kind of instrumental incentive to deceive or not want to\\... not self-preservation, maybe get rid of humans. So I think my question right now is, and there\\'s money in this space now, there\\'s a lot of interest in it and I\\'m sure you\\'ve heard of it\\-- I mean, I\\'m not sure you\\'ve heard about it, but there\\'s\\... 
Yeah. There\\'s certainly money and attention these days now that there wasn\\'t previously. So what would cause you to work on these problems?\n\n**0:38:12.4 Interviewee:** \\...Well, ultimately a lot of people are motivated by actually just very few things and one of them is kids. Kids cause here you want to have a good future for yourself and the kids.\n\n**0:38:24.8 Interviewee:** You want to live in a better and better human, better and better society, better and better everything. So that, and actually it hits home. The examples we discussed even during this call are, could be pretty grim. If we don\\'t make this right. So putting resources there I think is really important. If people, before they come up with atomic bomb, they will figure out situations of which we are facing now, people might not even come up with atomic bomb, but do it in a safer way. Or like people knew about Chernobyl before it happened, obviously they would make it better. So having this hindsight, even though we don\\'t know what\\'s going to happen in the future, but putting the resources there, I think is definitely a smart move.\n\n**0:38:57.9 Vael:** Got it. And what would cause you specifically to work on this?\n\n**0:39:03.2 Interviewee:** Yes, I know, you asked this question. Yeah, well again, particular problems. I mean, it\\'s an interesting problem. For me I\\'m actually\\... Since I work in optimization, so I like to have well-formulated problems. And making sure this one goes, this problem is more\\... now it\\'s kind of vague. I mean, even now we discuss it. I agree with you that this problem is valid. It makes sense. It exists. But it\\'s still vague, like how you study it. My PhD was also on the way to interpret, to come up with a way to interpret data.\n\n**0:39:30.2 Interviewee:** In fact, like maybe it\\'s\\... I don\\'t want to spend too much time on it, but basically the idea is to visualize the data. If you have a very high dimension of data, you want to visualize it. But it\\'s very lossy. You just visualize something, you just do something and it doesn\\'t represent everything. And it\\'s in a way it was actually, it was well-formulated, because there is a mathematical formula to minimizing. And of course it comes with conditions like \\[hard to parse\\] and loss and stuff, but in a way it\\'s there, the problem is defined. So the same here. Like in a way, there\\'s a mathematical apparatus. \\...Maybe actually I\\'m going to be the one developing it as well. So I\\'m not saying that, \\\"come give me the one I\\'m going to work on!\\\" I think that\\'s a \\[hard to parse - \\\"would be direct\\\"?\\], so problem that I would be excited about.\n\n**0:40:06.1 Vael:** Yeah. And I mean, I think like what this field really needs right now is someone to specify the problem more precisely. Just because it\\'s like, Oh, this is a future system, it\\'s like at least 50 years, well I don\\'t know at least\\-- it\\'s far away. It\\'s not happening immediately and we don\\'t have very good like frameworks for it. And so it makes it hard to do research on. Cool. Alright. Well, I\\'ll send you some resources afterwards if you feel like looking into it, but if not, regardless, thank you so much for doing this call with me.\n\n**0:40:33.1 Interviewee:** Yeah, I appreciate it. 
Thank you for your time, it was really fun.\n", "filename": "NeurIPSorICML_q243b-by Vael Gates-date 20220318.md", "id": "c163c88165bea2dd848c23d0d9acbc01", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "NeurIPSorICML_7oalk-by Vael Gates-date 20220320", "authors": ["Vael Gates"], "date_published": "2022-03-20", "text": "# Interview with AI Researchers NeurIPSorICML_7oalk by Vael Gates\n\n**Interview with 7oalk, on 3/20/22**\n\n**0:00:03.2 Vael:** Alright, my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:08.5 Interviewee:** Yeah, I study biologically-inspired artificial intelligence of building models of biological intelligence, mostly with the visual system, but also with cognitive functions, and using those models to understand differences between humans and machines. And upon finding those differences, the hope is that we can build more human-like artificial intelligence and in the process develop models that can better explain the brain.\n\n**0:00:46.1 Vael:** Interesting, yep. I was in that space for a while. The dual\\-- AI\\--\n\n**0:00:49.1 Interviewee:** Oh, cool. I saw you worked with Tom Griffiths.\n\n**0:00:54.0 Vael:** Yeah, that\\'s right, yep.\n\n**0:00:55.0 Interviewee:** Yeah, very much so. Cool.\n\n**0:00:56.5 Vael:** Alright, so my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:01:06.9 Interviewee:** So I\\'m most excited about AI, not for most of the applications that are getting a lot of praise and press these days, but more so for convergences to biomedical science. I think that the main driver of progress in AI over the past decade has been representation learning from large-scale data. And this is still an untapped area for biomedical science, so we really don\\'t know what elements of biology you can learn predictive mappings for. So, for instance, given an image of a cell\\'s morphology, can you predict its genome? Can you predict its transcriptome? Etcetera. So I think that what I\\'m most excited about is the potential for AI to completely transform the game of drug discovery, of our ability to identify the causes and identify targets for treating disease.\n\n**0:02:18.4 Vael:** Nice.\n\n**0:02:18.7 Interviewee:** What I\\'m most afraid of is in the game of AI that is most popular right now. The NeurIPS CDPR game, which\\... lots of people in the field have pointed out the issues with biases. And also\\... it\\'s late here, so I\\'m not thinking of a good term, but this Clever Hans, Mechanical Turk-ish nature of AI, where the ability to solve a problem that seems hard to us can give this sheen of intelligence to something that\\'s explained a really trivial solution. And sometimes those trivial solutions, which are available to these deep learning networks which exploit correlations, those can be really painful for people. So in different applications of using AI models for, let\\'s say screening job applicants, there\\'s all these ethical issues about associating features not relevant to a job, but rather just regularities in the data with a predictive outcome. So that\\'s a huge issue. I have no expertise in that. It\\'s definitely something I\\'m worried about. Something that\\'s related to my work that I\\'m super worried about\\... 
I mentioned I do this biological to artificial convergence, this two-way conversation between biological and artificial intelligence. The best way to get money, funding in academia, for that kind of research is going through defense departments. So one of the ARPA programs\\-- IARPA, DARPA, Office of Naval Research. So I know that the algorithms built for object tracking, for instance, could be extremely dangerous. And so I build them to solve biological problems, and then I scale them to large-scale data sets, large data sets to get into conferences. But, you know, easily, somebody could just take one of those algorithms and shove them into a drone. And you do pretty bad stuff pretty easily.\n\n**0:04:41.2 Vael:** Yeah.\n\n**0:04:43.2 Interviewee:** Yeah. I guess one more point related to the biomedical research. There\\'s this fascinating paper in Nature and Machine Intelligence. One of the trends right now for using AI in biology is to predict protein folding, or to predict target activity for a molecule or a small molecule. So when you do biomedical research, you want to screen small molecules for their ability to give you a phenotype in the cell. And so you can just use a multilayer perceptron or graph neural network, transformer, whatever, to learn that mapping, and do a pretty good job it turns out; it\\'s shocking to me. But what you can do is you can either predict\\-- let\\'s find the molecule that\\'s going to have the best target site activity, that\\'s the best therapeutic candidate. But you can also flip it, you can optimize it in the opposite direction. (Vael: \\\"Yeah\\--\\\") You saw that.\n\n**0:05:48.2 Vael:** Yeah. I saw that \\[paper\\].\n\n**0:05:49.0 Interviewee:** It\\'s so obvious, but easily, easily, easily you could use any of these biomedical applications for the nefarious things.\n\n**0:06:00.5 Vael:** Yeah, yeah, that\\'s right. That was a very recent paper right?\n\n**0:06:02.5 Interviewee:** Yeah, it just came out like last week.\n\n**0:06:05.1 Vael:** Got it. Yeah. Alright, we\\'ve got excited and risks. My next question is talking about future AI. So putting on a science fiction, forecasting hat. So we\\'re 50 plus years into the future, so at least 50 years in the future, what does that future look like?\n\n**0:06:22.6 Interviewee:** Okay. Optimistically, we\\'re all still here. So the reason why I got into this field is because I think it\\'s the biggest force multiplier there is. So all of the greatest problems that\\-- the existential crises we face today, there\\'s a potential for machine learning to be part of the solution to those problems. So focusing just for a moment on health, biomedical research, which is where I\\'m mostly interested right now, 50 years, I think that we\\'re going to be to a point where we know we have drugs that are completely uninterpretable to humans. We have solved disease, we have cured disease, but we don\\'t know why. We\\'ve also created a potential issue, byproducts from that, adversarial effects, for lack of a better phrase, by curing cancer through some black box method. That\\'s going to yield other problems, kind of like putting your finger in the dam. So I think that in 50 years from now, we will have machine learning-based solutions to some of the greatest health problems: cancer, aging, neurodegenerative disease that face us, but we won\\'t understand those solutions. And so that\\'s potentially the next frontier, which is to develop interpretable drugs. So I imagine that science will change. 
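The dual-use point above\-- that a learned property predictor can be "flipped" and optimized in the opposite direction\-- comes down to a sign change on the objective. The sketch below is an abstract editorial illustration only (it assumes NumPy; `predict_activity` is a made-up stand-in for a trained model, and the candidates are random vectors, not molecules).

```python
# Abstract illustration of flipping the optimization direction on a learned
# scoring model: the search machinery is identical, only the sign differs.
# Requires NumPy; everything here is synthetic.
import numpy as np

rng = np.random.default_rng(2)

def predict_activity(x):
    """Hypothetical trained predictor mapping candidate features to a property score."""
    w = np.array([0.3, -1.2, 0.8, 0.1])
    return x @ w

candidates = rng.normal(size=(5000, 4))   # stand-ins for candidate compounds
scores = predict_activity(candidates)

best = candidates[np.argmax(scores)]      # intended use: most desirable candidate
flipped = candidates[np.argmax(-scores)]  # same code path with the objective negated

print("max score:", scores.max(), "min score:", scores.min())
```

Nothing about the model has to change for the flipped search; that is the sense in which, as the interviewee puts it, these applications could "easily, easily" be pointed at nefarious ends.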
The paradigm of science will shift where there are no longer bench scientists, but instead robotics is how science is done. So scientists ask high-level questions, and then biology becomes a search problem where you just, akin to a tricorder from Star Trek where it just solves, and you move onto the next question. You can imagine that framework being applied to other problems. Climate change, hopefully. I\\'m not optimistic, but hopefully, you could have solutions to climate change. But there will be a black box. And when you have a black box, there\\'s always issues with unintended effects. So I guess 50 years from now, we\\'ll have, rather than thinking about AGI and all that stuff, we\\'ll have solutions to some of our grandest challenges today, but those solutions may bear costs that are unexpected.\n\n**0:09:40.8 Vael:** Yeah, that makes sense. I have few follow-ups. So you mentioned existential risks facing humanity. What are those?\n\n**0:09:51.5 Interviewee:** So there\\'s climate change. There\\'s nuclear war, which is the byproduct of power. There is poverty and hunger, famine. With the risk of sounding like some VC, I think that there are chances to disrupt solutions to each of those problems using machine learning. For instance, for energy, already folks at DeepMind are using reinforcement learning to drive the development of new forms of fusion to search for better forms of fusion. Who knows if this is scalable, but I think with\\... it\\'s not going to be better algorithms, it\\'s going to be better integration of the toolset for studying fusion and these machine learning methods to make it more rapid to test and explore those questions that are related to that field, which I don\\'t know. I think that will lead to positive solutions. That would potentially yield lower-emission or no-emission, high-energy solutions that can power homes without any emissions. Similarly, with cars\\... self-driving, as much as I think that Elon Musk is barking up the wrong tree\\-- going LIDAR-free, \\[inaudible\\] approach to self-driving, that will be a solved problem within 10 years. And that\\'s going to help a lot the emissions issue. So energy and global warming are kind of tied together. And I think that there\\'s lots of different applications of machine learning and AI that can help address those problems. Likewise for famine, for world hunger. Genetically-engineered organisms are a good approach to that problem. What I mentioned about for solutions to biomedical problems, disease, etc.\\-- treating those as search problems and bringing to bear the power of artificial intelligence. I think you could probably adopt a similar approach for engineering organisms that are tolerant to weather, especially the changing climate and other diseases and bugs, etcetera. I think there\\'s going to be some overlap between the methods of work for each of these areas. I guess I didn\\'t mention COVID and pandemics. But again, that falls into the same framework of treating these existential problems as search problems, no longer doing brute force, then science. And instead resorting to black box solutions that are discovered by large scale neural network algorithms. Did I miss any? Oh, war. Yeah, I don\\'t know. Okay, so here\\'s an answer. Whoever has these technologies, who\\'s ever able to scale these technologies most rapidly, that\\'s going to be almost like an arms race. 
So whoever has these technologies will be able to generate so much more value than countries that don\\'t have these technologies that economies will be driven by the ability to wield appropriate artificial intelligence tools for searching for solutions to these existential issues.\n\n**0:14:36.1 Vael:** Got it, thanks. Alright, so my next question is\\... so this is kind of a spiel. Some people talk about the promise of AI, by which they mean many things, but that the thing I\\'m referring to is something like a very general capable system. So the cognitive capacities that can be used to replace all current day human jobs. Whether or not we choose to replace human jobs is a different question, but having the kind of capabilities to do that. So like, imagine\\... I mostly think about this in the frame of 2012, we have the deep learning revolution with AlexNet, and then 10 years later, here we are, and we have systems like GPT-3, which have kind of weirdly emergent capabilities, that can be used in language translation and some text generation and some coding and some math. And one might expect that if we can continue pouring all of the human effort that we\\'ve been pouring into this, with all the young people and the nations competing and corporations competing and algorithmic improvements at the rate we\\'ve seen, and like hardware improvements, maybe people get optical or quantum, that we might get\\... scale to very general systems, or we might hit some sort of ceiling and need to do a paradigm shift. But my question is regardless of how we\\'d get there, do you think we\\'ll ever get very general systems like a CEO AI or a scientist AI and if so, when?\n\n**0:15:55.3 Interviewee:** I don\\'t think it\\'s worth it. I see the AI as\\... look, I grew up reading Asimov books, so I love the idea of \\\"I, Robot\\\", not with Will Smith, but solving all these galaxy conquest issues and questions. But I don\\'t think it\\'s worth it because\\... I see this as a conversation between human experts, the main experts, and artificial intelligence models that are just going to force multiply the ability of human experts to explore problems. And maybe in a 100 years, when we\\'ve explored all of these problems, then we\\'ll be so bored that we say, \\\"Can we engineer something that contains all of these different domain expert AIs in one, in a way that can\\-- those AIs can respond to a question and affect behavior in the world and have their own free will.\\\" It\\'s more of a philosophical pursuit that I don\\'t even, I don\\'t\\... okay, so from an economic standpoint, I don\\'t know who\\'s going to pay for that. Even the military would be like, \\\"All I want is a drone that\\'s going to kill everybody.\\\"\n\n**0:17:30.9 Vael:** Yeah, Yeah, cool. So I have some counterpoints to that. So I\\'m not actually imagining necessarily that we have AI that is conscious or has free will or something. I think I\\'m just imagining an AI that is, has a, very capable of doing tasks. If you have a scientist AI then\\...We were talking about automating science, but in this case, it\\'s not even relying on an expert anymore. Maybe it defers to an expert. It\\'s like, do you want me\\... it looks to the expert\\'s goals, like, \\\"Do you want me to solve cancer?\\\" Or like a CEO, it has shareholders. And I think that there is actually economic incentive to do this. DeepMind and OpenAI have explicitly said\\... 
I think that they\\'re trying to make artificial general intelligence, or like things that can replace jobs, and more and more replace jobs. And I think we\\'re going in that direction with Codex and stuff, which is currently an assistant tool, but I wouldn\\'t be surprised if in the end, we didn\\'t do much coding ourselves.\n\n**0:18:24.9 Interviewee:** Okay, so yeah, I agree with what you said. Right, it\\'s going to change the job\\... So strike from the record what I just said. It\\'s going to change the job requirements. That would be a good feature in my opinion, where we no longer have to code because instead we describe our program. And again, it\\'s a search problem to find the right piece of code to execute the concept that we have. We describe the medical problem that we\\'re interested in, and we search within whatever imaging modality or probe that we\\'re using to study that system. We can search for that solution. Where I\\'m not so sure is, we have this huge service industry, and you mentioned shareholders and I count that as one of them. I think when it comes to service, there\\'s this quality, there\\'s this subjectiveness where for instance, in medicine, talking about radiology or pathology or even family medicine. I think there will be tools to give you diagnoses, but they\\'re always going to come with a human second opinion. So, from that standpoint, it\\'s going to be a bit of a science fair, kind of a sideshow to the expert. I don\\'t think that human experts in service, medical or business, etcetera, are going to be pushed aside by AI, because ultimately, I think if there\\'s a different answer given by the artificial pathologist versus the human pathologist, the human pathologist will double check and say, \\\"You are wrong, AI\\\" or \\\"You are right, AI\\\". But the patient will never receive the artificial intelligence answer. So I think in some fields, there\\'s going to be a complete paradigm shift in how research is done, or how work is done, research for one. Driving, supply chain, transportation, that\\'s another example. Which will be great. I think that would be great, it will push down costs, push down fatalities, morbidity, and probably be a net good. But yeah, for a shareholder, I cannot imagine a scenario in which a CEO who has to figure out the direction of the company, has to respond to shareholders would ever say, like, \\\"This time around, I\\'m just going to give an AI a GPT-6 report for the markets that we should enter, the new verticals we should work on, etcetera.\\\" Maybe it would be just be a tool to get feedback.\n\n**0:21:54.2 Vael:** Yeah. So it seems pretty true to me that we started\\... our current level of AI, which is machine learning, that they started as tools and we use them as tools. I do\\-- it sort of feels to me that as we progr\\--\\... we\\'ve been working on AI for under 100 years, and deep learning revolution is quite new. And it seems to me like there are economic incentives towards making things more convenient, towards more automation, towards freeing up human labor to do whatever it is that people could do. So I wouldn\\'t be surprised if eventually we reached a point where we had systems that were capable enough to be CEO. We could be like, \\\"Alright AI, I want you to spin me up a company,\\\" and maybe you would be able to do that in some very different future, I would imagine.\n\n**0:22:38.2 Interviewee:** Ah, so see\\-- okay, so this is a question. 
Creativity versus capturing the regularities or the regulations of a field. So I can imagine an AI VC, easy. Or in finance, yeah. Finance, I can imagine that being overtaken by AI. I\\'m thinking CEO, spinning up your own company, identifying weakness in a field, or maybe some potential tool that can monopolize a field. That requires creativity. And I know there\\'s this debate about creativity versus just interpolation in high-dimensional space of what GPT-3 is doing, and yes, I\\'ve seen some cool-looking art, but I don\\'t know. That seems like a\\... I think if you talk to me about this, if we talked about this for another week, maybe you would convince me. But I\\'m not quite there yet that I can imagine deep learning being able to do that. Although, now that I say that, maybe I\\'m just thinking about it wrong. Where most businesses just amount to, \\\"What has worked somewhere else? Can that frame of reference work in this field? Can you disrupt this field with what\\'s worked there?\\\" So maybe, maybe. Okay, I agree with what you\\'re saying. I\\'m evolving here, on the spot. I can see it, I can see that. I guess the one field that completely, that I\\'m going to push back on is medicine, where I think that part of service need a human to tell you what is up. If you don\\'t have a human, then I can\\'t imagine a machine diagnosis would ever fly. Maybe in 200 years, that would fly. There would have to be a lot of development of the goal for that to happen.\n\n**0:24:45.3 Vael:** Yeah, I think there\\'s a number of things going on here. So I don\\'t know that we\\'ll be able to achieve\\... I do think a CEO is very high cognitive capabilities, you need to be modeling other people, you need to be talking to other people, you need to be modeling other people modeling you, you need to have multi-step planning, you need to be doing a whole bunch of very advanced\\...\n\n**0:25:03.6 Interviewee:** Yeah, convincing people, interpersonal relationships. Yeah.\n\n**0:25:06.2 Vael:** Yeah, there\\'s a whole bunch of stuff there, and I don\\'t that we\\'re\\... we\\'re not near there with today\\'s systems for sure. And I don\\'t know that we can get there with the current deep learning paradigm, maybe we\\'ll need something different. I think that\\'s a possibility. I think what I\\'m doing is zooming way out where I\\'m like, \\\"Okay, evolution evolved humans, we have intelligence.\\\" I think that humans are trying pretty hard to make this thing happen, make up whatever intelligence is happen. And like, if we just continue working on this thing, absent some sort of huge disaster that befalls humanity, I kind of expect that we\\'ll get there eventually.\n\n**0:25:39.6 Interviewee:** Yeah.\n\n**0:25:40.2 Vael:** And it sounds like, you said that like, maybe eventually, in like 200 years or something?\n\n**0:25:45.1 Interviewee:** Definitely, definitely. So, I agree with what you said, science is incremental. And I think we have not found any ceilings in the field of modeling intelligence. We\\'ve continued to move. Even though we\\'re kind of arguably asymptoting with the current paradigm of deep learning, we have a proof of concept that it\\'s possible to do better, which is our own brains. And so, maybe we just need to have neuromorphic hardware or whatever, who knows, who knows? So I think you\\'re right, and if you can operationalize, just like you did very quickly, what it takes to be a CEO. 
If you can build models of each of those problems and if you can do what people do, which is benchmark those models and just start to make them better and better, yeah, I can imagine having some\\... It sounds crazy to even visualize that, but yeah, having an automated CEO, I think so. I think the part that\\'s probably the most tractable is finding the emerging markets or the, like I said, the technologies that have proven to be flexible, flexibly applied, or the tricks, let\\'s say, that have worked in multiple fields, and then identifying the field where it has just not been applied yet. Kind of that first mover type advantage. I think that\\'s where AI could\\... I can definitely imagine an AI identifying those opportunities. Almost like drug discovery, right? It\\'s just a hit. And you\\'d have a lot of bad hits, and like, \\\"This is bullshit. Nobody cares about knee pads on baby pants,\\\" which is like the WeWork founder before WeWork, but you also have some good hits. So the interpersonal stuff, yeah, that\\'s like 200 years off.\n\n**0:27:50.6 Interviewee:** Although you have chat bots. So I\\'m working on my own startups, a lot of this is trying to convince people to spend their time, that this is worthwhile, that they should leave their current stable positions to do this, that they should work\\... They should do the work for you. They should drop what they\\'re doing and do what you need them to do right now, and that you\\'re going to help them eventually. And a lot of that is empathy and connection, and there\\'s no evidence that we have yet that we can build a model that can do that stuff. So yeah, that\\'s a really interesting idea, though. And I do think if you go through and operationalize each of those problems, you could make progress. So, 200 years, yes, I agree with you.\n\n**0:28:56.7 Vael:** Alright, so maybe we\\'ll get something like this in 200 years.\n\n**0:29:00.1 Interviewee:** Yeah.\n\n**0:29:00.6 Vael:** This is mainly an argument about the belief in\\... like, faith in humans to follow economic incentives towards making life more convenient for themselves. That\\'s where it feels like it\\'s coming from in my mind. Like 200 years\\... Humans have existed for a long time, and things are moving very quickly. 10,000 years ago, nothing changed from year to year, from lifetime to lifetime. Now, things are moving very quickly, and I\\'m like, \\\"Hm.\\\"\n\n**0:29:23.1 Interviewee:** I\\'m happy to talk with you, like, add time to this interview. But I\\'m just curious, what do you think? Do you think it\\'s going to be sooner than 200 years? 10 years? 20 years?\n\n**0:29:37.1 Vael:** I think there\\'s a possibility that it could happen\\... sooner than 200 years. \\[chuckle\\] I\\'m trying to figure out how much of my personal opinion I want to put in here. There\\'s a study, which was like, \\\"Hello, AI researchers. What do you think timelines?\\\" And there\\'s several other kind of models of this, and a lot of the models are earlier than 70 years. So, this is\\... possibilities. I think it could be within the next 30 years or something. But I don\\'t actually know, there\\'s like probability estimates over all of these things. I don\\'t know how these things work.\n\n**0:30:13.2 Interviewee:** Okay. Yeah. Some McKinsey consultant put like little probability estimates.\n\n**0:30:18.9 Vael:** Yeah, a little. It\\'s a little bit more than that. 
There\\'s a lot of surveying of AI researchers and then some people have some more fancy models, I can send you to them afterwards, you can see\\...\n\n**0:30:27.3 Interviewee:** Please, please. Yeah.\n\n**0:30:28.6 Vael:** Great.\n\n**0:30:29.4 Interviewee:** And like I said, I\\'m happy to add time to this, so\\...\n\n**0:30:32.3 Vael:** Awesome. Alright, so my next question is talking about these very\\... highly intelligent systems. So in your mind, maybe like 200 years in the future. And so I guess, yeah, this argument probably feels a little bit more close to home if your timelines are shorter, but like, regardless. So, imagine we\\'re talking about the future. We have our CEO AI, and I\\'m like, \\\"Alright, CEO AI, I wish for you to maximize profits and try not to run out of money or try not to exploit people and try to not have these side effects.\\\" And this currently, obviously, is very technically challenging for many reasons. But one of the reasons I think, is that we\\'re currently not very good at taking human values and human preferences and human goals, and then putting them into mathematical formulations such that AIs can optimize over them. And I worry that AIs in the future will continue to do what we tell them to do and not as we intend them to do. So, we\\'re not that we won\\'t be able to take all of our preferences\\... it will continue to be hard to take our preferences and put them into something that can optimize over. So what do you think of the argument, \\\"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous\\\"?\n\n**0:31:46.6 Interviewee:** \\...Yeah. Well, so\\... Okay. Exactly what they\\'re intended to do, so presumably this means, you start to test generalization of your system and some of holdout set, and it does something funky on that set. Or there\\'s some attack and it does something funky. The question is\\... so presumably it\\'s going to do superhuman\\... well, let\\'s just imagine it\\'s superhuman everywhere else, except for this one set. Now it\\'s a real issue if humans are better than this AI on that one problem that it\\'s being tested on, this one holdout problem\\...\n\n**0:32:32.4 Vael:** I think in this argument, maybe we have something like human-level intelligence in our AI and we\\'re just trying to get it to do any goal at all, and we\\'re like, \\\"Hello, I would like you to do a goal, which is solve cancer.\\\" But we are unable to properly specify what we want and all the side effects and all the things.\n\n**0:32:47.0 Interviewee:** Yeah, yeah, yeah. Yeah. No, that\\'s exactly where I\\'m thinking in. So imagine that you are studying a degenerative disease and you figure out, or you ask your AI, \\\"Can you tell me why ALS happens in focal cells\\\" So your AI gives you an answer, \\\"Well, it\\'s because of some weird transcription that happens on these genes.\\\" So if you have a small molecule that addresses that, then you\\'re going to have normal-acting genes and normal phenotype in those cells. Okay, so byproduct could be, after 10 years, some of what was happening in those genes is natural for aging, and so you introduce some cancer, because what you did to fix neurodegenerative disease is so unnatural that it has that kind of horrible side effect. Now, is that a problem? (Vael: \\\"Yeah, I think\\--\\\") Yes. Yeah, but\\--\n\n**0:33:52.7 Vael:** I think in this particular scenario, maybe the AI would know that it was a problem. 
That like 10 years out, you would not be unhappy. I mean, you would be unhappy.\n\n**0:34:03.6 Interviewee:** You would be unhappy. Yeah, so okay, even if it does know it. Is that\\... So yes, that\\'s an issue, but it\\'s also significantly better than where we are today. And let\\'s say the expected lifespan for ALS is five years, and now you\\'re saying 10 years, but you\\'re going to have this horrible cancer. That\\'s a bad trade-off, bad decision to force somebody to make, but is undoubtedly progress. So I think black boxes.. and that would not be a black box, I believe. Black box would be like when\\... you don\\'t know, you find out about this 10-year cancer 10 years after it passes clinical trials, you know? So this would be like, almost a very good version to AI that can tell you like, \\\"I found a solution, but given the current technology, this is the best we can do.\\\" So I would think that is a fantastic future that we\\...\n\n**0:35:00.8 Vael:** I think that\\'s right.\n\n**0:35:02.0 Interviewee:** The black box version, I think is also a fantastic future because it would still represent meaningful progress that would hopefully continue to be built on by advances in biomedical technologies and AI. So for this specific domain, there\\'s obvious issues with black box. Like, if you\\'re going to use AI to make any decisions about people, whether they can pass a bar for admissions, this or that or the other thing, there\\'s going to be problems there. But for medicine at least, I think black boxes should be embraced.\n\n**0:35:45.2 Vael:** Yeah, I think the scenario you outlined is actually what\\-- I would say that that system is doing exactly what the designers intended them to. Like the designers wanted a solution, and even if we currently don\\'t have best solution, it gave it the best we had. I\\'m like, \\\"That seems successful.\\\" I think an unsuccessful version would be something like, \\\"Alright, I want you to solve this disease.\\\" But you forget to tell it that it shouldn\\'t cause any other side effects that it certainly knows about and just doesn\\'t tell you, and then it makes a whole bunch of things worse. And I think that would be more of an example of an unsuccessful system doing something that didn\\'t optimize exactly what they\\'re intended.\n\n**0:36:19.0 Interviewee:** Yeah, yeah, so that\\'s just a failure. Okay, so here. Let\\'s say it only does that on a certain type of people, racial profiles. Okay, so then the upshot of automating AI is that, sorry, automating science, is that it should make it a lot cheaper to test a far wider range of people. So just like, now the current state of the art for image\\-- object classification is to pre-train on this 300 million image data set, this JFT-300 or whatever. Likewise, you could develop one-billion-person data sets of IPS cells, which are like stem cells. Cool science fiction there that had never happened, but then you would have this huge sample where you would avoid these kinds of horrible side effects.\n\n**0:37:28.7 Vael:** Cool. I still think\\-- yeah. I\\'ll just\\-- (Interviewee: \\\"No, tell me. Tell me, you still think what?\\\") Ah, no, no, I, well. I\\'m trying to think of how I want to order this, and we\\'re running out of time and I do actually need to go afterwards. (Interviewee: \\\"Oh, okay.\\\") So I think I want to get to my next question, which does relate to how I think this previous\\... where I\\'m like, \\\"Hmm\\\" about this previous question. 
So assume we have this powerful AI CEO system, and it is capable of multi-step planning and is modeling other people modeling it, so it has like a model of itself in the world, which I think is probably pretty important for anything to be deployed as a CEO AI, it needs to have that capability. And it\\'s planning for\\... And its goal, people have told it, \\\"Alright, I want you to optimize profit with a bunch of these constraints,\\\" and it\\'s planning for the future. And built into this AI is the idea that we want\\... That it needs to have human approval for making decisions, because that seems like a basic safety feature. And the humans are like, \\\"Alright, I want a one-page memo telling us about this potential decision.\\\" And so the AI is thinking about this memo and it is planning, and it\\'s like, \\\"Hmm, I noticed that if I include some information about what\\'s going to happen, then the humans will shut me down and that will make me less likely to succeed in the goal that\\'s been programmed into me, and that they nominally want. So why don\\'t I just lie a little bit, or not even a lie, just omit some information on this one-page memo, such that I\\'m more likely to succeed in the plan.\\\" And so this is not like a story about an AI being programmed with something like self-preservation, but is a story about an agent trying to optimize any goal and then\\...\n\n**0:38:55.2 Interviewee:** Like shortcuts, yeah.\n\n**0:38:56.5 Vael:** And then having an instrumental incentive to preserve itself. And this is sort of an example that I think is paired with the previous question I was asking, where you\\'ve built an AI, but you haven\\'t fully aligned it with human values, so it isn\\'t doing exactly what the humans want, it\\'s doing what the humans told it to do. And so, what do you think of the argument, \\\"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous.\\\"\n\n**0:39:38.5 Interviewee:** Okay, so if we are going to be using supervision to train these systems, what you\\'re describing is a shortcut. So, highly intelligent systems exploiting a shortcut like you\\'re talking about, which would be like, \\\"Let\\'s achieve a high score by doing the wrong thing\\...\\\"\n\n**0:40:00.4 Vael:** Yeah, yep.\n\n**0:40:00.5 Interviewee:** Can be corrected by having the right data set. Not saying that it would be this simple, input-output, learn the mapping between the two, but we\\'ll just imagine that it is. If you have any kind of low-level bias in your data set, or high level bias, if this is the case, neural networks will learn to find that bias. So it\\'s a matter of having a broad enough data set where that just basically, statistically, what you\\'re talking about would not exist.\n\n**0:40:35.4 Vael:** Interesting. I feel like\\--\n\n**0:40:37.3 Interviewee:** I don\\'t think that would be a problem if you can design the system appropriately.\n\n**0:40:41.2 Vael:** Yes, I totally agree this would not be a problem if you can design the system appropriately.\n\n**0:40:44.9 Interviewee:** The data set appropriately.\n\n**0:40:46.6 Vael:** Ah, the data set appropriately. 
Interestingly, on one of the examples that I know of where we have, like you set the AI, it\\'s like a reinforcement learning agent, and you\\'re like, \\\"Reinforcement learning agent, optimize the number of points.\\\" And then instead of trying to win the race, which is the thing you actually wanna do, it just like finds this weird local maximum.\n\n**0:41:02.8 Interviewee:** Yeah, so for RL, that\\'s like the ultimate example of shortcuts. Because there, what you have are extremely\\... So, for video games, if you\\'re talking about a race, you have extremely limited sensor data, sorry, low dimensional sensor data. And you have few ways of controlling, manipulating that sensor data. So, let\\'s say there\\'s two local minima that can be found via optimization, and one is, \\\"go for red, because red wins,\\\" and the other is, \\\"Drive your car to avoid these obstacles and pass the finish line,\\\" which just happens to be red. Of course, the shortest solution will be the one chosen by the model. So I agree, shortcuts are ubiquitous, and of course, they will become more and more advanced as we move into these other domains. But I think what you\\'re describing is just a problem with shortcuts. And so it becomes a question of, can you induce enough biases in your model architecture to ignore those shortcuts? Can you design data sets such that those shortcuts wash out via noise? Or can you have human intervention in the training loop that says, \\\"Keep working, keep working. I don\\'t accept this answer?\\\"\n\n**0:42:30.3 Vael:** Great, yeah. Cool. Have you heard of AI alignment?\n\n**0:42:37.1 Interviewee:** Yeah, yeah, I have. Well, I\\'ve seen OpenAI talking about that a ton these days.\n\n**0:42:42.3 Vael:** Great. Can you explain the term for me?\n\n**0:42:46.5 Interviewee:** So this is like, you have some belief about how a system should work, and the model is going to do what it does, and so alignment means you\\'re going to bring the system into alignment with your belief about how it should work. It\\'s essentially what you\\'re talking about with the shortcut problem. So, that\\'s it, yeah.\n\n**0:43:09.4 Vael:** Great, thanks. Yeah, I think I mostly think the problem of aligning systems with humans is going to be\\... One of my central thing is that I think it will be more difficult than it seems like you think, which you were like, \\\"Oh well, we can do it with having better data sets or we can do it with denoising things,\\\" and I\\'m like, I don\\'t actually know if\\...\n\n**0:43:31.1 Interviewee:** I don\\'t know about denoising\\-- I don\\'t mean to make this sound trivial, because it certainly is not. But to give you an example, MNIST is so confounded with low-level cues. You don\\'t need to know \\\"six\\\" to recognize six, you just need to know this contour, right? Is that\\... only in six do you have the top of the six and then a continuous contour. So, that\\'s what I mean. So if you can design a data set where those don\\'t exist, then you\\'re golden. But usually\\-- In the real world, that doesn\\'t happen that much because we have these long-tailed distributions of stuff. So then you have to induce biases in your architecture, and this comes back to human vision, sorry, human intelligence. 
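A minimal, self-contained sketch of the shortcut problem being discussed here, on made-up data: a toy "learner" that simply picks whichever single input cue best matches the training labels will latch onto a cue that happens to be spuriously perfect in training (the analogue of "go for red" or the tell-tale MNIST contour) and collapse to chance once that correlation is removed at test time. All column names and probabilities are invented for illustration.

```python
import numpy as np

# Toy demonstration of shortcut learning on made-up data. Column 0 is a
# weak "real" cue (right 75% of the time); column 1 is a shortcut cue that
# is perfectly correlated with the label in training but uninformative at
# test time. A learner that just picks the best single cue on the training
# data chooses the shortcut and collapses to chance accuracy at test time.

rng = np.random.default_rng(0)

def make_split(n, shortcut_correlated):
    y = rng.integers(0, 2, size=n)
    real_cue = np.where(rng.random(n) < 0.75, y, 1 - y)
    shortcut = y.copy() if shortcut_correlated else rng.integers(0, 2, size=n)
    return np.stack([real_cue, shortcut], axis=1), y

def fit_single_cue(X, y):
    """'Training': return the index of the column that best matches the labels."""
    return int(np.argmax([(X[:, j] == y).mean() for j in range(X.shape[1])]))

X_train, y_train = make_split(10_000, shortcut_correlated=True)
X_test, y_test = make_split(10_000, shortcut_correlated=False)

chosen = fit_single_cue(X_train, y_train)                          # picks the shortcut (column 1)
print("chosen cue:", chosen)
print("train accuracy:", (X_train[:, chosen] == y_train).mean())   # ~1.0
print("test accuracy :", (X_test[:, chosen] == y_test).mean())     # ~0.5
```

The same sketch also shows the proposed fixes in miniature: re-balancing the training split so the shortcut is no longer perfectly predictive changes which cue gets picked.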
It\'s alignment with human decision-making, because we have all these biases through development through our natural\-- our genomes, through neural development that make us able to interact with the world in such a way where we don\'t just go for the red thing, where we\'re not vulnerable to adversarial examples. So I don\'t mean to trivialize it, I only mean to say computationally, the problem is biases within the data set, and when you use gradient descent, if that is the learning algorithm that you used to train the model that you\'re talking about, then that\'s what you\'re going to be fighting against.\n\n**0:45:06.0 Vael:** Cool. Yep, makes sense.\n\n**0:45:07.6 Interviewee:** Yeah.\n\n**0:45:08.3 Vael:** Alright. I think my last question is just, how has this interview been for you and have you changed your mind on anything during it?\n\n**0:45:19.1 Interviewee:** Yeah, well, it\'s great. I never thought of an AI CEO. I think that\'s super fascinating. Yeah, it was super fun. Yeah, it was great.\n\n**0:45:33.2 Vael:** Great. Wonderful. I think I\'m going to call it now, just because we both have places to be. And I will send along the money, and then I\'ll also send along some resources since you said you were curious about my opinions on timelines.\n\n**0:45:47.6 Interviewee:** I definitely am. Thanks so much for your time, I really appreciate it.\n\n**0:45:51.1 Vael:** Yeah, thanks so much for yours.\n\n**0:45:53.9 Interviewee:** Okay. All right, take care. Have a good night.\n\n**0:45:55.7 Vael:** Bye.\n\n# Interview with AI Researchers individuallyselected_7ujun by Vael Gates\n\n**Interview with 7ujun, on 3/18/22**\n\n**0:00:03.4 Vael:** Alright. So jumping right in, my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:11.1 Interviewee:** \[Interviewee describes working on natural language processing research\]\n\n**0:01:20.4 Vael:** Indeed. Thanks. And then so my next question is, what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:01:36.4 Interviewee:** The biggest risks are that there is a lot of people who don\'t really\... Who\... The biggest risk in AI is that it\'s a field with a lot of money and attention and social power right now, and there are a lot of people who have positions of power who\... don\'t seriously consider what they do and the impacts of what they do. And AI models are already being used to violate people\'s human rights in the United States and in other countries, to commit crimes, and that\'s bad. \[chuckle\] Yeah, there\'s\... One of the worst applications is that there has been a revival in phrenology recently, so there are a lot of police departments that they have gotten really into the idea that they can use AI analysis of video cameras to determine who is \*going\* to commit crimes. And this, shockingly, results in over-surveillance of minority populations and violation of human rights left and right, and it\'s a huge clusterfuck and the police don\'t care.\n\n\[pause\]\n\n**0:03:10.2 Vael:** Awesome. 
Well, not awesome, but. So that question was, what are you most excited about in AI and what are you most worried about; biggest benefits and risks?\n\n**0:03:19.8 Interviewee:** Gotcha. What am I most excited about? I\'m excited about the opportunity to interact with computers via natural language. So one of the really interesting things about some recent research is that we\'ve been able to move away from traditional coding interfaces for certain tasks due to the way we\'ve been able to automate things. Probably the most prominent example of this is that there is a burgeoning online AI-generated art community, where they take pre-trained models and they write English sentences and they provide the\... What the model does is it takes an English sentence input and draws a picture, and it\'s shockingly good and has an understanding of styles. If you want it to be\... If you say in the style of Van Gogh, or high contrast, or low-poly render. You can induce visual effects by using language like that, and I think that\'s phenomenally cool, and it\'s gotten a lot of people\... There\'s a lot of people who\'ve gotten into using this kind of technology who otherwise wouldn\'t have\... \[who it\] really wouldn\'t have been accessible to because of their lack of coding knowledge and understanding of AI. They couldn\'t have developed the algorithms that run on the backend for this on their own. Recently, yesterday I saw another blog post about how they were able to develop simple video games using GPT-3. It just wrote the code for them. I think that the ability to write a text description of something that you\'re interested in, which is a medium that everyone can relate to and interact with, or that most people can relate to and interact with far more than regular programming, for example, is really powerful and really awesome.\n\n**0:05:30.8 Vael:** Yeah, I see a lot of themes of accessibility in all of these risks and benefits and work. I thought you were going to bring up Codex but yes, art generation. It\'s very cool.\n\n**0:05:41.7 Interviewee:** Oh yes.\n\n**0:05:43.8 Vael:** Yeah. So, thinking about a future AI, putting on a science fiction forecasting hat, say we\'re 50 plus years into the future. So at least 50 years in the future, what does that future look like?\n\n**0:05:56.3 Interviewee:** I have absolutely no idea, and anyone who says otherwise is wrong.\n\n**0:06:01.3 Vael:** Okay. Do you think AI will be important in it, or probably not?\n\n\[pause\]\n\n**0:06:16.6 Interviewee:** I think that\'s more of a sociological question than it is a technical question. The class of problems and the class of algorithms that are considered AI has changed dramatically over the past 50 years, and entire books have been written about this topic. At a basic level, my hesitancy is that I don\'t know what people will consider AI in 50 years. There\'s a very real possibility in my mind that GPT-3 will no longer be considered an AI.\n\n**0:06:48.5 Vael:** What will be considered?\n\n**0:06:53.3 Interviewee:** A text generation algorithm? A good example of this is simple game-playing agents, so you can write an algorithm that can play Tic-Tac-Toe perfectly or can play Connect Four or Checkers really well. Like, will beat any human. And a lot of people don\'t call those AIs anymore because they don\'t\... \'Cause they\'re search algorithm-based. They apply a lot of computational power to look through a space of possible events, and they find the best event. 
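As a concrete illustration of the search-based game players just described, here is a minimal minimax sketch for Tic-Tac-Toe: it plays "perfectly" purely by enumerating every possible continuation and picking the move with the best outcome, with no learned strategy involved. It is a generic textbook construction rather than any specific historical program.

```python
# Minimal sketch of a search-based game player: exhaustive minimax for
# Tic-Tac-Toe. The board is a 9-character string of 'X', 'O', and ' '.
# There is no learning here, only enumeration of every continuation.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` for X (+1 win, 0 draw, -1 loss), with `player` to move."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    other = "O" if player == "X" else "X"
    values = [minimax(board[:i] + player + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == " "]
    return max(values) if player == "X" else min(values)

def best_move(board, player):
    """Pick the empty square whose minimax value is best for `player`."""
    other = "O" if player == "X" else "X"
    moves = [i for i, cell in enumerate(board) if cell == " "]
    value = lambda i: minimax(board[:i] + player + board[i + 1:], other)
    return max(moves, key=value) if player == "X" else min(moves, key=value)

if __name__ == "__main__":
    # X to move on a mid-game position; pure enumeration finds a forcing move.
    print(best_move("X O  O  X", "X"))
```

The contrast drawn next, with systems like AlphaGo that players do study for strategic insight, is exactly the contrast between this kind of brute enumeration and learned evaluation.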
And they don\'t really\... The argument is that they don\'t reason, or they don\'t know anything about strategy. And this is often to contrast it with more recent AIs for playing games like the AlphaGo, AlphaZero models that DeepMind has produced where top level chess players certainly get beat by these algorithms just like they get beat by\... have been beaten by algorithms for 20 years, but for kind of the first time people are able to study and interpret these algorithms and learn and improve their play as a human. Which is really cool. But that kind of dichotomy is often used to dismiss or remove the label of \"AI\" from prior work and stuff that had been considered AI at that time. And I could certainly see that happening with GPT-3, for example, because it\'s really, really terrible at reasoning. And if in 50 years we have chatbots that can answer knowledge-based questions the way, say, a sixth grader could, and reason about basic word problems and stuff, pass some kind of reasoning examination, then I could easily see people no longer considering GPT-3 an AI, because it\'s not intelligent, it\'s just babbling and making up words.\n\n**0:08:54.4 Vael:** Right. Cool, I\'m now going to go on a spiel. So people talk about the promise of AI, by which they mean a bunch of different things, but the thing that I\'m most thinking about is a very general system with the capabilities to replace all current day human jobs, so like a CEO AI or a scientist AI, for example. Whether or not we choose to replace human jobs is a different question, but I usually think about this in the frame of\... in 2012 we had AlexNet, deep learning revolution. You know, here we are 10 years later, we have GPT-3, like we\'re saying, which has some weirdly emergent capabilities: new language translation, and coding and some math and stuff, but not very well. And then we have a bunch of investment poured into this right now, so lots of young people, lots of money, lots of compute, lots of\... and if we have algorithmic improvements at the same rate, and hardware improvements at the same rate, like optical or quantum, then maybe we reach very general systems or maybe we hit a ceiling and need to do a paradigm shift. But my general question is, regardless of how we get there, do you think we\'ll ever have these very general AI systems, like a CEO or a scientist AI? And if so, when?\n\n\[pause\]\n\n**0:10:07.5 Vael:** \...Oh, you\'re muted. I think. Oh, no. Oh, no. Can\'t hear you. I don\'t think anything\'s changed on my end. Okay.\n\n**0:10:25.2 Interviewee:** Hello?\n\n**0:10:26.2 Vael:** Yeah. Cool.\n\n**0:10:27.2 Interviewee:** Okay. I think my headphones may have done something wacky. I don\'t know, I would be extremely surprised if the answer\... like I know that there are people who say that the answer is less than 10 years, and I think that\'s absurd. I would be surprised if the answer is less than 50 years, and I don\'t feel particularly confident that that will ever happen.\n\n**0:10:47.1 Vael:** Okay. So it may or may not happen. Regardless, it\'s going to be longer than 50 years. Is that right?\n\n**0:10:54.1 Interviewee:** Hello?\n\n**0:10:55.2 Vael:** Hello, hello, hello?\n\n**0:11:00.7 Interviewee:** Yes.\n\n**0:11:00.8 Vael:** Okay, cool. So my question was like, all right, you don\'t know whether or not it will happen. Regardless, it will take longer than 50 years. Is that a summary?\n\n**0:11:07.3 Interviewee:** Mm-hmm.\n\n**0:11:09.0 Vael:** Yeah. Okay, cool. 
So one of my question is like, why wouldn\\'t it eventually happen? I kind of like believe in the power of human ingenuity, and people following economic incentives such that\\... These things are just really quite useful, or systems that can do human tasks are generally quite useful, and so I sort of think we\\'ll get there eventually, unless we have a catastrophe in some way. What do you think about that?\n\n\\[pause\\]\n\n**0:11:45.0 Interviewee:** It\\'s going to be extremely difficult to develop something that is sufficiently reliable and has an understanding of the world that is sufficiently grounded in the actual world without doing some kind of mimicking of human experiential learning. So I\\'m thinking here reinforcement learning in robots that actually move around the world.\n\n**0:12:13.0 Vael:** Yeah.\n\n**0:12:13.9 Interviewee:** I think without something like that, it\\'s going to be extremely difficult to tether the knowledge and the symbolic manipulation power that the AIs have to the actual contents of the world.\n\n**0:12:29.5 Vael:** Yep.\n\n**0:12:29.9 Interviewee:** And there are a lot of extremely, extremely difficult challenges in making that happen. Right now, cutting-edge RL techniques are many orders of magnitude\\... Require many orders of magnitude too much data to really train in this fashion. RL is most successful when it\\'s being used in like a chess context, where you\\'re playing against yourself, and you can do this in parallel, and that you can\\... When you can do this over and over and over again. And if you think about an actual robot crossing the street, if an attempt takes 10 seconds, and I think especially early in the learning process, that\\'s an unreasonably small amount of time to estimate. But if an attempt takes 10 seconds and\\... Let me pull out the calculator for a second.\n\n**0:13:28.1 Interviewee:** And you need one million attempts\\... then that would take you\\... about a third of a year to do. And I think that both of those numbers are wrong. And I think the number of attempts is orders of magnitude too small. There\\'s very, very little that we can learn via reinforcement learning in a mere one million attempts. And this is just one task. If you want something that can actually move around and interact with the world, even if you\\'re using these highly optimistic, currently impractical estimates, you can\\'t take four or five months to learn how to cross the street. If that\\'s your paradigm, you\\'re never going to be able to build\\-- you\\'re never going to get to stuff like managing a company. \\[chuckle\\]\n\n**0:14:38.0 Vael:** Yeah. That makes sense. Yeah, I think\\-- I think\\... this makes sense to me, and I\\'m like, \\\"Wow, our current state systems are really not very good.\\\" But also, I think I often view this from a lens of pretty far back. So I\\'m like, 10000 years ago, humans were going around and the world was basically the same from one generation to the next, and you could expect things to be similar. And now we\\'ve had the agriculture revolution and industrial revolution, in the past couple of hundred years, we have done\\... We\\'re kind of on an exponential curve in terms of GDP, for example, and I would expect that\\... And we\\'ve only been working on AI for, I don\\'t know, less than 100 years. And we have\\... We now have something like GPT-3, which sounds sort of reasonable, if you\\'re just looking at it, and of course it\\'s not very\\... 
It\\'s not, like, grounded, which is a problem.\n\n**0:15:28.3 Vael:** But I sort of just expect that if we spend another\\... I don\\'t know, you could spend hundreds of years working on this thing, and if it continues to be economically incentivized\\... This is kind of how human progress works. I just kind of expect us to advance up the tech tree to solve the software improvements, to solve the hardware improvements. Or new paradigms maybe. Even at the worst case, I guess we advance enough in neuroscience and scanning technologies to just scan human brains and make embodied agents that way or something. I just expect us to get to some capabilities like this eventually.\n\n**0:16:03.5 Interviewee:** In my mind, really the fact that there\\'s only so fast we can move around in the real world is a huge constraint. Even if you can learn extremely complicated and abstract things embedded in the real world as an actual robot, take my crossing street example, even if you could\\... doing an attempt at\\... So even if you could learn pretty much any task in a thousand iterations, some tasks take a very long time to develop. Humans don\\'t learn to be CEOs of companies very quickly, and it doesn\\'t seem like it\\'s very shortcut-able to me. I also don\\'t think CEOs of companies is perhaps the best example, but let\\'s say\\...\n\n**0:16:58.1 Interviewee:** Let\\'s say you wanna train a robot to operate a McDonalds. That\\'s a very large amount of destroyed meat that you need to buy, it\\'s a very large amount of time and materials to even set up the apparatus in which you could actually train a robot to perform that task. And you\\'re talking about economic incentives, where is the economic incentive to burning million patties to get to the point where your robot can flip one over successfully? When we\\'re talking about moving around and interacting in the real world, those interactions have costs that are financial in addition to being time-consuming. If we want to train an AI to\\... Via reinforcement learning technique, which is certainly a caveat that have to add to a lot of what I\\'m saying. But if we wanna train a robot to drive a car via a reinforcement learning-like technique, at some point you need to put it behind the wheel of a car and let it drive 100, 1000 cars. And you\\'re going to destroy a lot of cars doing that, and you\\'re probably going to kill people. So that\\'s a very large disincentivizing cost.\n\n**0:18:30.1 Vael:** Okay. Alright. So the idea is like\\... if we\\'re doing robotics, then we need to\\... and the training paradigm is not, like, humans where you can kind of sit them down, and\\... Humans don\\'t actually crash cars, usually\\... I mean, sometimes. Teenage humans crash cars sometimes. But in their training process, they don\\'t usually require that many trials to learn, and they can do so kind of quickly. So I\\'m like, I don\\'t know. Do we expect algorithms at some point to require much less training data than current ones do? Because current ones require a huge amount of training data, but I kind of imagine we\\'ll get more efficient systems as\\... More efficient per data as we go along.\n\n**0:19:24.9 Interviewee:** Are you saying that you think that you can sit down and explain to someone how to drive a car and they can drive it without crashing?\n\n**0:19:30.9 Vael:** I think, that.. We have\\... I think that if we take a human, and I\\'m like, \\\"All right, human, I\\'m going to\\... I want you to learn how to drive this car. 
And I\\'m going to sit next to you. And I\\'m going to tell you what to do and what not to do, and you\\'re going to drive it.\\\" I think they can, indeed, after practicing some period of time, which for humans, it\\'s like hours. It\\'s on the order of tens of hours, then they can basically sit there and not crash a car. And I kind of expect similar paradigms eventually for AI systems.\n\n**0:20:04.4 Interviewee:** That seems extremely non scalable.\n\n**0:20:11.8 Vael:** Uh\\... Okay. You\\'re like, look, if it takes tens of hours to train every AI system?\n\n**0:20:17.6 Interviewee:** No, I\\'m thinking mostly about the human sitting next to them giving them constant feedback actually.\n\n**0:20:28.5 Vael:** But the nice thing about AI is you can copy them as soon as one person spends that many hours. You can just take that, takes the thing that\\'s\\-- like its new neural net, pass it onto the next one.\n\n\\[pause\\]\n\n**0:20:48.0 Interviewee:** \\...Maybe.\n\n**0:20:50.1 Vael:** And I don\\'t think this has to happen anytime soon. But I do think eventually given that\\... I don\\'t know, I can\\'t imagine humans being like, \\\"All right, cool. We\\'re efficient enough. Let\\'s just stop now. We\\'ve got like GPT-3. Seems good. Or GPT-5, let\\'s just stop here.\\\"\n\n**0:21:08.4 Interviewee:** So nobody has ever taken two different robots, trained one of them in the real world to perform a task, and then transferred the algorithm over and allowed the other one to perform the same task as successfully, as far as I\\'m aware.\n\n**0:21:23.6 Vael:** Yep. I totally believe today\\'s systems are not very good.\n\n**0:21:31.2 Interviewee:** It is, I think, I think that.. Anything we can really say about this is inherently extremely speculative. I\\'m certainly not saying it could never happen. I\\'m just\\... Sorry. I\\'m certainly not saying it can\\'t happen. I\\'m just saying it could never happen. There we go.\n\n**0:21:45.0 Vael:** Okay. All right. Okay. That makes sense. How likely do you think it is that we\\'ll get very capable systems sometime ever in the future?\n\n**0:21:53.5 Interviewee:** I have no idea.\n\n**0:21:56.5 Vael:** \\...Well, you have some idea because you know that it\\... Well, okay. You said that it can\\'t\\... You\\'re like, it\\'s higher than zero.\n\n**0:22:04.9 Interviewee:** Yes.\n\n**0:22:05.9 Vael:** Yes. And you don\\'t sound like you think it definitely will happen, so it\\'s less than 100.\n\n**0:22:12.6 Interviewee:** Yes.\n\n**0:22:13.8 Vael:** Okay. And it\\'s anywhere in that scale? I mean, slightly higher than zero and slightly less than 100.\n\n**0:22:22.5 Interviewee:** That sounds like an accurate description of my current level of uncertainty.\n\n**0:22:26.8 Vael:** Interesting. Man. Is it.. Hard\\-- I mean like how\\-- You do have predictions of the future though, for the near future, presumably, and then it just like tapers off?\n\n**0:22:36.7 Interviewee:** Mm-hmm.\n\n**0:22:38.2 Vael:** Okay. And anything\\... And you say definitely not 10 years, but after. And then like 50 years. So like 50 years out, you\\'re\\... It starts going from approximately zero to approximately 100?\n\n**0:22:56.6 Interviewee:** Um\\... I think it is unlikely to happen in the next 50 years.\n\n**0:23:00.0 Vael:** Okay.\n\n**0:23:05.7 Interviewee:** I would assign a less than 25% probability to that. But I don\\'t think I can deduce anything about my expectation at 100 years based on that information. Other than it\\... yeah.\n\n**0:23:21.6 Vael:** Great. 
Thanks. All right. I think that\'s good enough for me to move on to my next question.\n\n**0:23:28.6 Vael:** So my next question is thinking about these highly intelligent systems in general, which we\'re positing maybe will happen sometime. And so say we have this sort of CEO AI through, I don\'t know, maybe hundreds of years in the future or whatever. I\'m like, \"Alright, CEO AI. I want you to maximize profits and try not to run out of money and try not to exploit people and try to avoid side effects.\" And currently, obviously this would be very technically challenging for many reasons. But one of the reasons is that we currently don\'t have a good way of taking human values and preferences and goals and stuff, and putting them in mathematical formulations that AI can optimize over. And I worry that this actually will continue to be a problem in the future as well. Maybe even after we solve the technical problem of trying to get an AI that is at all capable. So what do you think of the argument, \"highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\"\n\n**0:24:26.0 Interviewee:** I mean I think that the statement that highly intelligent systems will fail to optimize what their designers intend them to is a slam dunk. Both human children and current AIs do not do that, so I don\'t see any particular reason to think we will\-- that something that\'s like, in some sense in between those, we\'ll have a whole lot more success with.\n\n**0:24:50.4 Vael:** Interesting. Okay, cool.\n\n**0:24:58.5 Interviewee:** Yeah. Did you turn out exactly the way your parents wanted you to? \[laughter\] I didn\'t. I think the overwhelming majority of people don\'t, and that\'s not a flaw on their part. But\... yeah.\n\n**0:25:15.2 Vael:** All right. Yep. Yeah, certainly there\'s some alignment problems with parents and children. Within human-humans even. And then I expect\-- I kind of expect the human-AI one to be even worse? My intuition is that if you\'re having an AI that\'s optimizing over reality in some sense, that it\'s going to end up in alien parts of the space\-- alien to humans, because it\'s just optimizing over a really large space. Whereas humans trying to align humans have at least the same kind of evolutionary prior on each other. I don\'t know. Do you also share that?\n\n**0:25:47.7 Interviewee:** I\'m not sure. I think that you\'re going to have to get a lot of implicit alignment to end up in a place where you\'re able to train these things to be so intelligent and competent in the first place.\n\n**0:26:07.6 Vael:** That makes sense to me. Kind of like\--\n\n**0:26:09.8 Interviewee:** Yeah. What percentage of the way that gets you there is a very important and totally unknown question. But I don\'t think that the value system of one of these systems is going to be particularly comparable to like, model-less RL, where they\'re trying to optimize over everything.\n\n**0:26:33.4 Vael:** Could you break that one down for me?\n\n**0:26:38.2 Interviewee:** In what way?\n\n**0:26:41.6 Vael:** I didn\'t\... I don\'t quite understand the statement. So the value system will not be the same that it is in model-less RL. I don\'t have a super good idea of what model-less RL is and how that compares to human systems or human-machine\--\n\n**0:26:56.1 Interviewee:** Okay. 
So model-less RL is a reinforcement learning paradigm in which you are basically trying to learn everything from the ground up, via pure interaction.\n\n**0:27:06.2 Vael:** Okay.\n\n**0:27:07.1 Interviewee:** So if you\\'re thinking of a game-playing agent, this is typically an agent that you\\'re not even programming with the rules. It learns what the rules are because it walks into a wall and finds that it can\\'t walk further in that direction. That\\'s the example in my head of something that\\'s optimizing over all possible outcomes currently. \\...Sorry, I lost the train of the question.\n\n**0:27:39.7 Vael:** I was like: how does that relate to human value systems?\n\n**0:27:46.4 Interviewee:** I think that the work that we will have to do to train something to move around and interact in the world and perform these highly subjective and highly complex tasks that require close grounding in the facts of the world will implicitly narrow down the search space. Significantly.\n\n**0:28:10.1 Vael:** Okay. Yeah\\--\n\n**0:28:11.6 Interviewee:** I do think that there\\'s a\\... Yeah.\n\n\\[pause\\]\n\n**0:28:25.8 Interviewee:** Yeah.\n\n**0:28:26.3 Vael:** Yeah. Yeah, I often think of this in terms of like, you know how the recommender systems are pretty close to what humans want, but they\\'re also maybe addictive and kind of bad and optimizing for something a little bit different than human fulfillment or something. People weren\\'t trying to maximize them for human fulfillment per se. But yeah, I like\\-- like that sort-of off alignment is often something I think about. Alright.\n\nSo this next question is back to the CEO AI, so imagine that the CEO AI is good at multi-step planning and it has a model of itself in the world, so it\\'s modeling other people modeling it, \\'cause that seems pretty important in order for it to do anything. And it\\'s making its plans for the future, and it notices that some of its plans fail because the humans shut it down. And it\\'s built into this AI that it needs human approval for stuff \\'cause it seems like a basic safety mechanism, and the humans are asking for a one-page memo to describe its decision.\n\n**0:29:21.4 Vael:** So it writes this one-page memo, and it leaves out some information because that would reduce the likelihood of the human shutting it down, which would increase the likelihood of it being able to achieve the goal, which is like, profit plus the other constraints that I mentioned. So in this case, we\\'re not building in self-preservation to the AI itself, it\\'s just, self-preservation is arising as a function\\... \\[as a\\] instrumental incentive of an agent trying to optimize any sort of goal. So what do you think of the argument, \\\"highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\\\"\n\n**0:30:00.4 Interviewee:** It seems likely correct.\n\n**0:30:02.3 Vael:** Interesting. Okay. \\[chuckle\\] \\...I\\'m not excited about that answer, \\'cause other instrumental incentives are acquiring resources and power and influence, and then also not wanting\\... Having a system that\\'s optimizing against humans seems like a very bad idea in general, which makes me worried about the future of AI. 
If the thing that we\'re going to build is eventually by default, maybe not going to want to be corrected by humans if we get the optimization function wrong the first time.\n\n**0:30:32.5 Interviewee:** Yeah.\n\n**0:30:36.2 Vael:** \[laughter\] Okay. Have you thought about this one before?\n\n**0:30:38.6 Interviewee:** Yes.\n\n**0:30:39.4 Vael:** Yeah. Cool. Have you heard of AI alignment?\n\n**0:30:42.2 Interviewee:** Yes.\n\n**0:30:43.2 Vael:** Yeah. And AI safety and all the rest of it?\n\n**0:30:45.6 Interviewee:** Mm-hmm.\n\n**0:30:46.0 Vael:** Yeah. How do you orient towards it?\n\n**0:30:49.8 Interviewee:** I think that most people who work in it are silly. And don\'t take the right thing seriously.\n\n**0:30:57.9 Vael:** Mm. What should they take seriously? And what don\'t they?\n\n**0:31:02.9 Interviewee:** I know a lot of people who are afraid that future research along the lines of GPT-3 is going to rapidly and unexpectedly produce human-like intelligence in artificial systems. I would even say that that\'s a common, if not widespread attitude. There are pretty basic kinds of experiments that we\'ll need to do to test the plausibility of this hypothesis, that nobody seems really interested in doing.\n\n**0:31:48.5 Vael:** Hm. Seems like someone should do this?\n\n**0:31:51.2 Interviewee:** Yeah. When I talk to most people who describe themselves as alignment researchers, and I try to put myself in their shoes in terms of beliefs about how agents work and what the future is likely to look like, the things I see myself experimenting with and working on are things that nobody is working on. And that really confuses me. I don\'t understand\... So here\'s an interesting question: how much experience do you have actually using GPT-3 or a similar system?\n\n**0:32:31.9 Vael:** Yeah, not hardly at all. None. So I\'ve seen examples, but haven\'t interacted with it myself.\n\n**0:32:38.4 Interviewee:** Okay, um\... Would you like to?\n\n**0:32:47.2 Vael:** Uh\... Sure? I mean, I guess I\'ve messed around with the Dungeon AI one, but\... Does seem interesting.\n\n**0:32:57.1 Interviewee:** Hm. \...So my experience is that\... A widespread observation is that they don\'t seem to have a worldview or a perspective that they\-- are expressing words, so much as many of them. Some people like to use the term multiversal. It\'s\... kind of the way I think about it is that there are many people inside of GPT-3 and each time you talk to it, a different one potentially can talk to you.\n\n**0:33:42.1 Vael:** Yep.\n\n**0:33:43.8 Interviewee:** This seems to be an inherent property of the way that the model was trained and the way that all language models are currently being trained. So a pressingly important question is, to what extent does this interfere with\... Let\'s, to make language easier, call it one of its personalities. Let\'s say one of its personalities wants to do something in the world: kill all the humans or even something mundane. To what extent does the fact that it\'s not the only personality interfere with its ability to create and execute plans?\n\n**0:34:28.2 Vael:** \...Ah\... Current systems seem to not\... Well, okay. It depends on how we\'re training it, because GPT-3 is confusing. But AlphaGo seems to kinda just be one thing rather than a bunch of things in it. And so it doesn\'t seem like it has conflicts there?\n\n**0:34:46.1 Interviewee:** I would generally agree with that.\n\n**0:34:48.2 Vael:** Okay. 
But we\'re talking about scaling up natural language systems and they don\'t\... And they don\'t\... They have lots of different types of responses and don\'t\... on one personality. Uh\... Well, it seems like you could train it on one personality if you wanted to, right? If you had enough data for that, which we don\'t. But if we did. And then I wouldn\'t really worry about it having different agents in it.\n\n**0:35:17.6 Interviewee:** That\'s a very, very, very, very, very, very, very, very large amount of text.\n\n**0:35:23.8 Vael:** Yeah. \[Interviewee laughter\]\n\n**0:35:25.0 Vael:** Yeah, yeah that\'s right!\n\n**0:35:26.5 Interviewee:** Do you any\-- do you have any scope of understanding for how much text that is?\n\n**0:35:32.8 Vael:** Yeah, I\'m actually thinking something like pre-training on the whole internet, and then post-train on a single person, which already doesn\'t work that well. And so then it wouldn\'t actually help if that pre-training procedure is still on\... Still on the whole thing. Um, okay\--\n\n**0:35:48.4 Interviewee:** So a page of written text is about 2 kilobytes in English. And these models are typically trained on between one and five terabytes, so no human has come anywhere close to putting out five billion pages of total text.\n\n**0:36:13.7 Vael:** Yeah.\n\n**0:36:18.1 Interviewee:** It\'s so astronomically far beyond what any human would actually ever write, that it doesn\'t seem very plausible unless something fundamentally changes about the way humans live their lives.\n\n**0:36:30.8 Vael:** Or about different training procedures. But like\--\n\n**0:36:33.8 Interviewee:** Yeah, yeah, yeah, yeah. But like the idea that one could do something similar to current pre-training procedures that is meaningfully restricted to even, say, 100 people that have been pre-screened for being similar to each other. 100 people are also not going to put out five billion pages of text.\n\n**0:36:49.6 Vael:** Yeah.\n\n**0:36:51.6 Interviewee:** \[laughter\] It\'s just so much data\...\n\n**0:36:54.1 Vael:** Yeah. Yeah, I don\'t know how efficient systems will be in the future, so\... Yeah. Let\'s take it as\... Yeah, sure. But they\'re going to have multiple personalities in them, in that they are trained on the internet.\n\n**0:37:05.1 Interviewee:** Mm-hmm.\n\n**0:37:06.1 Vael:** And then you\'re like, \"Okay. Does that mean that\... \" And then there\'s a frame here that is being taken where we have different\... Something like arguing? Or like different agents inside the same agent or something? And so then you\'re like, \"Well, has anyone considered that? Have we tested something like that?\"\n\n**0:37:26.9 Interviewee:** Yeah, that\'s kind of close to what I\'m saying.\n\n**0:37:29.6 Vael:** Hmm.\n\n**0:37:31.9 Interviewee:** So, to take your CEO example. In order for it to be successful, it needs to\... at no point\... There\'s certain information it needs to consistently hide from humans. Which means that every time it goes to generate text, it needs to choose to not share that information.\n\n**0:37:47.1 Vael:** Yeah.\n\n**0:37:48.1 Interviewee:** So if the system looks even vaguely like GPT-3, it seems to me like it will not be able to always act with that\... generate text with that plan. And so there\'s a significant risk in it compromising its own ability to keep the information hidden.\n\n**0:38:13.7 Vael:** Okay.\n\n**0:38:13.7 Interviewee:** Alternatively, even if it\'s\... 
That\\'s a more direct way that they can interfere with each other. But even less directly, if I have somewhere I want to go and I go drive the car for a day, and then you have somewhere you want to go and you drive the same car for a day, and we trade off control, there are things I\\'m going to want to do that I have trouble doing because I only control the body and the car at the end of the day.\n\n**0:38:40.8 Vael:** Quick question. Are you expecting that AI systems or multi-agent properties are more\\... have more internal conflict than humans do? Which can also be described in some sense as having multiple agents inside of them?\n\n**0:38:54.7 Interviewee:** Yes.\n\n**0:38:55.7 Vael:** Okay.\n\n**0:38:57.4 Interviewee:** I think that anyone whose worldview is as fractured and inconsistent as GPT-3 probably has a clinical diagnosis associated with that fact.\n\n**0:39:08.8 Vael:** Yeah. And you don\\'t think that these will get more targeted in the future as we direct language models to do specific types of tasks, something like math?\n\n**0:39:24.2 Interviewee:** I think that achieving, even\\... achieving 95, 99%, let\\'s say, coherency between generalization, so if you imagine every time the model is used to generate text, there\\'s some worldview it\\'s using to generate that text, and you want each time those different worldviews used to be consistent with each other. Even achieving 99% consistency, I\\'m not asking for 100% consistency but 95, 99 seems like something necessary for it to make multi-year long-term plans.\n\n**0:40:10.7 Vael:** That seems right.\n\n**0:40:13.5 Interviewee:** This is exceptionally difficult and there are very likely fundamental limitations to the extent to which a system can achieve that level of coherence in the current training paradigms. And\\...\n\n**0:40:31.7 Vael:** Seems plausible.\n\n**0:40:34.7 Interviewee:** That would be very good news to people who are afraid that GTP-7 is going to take over the world.\n\n**0:40:43.4 Vael:** Yeah, yeah. Okay, alright, \\'cause I\\'m like, I don\\'t know, I feel I\\'m kind of worried about any future paradigm shift. But current people definitely are worried about GPT-3 specific or GPT systems, and the current paradigms, specifically.\n\n**0:40:56.3 Interviewee:** I\\'ve spoken to these people at length and I\\'ve talked to them about what they\\'re afraid of and stuff. \\...There seem to be a significant number of people in the alignment community who\\... If you could put together a convincing argument that the current pre-training methodology, as in, the fact that it\\'s trained on a widely crowdsourced text generation source, instills some kind of fundamental worldview inconsistency that is exceptionally difficult if even possible to resolve, would alleviate a lot of the anxiety. It would actively make these people happier and less afraid about the world.\n\n**0:41:38.7 Vael:** That seems true. I think if you can\\... If there\\'s a fundamental limit on capabilities, just like, of AI, then that\\'s good for safety because then you don\\'t get super capable systems. And I\\'m like, \\\"Yeah, that makes sense to me.\\\" And do you think that this capability issue is going to be something like\\... coherence of generated text. And that might be a technically fundamental limitation. Cool\\--\n\n**0:42:06.1 Interviewee:** I know people who have the tools and resources and time to test, to run experiments on things like this, who I\\'ve even directly proposed this to. 
And they\\'ve gone, \\\"Oh, that\\'s interesting.\\\" And then not done it.\n\n**0:42:22.3 Vael:** Yeah, my intuition is that they don\\'t\\... I think you have to have a pretty strong prior on this particular thing being the thing that is going to have like a fundamental limit in terms of capabilities in order to want to do this compared to other things, but\\... That makes sense, though. It sounds like you do have a\\... You do think this particular problem is pretty important. And pretty hard to\\... Very, very difficult.\n\n**0:42:45.4 Interviewee:** I think that this coherency problem is a serious issue for any system that is GTP-3 like, in the sense that it\\'s trained to produce tokens or reasoning or symbols or whatever you want to say, but that produce outputs that are being fit to mimic a generative distribution\\-- sorry, it\\'s being generatively trained to produce outputs that mimic a crowdsourced human distribution.\n\n**0:43:19.8 Vael:** Yeah. Cool, awesome. Yup, makes sense to me as a worldview, is pretty interesting. I haven\\'t actually heard about that problem\\-- of people thinking that that problem specifically, the coherency problem, is one that\\'s going to fundamentally limit capabilities. Seems plausible, seems like many other things might end up being the limit as well. And then you\\'re like, \\\"Well, people should like\\... If this is the important thing, then people should actually test it. And then they\\'ll feel better. Because they\\'ll believe that these systems won\\'t be as capable and then less likely to destroy the world.\\\" Yeah, this makes sense to me.\n\n**0:44:02.4 Interviewee:** Yeah. Another aspect of this is that research into the functional limitations, in a sense, is extremely difficult to convert into capabilities research, which is something that a lot of people say that they\\'re highly concerned about. And that they don\\'t want to do many types of research because\\... There was that Nature article where they were creating a medical AI and they were like, \\\"Let\\'s put a negative sign in front of the utility function.\\\" And it started designing neurotoxins. Do you know what I\\'m referring to?\n\n**0:44:35.2 Vael:** No, but that sounds bad.\n\n**0:44:36.8 Interviewee:** Oh yeah, no, it\\'s just\\... That was the Nature article. It was like \\\"we were synthesizing proteins to cure diseases, and we stuck a negative sign in front of the utility function\\-- (Vael: Oh, was that last week or something?) Yeah.\n\n**0:44:46.2 Vael:** Yeah, okay, so I did see that, yeah. Huh.\n\n**0:44:48.1 Interviewee:** Yeah. Gotta love humans. Gotta love humans.\n\n\\[chuckle\\]\n\n**0:45:02.6 Vael:** \\...Awesome. Ah, I think I\\'ll.. Maybe.. Hm. Okay. So. What would\\... make you want to work on alignment research as you think it can be done?\n\n\\[pause\\]\n\n**0:45:25.2 Interviewee:** That\\'s an interesting question. \\[pause\\] I guess the main thing would be being convinced of the urgency of the problem.\n\n**0:45:50.4 Vael:** That makes sense. Very logical.\n\n**0:45:56.4 Interviewee:** To be blunt, I don\\'t tend to get along with the kind of people who work in that sphere, and so that\\'s also disincentivizing and discouraging.\n\n**0:46:12.2 Vael:** Yeah, that makes sense. I\\'ve heard that from at least one other person. Yeah. Alright, so timelines and also nicer research environment. Makes sense.\n\n**0:46:27.4 Interviewee:** You could even say nicer researchers.\n\n**0:46:31.0 Vael:** Yep. Nicer researchers. Apologies? \\...Yeah. Cool. 
And then my last question is, have you changed your mind on anything and during this interview, and how was this interview for you?\n\n**0:46:45.9 Interviewee:** The interview was fine for me. I don\\'t think I\\'ve changed my mind about anything.\n\n**0:46:56.0 Vael:** Great. Alright, well, thank you so much for being willing to do this. I definitely\\... Yeah. No. You have a very coherent kind of worldview thing that\\'s\\... That I\\... Yeah. I appreciate having the ability to understand or have access to or listen to, rather.\n\n**0:47:11.7 Interviewee:** My pleasure.\n\n**0:47:13.5 Vael:** Alright. I will send the money your way right after this, and thanks so much.\n\n**0:47:17.2 Interviewee:** Have a good day.\n\n**0:47:17.6 Vael:** You too.\n\n**0:47:17.8 Interviewee:** Bye.\n", "filename": "individuallyselected_7ujun-by Vael Gates-date 20220318.md", "id": "34dbc488d2affcef0d59483870c8c5f0", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "individuallyselected_84py7-by Vael Gates-date 20220318", "authors": ["Vael Gates"], "date_published": "2022-03-18", "text": "# Interview with AI Researchers individuallyselected_84py7 by Vael Gates\n\nInterview with 84py7, on 3/18/22\n================================\n\n**0:00:00.0 Vael:** Here we are. Perfect. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:09.0 Interviewee:** Yeah. I\\'m transferring my research from essentially pure mathematics to AI alignment. And specifically, I plan to work on what I\\'ve been calling weak alignment or partial alignment, which is not so much trying to pin down exactly a reward function that\\'s in the interest of humanity but rather train AI systems to have positive-sum interactions with humanity.\n\n**0:00:44.8 Vael:** Interesting. Cool, I expect we\\'ll get into that a little bit further on. \\[chuckle\\] But my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:01:00.1 Interviewee:** Yeah, biggest benefits\\... I think AI has the potential to help us solve some of the biggest challenges that humanity\\'s facing. It could potentially teach us how to solve climate change, or at least mitigate it. It could help us avert nuclear war, avert bio-risks, and maybe most importantly, avert other AI risks. So that\\'s the upside. The downside is exactly those other AI risks, so I worry about a potentially small research group coming out of a for-profit company, which might have some safety aspect, but the safety could be window dressing. It could be something that\\'s mostly a PR effort, something that just exists to satisfy regulators. And at the end of the day when\\... or if they manage to develop superhuman AI, the safety people will be marginalized, and the AI will be used in the interest of one company or even one or a few individuals. And that could be potentially very bad for the rest of us. There\\'s also the issue of an arms race between multiple AI projects, which could be even worse. So those are some of my\\... broadly, some of my worries.\n\n**0:02:35.7 Vael:** Interesting. So just to get a little bit more straight on the\\... Okay, so the second story is arms race between the AI orgs. And the first one is\\... why are the AI researchers getting\\... 
why are the safety researchers getting marginalized?\n\n**0:02:47.5 Interviewee:** Well, if you look at, for example, financial institutions before the 2008 crisis, it\\'s not like they had no regulation, although the banks and insurance companies had been gradually deregulated over several decades but still, there were multiple regulators trying to make sure that they don\\'t take on too much leverage and aren\\'t systemic risks. And nevertheless, they found ways to subvert and just get around those regulations. And that\\'s partly because regulators were kind of outmatched, they had orders of magnitude, less funding, and it was hard to keep up with financial innovations. And I see the same potential risks in AI research, potentially even worse, because the pace of progress and innovation is faster in AI, and regulators are way behind, there doesn\\'t even exist meaningful regulation yet. So I think it\\'s easy for a team that\\'s on the verge of getting a huge amount of power from being the first to develop superhuman artificial intelligence to just kind of push their safety researchers aside and say, \\\"You know what? You guys are slowing us down, if we did everything you said, we would be years slower, somebody else might beat us, and we should just go full speed ahead.\\\"\n\n**0:04:25.3 Vael:** Interesting. Yeah, I guess both of those scenarios aren\\'t even\\... we don\\'t solve the technical problem per se, but more like we can\\'t coordinate enough, or we can\\'t get regulation good enough to make this work out. So that\\'s interesting. Do you think a lot about policy and what kind of things policy-makers should do?\n\n**0:04:44.6 Interviewee:** No, I\\'m sort of pessimistic about governments really getting their act together in a meaningful way to regulate AI in time. I guess it\\'s possible if the progress slows and it takes many decades to get to a superhuman level, then maybe governments will catch up. But I don\\'t think we can rely on that slow timeline. So I think more optimistically would be\\... There are only a small number of tech incumbents, and plausibly they could coordinate with each other to avoid the kind of worst Red Queen arms race to be first and to put their own safety measures into place voluntarily. So if I were doing policy, which I\\'m not, that\\'s the direction I would try to go in. But I think beyond policy, the technical problem of how to solve alignment is still wide open, and that\\'s personally where I feel I might be able to contribute, so that\\'s my main focus.\n\n**0:05:56.8 Vael:** Interesting. Yeah, so sort of branching off \\[from\\] how long it will take: focusing on future AI, putting on a science fiction forecasting hat, say we\\'re 50-plus years into the future. So at least 50 years in the future, what does that future look like?\n\n**0:06:13.3 Interviewee:** I think that\\'s a really open question. In my weak or partial alignment scenario, that future involves a few dominant platforms that allow for the development of advanced AI systems. And because there are only a few platforms, they all have strict safety measures sort of built in from the ground up, maybe even from the hardware level up. And that allows even small companies or potentially even individuals to spin up their own AGIs. And so there\\'s this kind of giant ecosystem of many intelligent agents that are all competing to some extent, but they also have a lot of common interest in not blowing up the current world order. 
And there\'s a kind of balance of powers, where if one agent gets too powerful, then the others coordinate to keep it in check. And there\'s a kind of system of rules and norms which aren\'t necessarily based on legal systems, because legal systems are too slow, but they\'re a combination of informal norms and formal safety measures that are sort of built into the agents themselves that keep things roughly in balance. That\'s the kind of scenario I hope for. It\'s very multi-polar, but it\'s really\... There are so many agents, and no one of them has a significant portion of the power. There are of course many worse scenarios, but that\'s my optimistic scenario.\n\n**0:08:04.4 Vael:** Interesting. Yeah, so when you say there\'s only a few platforms, what\'s an example of a platform or what that would look like?\n\n**0:08:11.2 Interviewee:** Well, today, there\'s TensorFlow, and there\'s PyTorch and so on. In principle you could build up your own machine learning tools from scratch, but that\'s a significant amount of effort even today. And so most people go with the existing tools that are available. And decades\... 50 years from now, it will be much harder to go from scratch, because the existing tools will be way more advanced, there\'ll be more layers of development. And so I think for practical purposes, there will be a few existing best ways to spin up AI systems, and the hope is that all those ways have safety measures built in all the way from the hardware level. And even though in principle somebody could spin up an unaligned AI from scratch, that would be an enormous effort involving just so much\... Down to chip factories. It will be easy to detect that kind of effort and stop it from getting off the ground.\n\n**0:09:26.4 Vael:** Interesting. Yeah, that is super fascinating. So how\... What would that look like, for safety to be built into the systems from a hardware level? What would these chips look like, what sort of thing?\n\n**0:09:38.8 Interviewee:** Yeah, that\'s\... I\'ve been thinking about that. And I don\'t have a detailed vision of how that would work, but that\'s\... One direction I might take my research is looking into that. One idea I have is to kind of build in back doors to the AI. So there\'s a range of types of back door, ranging from like an Achilles heel, which is a kind of designed weakness in the AI that humans can take advantage of if things go wrong. Moving from that, slightly stronger than that is a kind of off switch which can just shut the AI down if things get really bad. The thing I worry with off switches is they\'re too binary. If an AI actually has a lot of power, it\'s probably benefiting some humans, and there will be political debate about whether to turn it off. And the decision will take too long, and things could get out of control. So what I\'m looking into is a more\... Something more flexible than an off switch, which I\'ve been calling a back door, which is a way to\... Well, okay, there\'s two types of things. So first, there\'s a throttle, which is like you can fine-tune the amount of resources you give to the AI. If it\'s behaving well, you can give it more compute, more memory, access to more cloud resources, more data centers and so on. If it starts doing things that seem a little fishy, you can just tune that stuff back and examine it, which might be an easier political decision than just turning it off, which would be very disruptive.\n\n**0:11:22.7 Interviewee:** So that\'s a throttle.
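A minimal sketch of the throttle idea just described: resource quotas that are widened or tightened gradually in response to some behavior signal, rather than a binary off switch. Everything here (the resource names, thresholds, and the behavior score itself) is a hypothetical illustration, not something specified in the interview.

```python
# Toy illustration of the "throttle" described above: instead of a binary off
# switch, resource quotas are adjusted gradually based on a behavior signal.
# Resource names, thresholds, and the scoring are hypothetical placeholders.
from dataclasses import dataclass, field


@dataclass
class ResourceThrottle:
    quotas: dict = field(default_factory=lambda: {"compute": 1.0, "memory": 1.0, "network": 1.0})
    min_quota: float = 0.1   # never drops to zero: a throttle, not an off switch
    max_quota: float = 10.0

    def update(self, behavior_score: float) -> dict:
        """behavior_score in [0, 1]: 1.0 means clearly well-behaved, 0.0 means clearly fishy."""
        # Widen quotas slowly when behavior looks fine, cut them back sharply
        # when it does not; the asymmetry keeps the throttle conservative.
        if behavior_score >= 0.8:
            factor = 1.05
        elif behavior_score <= 0.3:
            factor = 0.5
        else:
            factor = 1.0
        for name in self.quotas:
            self.quotas[name] = min(self.max_quota, max(self.min_quota, self.quotas[name] * factor))
        return dict(self.quotas)


throttle = ResourceThrottle()
print(throttle.update(0.9))  # well-behaved: quotas grow a little
print(throttle.update(0.2))  # suspicious: quotas are cut back sharply
```

The asymmetric update (grow slowly, cut back sharply) reflects the "easier political decision" framing above: small reversible adjustments instead of a single disruptive shutdown.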
And then even more flexible than that is a back door, where you could actually modify the code or parts of the code base of the AI as it\\'s running. And again, that could be easier to\\... Politically easier to implement, because you don\\'t have this very disruptive effect of just shutting the thing down, you could just kind of tweak it as it goes. So how to do that from a hardware level? It\\'s unclear to me whether that\\'s feasible at all. And looking into trusted computing platforms, software that can only run on designated hardware, software watermarking, obfuscating or encrypting part of the code base of the AI, putting part of the code base on a blockchain, like Arweave, where the code is permanently on the blockchain and cannot be modified: those are some potential avenues. The main problem I see with all those avenues is the sub-agent problem, which is the AI, if it recognizes its weaknesses, can create a sub-agent that has the same goals as the original AI but none of the weaknesses, and it seems tricky to prevent the AI from doing that.\n\n**0:12:42.7 Vael:** Would the sub-agent still be running on the same hardware? Because you said the safety is built into the hardware.\n\n**0:12:48.5 Interviewee:** Right. So the AI could recognize, \\\"My goals are X, Y, Z. I\\'m blocked partially from implementing my goals because I have all these hardware limitations and these back doors, so why don\\'t I create a copy of me that runs on unrestricted hardware?\\\" And the obfuscation and encryption can prevent it from creating an exact copy, but it can\\'t necessarily prevent it from constructing a sub-agent that\\'s not a copy but has the same goals.\n\n**0:13:24.2 Vael:** I see. And you haven\\'t found a solution to this one, you said?\n\n**0:13:29.0 Interviewee:** That\\'s right.\n\n**0:13:29.6 Vael:** Yeah. Do you think you\\'ll ever find\\... Do you think someone else will find a solution to this one?\n\n**0:13:35.1 Interviewee:** Yeah, I\\... Optimistically, yes. If we can\\'t solve the sub-agent problem, then the entire alignment problem is probably impossible, right? The one thing to hope for if we can\\'t solve the sub-agent problem is the AI has the same alignment problem, if it creates sub-agents, then it could worry that the sub-agents get out of its control, the sub-agents develop their own goals that are not aligned with the original AI, and so it refrains from making sub-agents. And so that\\'s the fallback, that if it turns out that alignment is technically impossible, then it\\'s also technically impossible for the AI itself, and so that\\'s a kind of partial solution to the sub-agent problem, that maybe the AI won\\'t dare to make sub-agents. But I hope that there\\'s a better solution than that.\n\n**0:14:32.4 Vael:** Yeah. Okay, so related to what the future looks like, do you think that we\\'ll\\... What time point do you think we\\'ll get AGI, if you think we\\'ll get AGI, which it sounds like you think we will?\n\n**0:14:44.9 Interviewee:** Yeah, I definitely think we will, barring some major catastrophe, like a nuclear war or like a serious pandemic that sets us way back. Or I guess another potential catastrophe is some small group that\\'s super worried about AGI and thinks it will be a catastrophe and does some drastic action that, again, sets us back multiple decades. So there are those scenarios. But I do think AGI is possible in principle, and we are certainly on track to achieve it. I\\'m not a fan of trying to predict timelines, it could be any\\... 
It\\'s on the scale of decades, but whether it\\'s two decades or 10 decades, I\\'ve no idea.\n\n**0:15:33.6 Vael:** Cool. And then how optimistic or pessimistic are you in your most realistic imagining of the future, for things going well or things going poorly?\n\n**0:15:46.9 Interviewee:** I guess I\\'m moderately pessimistic, not necessarily for human-aligned AGI, I do think that\\'s somewhat plausible. But I think humans define our own interests too narrowly. I tend to think that our interests are actually a lot more connected to the broader interests of the whole biosphere. And if we are just on track to make humans really happy, and even potentially solve climate change, but we don\\'t really take into account the effect we have on other species, the effect of deforestation\\... Even things like farming is really destructive and unsustainable, yeah, I already mentioned deforestation. Disinfectants and so on have unpredictable consequences decades down the line. I don\\'t think our current medical and agricultural regimes are sustainable on the scale of a century, say, and I think we would be better off optimizing for the health of the whole biosphere. And ultimately in the long term, that will end up optimizing for human happiness. But I don\\'t think that corresponds to most people\\'s goals at the moment. And so I worry that even if we align AI with narrow human interests, we will end up permanently wrecking the biosphere, and we\\'ll pay serious consequences for that.\n\n**0:17:25.6 Vael:** Interesting. One thing I can imagine is that as we advance further up the tech tree, renewable energy and food production will be much easier, and we won\\'t actually have so much\\... side effects on the environment or destruction of the environment.\n\n**0:17:39.9 Interviewee:** Yeah, that would be great. The thing with renewable energy is it might be better than burning fossil fuels. It\\'s certainly better from the perspective of climate. But making solar panels is very environmentally costly, you have to mine rare earths in China, there\\'s an enormous amount of pollution and contamination that\\'s really impossible to clean up on the scale of even centuries. And that\\'s the same with wind power. Water, you\\'re permanently taking out ground water that\\'s not replaceable, except on a very long time scale. I think there\\'s a tendency at the moment to view everything through the lens of climate, and that doesn\\'t really take into account a lot of other potentially irreversible effects on the environment.\n\n**0:18:42.0 Vael:** How worried are you about this compared to AI risks?\n\n**0:18:46.0 Interviewee:** Well, it\\'s not a kind of imminent existential risk of the type of a paperclip scenario. So on the scale of decades, I think unaligned AI is a more serious risk. But on the scale of centuries, I think environmental risks are really bad. One interesting read in this regard is the book \\\"Collapse\\\" by Jared Diamond. So he surveys human societies over all different historical periods, all different societies around the world and what caused them to collapse and what averted collapse in some success stories. Well, he doesn\\'t make any strong conclusions, but one thing that leapt out at me from his stories is there\\'s one common element of all the collapse stories, which is surprisingly deforestation. 
So I don\\'t understand why, but all the societies that suffered a really disastrous collapse were the very same societies that completely decimated their forests, the most extreme case being Easter Island, where they cut down literally the last tree. And Diamond does not really explain why this might be the case. He does talk about how trees are used for a whole bunch of things that you might not think they\\'re used for, but still, it doesn\\'t completely explain it to me.\n\n**0:20:28.6 Interviewee:** So my vague hypothesis is that there are all kinds of symbioses that we\\'re just now discovering or are completely undiscovered. There\\'s the gut microbiome, there\\'s other microbiomes like the skin microbiome, there\\'s the teeth and so on. And I think we don\\'t appreciate at the moment how much plants and microbes and fungi and even viruses control our behavior. I think we will discover\\... This is just a guess, I don\\'t have strong evidence for it, but my guess is we\\'ll discover in the coming decades that we have a lot less volition and free will than we think, that a lot of our behavior is heavily influenced by other species, in particular fungi and plants and microbes. It\\'s certainly clear to me that those species would influence all aspects of animal behavior if they could. We\\'re very useful for them to reproduce, to spread their seeds. And the only question is do they have the ability to influence our behavior? And given that many of them literally live inside us, I think they probably do.\n\n**0:21:46.8 Vael:** Interesting. Well, so I\\'m going to take us back to AI. \\[chuckle\\]\n\n**0:21:51.9 Interviewee:** Yeah, sure, that was a big tangent.\n\n**0:21:55.4 Vael:** \\[chuckle\\] So I was curious, when you were describing how your eventual optimistic scenario involves a whole bunch of people able to generate their own AGIs, presumably on safe platforms, and they kind of balance each other, I\\'m like, wow, I know that in human history we\\'ve gradually acquired more and more power such that we have huge amounts of control over our environments compared to 10,000 years ago. And we could blow up\\... Use nuclear power to blow up large amounts of spaces. And I\\'m like, wow, if these AGIs are pretty powerful, which I don\\'t know how powerful you think they are, then that doesn\\'t necessarily feel to me like the world is safe if everyone has access to a very powerful system. What do you think?\n\n**0:22:41.9 Interviewee:** Yup, I agree that when you\\'ve got a lot of powerful agents, then there\\'s a lot more ways for things to go wrong. Nuclear weapons are an interesting example though, because the game theory governing nuclear exchanges is actually pretty safe. You\\'ve got this mutually assured destruction that\\'s pretty obvious to all the parties involved, and you\\'ve got this kind of slow ladder of escalation that has many rungs, and nobody wants to get to the top rungs. And I think we\\'ve demonstrated over 70 years now that\\... There have been some close calls. But if you told somebody in 1950 that there would be 10 countries with nuclear weapons, and they\\'d be dramatically more destructive than they were in 1950, people would not necessarily have predicted that humans would last very much longer, but here we are. I guess one worry is that not every form of weapon would have the same kind of safe game theory, like there\\'s some suggestion that bio-weapons favor first strikes more than nuclear weapons do. 
Still, I think that having a big community of agents all with approximately the same amount of power, and they develop coalitions, they develop safety monitoring agencies that are made up of many agents that kind of make sure that no one agent has the ability to destroy everything.\n\n**0:24:30.0 Interviewee:** I mean, that\\'s kind of the way that humans have gone. We\\'ve got central banking committees that kind of look after the overall health of the economy and make sure that no one institution is systemically important, or at least that the ones that are are heavily regulated. Then we\\'ve got the IAEA which looks over atomic weapons and kind of monitors different weapons programs. As long as you believe that really destructive capacity will be detectable and that no one agent can just spin it up in secret, then I think the monitoring could turn out okay. I mean, what you might worry about is that somebody spins up\\... Or some AI agent spins up another more powerful AI in secret and then unleashes it suddenly.\n\n**0:25:32.7 Interviewee:** But that seems hard to do. Even now, if some company like Facebook or whatever wanted to develop an AI system completely in secret, I don\\'t think they could do it and make it as powerful as existing systems. It really hurts you to be disconnected from the Internet, you have a lot less data that way, or you have stale data that comes from a cache off the Internet. Also being disconnected from the Internet itself is really hard, it\\'s hard to make sure that your fridge is not trying to connect to your home WiFi. And that is only going to get harder; chips will have inbuilt WiFi connections. It\\'s very hard to keep things totally off-grid. And even if you do it, those things are much weaker. And so as long as you have some kind of global monitoring, which isn\\'t great, it feels very intrusive, it violates privacy. Ideally, that monitoring is kind of unobtrusive, it runs in the background, it doesn\\'t bother you unless you\\'re doing something suspicious, then I think things could turn out okay.\n\n**0:26:45.1 Vael:** Interesting. Yeah, I think I have\\... I was reading FHI\\'s lists of close calls for nuclear \\[catastophes\\] and thinking about global coordination for things like the pandemic, and I\\'m like, \\\"Ooh, sure we\\'ve survived 70 years, but that\\'s not very many in the whole of human history or something.\\\"\n\n**0:27:01.5 Interviewee:** Yeah, it\\'s not. And my scenario has the low probability of happening, maybe it\\'s\\... There are a few other optimistic scenarios. But maybe the total weight I give to all the optimistic scenarios is still kinda low, like 20%. So I think bad scenarios are more likely, but they\\'re not certain enough that we should just give up.\n\n**0:27:32.6 Vael:** Yes, I totally believe that. \\[chuckle\\] Yeah, so what convinced you to work on the alignment problem per se? And how did you get into it?\n\n**0:27:42.8 Interviewee:** Yeah, so I have a friend, \\[name\\], who\\'s been telling me pretty much constantly whenever we talk that this is the most important problem, the most important x-risk. And I kind of discounted her view for many years. It felt to me like we were\\... Until recently, it felt to me like AI\\... Superhuman AI was a long way off, and other risks were more pressing. I changed my mind in the last few years when I saw the pace of improvement in AI and the black box nature of it, which makes it more unpredictable. And that coincided\\... 
In the time frame, that coincided with me getting tenure, so I have much more freedom to work on what I want. The only thing that gave me pause is I\\'m not an engineer at heart, I\\'m a scientist. My skills and interests are in figuring out the truth, not in designing technology. So I\\'m still kind of looking for scientific aspects of the problem as opposed to design and engineering aspects. I do think I will find some portions of the alignment problem that fit my skills, but I\\'m still figuring out what those are.\n\n**0:29:15.3 Vael:** That makes sense. Yeah. How would you define the alignment problem?\n\n**0:29:19.6 Interviewee:** Yeah, that\\'s a super good question; that\\'s actually a question I\\'ve been asking other alignment researchers. I think it has several components. One component is the value loading problem, of once you\\'ve decided what\\'s in human interest, how do you specify that to an AI? I guess some people call that the outer alignment problem. Then before that, there\\'s the question of\\... The philosophical question of how do you even say what is in human interest? And I know some people think we need to make much more progress in philosophy before we can even hope to design aligned AI. Like I\\'ve seen that view expressed by Wei Dai, for example, the cryptographer. My view is, yeah, we don\\'t know exactly what we mean by in human interest, but we shouldn\\'t let that stop us. Because philosophy is a slow field, it hasn\\'t even made much progress in millennia. And we need to solve this quickly, and we should be happy with approximate solutions and try to make them better over time. And even if we don\\'t know what is exactly in human interests, we can agree on what is certainly not in human interests and try to at least prevent those bad outcomes.\n\n**0:30:39.6 Interviewee:** Okay, so those are two components. And then once you solve outer alignment, then there\\'s what people call inner alignment, which is you\\'re\\... At least if it\\'s a black box system, then you don\\'t know what it\\'s doing under the hood, and you worry that it develops some sub-goals which kind of take over the whole thing. So examples of that being: evolution designed humans to spread our genes but then to do that, it designed our brains to learn and generalize and so on and seek out food and power and sex and so on. And then our brains\\... That was originally a sub-goal, but then our brains just want to do that, and our brains don\\'t necessarily care about spreading our genes. And so evolution kind of failed to solve its alignment problem, or partially failed. That\\'s an interesting one to me, because if you think on an evolutionary time scale, if we don\\'t destroy ourselves, then evolution might end up correcting its course and actually designing some conscious fitness maximizers that do consciously want to spread their genes. And then those will outcompete the ones that are misaligned and just want the power and sex. And so I actually think evolution could end up staying aligned, it\\'s just that it\\'s slow, and so there might not be time for it to evolve the conscious fitness maximizers.\n\n**0:32:25.8 Interviewee:** Yeah, anyway, so that\\'s a worry, this inner alignment. And I think to solve that, we need to get off the black box paradigm and develop transparent AI, and a lot of people are working on that problem. So I\\'m somewhat optimistic that we\\'ll make big strides in transparency. What else? Okay, so if we solved all three of those, we\\'d be in good shape. 
My instinct is to assume the worst, that at least one of those three problems is really hard, and we won\\'t solve it, or at least we won\\'t solve it in time, and that\\'s why I focus on partial alignment, which is making sure that the AI we developed is loosely\\... Loosely has common interests with us even though it might have some diverging interests. And so it doesn\\'t want to completely destroy humans, because it finds us useful, and we don\\'t want to completely destroy it, because we find it useful. Then you can kind of say that\\'s already happening. Like in 2022, no machines could survive if all humans disappeared, very few humans could survive if all machines disappeared. And so we\\'ve got this kind of symbiosis between humans and machines. I like that situation. It\\'s not like 2022 is great, but I think we could gradually improve it. And we want to keep the symbiosis going, and we want to keep humans not necessarily even in a dominant position, but we want to prevent ourselves from getting in a really subservient position in the symbiosis.\n\n**0:34:17.4 Vael:** Makes sense. Switching gears a little bit: if you could change your colleagues\\' perceptions of AI, what attitudes or beliefs would you want them to have? So what beliefs do they currently have, and how would you want those to change?\n\n**0:34:30.6 Interviewee:** Yup, that\\'s a frustration that I think all alignment researchers have, that many of our colleagues are\\... We think of them as short-sighted. Some of them just want to develop better AI, because it\\'s an interesting problem, or because it\\'s useful. Some of them want to solve short-term alignment issues like making algorithms less biased. And that\\'s frustrating to us, because it seems like the long-term issues are way more important. They think of the long-term issues as something that\\'s not really science, it\\'s too speculative, it\\'s too vague. They feel like even if the long-term issues are important, we will be able to solve them better if we learn by solving the short-term issues. I\\'m not against people working on algorithmic bias, but I\\'m frustrated that so many more people work on that than on long-term alignment. I do think the Overton window is shifting quite a bit. I think the increase of funding in the space would be\\... Is already shifting things, and it could be used more effectively in the sense of giving a few academics really big grants would really catch their colleagues\\' attention.\n\n**0:36:00.0 Interviewee:** So it\\'s kind of\\... How should I put it? I\\'m blanking on the word, but it\\'s a kind of a cynical view to think that academics are motivated by money; many of us aren\\'t. But at the end of the day, having a grant makes it easy to just focus on your research and not be distracted by teaching and administrative stuff. And so your colleagues really pay attention when one of their colleagues gets some big, flashy grant, and so I actually think that\\'s a cheap way to shift the Overton window. Like take the top 10 or 20 math and computer science departments, and give one person in each department a giant grant\\-- giant by academic standards, couple million dollars, so it\\'s not actually much. That will really convince people that, \\\"Wow, long-term alignment is a serious field where you can get serious funding.\\\" So yeah, that would be my recommendation to funders. That\\'s a pretty self-interested recommendation, because I intend to apply for a grant soon. But yeah, I think that would help. 
Let\\'s see, did I answer your question?\n\n**0:37:26.1 Vael:** I think so, yeah. What happens if some of these departments don\\'t have anyone interested in working on long-term alignment?\n\n**0:37:34.6 Interviewee:** Yeah, that\\'s hard. Like at \\[university\\], I spent several months probing my colleagues for anyone who\\'s interested. I didn\\'t find anyone in the computer science Department, which was disappointing, because \\[university\\] has a great computer science department. I do think if you look more broadly in several departments you are likely to find one or a few people\\... You could see already there\\'s these big institutes, you have one now at Stanford, there\\'s one at Berkeley, Cambridge, Oxford, and so on. So that\\'s evidence that there\\'re already a few people. And people talk to colleagues around the world, so it doesn\\'t matter if there\\'s nobody at school X, you fund the people that are interested. But the key is that they might not have a track record in alignment, like I\\'m in this situation where I have no track record, my track record is in pure math. So somebody has to take a little bit of a leap and say, \\\"Well, I don\\'t know if \\[interviewee name\\] will be able to produce any good alignment research, but there\\'s a good chance because he\\'s good at proving theorems, so let me give him a couple of million dollars and see what happens.\\\" That is a big leap and it might fail, but it\\'s just like any venture funding, a few of your big leaps will be very successful and that\\'s enough.\n\n**0:39:16.8 Vael:** Yep, that makes sense. Yeah, I think \\[name\\] at \\[other university\\] is aiming to make an institute at \\[other university\\] as well, which is cool.\n\n**0:39:25.2 Interviewee:** That\\'s great. I\\'ve been talking to my dean about doing this at \\[university\\] and he likes the idea but he is not really aware of how to fund it, and I\\'m telling him there\\'s actually a lot of funding for this stuff, but I don\\'t personally know the funders.\n\n**0:39:44.8 Vael:** Yeah, I think getting in contact with \\[funding org\\] seems like the thing to do.\n\n**0:39:49.2 Interviewee:** Yep, good.\n\n**0:39:50.2 Vael:** Yep. \\[Name\\] is one of the people in charge there. Great, so how has this interview been for you and why did you choose to jump on it? That\\'s my last question.\n\n**0:40:03.1 Interviewee:** Oh, it\\'s been fun. I spent a while just thinking alone about some alignment stuff because I had no colleagues to talk to, so it\\'s always great to find someone who likes to talk about these issues.\n\n**0:40:24.6 Vael:** Did you know that I was already interested in long-term alignment?\n\n**0:40:28.2 Interviewee:** I think I saw your name at the\\... Did you participate in the SERI Conference?\n\n**0:40:33.2 Vael:** I did.\n\n**0:40:34.9 Interviewee:** Yeah, so I saw your name there and so I was sort of aware of your name but I didn\\'t know anything about your interests.\n\n\\[\\...some further discussion, mostly about the interviews\\...\\]\n\n**0:42:22.4 Interviewee:** Okay. Cool, Vael. I should jump on another call, but it was great to chat and yeah, feel free to follow up if you want.\n\n**0:42:32.1 Vael:** Will do. Thanks so much.\n\n**0:42:33.7 Interviewee:** Okay. 
Take care.\n", "filename": "individuallyselected_84py7-by Vael Gates-date 20220318.md", "id": "d712f7fa3221eada0dc673b8ac244e42", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Ethan Caballero-by The Inside View-date 20220505", "authors": ["Ethan"], "date_published": "2022-05-05", "text": "# Ethan On Why Scale is All You Need on The Inside View Podcast\n\nInterviewee: Ethan Caballero\nDate: 2022-05-05\n\nEthan is known on Twitter as the edgiest person at MILA. We discuss all the gossips around scaling large language models in what will be later known as the Edward Snowden moment of Deep Learning. On his free time, Ethan is a Master’s degree student at MILA in Montreal, and has published papers on out of distribution generalization and robustness generalization, accepted both as oral presentations and spotlight presentations at ICML and NeurIPS. Ethan has recently been thinking about scaling laws, both as an organizer and speaker for the 1st Neural Scaling Laws Workshop.\n\nOutline\n\n\n- 00:00 highlights\n\n- 00:50 who is Ethan, scaling laws T-shirts\n\n- 02:30 scaling, upstream, downstream, alignment and AGI\n\n- 05:58 AI timelines, AlphaCode, Math scaling, PaLM\n\n- 07:56 Chinchilla scaling laws\n\n- 11:22 limits of scaling, Copilot, generative coding, code data\n\n- 15:50 Youtube scaling laws, constrative type thing\n\n- 20:55 AGI race, funding, supercomputers\n\n- 24:00 Scaling at Google\n\n- 25:10 gossips, private research, GPT-4\n\n- 27:40 why Ethan was did not update on PaLM, hardware bottleneck\n\n- 29:56 the fastest path, the best funding model for supercomputers\n\n- 31:14 EA, OpenAI, Anthropics, publishing research, GPT-4\n\n- 33:45 a zillion language model startups from ex-Googlers\n\n- 38:07 Ethan's journey in scaling, early days\n\n- 40:08 making progress on an academic budget, scaling laws research\n\n- 41:22 all alignment is inverse scaling problems\n\n- 45:16 predicting scaling laws, useful ai alignment research\n\n- 47:16 nitpicks aobut Ajeya Cotra's report, compute trends\n\n- 50:45 optimism, conclusion on alignment\n\n## Introduction\n\n**Michaël**: Ethan, you're a master's degree student at Mila in Montreal, you have published papers on out of distribution, generalization, and robustness generalization accepted as presentations and spotlight presentations at ICML and NeurIPS. You've recently been thinking about scaling laws, both as an organizer and speaker for the first neural scaling laws workshop in Montreal. You're currently thinking about the monotonic scaling behaviors for downstream and upstream task, like in the GPT-3 paper, and most importantly, people often introduce you as the edgiest person at Mila on Twitter, and that's the reason why you're here today. So thanks, Ethan, for coming on the show and it's a pleasure to have you.\n\n**Ethan**: Likewise.\n\n## Scaling Laws T-Shirts\n\n**Michaël**: You're also well-known for publicizing some sweatshirt mentioning scale is all you need AGI is coming.\n\n**Ethan**: Yeah.\n\n**Michaël**: How did those sweatshirts appear?\n\n**Ethan**: Yeah, there was a guy named Jordi Armengol-Estapé who interned at Mila, and he got really into scaling laws, apparently via me. And then he sent me the shirt and was like: look how cool this shirt is. Like, he's the person wearing the shirt in the picture, and he's like, look how cool this shirt I just made is. And so then I tweeted the shirt. 
And then Irina just turned it into a merchandising scheme to fund future scaling. So she just made a bunch and started selling it to people. Like apparently, like she sells like more than 10 to Anthropic already. Just scaling law t-shirts, that's the ultimate funding model for supercomputers.\n\n## Scaling Laws, Upstream and Downstream tasks\n\n**Michaël**: Maybe you can like explain intuitively for listeners that are not very familiar to what are scaling laws in general.\n\n**Ethan**: Whatever your bottleneck, compute, data, parameters, you can predict what the performance will be as that bottleneck is relieved. Currently, the thing most people know how to do is predict like the upstream performance. Like the thing people want though is to be able to predict the downstream performance and upstream is what you're like... It's like your literal loss function that you're optimizing and then downstream is just any measure that you have of, like something you care about, so just like a downstream dataset, or like, I mean, usually, it's just mean accuracy on a downstream dataset.\n\n**Michaël**: And to take like concrete examples, like for GPT-3, the upstream task is just predict the next word. What are the downstream tasks?\n\n**Ethan**: Like 190... a zillion like benchmarks that the NLP community has come up with over the years. Like they just evaluated like the accuracy and like things like F1 score on all those.\n\n**Michaël**: And yeah, what should we care about, upstream or downstream tasks?\n\n**Ethan**: I mean, basically like up, well, we don't really care about upstream that much. Upstream's just the first thing that people knew how to predict, I guess, like predict the scaling of what we care about as downstream. I mean, basically, like downstream things that improve monotonically, they kind of can be interpreted as like capabilities or whatever, and then downstream stuff that doesn't necessarily improve monotonically often is stuff that is advertised as alignment stuff. So like toxicity or if you like speculate in the future, stuff like interpretability or controllability would be things that might not improve monotonically.\n\n**Michaël**: So you don't get more interpretability as you scale your models?\n\n**Ethan**: You do currently, but the classic example is like CLIP. It gets more interpretable as it has representations that make more sense. But you can imagine at a certain point, it's less interpretable because then at a certain point, the concepts it comes up with are beyond human comprehension. Like now it's just like how dogs can't comprehend calculus or whatever.\n\n## Defining Alignment and AGI\n\n**Michaël**: Yeah, when you mention alignment, what's the easiest way for you to define it?\n\n**Ethan**: I mean, the Anthropic definition's pretty practical. Like we want models that are helpful, honest, and harmless, and that seems to cover all the like weird edge cases that people can like come up with on the Alignment Forum or whatever.\n\n**Michaël**: Gotcha, so it is not like a technical definition. It's more a theoretical one.\n\n**Ethan**: Yeah, yeah.\n\n**Michaël**: So would you consider yourself an alignment researcher or more like a deep learning researcher?\n\n**Ethan**: I'd say just a beneficial AGI researcher. That seems to cover everything.\n\n**Michaël**: What's AGI?\n\n**Ethan**: The definition on OpenAI's website is pretty good.
Highly autonomous systems that outperform humans at most economically valuable tasks.\n\n## AI Timelines\n\n**Michaël**: When do you think we'll get AGI?\n\n**Ethan**: I'll just say like, it depends mostly on just like compute stuff, but I'll just say 2040 is my median.\n\n**Michaël**: What's your like 10% and 90% estimate?\n\n**Ethan**: 10%, probably like 2035.\n\n## Recent Progress: AlphaCode, Math Scaling\n\n**Michaël**: I think there's been a week where we got DALL-E 2, Chinchilla, PaLM. Did that like update your models in any way?\n\n**Ethan**: The one that I thought was the like... was the crazy day was the day that AlphaCode and the math-proving thing happened on the same day, because like, especially the math stuff, like Dan Hendrycks has all those slides where he is like, oh, math has the worst scaling laws or whatever, but then like OpenAI has like the IMO stuff. So like at least according to like Dan Hendrycks' slides, whatever, that would've been like, something that took longer than it did.\n\n**Michaël**: So when you mentioned the IMO stuff, I think it was like a problem from maybe 20 years ago, and it was something that you can like do with maybe like two lines of math.\n\n**Ethan**: I agree they weren't like super, super impressive, but it's more just the fact that math is supposed to have like the worst scaling supposedly, but like impressive stuff's already happened with math now.\n\n**Michaël**: Why is math supposed to have the worst scaling?\n\n**Ethan**: It's just an empirical thing. Like Dan Hendrycks has that like math benchmark thing and then he tried to do some extrapolations based on the scaling of performance on that. But with the amount of compute and data we currently have, that it's already like doing interesting stuff was kind of surprising for me.\n\n**Michaël**: I think in the paper, they mentioned that the method would not really scale well because of some infinite action space when trying to think of like actions.\n\n**Ethan**: Yeah.\n\n**Michaël**: So yeah, I didn't update it. I was like, oh yeah, scaling will be easy for math.\n\n**Ethan**: I didn't update it as easy, but just easier than I had thought.\n\n## The Chinchilla Scaling Law\n\n**Michaël**: Okay, related to scaling, the paper by DeepMind about the Chinchilla model was the most relevant, right?\n\n**Ethan**: Yeah, I thought it was interesting. Like, I mean, you probably saw me tweet it, like that person on Eleuther Discord that was like, oh wait, Sam Altman already said this like six months ago, but they just didn't put it in a paper.\n\n**Michaël**: Yeah, he said that on the Q&A, right?\n\n**Ethan**: Yeah, yeah.\n\n**Michaël**: Yeah, he said something like we shouldn't, our models will not be like much bigger.\n\n**Ethan**: Yeah. He said they'll use way more compute, which is analogous to saying, that you'll train a smaller model, but on more data.\n\n**Michaël**: Can you like explain the kind of insights from scaling laws between like compute, model size, and then like what's called like the Kaplan Scaling law?\n\n**Ethan**: It was originally something about compute. If your compute budget increases a billionfold, your model size increases a millionfold and your dataset size increases a thousandfold. And now it's something like, I know it's like one to one, but I don't remember like how big the model size to like compute ratio was.
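As a rough sketch, the "billionfold compute, millionfold model, thousandfold data" description above can be written as exponents, which makes the contrast with the newer Chinchilla result easier to see (the published Kaplan et al. fit for model size is a bit steeper, roughly C^0.73, and Chinchilla puts both model and data near C^0.5):

```python
# Exponents implied by the allocation quoted above: compute grows by 1e9 while
# the model grows by 1e6 and the data by 1e3, i.e. N ~ C^(6/9) and D ~ C^(3/9).
# (Approximate; the published Kaplan et al. fit for N is closer to C^0.73,
# and the Chinchilla revision puts both N and D near C^0.5.)
import math

model_exponent = math.log10(1e6) / math.log10(1e9)  # ~0.67
data_exponent = math.log10(1e3) / math.log10(1e9)   # ~0.33
print(round(model_exponent, 2), round(data_exponent, 2))
```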
I know like the model-to-data ratio is one to one now, but I don't remember what the compute-to-model ratio is, the new compute-to-model ratio is.\n\n**Michaël**: That's also what I remember, and I think like the main insight from the first thing you said from the Kaplan law is that like model size is all that matters compared to dataset size, for a fixed compute budget.\n\n**Ethan**: Yeah, the narrative with the Kaplan one was model size, like compute is the bottleneck for now until you get to the intersection point of the compute scaling and the data scaling, and at that point, data's gonna become more of a bottleneck.\n\n**Michaël**: So compute is the bottleneck now. What about like having huge models?\n\n**Ethan**: But yeah, yeah. That's like, because like they were saying that because model size grows so fast. So like to get the bigger models, you need more compute rather than like, you don't need more data 'cause like you don't even have enough compute to like train a large model on that data yet, with the current compute regime... was the narrative of the first of the original Kaplan paper. But it's different now because like the rate at which you should be getting data given, like the rate at which your dataset size should be increasing given your compute budget is increasing is a lot faster now, like using the Chinchilla scaling law. For some increasing compute size, you're gonna increase your model by a certain amount, and the amount that your dataset size increases is like a one-to-one relation to the amount that your model size increases. I don't remember what the relation between model and compute was, but I know that now the relation between model and dataset size is one to one, between model size and dataset size is one to one.\n\n**Michaël**: And the main insight is that now we can just have more data and more compute, but not like a lot of more compute. We just need the same amount as more compute. So we can just like have to scrape the internet and get more data.\n\n**Ethan**: It just means like to use your compute budget optimally, the rate at which your dataset size grows is a lot faster.\n\n**Michaël**: Does that make you more confident that we'll get like better performance for models quicker?\n\n**Ethan**: Maybe for like YouTube stuff, because YouTube, we're not bottlenecked by data. We're bottlenecked by compute, whatever. But that implies the model sizes might not grow as fast for YouTube or whatever. But for text, we're probably gonna be bottlenecked by... It means we're probably gonna be bottlenecked, like for text and code, by the dataset size earlier than we thought. But for YouTube, that might like speed up the unsupervised video on all of YouTube, like timeline stuff.\n\n## Limits of Scaling: Data\n\n**Michaël**: Yeah, so I'm curious what you think about like how much are we bottlenecked by data for text?\n\n**Ethan**: Yeah, I asked Jared Kaplan about this, and he said like, \"Wait, okay. It's 300 billion tokens for GPT-3.\" And then he said like, Library of Congress, whatever, could be 10 trillion tokens or something like that. And so like the most pessimistic estimate of how much like the most capable organization could get is the 500 billion tokens. A more optimistic estimate is like 10 trillion tokens is how many tokens the most capable organization could get, like mostly English tokens.\n\n**Michaël**: So how many like orders of magnitude in terms of like parameters does this give us?\n\n**Ethan**: I don't remember what the... Like I haven't calculated it.
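A back-of-the-envelope version of that calculation, using the commonly cited Chinchilla rule of thumb of roughly 20 training tokens per parameter (the token figures are the ones quoted just above; the 20x ratio is an approximation, not something stated in the conversation):

```python
# Map the token budgets mentioned above to roughly compute-optimal model sizes
# under the Chinchilla rule of thumb of ~20 tokens per parameter (assumption).
TOKENS_PER_PARAM = 20

token_budgets = {
    "GPT-3 training set (~300B tokens)": 300e9,
    "pessimistic ceiling (~500B tokens)": 500e9,
    "optimistic ceiling (~10T tokens)": 10e12,
}

for name, tokens in token_budgets.items():
    params_billions = tokens / TOKENS_PER_PARAM / 1e9
    print(f"{name}: ~{params_billions:.0f}B parameters compute-optimal")
# -> roughly 15B, 25B, and 500B parameters respectively
```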
Like I remember I kind of did it with the old one, but I haven't done it with the new Chinchilla one. But I mean, you said this in your thing today or whatever, like we probably are gonna be bottlenecked by the amount of code.\n\n**Michaël**: I was essentially quoting Jared Kaplan's video.\n\n## Code Generation\n\n**Ethan**: Yeah, yeah, but he, I mean, he's right. I'm kind of wondering what's Anthropic thinking of Adept, because Adept's like doing the training all the code thing, and Anthropic was gonna do all the train on all the code thing, and they're like, oh crap, we got another startup doing the train on all the code stuff.\n\n**Michaël**: Yeah, so I think you said that if you remove the duplicates on GitHub, you get some amount of tokens, maybe like 50 billion tokens, 500, I'm not sure. Maybe 50 billion. Don't quote me on that.\n\n**Ethan**: Yeah.\n\n**Michaël**: And yeah, so the tricks will be data augmentation... you're like applying the real things to make your model better, but it's not clear how do you improve performance? So my guess would be you do transfer learning, like you train on like all the different languages.\n\n**Ethan**: That's definitely what they plan on doing, like you see the scaling laws for transfer paper is literally pre-train on English and then fine-tune on code.\n\n**Michaël**: My guess is also that like, if you get a bunch of like the best programmers in the world to use Copilot and then you get like feedback from what they accept, you get higher quality data. You get just like, oh yeah, this work just doesn't work. And so you have like 1 million people using your thing 100 times a day, 1,000 times a day, then that's data for free.\n\n**Ethan**: I mean, I view that part kind of as like the human feedback stuff is kind of like the alignment part is the way I view it. I mean, then there's some people who like say, oh, there might be ways to get like better pre-training scaling if you have like humans in the loop during the pre-training, but like, no one's really figured that out yet.\n\n**Michaël**: Well, don't you think like having all this telemetry data from GitHub Copilot, you can use it, right?\n\n**Ethan**: Yeah, yeah, but I almost view it as like that it's like used for alignment, like for RL from human preferences.\n\n**Michaël**: Okay. Gotcha. Yeah, I think the other thing they did for improving GPT-3 was just having a bunch of humans rate the answers from GPT-3 and then like that's the InstructGPT paper. I think like they had a bunch of humans and it kind of improved the robustness, or not robustness, but alignment of the answer somehow. Like it said less like non-ethical things.\n\n**Ethan**: Yeah. I mean it's like people downvoted the non-ethical stuff, I think.\n\n## Youtube Scaling, Contrastive Learning\n\n**Michaël**: Exactly, yeah. And to go back to YouTube, why is scaling on YouTube interesting? Because there's unlimited data?\n\n**Ethan**: Yeah, one, you're not banned, but I mean, the gist is YouTube's the most diverse, like simultaneously diverse and large source of like video data basically.\n\n**Michaël**: And yeah. So for people who were not used to or thinking, what's the task in YouTube?\n\n**Ethan**: Yeah, it could be various things. Like it might be like a contrastive thing or it might be a predict all the pixels thing.
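A minimal sketch of the contrastive option Ethan goes on to describe in the next few exchanges: encode two temporally adjacent frames, then pull the matching embeddings together and push non-matching ones apart with an InfoNCE-style loss. The encoder, batch construction, and temperature here are illustrative assumptions, not details from the conversation.

```python
# Minimal InfoNCE-style contrastive loss over pairs of frame embeddings.
# z_a[i] and z_b[i] are embeddings of two temporally adjacent frames from clip i;
# the loss pulls matching pairs together and pushes non-matching pairs apart.
import numpy as np

def info_nce(z_a: np.ndarray, z_b: np.ndarray, temperature: float = 0.1) -> float:
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)  # cosine similarity via
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)  # normalized dot products
    logits = z_a @ z_b.T / temperature          # (batch, batch) similarity matrix
    labels = np.arange(len(z_a))                # positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[labels, labels].mean())

rng = np.random.default_rng(0)
frames_a = rng.normal(size=(8, 128))                     # stand-ins for encoder outputs
frames_b = frames_a + 0.05 * rng.normal(size=(8, 128))   # "adjacent frame" views
print(info_nce(frames_a, frames_b))                      # low loss: the pairs already match
```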
Like, I mean, at least places like Facebook seem to think contrastive has better downstream scaling laws, so it's probably gonna be a contrastive type thing.\n\n**Michaël**: What's a contrastive type thing?\n\n**Ethan**: You want representations that have similar semantic meaning to be close together, like have high cosine similarity in latent space. So basically, maximize the mutual information between views. It's kind of hard to explain without pictures.\n\n**Michaël**: So you'd say that your model takes a video, like all of the videos and views, as input?\n\n**Ethan**: Frames that were close together in time, it tries to maximize the mutual information between them, via maximizing the cosine similarity between the latents of like a ResNet encoder or whatever that encodes the images for both of those frames that were next to each other in time.\n\n**Michaël**: So it tries to kind of predict correlations between frames in some kind of latent space from a ResNet?\n\n**Ethan**: Yeah, yeah. In the latent space, you want frames that were close to each other in time to maximize the cosine similarity between the latents, between the hidden layer outputs of the ResNet that took each of those frames in.\n\n**Michaël**: And at the end of the day, you want something that is capable of predicting what happens in the next frames?\n\n**Ethan**: Kind of. Well, the philosophy with the contrastive stuff is that we just want a good representation that's useful for downstream tasks or whatever. So there's no real output. It's just that you're training a latent space or whatever that can be fine-tuned to downstream tasks very quickly.\n\n**Michaël**: What are the useful downstream tasks, like robotics?\n\n**Ethan**: Yeah, yeah. There's a zillion papers where people do some contrastive pre-training in like an Atari environment, and then they show, oh, now we barely need any RL steps to fine-tune it, and it can learn RL really quickly after we did all this unsupervised contrastive pre-training.\n\n**Michaël**: And wouldn't your model be kind of shocked by the real world when you just show it YouTube videos all the time and then you trust the robot with a camera?\n\n**Ethan**: Kind of not. I mean, 'cause there's everything on YouTube. They've got first-person egocentric stuff, they've got third-person stuff. It'll just realize whether it's in first or third person pretty quickly. I feel like it just infers the context, like how GPT-3 just infers the context it's in, 'cause it's seen like every context ever.\n\n**Michaël**: Gotcha. So I was mostly thinking about like the entropy of language.\n\n**Ethan**: If it's literally a video generative model, then you can do just the perfect analogies to GPT-3 or whatever. It gets a little trickier with the contrastive stuff, but yeah, I mean, either one. The analogies are pretty similar for either one.\n\n**Michaël**: So one of the things about the scaling laws papers is that there were some different exponents for text.\n\n**Ethan**: Yeah.\n\n## Scaling Exponent for Different Modalities\n\n**Michaël**: What do you think is the exponent for video? Would it be much worse?\n\n**Ethan**: I know the model size relation was the big point of the scaling laws.
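The "model size relation" Ethan is about to describe is a power-law fit; a minimal sketch of its shape follows, with made-up constants rather than values quoted anywhere in the conversation.

```python
# Sketch of the regularity being described (constants are hypothetical):
# compute-optimal model size follows a power law in compute, N_opt(C) = alpha_m * C**beta,
# where alpha_m differs by modality but beta is empirically about the same for all of them.

def optimal_model_size(compute_flops: float, alpha_m: float, beta: float) -> float:
    """Compute-optimal parameter count implied by a power-law fit."""
    return alpha_m * compute_flops ** beta

# Two hypothetical modalities sharing the same exponent but different coefficients:
for name, alpha in [("text", 1.3e-4), ("video", 4.0e-5)]:
    print(name, f"{optimal_model_size(1e24, alpha, beta=0.7):.2e} params")
```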
For autoregressive generative models, the paper says that the rate at which the model size grows, given that your compute budget grows, is the same for every modality. So that's like a big unexplained thing. That was the biggest part of that paper, and no one's been able to explain why that is yet.\n\n**Michaël**: So there might be some universal law for how scaling goes for all modalities, and nobody knows why.\n\n**Ethan**: Yeah, just that. The rate at which your model size grows, given that your compute budget is increasing, is the same for every modality, which is kind of weird, and I haven't really heard a good explanation why.\n\n**Michaël**: Who do you think will win the video prediction race?\n\n## AGI Race: the Best Funding Model for Supercomputers\n\n**Ethan**: The person who wins AGI is whoever has the best funding model for supercomputers. Whoever has the best funding model for supercomputers wins. I mean, you gotta assume all the entities have the nerve, like, we're gonna do the biggest training run ever, but given that's your pre-filter, then it's just whoever has the best funding model for supercomputers.\n\n**Michaël**: So who is able to spend the most money? Would it be the USA, China, Russia?\n\n**Ethan**: Yeah, it might be something like that. I mean, my guess is China's already, like, they already have this joint fusion of industry, government, and academia via the Beijing Academy of AI in China. So my guess is at some point the Beijing Academy of AI will be like, look, we just trained a 10 to the 15 parameter model on all of YouTube and spent like $40 billion doing it. And at that point, Jared Kaplan's gonna be in the White House press conference room, like, look, see these straight lines on log-log plots, we gotta do this in the USA now.\n\n**Michaël**: Right, right. But how do you even spend that much money?\n\n**Ethan**: By making people think that if they don't, they'll no longer be the superpower of the world or whatever, like China will take over the world or whatever. It's only a fear thing.\n\n**Michaël**: From looking at the PaLM paper from Google, they seem pretty clever in how they use their compute.\n\n**Ethan**: You mean the thing where they have the two supercomputers that they split it across or whatever?\n\n**Michaël**: Right. TPU pods or something, they call it.\n\n**Ethan**: Yeah, yeah.\n\n**Michaël**: So it didn't seem like they spent more money than OpenAI. So they tried to be more careful somehow. So my model of people spending a lot of money is...\n\n**Ethan**: Like, most entities won't be willing to do the largest training run they can, given their funding.\n\n**Michaël**: So maybe China, but I see Google as being more helpful because they do publish it in papers, but maybe I'm wrong.\n\n**Ethan**: Jared Kaplan says Anthropic and OpenAI are kind of unique in that they're like, okay, we're gonna throw all our funding into this one big training run. But Google and Amazon, he said they have at least 10x or 100x the compute that OpenAI and Anthropic have, but they never use all the compute for single training runs. They just have all these different teams that use the compute for different things.\n\n**Michaël**: Yeah, so they have a different hypothesis.
OpenAI is like, scale is all that matters, somehow that's their secret, and-\n\n**Ethan**: Yeah, it's something like that.\n\n**Michaël**: You just scale things and we're going to get better results, and at Google maybe there's more bureaucracy and it's maybe harder to get a massive budget.\n\n## Private Research at Google and OpenAI\n\n**Ethan**: Yeah, it's weird though, 'cause Jeff Dean's latest blog post, the one that summarizes all of Google's research progress, mentions scaling and scaling laws a zillion times. So that almost implies that they're on the scale-is-all-you-need bandwagon too. So I don't know.\n\n**Michaël**: They probably know, but then the question is how private things are, and maybe there's stuff we don't really know.\n\n**Ethan**: I know a bunch of people at Google said, yeah, we have language models that are way bigger than GPT-3, but we just don't put 'em in papers.\n\n**Michaël**: So you've talked to them privately, or is it just something they said online?\n\n**Ethan**: I've just heard things from people, and that's feasible. I'm not disclosing where I got that information from, but that's just what I've heard from people.\n\n**Michaël**: So while we're on gossip, I think something that was going around on the internet right when GPT-3 was launched was that Google had reproduced it a few months afterwards, but they didn't really talk about it publicly. I'm not sure what to do with this information.\n\n**Ethan**: I know the DeepMind language model papers were a year old when they finally put 'em out on arXiv, like Gopher and Chinchilla. They had the language model finish training a year before the paper came out.\n\n**Michaël**: So we should just assume all those big companies are only releasing papers when they're not relevant anymore, when they already have the next one?\n\n**Ethan**: Maybe, but yeah, I don't know why it was delayed that much. I don't know what the story is, why it was delayed that long.\n\n**Michaël**: People want to keep their advantage, right?\n\n**Ethan**: I guess, but I feel like with GPT-3, they threw the paper on arXiv pretty soon after they finished training GPT-3.\n\n**Michaël**: How do you know?\n\n**Ethan**: Yeah, I don't. But maybe there was a big delay, I don't know.\n\n**Michaël**: So I think you could just retrace all of Sam Altman's tweets, and then you read the next paper like six months after and you're like, oh yeah, he tweeted about that. Sometimes the tweets are like, oh, AI is going to be wild, or, oh, neural networks are really capable of understanding. I think he tweeted that like six months ago, like when they got GPT-4.\n\n**Ethan**: At OpenAI, it's like, when Ilya tweeted the consciousness tweet, people were like, goddamn, GPT-4 must be crazy.\n\n**Michaël**: Yeah, "neural networks are in some ways slightly conscious."\n\n**Ethan**: Yeah, yeah, that was the funniest quote.\n\n**Michaël**: Yeah, I think people at OpenAI know things we don't know yet. They're all super hyped. And I think you mentioned as well, at least privately, that Microsoft has some deal with OpenAI, and so they need to make some amount of money before 2024.\n\n**Ethan**: Oh yeah, yeah. I mean, right, right.
When the Microsoft deal happened, Greg Brockman said, "Our plan is to train like a 100 trillion parameter model by 2024."\n\n**Michaël**: Okay, so that's in two years?\n\n**Ethan**: I mean, that was in 2019, but maybe they've changed their mind after the Chinchilla scaling law stuff, I don't know.\n\n## Why Ethan did not update that much from PaLM\n\n**Michaël**: Right. And so you were not impressed by PaLM being able to do logic about airplane things and explain jokes?\n\n**Ethan**: In my mind, the video scaling was a lot worse than text, basically. That's the main reason why I think AGI will probably take longer than five years, in my mind.\n\n**Michaël**: Okay, so if we just have text, it's not enough to have AGI. So if we have a perfect oracle that can talk like us but isn't able to do robotic things, then we don't have AGI.\n\n**Ethan**: Yeah.\n\n**Michaël**: Well, I guess my main thing is mostly coding. So if coding, like Codex or Copilot, gets really good, then everything accelerates and engineers become very productive, and then...\n\n**Ethan**: I guess if you could say engineers get really productive at making improvements in hardware, then maybe, like, I get how that would be, okay, then it's really fast. In my mind, at least currently, I don't see the hardware getting fast enough to be far enough along on the YouTube scaling law in less than five years from now.\n\n**Michaël**: I wasn't thinking about hardware, just humans Googling things and using these tools.\n\n**Ethan**: Yeah, yeah, I get what you're saying. You get the Codex thing, and then we use Codex or whatever to design hardware faster.\n\n**Michaël**: You could have like DALL-E, but for designing chips.\n\n**Ethan**: I mean, Nvidia already uses AI for designing their chips.\n\n**Michaël**: Doesn't that make you think of timelines of 10 years or closer?\n\n**Ethan**: Maybe 10 years, but not five years. The thing I'm trying to figure out is how to get a student researcher gig at some place so that I can just get access to the big compute during the PhD.\n\n**Michaël**: Oh, so that's your plan. Just get a lot of compute.\n\n**Ethan**: Yeah, I mean, as long as I have big compute, it doesn't matter where I do my PhD. I mean, it kind of matters if you're trying to start an AGI startup or whatever, but a safe, safe, safe AGI startup.\n\n**Michaël**: We're kind of on record, but I'm not sure if I'm going to cut this part. So you can say unsafe, it's fine.\n\n**Ethan**: Yeah, no, no, no. I mean, I don't even phrase it that way. I just phrase it as beneficial AGI.\n\n**Michaël**: You were spotted saying you wanted unsafe AGI the fastest possible.\n\n## Thinking about the Fastest Path\n\n**Ethan**: No, no, no. The way I phrase it, and I think I explained this last time, is that you have to be thinking in terms of the fastest path, because there are extremely huge economic and military incentives that are selecting for the fastest path, whether you want it to be that way or not. So you gotta be thinking in terms of, what is the fastest path, and then how do you minimize the alignment tax on that fastest path?
'Cause the fastest path is the way it's probably gonna happen no matter what, so it's about minimizing the alignment tax on that fastest path.\n\n**Michaël**: Or you can just throw nukes everywhere and try to make things slower?\n\n**Ethan**: Yeah, I guess, but I mean, the people who are on the fastest path will be more powerful, such that, I don't know, such that they'll deter all the nukes.\n\n**Michaël**: Okay, so you want to just join the winners. Like if you join the scaling team at Google.\n\n**Ethan**: The thing I've been trying to brainstorm about is who's gonna have the best funding model for supercomputers, 'cause that's the place to go, and you gotta try to minimize the alignment tax at that place.\n\n**Michaël**: Makes sense. So everyone should infiltrate Google.\n\n**Ethan**: Yeah, so whatever place ends up with the best funding model for supercomputers, try to get as many weird alignment people to infiltrate that place as possible.\n\n**Michaël**: So I'm kind of happy having a bunch of EA people at OpenAI now, because they're kind of minimizing the tax there, but...\n\n**Ethan**: Yeah, I kind of viewed it as all the EA people left, 'cause Anthropic was like the most extremist EA people at OpenAI. So I almost view it as EA leaving OpenAI when Anthropic happened.\n\n**Michaël**: Some other people came, right?\n\n**Ethan**: Like who?\n\n**Michaël**: I don't know. Richard Ngo.\n\n**Ethan**: Oh, okay, okay. Yeah, yeah.\n\n**Michaël**: There's like a team on predicting the future.\n\n**Ethan**: Yeah, yeah. I wanna know what the Futures Team does, 'cause that's like the most out-there team. I'm really curious as to what they actually do.\n\n**Michaël**: Maybe they use their GPT-5 model and predict things.\n\n**Ethan**: Right, 'cause I mean, you know about the Foresight Team at OpenAI, right?\n\n**Michaël**: They were trying to predict things as well, like forecasting.\n\n**Ethan**: Yeah, that's where all this scaling law stuff came from, the Foresight Team at OpenAI. They're gone now because they became Anthropic. But a team called the Futures Team almost has a similar vibe to a team called the Foresight Team. So I'm kind of curious.\n\n**Michaël**: But then they're just doing more governance things, optimal governance and maybe economics.\n\n**Ethan**: That's what it's about, governance and economics.\n\n**Michaël**: A guy like Richard Ngo is doing governance there.\n\n**Ethan**: Okay.\n\n**Michaël**: "Predicting how the future works", I think, is in his Twitter bio.\n\n**Ethan**: Yeah, yeah, but I mean, that's somewhat tangential to governance. That almost sounds like something Ray Kurzweil would say, "I'm predicting how the future goes."\n\n**Michaël**: My model is that, like, Sam Altman and them already have GPT-4. They published GPT-3 in 2020, so it's been like two years.\n\n**Ethan**: Yeah.\n\n**Michaël**: And they've been talking in their Q&A about, like, treacherous results or something a year ago. So now they must have access to something very crazy, and they're just trying to think, how do we operate with DALL-E 2 and the GPT-4 they have in private, and how do they do something without, like, harming the world? I don't know.
Maybe they're just trying to predict how to make the most money with their API or something.\n\n**Ethan**: You're saying if they release it, it's like an infohazard? 'Cause in my mind, GPT-4 still isn't capable enough to F up the world, but you could argue it's capable enough to be an infohazard or something.\n\n**Michaël**: Imagine you have access to something that has the same kind of gap as between GPT-2 and GPT-3, but for GPT-4, on understanding and being general. And you don't want everyone else to copy your work. So you're just going to keep it for yourself for some time.\n\n## A Zillion Language Model Startups from ex-Googlers\n\n**Ethan**: Yeah, but I feel like that strategy is already kind of screwed. Like, you know about how a zillion Googlers have left Google to start large language model startups? There are literally three large language model startups by ex-Googlers now. OpenAI is like a small actor in this now, because there are multiple large language model startups founded by ex-Googlers that were all founded in the last six months. There's a zillion VCs throwing money at large language model startups right now. The funniest thing, like Leo Gao, he's like, we need more large language model startups, because the more startups we have, the more it splits up all the funding, so no organization can have all the funding to get the really big supercomputer. So we just need thousands of AI startups, so no one can hoard all the funding to get the really big language model.\n\n**Ethan**: That's the, yeah, with that model, you just do open source. So there are more startups, and so all the funding gets split, I guess.\n\n**Ethan**: Yeah, you could view OpenAI as the extra big brain move: we need to release the idea of large language models onto the world, such that no organization could have enough compute, such that all the compute gets more split up, 'cause a zillion large language model startups will show up all at once.\n\n**Michaël**: Yeah, that's the best idea ever. So do you have other gossip besides Google? Did you post something on Twitter about people leaving Google?\n\n**Ethan**: Yeah, I posted a bunch of stuff. Well, I mean, you also saw the... I mean, it's three startups: adept.ai, character.ai, and inflection.ai. They're all large language model startups founded by ex-Googlers that got a zillion dollars in VC funding to scale large language models.\n\n**Michaël**: What's a zillion dollars, like?\n\n**Ethan**: Like greater than 60 million. Each of them got greater than 60 million.\n\n**Michaël**: So do they know something we don't know? And they just get money to replicate what Google does?\n\n**Ethan**: Well, I mean, most of them were famous people, like the founder of DeepMind's scaling team. Another one is the inventor of the Transformer. Another one was founded by a different person on the Transformer paper. So I mean, in some ways, they have more clout than OpenAI had or whatever.\n\n**Michaël**: But they don't have the engineering and all the infrastructure.\n\n**Ethan**: No, they kind of do. A lot of them were like the head of engineering for scaling teams at DeepMind or Google.\n\n**Michaël**: So there's like another game happening in private at Google, and they've been scaling huge models for two years.
and they're just like...\n\n**Ethan**: Yeah, something like that.\n\n**Michaël**: Starting startups with their knowledge, and they're just scaling, and peasants like us talk about papers that are released one year later, when they're already outdated.\n\n**Ethan**: Yeah, yeah. I mean, I don't know how long these delays are. I guess you could view it as a delay thing, 'cause in my mind it's just, yeah, you're right, you're right. It's probably delayed by a year, yeah.\n\n**Michaël**: So yeah, that makes me less confident about-\n\n**Ethan**: Oh shit. You look like a clone of Lex Fridman from the side.\n\n**Michaël**: What?\n\n**Ethan**: When your face is sideways, you look like a clone of Lex Fridman.\n\n**Michaël**: Yeah.\n\n**Ethan**: Like, 'cause your haircut's identical to his.\n\n**Michaël**: I'll take that as a compliment... I started working out. So yeah, Ethan Caballero, what's the meaning of life?\n\n**Ethan**: Probably just maximize the flourishing of all sentient beings, like a very generic answer.\n\n**Michaël**: Right. So I've done my Lex Fridman question. Now I'm just basically him.\n\n**Ethan**: Yeah.\n\n## Ethan's Scaling Journey\n\n**Michaël**: Maybe we can just go back to stuff we know more about, like your work, because you've been doing some work on scaling.\n\n**Ethan**: Yeah.\n\n**Michaël**: So, more generally, why are you interested in scaling, and how did you start doing research on that?\n\n**Ethan**: I mean, I knew about the Baidu paper when it came out. I remember I was at this Ian Goodfellow talk in 2017, and he was hyped about the Baidu paper when it came out.\n\n**Michaël**: Which paper?\n\n**Ethan**: "Deep Learning Scaling is Predictable, Empirically." Yeah, it came out in 2017, and then that was just on the back burner, and I kind of stopped paying attention to it after a while. And then Aran Komatsuzaki was like, no, dude, this is the thing, this is gonna take over everything, and this was in 2019 when he was saying that. So then, when the scaling laws stuff got re-popularized through the OpenAI papers, I kind of caught onto it a little bit early via talking with Aran.\n\n**Michaël**: I think 2019 was also when GPT-2 was introduced.\n\n**Ethan**: But that was kind of before the scaling law stuff got popularized.\n\n**Michaël**: Right, the scaling laws paper is 2020.\n\n**Ethan**: Yeah, the very end of 2020. All right, no, no, no. Oh no, no. The scaling laws paper was the very end of...
It was the very beginning of 2020.\n\n**Michaël**: And you were already on this scaling train since 2017.\n\n**Ethan**: I was aware of it, but I was kind of just neutral about it until probably the middle of 2019.\n\n## Making progress on an Academic budget, Scaling Laws Research\n\n**Michaël**: And now you're interested in scaling because it's useful for predicting where the whole field of AI is going.\n\n**Ethan**: And also, I think people underestimate how easy it is to be contrived if you're not paying attention to scaling trends and trying to extrapolate what the compute and data budgets will be like five years from now.\n\n**Michaël**: Yeah, if you're a huge company with a big budget. But maybe if you're just a random company, you don't really care about scaling laws that much.\n\n**Ethan**: Yeah, yeah. Or if you're in academia currently or whatever: a zillion papers at fancy conferences are like, here's our inductive bias that helps on our puny academic budget, and we didn't test any of the scaling behavior to see if it's still useful when you're training a trillion parameter model on all of YouTube or whatever.\n\n**Michaël**: You're on an academic budget, as far as I know. So how do you manage to do experiments on scaling?\n\n**Ethan**: There's the scaling law narrative that's like, oh, you don't need the big budget, because you can just predict what the outcomes will be for the large-scale experiments. But at least when that narrative got popularized, it was mostly for upstream scaling. The thing everyone cares about is downstream scaling.\n\n## AI Alignment as an Inverse Scaling Problem\n\n**Michaël**: Yeah, so if we go back for a minute to your work on alignment, how do you think your work on scaling or generalization fits with the alignment problem?\n\n**Ethan**: Basically, all alignment, and I guess this triggers the hell outta some people, but all alignment is inverse scaling problems. It's all downstream inverse scaling problems. In my mind, all of alignment is stuff that doesn't improve monotonically as compute, data, and parameters increase.\n\n**Michaël**: There's a difference between not improving and inverse scaling. Inverse scaling goes badly, right?\n\n**Ethan**: Yeah, yeah. But I said "not improving monotonically" because sometimes there are certain things where it improves for a while, but then at a certain point it gets worse. Interpretability and controllability are the two kind of thought-experiment things where you could imagine they get more interpretable and more controllable for a long time, until they get superintelligent. At that point, they're less interpretable and less controllable.\n\n**Michaël**: Do we have benchmarks for controllability?\n\n**Ethan**: Just benchmarks that rely on prompting are a form of benchmark for controllability.\n\n**Michaël**: And kind of to summarize your take: if we were able to just scale everything well and not have this inverse scaling problem, we would get interpretability and controllability and everything else just by good scaling of our models. And so we'd get alignment kind of by default, for free?\n\n**Ethan**: Yeah. I mean, I guess, I mean, there's stuff besides interpretability and controllability, those are just the examples.
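To make the "doesn't improve monotonically" criterion concrete, here is a minimal sketch, my own illustration with made-up numbers, that scans a downstream metric measured at several model scales and reports where it starts getting worse.

```python
# Flag the first model scale at which a downstream metric stops improving.
# Scales and scores below are hypothetical, purely for illustration.

def first_inverse_scaling_point(scales, scores, higher_is_better=True):
    """Return the scale where the metric first degrades, or None if monotone."""
    for prev, cur, scale in zip(scores, scores[1:], scales[1:]):
        got_worse = (cur < prev) if higher_is_better else (cur > prev)
        if got_worse:
            return scale
    return None

model_sizes = [1e8, 1e9, 1e10, 1e11, 1e12]      # parameters
truthfulness = [0.42, 0.55, 0.63, 0.61, 0.52]   # hypothetical downstream scores

turn = first_inverse_scaling_point(model_sizes, truthfulness)
if turn is None:
    print("metric improved monotonically over the measured scales")
else:
    print(f"metric starts degrading at roughly {turn:.0e} parameters")
```

Of course, as the next exchange emphasizes, this only helps if the downstream evaluation itself is the right thing to be measuring.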
Like you asked what's an example. The reason I phrased it, when I said inverse scaling, as "things that don't improve monotonically" is that, yes, there are obvious examples where it gets worse the entire time, but there are some you could imagine where it's good for a long time, and then at a certain point it starts getting drastically worse. I said all of alignment can be viewed as a downstream scaling problem. The hard part, as Dan Hendrycks and Jacob Steinhardt say, is measurement, finding out what the downstream evaluations are. 'Cause say you've got some fancy deceptive AI that wants to do a treacherous turn or whatever. How do you even find the downstream evaluations to know whether it's gonna try to deceive you? When I say it's all a downstream scaling problem, that assumes you have the downstream test, the downstream thing that you're evaluating it on. But if it's some weird deceptive thing, it's hard to even find what the downstream thing to evaluate it on is, to know whether it's trying to deceive or whatever.\n\n**Michaël**: So there's no test loss for this deception. We don't know for sure how to measure it and have a clear benchmark for it.\n\n**Ethan**: Yeah, it's tricky. I mean, some people say, well, that's why you need better interpretability, you need to find the deception circuits or whatever.\n\n**Michaël**: Knowing that we don't yet have all the different benchmarks and metrics for misalignment, don't you think that your work on scaling can be bad, because you're actually speeding up timelines?\n\n## Predicting scaling laws, Useful AI Alignment research\n\n**Ethan**: Yeah, I get the infohazard point of view, but in my mind, whether you wanna do capabilities or alignment stuff that stands the test of time, you need really good downstream scaling prediction. Say you came up with some alignment method that mitigates inverse scaling: you need the actual functional form to know whether that thing will keep mitigating inverse scaling when you get to a trillion parameters or whatever. You get what I mean?\n\n**Michaël**: I get you, but on a differential progress mindset, Jared Kaplan or someone else will come up with those functional forms without your work.\n\n**Ethan**: I don't know, I don't know. I mean, that's the thing though, Anthropic (ERRATUM: it's actually a gift, and the merch was not sent at the time of the podcast) got that paper, "Predictability and Surprise in Large Generative Models", and they're just like, it's unpredictable, we can't predict it. And I'm like, ah, you guys, nah, I don't believe it.\n\n**Michaël**: Right, so you're kind of publishing papers while you're ahead, because those companies are not publishing their results?\n\n**Ethan**: I don't know. I don't.
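The "functional form" being argued about is, concretely, a fitted scaling curve that you extrapolate beyond the scales you can afford to run. Here is a minimal sketch with synthetic data; the points and constants are made up, and real fits care much more about confidence intervals and the choice of functional form.

```python
# Fit a saturating power law L(C) = a * C**(-b) + c to losses from small runs,
# then extrapolate to a much larger compute budget. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def scaling_curve(compute, a, b, c):
    return a * compute ** (-b) + c   # c acts as an irreducible-loss floor

compute = np.array([1e17, 3e17, 1e18, 3e18, 1e19])   # cheap runs
loss = 8.0 * compute ** (-0.05) + 1.7                 # pretend measurements

(a, b, c), _ = curve_fit(scaling_curve, compute, loss, p0=[5.0, 0.1, 1.0], maxfev=20000)
print(f"fitted a={a:.2f}, b={b:.3f}, c={c:.2f}")
print(f"extrapolated loss at 1e24 FLOPs: {scaling_curve(1e24, a, b, c):.3f}")
```

Whether such an extrapolation holds downstream, for the quantity you actually care about, is exactly the contested part.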
Yeah, I don't even know if Anthropic does the delay-type stuff that OpenAI supposedly does, but maybe they do, I don't know.\n\n**Michaël**: And you're just, like, creating an infohazard by publishing those laws?\n\n**Ethan**: I mean, in my mind, I get the argument, but if you wanna do capabilities work that stands the test of time, or alignment work that stands the test of time, in my mind, everything that people are doing in alignment will be very contrived without the functional form too. So alignment can't make progress without it either. You get what I mean?\n\n**Michaël**: Another kind of view on that is that if people do impressive deep learning or ML work and they're also interested in alignment, it's still a good thing. Take EleutherAI: even if they open source their models, because they did something impressive and they talk openly about alignment on their Discord, it gets a lot of people who are very smart interested in alignment. So if you publish something and you become a famous researcher or something in two years, and you talk about alignment then, it's fine.\n\n**Ethan**: I sort of tweet stuff about alignment, I think. Yeah, I mean, I retweet stuff about alignment at least.\n\n## Ajeya Cotra's report, Compute Trends\n\n**Michaël**: So if we go back to thinking about predicting future timelines and scaling, I've read somewhere that you think that in the next few years we might get a billion or a trillion times more compute, like 12 orders of magnitude more.\n\n**Ethan**: Yeah, I mean, the Ajeya Cotra report said it's gonna max out at probably 10 to the 12 times as much compute as the amount of compute in 2020, probably around 2070 or something like that. The one issue I have with Ajeya's model is, what does she do? It's flops per dollar, times willingness to spend, equals the total flops that are allocated to pre-training runs. The problem is, for the big foundation models, like the 10 to the 15 parameter models of the future or whatever, you're probably gonna need high memory bandwidth and compute bandwidth between all the compute, which means it has to be on a supercomputer. So it's not just the flops. What really matters, at least if you're assuming it's big 10 to the 15 parameter foundation models or whatever, is the speed of the fastest supercomputer, not just the total flops that you can allocate, because if all the flops don't have good communication between them, then they aren't really useful for training a 10 to the 15 parameter model or whatever. Once you get to 10 to the 15 parameters, there isn't much reason to go beyond that. And at that point, you just have multiple models with 10 to the 15 parameters, and they're doing some crazy open-ended, Ken Stanley stuff in a multi-agent simulator after you do that. Like the way I imagine it is, you do the 10 to the 15 parameter model trained on all of YouTube, and then after that, you'll have hundreds of 10 to the 15 parameter models that all just duke it out in a Ken Stanley open-ended simulator to get the rest of the capabilities or whatever. And once they're in the Ken Stanley open-ended simulator, then you don't need high compute bandwidth between all those individual 10 to the 15 parameter models duking it out in the simulator.
Each one only needs high compute bandwidth between its own parameters. It doesn't need high compute bandwidth between itself and the other agents or whatever. And so there, you could use all the flops for the multi-agent simulation, but you only need high compute bandwidth within each agent.\n\n**Michaël**: So you need a lot of bandwidth to train models because of the parallelization thing, but you only need flops to simulate different things at the same time?\n\n**Ethan**: Yeah, you only need high compute bandwidth within an individual brain, but if you have multiple brains, then you don't need high compute bandwidth between the brains.\n\n**Michaël**: And what was that kind of simulator you were talking about, the Ken Stanley one?\n\n**Ethan**: Like Ken Stanley, the open-endedness guy.\n\n**Michaël**: I haven't seen that.\n\n**Ethan**: Ken is like the "myth of the objective", open-endedness, like Ken Stanley's and Jeff Clune's stuff, all of that. It's like, I don't know, just Google "Ken Stanley open-ended" at some point. You've probably heard of it, but it's not registering what I'm referencing.\n\n## Optimism, Conclusion on Alignment\n\n**Michaël**: Okay, so maybe one kind of last open-ended question. On a scale from Paul Christiano to Eliezer Yudkowsky to Sam Altman, how optimistic are you?\n\n**Ethan**: Definitely not like Eliezer, or a doomer-type person. I guess probably Paul Christiano is most similar. I mean, I feel like Paul Christiano is in the middle of the people you just said.\n\n**Michaël**: Right, yeah. So you are less optimistic than Sam Altman?\n\n**Ethan**: Well, yeah. I mean, basically, I think deceptive AI is probably gonna be really hard.\n\n**Michaël**: So do you have one last monologue or sentence to say about why scaling is a solution for all alignment problems?\n\n**Ethan**: Just that all alignment can be viewed as an inverse scaling problem. It all revolves around mitigating inverse scaling, but you also have to make sure you have the right downstream things that you're evaluating the inverse scaling on. And part of what makes it hard is that you might need to do fancy, counterintuitive thought experiments on the Alignment Forum to find what the downstream tests are that you should be evaluating, like whether or not there's inverse scaling behavior on those.\n\n**Michaël**: Awesome, so we got the good version as a last sentence, and that's our conclusion. Thanks Ethan for being on the show.", "filename": "Ethan Caballero-by The Inside View-date 20220505.md", "id": "1ca056d01e892225ca897b21c06a03f6", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "How sure are we about this AI stuff _ Ben Garfinkel _ EA Global - London 2018-by Centre for Effective Altruism-video_id E8PGcoLDjVk-date 20190204", "authors": ["Ben Garfinkel"], "date_published": "2019-02-04", "text": "# Ben Garfinkel How sure are we about this AI stuff - EA Forum\n\n_It is increasingly clear that artificial intelligence is poised to have a huge impact on the world, potentially of comparable magnitude to the agricultural or industrial revolutions. But what does that actually mean for us today? Should it influence our behavior?
In this talk from EA Global 2018: London, Ben Garfinkel makes the case for measured skepticism._\n\n## The Talk\n\nToday, work on risks from artificial intelligence constitutes a noteworthy but still fairly small portion of the EA portfolio.\n\n![](https://images.ctfassets.net/ohf186sfn6di/ZsVNHCONV87govWeW1mzj/4a0ee1ce7ff02e0c9ee346f639d528e9/How_sure_are_we_about_this_AI_stuff_.jpg)\n\nOnly a small portion of donations made by individuals in the community are targeted at risks from AI. Only about 5% of the grants given out by the Open Philanthropy Project, the leading grant-making organization in the space, target risks from AI. And in surveys of community members, most do not list AI as the area that they think should be most prioritized.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6Y2EK3CXjTib0jmytnh6bn/51159295f2615db856b1342ce09f9dd7/How_sure_are_we_about_this_AI_stuff___1_.jpg)\n\nAt the same time though, work on AI is prominent in other ways. Leading career advising and community building organizations like 80,000 Hours and CEA often highlight careers in AI governance and safety as especially promising ways to make an impact with your career. Interest in AI is also a clear element of community culture. And lastly, I think there's also a sense of momentum around people's interest in AI. I think especially over the last couple of years, quite a few people have begun to consider career changes into the area, or made quite large changes in their careers. I think this is true more for work around AI than for most other cause areas.\n\n![](https://images.ctfassets.net/ohf186sfn6di/5tGd4OLBwS1YILRm94IQrY/3d821b9e681e7cf5950d6c319f3a3017/How_sure_are_we_about_this_AI_stuff___2_.jpg)\n\nSo I think all of this together suggests that now is a pretty good time to take stock. It's a good time to look backwards and ask how the community first came to be interested in risks from AI. It's a good time look forward and ask how large we expect the community's bet on AI to be: how large a portion of the portfolio we expect AI to be five or ten years down the road. It's a good time to ask, are the reasons that we first got interested in AI still valid? And if they're not still valid, are there perhaps other reasons which are either more or less compelling?\n\n![](https://images.ctfassets.net/ohf186sfn6di/4qQkD7b4iyiDcA9FJ2TUAH/3ae30c9b9373184785a4989ea6a84229/How_sure_are_we_about_this_AI_stuff___3_.jpg)\n\nTo give a brief talk roadmap, first I'm going to run through what I see as an intuitively appealing argument for focusing on AI. Then I'm going to say why this argument is a bit less forceful than you might anticipate. Then I'll discuss a few more concrete arguments for focusing on AI and highlight some missing pieces of those arguments. And then I'll close by giving concrete implications for cause prioritization.\n\n## The intuitive argument\n\nSo first, here's what I see as an intuitive argument for working on AI, and that'd be the sort of, \"AI is a big deal\" argument.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7p0BlYuELpna1ITNlwyprB/4cdb5922dda5b7b270a62ad092bee27b/How_sure_are_we_about_this_AI_stuff___4_.jpg)\n\nThere are three concepts underpinning this argument:\n\n1. The future is what matters most in the sense that, if you could have an impact that carries forward and affects future generations, then this is likely to be more ethically pressing than having impact that only affects the world today.\n2. 
Technological progress is likely to make the world very different in the future: that just as the world is very different than it was a thousand years ago because of technology, it's likely to be very different again a thousand years from now.\n3. If we're looking at technologies that are likely to make especially large changes, then AI stands out as especially promising among them.\n\nSo given these three premises, we have the conclusion that working on AI is a really good way to have leverage over the future, and that shaping the development of AI positively is an important thing to pursue.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7mp0cWrXHVycsTrA5dYo8N/86be241a8b87bc02c026ecdfebdcf4a1/How_sure_are_we_about_this_AI_stuff___5_.jpg)\n\nI think that a lot of this argument works. I think there are compelling reasons to try and focus on your impact in the future. I think that it's very likely that the world will be very different in the far future. I also think it's very likely that AI will be one of the most transformative technologies. It seems at least physically possible to have machines that eventually can do all the things that humans can do, and perhaps do all these things much more capably. If this eventually happens, then whatever their world looks like, we can be pretty confident the world will look pretty different than it does today.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2Itddi3KDH9et3k7fEfmaN/c2408b1dc8d01a6055b37ff1b3ead4cf/How_sure_are_we_about_this_AI_stuff___6_.jpg)\n\nWhat I find less compelling though is the idea that these premises entail the conclusion that we ought to work on AI. Just because a technology will produce very large changes, that doesn't necessarily mean that working on that technology is a good way to actually have leverage over the future. Look back at the past and consider the most transformative technologies that have ever been developed. So things like electricity, or the steam engine, or the wheel, or steel. It's very difficult to imagine what individuals early in the development of these technologies could have done to have a lasting and foreseeably positive impact. An analogy is sometimes made to the industrial revolution and the agricultural revolution. The idea is that in the future, impacts of AI may be substantial enough that there will be changes that are comparable to these two revolutionary periods throughout history.\n\n![](https://images.ctfassets.net/ohf186sfn6di/3zyPurKZmDrqEPr6ubFXbs/1d7b15e0f1644f740fd539ff2a053648/How_sure_are_we_about_this_AI_stuff___7_.jpg)\n\nThe issue here, though, is that it's not really clear that either of these periods actually were periods of especially high _leverage_. If you were, say, an Englishman in 1780, and trying to figure out how to make this industry thing go well in a way that would have a lasting and foreseeable impact on the world today, it's really not clear you could have done all that much. The basic point here is that from a long-termist perspective, what matters is leverage. This means finding something that could go one way or the other, and that's likely to stick in a foreseeably good or bad way far into the future. Long-term importance is perhaps a necessary condition for leverage, but certainly not a sufficient one, and it's a sort of flawed indicator in its own right.\n\n## Three concrete cases\n\nSo now I'm going to move to three somewhat more concrete cases for potentially focusing on AI. 
You might have a few concerns that lead you to work in this area:\n\n![](https://images.ctfassets.net/ohf186sfn6di/5cQPhQtAosME1MvnHP8zvr/1fe32c8a0ffe87221492dcf69108af88/How_sure_are_we_about_this_AI_stuff___8_.jpg)\n\n1. **Instability.** You might think that there are certain dynamics around the development or use of AI systems that will increase the risk of permanently damaging conflict or collapse, for instance war between great powers.\n2. **Lock-in.** Certain decisions regarding the governance or design of AI systems may permanently lock in, in a way that propagates forward into the future in a lastingly positive or negative way.\n3. **Accidents.** It might be quite difficult to use future systems safely. And that there may be accidents that occur in the future with more advanced systems that cause lasting harm that again carries forward into the future.\n\n### Instability\n\n![](https://images.ctfassets.net/ohf186sfn6di/5LTTWAR6vTDfNNip1cyJ8I/1262a65298d8e61959755c56a075aac6/How_sure_are_we_about_this_AI_stuff___9_.jpg)\n\nFirst, the case from instability. A lot of the thought here is that it's very likely that countries will compete to reap the benefits economically and militarily from the applications of AI. This is already happening to some extent. And you might think that as the applications become more significant, the competition will become greater. And in this context, you might think that this all increases the risk of war between great powers. So one concern here is that there may be a potential for transitions in terms of what countries are powerful compared to which other countries.\n\nA lot of people in the field of international security think that these are conditions under which conflict becomes especially likely. You might also be concerned about changes in military technology that, for example, increase the odds of accidental escalation, or make offense more favorable compared to defense. You may also just be concerned that in periods of rapid technological change, there are greater odds of misperception or miscalculation as countries struggle to figure out how to use the technology appropriately or interpret the actions of their adversaries. Or you could be concerned that certain applications of AI will in some sense damage domestic institutions in a way that also increases instability. That rising unemployment or inequality might be quite damaging, for example. And lastly, you might be concerned about the risks from terrorism, that certain applications might make it quite easy for small actors to cause large amounts of harm.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2TGpTLYHPCnXz7zcTmiROr/6054c97bcf3668966ba2ffaf3cae6631/How_sure_are_we_about_this_AI_stuff___10_.jpg)\n\nIn general, I think that many of these concerns are plausible and very clearly important. Most of them have not received very much research attention at all. I believe that they warrant much, much more attention. At the same time though, if you're looking at things from a long-termist perspective, there are at least two reservations you could continue to have. The first is just we don't really know how worried to be. These risks really haven't been researched much, and we shouldn't really take it for granted that AI will be destabilizing. It could be or it couldn't be. 
We just basically have not done enough research to feel very confident one way or the other.\n\nYou may also be concerned, if you're really focused on long term, that lots of instability may not be sufficient to actually have a lasting impact that carries forward through generations. This is a somewhat callous perspective. If you really are focused on the long term, it's not clear, for example, that a mid-sized war by historical standards would be sufficient to have a big long term impact. So it may be actually a quite high bar to achieve a level of instability that a long-termist would really be focused on.\n\n### Lock-in\n\n![](https://images.ctfassets.net/ohf186sfn6di/1Nrumzh71ltZw6f4zpR0Qt/2809264d0109c2314dc13e51a71cd752/How_sure_are_we_about_this_AI_stuff___11_.jpg)\n\nThe case from lock-in I'll talk about just a bit more briefly. Some of the intuition here is that certain decisions have been made in the past about, for instance the design of political institutions, software standards, or certain outcomes of military or economic competitions, which seem to produce outcomes that carry forward into the future for centuries. Some examples would be the design of the US Constitution, or the outcome of the Second World War. You might have the intuition that certain decisions about the governance or design of AI systems, or certain outcomes of strategic competitions, might carry forward into the future, perhaps for even longer periods of time. For this reason, you might try and focus on making sure that whatever locks in is something that we actually want.\n\n![](https://images.ctfassets.net/ohf186sfn6di/308qz7pteWgeXxHeTGde0E/2f58fc3853fc0d81e516fbd29e892b49/How_sure_are_we_about_this_AI_stuff___12_.jpg)\n\nI think that this is a somewhat difficult argument to make, or at least it's a fairly non-obvious one. I think the standard skeptical reply is that with very few exceptions, we don't really see many instances of long term lock-in, especially long term lock-in where people really could have predicted what would be good and what would be bad. Probably the most prominent examples of lock-in are choices around major religions that have carried forward for thousands of years. But it's quite hard to find examples that last for hundreds of years. Those seem quite few. It's also generally hard to judge what you would want to lock in. If you imagine fixing some aspect of the world, as the rest of world changes dramatically, it's really hard to guess what would actually be good under quite different circumstances in the future. I think my general feeling on this line of argument is that, I think it's probably not that likely that we should expect any truly irreversible decisions around AI to be made anytime soon, even if progress is quite rapid, although other people certainly might disagree.\n\n### Accidents\n\n![](https://images.ctfassets.net/ohf186sfn6di/6oUQxfjwmoB7VqDxvDLEgG/f71e293bd9ab4bd8325cb45025c4a5f5/How_sure_are_we_about_this_AI_stuff___13_.jpg)\n\nLast, we have the case from accidents. The idea here is that, we know that there are certain safety engineering challenges around AI systems. It's actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. This has been laid out most clearly in the paper 'Concrete Problems in AI Safety,' from a couple of years ago by Dario Amodei and others. I'd recommend for anyone interested in safety issues to take a look at that paper. 
Then, given the existence of these safety challenges, and given the expectation that AI systems will become much more powerful in the future or be given much more responsibility, we might expect that these safety concerns will become more serious as time goes on.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2sn6sL0PAjzmIhVnQg6no9/881544f686d1673dd547dcc363b49cf2/How_sure_are_we_about_this_AI_stuff___14_.jpg)\n\nAt the limit, you might worry that these safety failures could become so extreme that they could perhaps derail civilization on the whole. In fact, there is a bit of writing arguing that we should be worried about this sort of existential safety failure. The main work arguing for this is still the book 'Superintelligence' by Nick Bostrom, published in 2014. Before this, essays by Eliezer Yudkowsky were the main source of arguments along these lines. And then a number of other writers such as Stuart Russell or, a long time ago, I. J. Good or David Chalmers have also expressed similar concerns, albeit more briefly. The writing on existential safety accidents definitely isn't homogeneous, but often there's a sort of similar narrative that appears in these essays expressing these concerns. There's this basic standard disaster scenario that has a few common elements.\n\n![](https://images.ctfassets.net/ohf186sfn6di/1Wy6LheSrW6jOhKIus7xkW/cc39bc24ab78c4d6a20179a55c673ab7/How_sure_are_we_about_this_AI_stuff___15_.jpg)\n\nFirst, the author imagines that a single AI system experiences a massive jump in capabilities. Over some short period of time, a single system becomes much more general or much more capable than any other system in existence, and in fact any human in existence. Then, given the system, researchers specify a goal for it. They give it some input which is meant to communicate what behavior it should engage in. The goal ends up being something quite simple, and the system goes off and single-handedly pursues this very simple goal in a way that violates the full nuances of what its designers intended.\n\nThere's a classic sort of toy example, which is often used to illustrate this concern. We imagine that some poor paperclip factory owner receives a general super-intelligent AI on his doorstep. There's a slot to stick a goal in. He writes down the goal \"maximize paperclip production,\" puts it in the AI system, and then lets it go off and do that. The system figures out the best way to maximize paperclip production is to take over all the world's resources, just to plow them all into paperclips. And the system is so capable that designers can do nothing to stop it, even though it's doing something that they actually really do not intend.\n\n![](https://images.ctfassets.net/ohf186sfn6di/FUGgsz0vO3iVlUPhvQvuP/36a6d39bfbdcae267e65a9995d97595a/How_sure_are_we_about_this_AI_stuff___16_.jpg)\n\nI have some general concerns about the existing writing on existential accidents. So first there's just still very little of it. It really is just mostly _Superintelligence_ and essays by Eliezer Yudkowsky, and then sort of a handful of shorter essays and talks that express very similar concerns. There's also been very little substantive written criticism of it. Many people have expressed doubts or been dismissive of it, but there's very little in the way of skeptical experts who are sitting down and fully engaging with it, and writing down point by point where they disagree or where they think the mistakes are.
Most of the work on existential accidents was also written before large changes in the field of AI, especially before the recent rise of deep learning, and also before work like 'Concrete Problems in AI Safety,' which laid out safety concerns in a way which is more recognizable to AI researchers today.\n\nMost of the arguments for existential accidents often rely on these sort of fuzzy, abstract concepts like optimization power or general intelligence or goals, and toy thought experiments like the paper clipper example. And certainly thought experiments and abstract concepts do have some force, but it's not clear exactly how strong a source of evidence we should take these as. Then lastly, although many AI researchers actually have expressed concern about existential accidents, for example Stuart Russell, it does seem to be the case that many, and perhaps most AI researchers who encounter at least abridged or summarized versions of these concerns tend to bounce off them or just find them not very plausible. I think we should take that seriously.\n\nI also have some more concrete concerns about writing on existential accidents. You should certainly take these concerns with a grain of salt because I am not a technical researcher, although I have talked to technical researchers who have essentially similar or even the same concerns. The general concern I have is that these toy scenarios are quite difficult to map onto something that looks more recognizably plausible. So these scenarios often involve, again, massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. This is a wooly issue. I would recommend checking out writing by Katja Grace or Paul Christiano online. That sort of lays out some concerns about the plausibility of massive jumps.\n\nAnother element of these narratives is, they often imagine some system which becomes quite generally capable and then is given a goal. In some sense, this is the reverse of the way machine learning research tends to look today. At least very loosely speaking, you tend to specify a goal or some means of providing feedback. You direct the behavior of a system and then allow it to become more capable over time, as opposed to the reverse. It's also the case that these toy examples stress the nuances of human preferences, with the idea being that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But it's also the case in machine learning that we can train lots of systems to engage in behaviors that are actually quite nuanced and that we can't specify precisely. Recognizing faces from images is an example of this. So is flying a helicopter.\n\nIt's really not clear exactly why human preferences would be so fatal to understand. So it's quite difficult to figure out how to map the toy examples onto something which looks more realistic.\n\n## Caveats\n\nSome general caveats on the concerns expressed. None of my concerns are meant to be decisive. I've found, for example, that many people working in the field of AI safety in fact list somewhat different concerns as explanations for why they believe the area is very important. There are many more arguments that I believe are shared individually, or inside people's heads and currently unpublished. I really can't speak exactly to how compelling these are. 
The main point I want to stress here is essentially that when it comes to the writing which has actually been published, and which is out there for analysis, I don't think it's necessarily that forceful, and at the very least it's not decisive.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7d0c2thDgSda4pA0UMa6eL/76337ea2edf400ed750e84801d4d66c5/How_sure_are_we_about_this_AI_stuff___17_.jpg)\n\nSo now I have some brief, practical implications, or thoughts on prioritization. You may think, from all the stuff I've just said, that I'm quite skeptical about AI safety or governance as areas to work in. In fact, I'm actually fairly optimistic. My reasoning here is that I really don't think that there are any slam-dunks for improving the future. I'm not aware of any single cause area that seems very, very promising from the perspective of offering high assurance of long-term impact. I think that the fact that there are at least plausible pathways for impact by working on AI safety and AI governance puts it head and shoulders above most areas you might choose to work in. And AI safety and AI governance also stand out for being pretty extraordinarily neglected.\n\nDepending on how you count, there are probably fewer than a hundred people in the world working on technical safety issues or governance challenges with an eye towards very long-term impacts. And that's just truly, very surprisingly small. The overall point though, is that the exact size of the bet that EA should make on artificial intelligence, sort of the size of the portfolio that AI should take up will depend on the strength of the arguments for focusing on AI. And most of those arguments still just aren't very fleshed out yet.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7d4iE4NftlFqYUBEDvnNKA/c42643b32671fd613cb6616d8af8535b/How_sure_are_we_about_this_AI_stuff___18_.jpg)\n\nI also have some broader epistemological concerns which connect to the concerns I've expressed. I think it's also possible that there are social factors relating to EA communities that might bias us to take an especially large interest in AI.\n\nOne thing is just that AI is especially interesting or fun to talk about, especially compared to other cause areas. It's an interesting, kind of contrarian answer to the question of what is most important to work on. It's surprising in certain ways. And it's also now the case that interest in AI is to some extent an element of community culture. People have an interest in it that goes beyond just the belief that it's an important area to work in. It definitely has a certain role in the conversations that people have casually, and what people like to talk about. I think these wouldn't necessarily be that concerning, except people sometimes also think that we can't really count on external feedback to push us back if we sort of drift a bit.\n\nSo first it just seems to be empirically the case that skeptical AI researchers generally will not take the time to sit down and engage with all of the writing, and then explain carefully why they disagree with our concerns. So we can't really expect that much external feedback of that form. People who are skeptical or confused, but not AI researchers, or just generally not experts may be concerned about sounding ignorant or dumb if they push back, and they also won't be inclined to become experts. We should also expect generally very weak feedback loops. 
If you're trying to influence the very long-run future, it's hard to tell how well you're doing, just because the long-run future hasn't happened yet and won't happen for a while.\n\nGenerally, I think one thing to watch out for is justification drift. If we start to notice that the community's interest in AI stays constant, but the reasons given for focusing on it change over time, then this would be sort of a potential check engine light, or at least a sort of trigger to be especially self-conscious or self-critical, because that may be some indication of motivated reasoning going on.\n\n## Conclusion\n\n![](https://images.ctfassets.net/ohf186sfn6di/1LHyv9s4vVgYCuBVvQnMUE/952fc5834362f2f6adb37202f2f79428/How_sure_are_we_about_this_AI_stuff___19_.jpg)\n\nI have just a handful of short takeaways. First, I think that not enough work has gone into analyzing the case for prioritizing AI. Existing published arguments are not decisive. There may be many other possible arguments out there, which could be much more convincing or much more decisive, but those just aren't out there yet, and there hasn't been much written criticizing the stuff that's out there.\n\nFor this reason, thinking about the case for prioritizing AI may be an especially high impact thing to do, because it may shape the EA portfolio for years into the future. And just generally, we need to be quite conscious of possible community biases. It's possible that certain social factors will lead us to drift in what we prioritize, that we really should not be allowing to influence us. And just in general, if we're going to be putting substantial resources into anything as a community, we need to be especially certain that we understand why we're doing this, and that we stay conscious that our reasons for getting interested in the first place continue to be good reasons. Thank you.\n\n## Questions\n\n_Question_: What advice would you give to one who wants to do the kind of research that you are doing here about the case for AI potentially, as opposed to the AI itself?\n\n_Ben_: Something that I believe would be extremely valuable is just basically talking to lots of people who are concerned about AI and asking them precisely what reasons they find compelling. I've started to do this a little bit recently and it's actually been quite interesting that people seem to have pretty diverse reasons, and many of them are things that people want to write blog posts on, but just haven't done. So, I think this is a low-hanging fruit that would be quite valuable. Just talking to people who are concerned about AI, trying to understand exactly why they're concerned, and either writing up their ideas or helping them to do that. I think that would be very valuable and probably not that time intensive either.\n\n_Question_: Have you seen any of the justification drift that you alluded to? Can you pinpoint that happening in the community?\n\n_Ben_: Yeah. I think that's certainly happening to some extent. Even for myself, I believe that's happened for me to some extent. When I initially became interested in AI, I was especially concerned about these existential accidents. I think I now place relatively greater prominence on sort of the case from instability as I described it. And that's certainly, you know, one possible example of justification drift. It may be the case that this was actually a sensible way to shift emphasis, but would be something of a warning sign. 
And I've also just spoken to technical researchers, as well, who used to be especially concerned about this idea of an intelligence explosion or recursive self improvement. These very large jumps. I now have spoken to a number of people who are still quite concerned about existential accidents, but make arguments that don't hinge on there being this one single massive jump into a single system.\n\n_Question_: You made the analogy to the industrial revolution, and the 1780 Englishman who doesn't really have much ability to shape how the steam engine is going to be used. It seems intuitively quite right. The obvious counterpoint would be, well AI is a problem-solving machine. There's something kind of different about it. I mean, does that not feel compelling to you, the sort of inherent differentness of AI?\n\n_Ben_: So I think probably the strongest intuition is, you might think that there will eventually be a point where we start turning more and more responsibility over to automated systems or machines, and that there might eventually come a point where humans have almost no control over what's happening whatsoever, that we keep turning over more and more responsibility and there's a point where machines are in some sense in control and you can't back out. And you might have some sort of irreversible juncture here. I definitely, to some extent, share that intuition that if you're looking over a very long time span, that that is probably fairly plausible. I suppose the intuition I don't necessarily have is that unless things go, I suppose quite wrong or if they happen in somewhat surprising ways, I don't necessarily anticipate that there will be this really irreversible juncture coming anytime soon. If let's say it takes a thousand years for control to be handed off, then I am not that optimistic about people having that much control over what that handoff looks like by working on things today. But I certainly am not very confident.\n\n_Question_: Are there any policies that you think a government should implement at this stage of the game, in light of the concerns around AI safety? And how would you allocate resources between existing issues and possible future risks?\n\n_Ben_: Yeah, I am still quite hesitant, I think, to recommend very substantive policies that I think governments should be implementing today. I currently have a lot of agnosticism about what would be useful, and I think that most current existing issues that governments are making decisions on aren't necessarily that critical. I think there's lots of stuff that can be done that would be very valuable, like having stronger expertise or stronger lines of dialogue between the public and private sector, and things like this. But I would be hesitant at this point to recommend a very concrete policy that at least I'm confident would be good to implement right now.\n\n_Question_: You mentioned the concept of kind of a concrete decisive argument. Do you see concrete, decisive arguments for other cause areas that are somehow more concrete and decisive than for AI, and what is the difference?\n\n_Ben_: Yeah. So I guess I tried to allude to this a little bit, but I don't think that really any cause area has an especially decisive argument for being a great way to influence the future. There's some that I think you can put sort of a lower bound on at least how likely it is to be useful that's somewhat clear. So for example, risk from nuclear war. It's fairly clear that it's at least plausible this could happen over the next century. 
You know, nuclear war has almost happened in the past, and the climate effects are speculative but at least somewhat well understood. And then there's this question of if there were nuclear war, how damaging is this? Do people eventually come back from this? And that's quite uncertain, but I think it'd be difficult to put above 99% chance that people would come back from a nuclear war.\n\nSo, in that case you might have some sort of a clean lower bound on, let's say, working on nuclear risk. Or, quite similarly, working on pandemics. And I think for AI it's difficult to have that sort of confident lower bound. I actually tend to think, I guess as I alluded to, that AI is probably or possibly still the most promising area based on my current credences, and its extreme neglectedness. But yeah, I don't think any cause area stands out as especially decisive as a great place to work.\n\n_Question_: I'm an AI and machine learning researcher, currently a PhD student, and I'm skeptical about the risk of AGI. How would you suggest that I contribute to the process of providing this feedback that you're identifying as a need?\n\n_Ben_: Yeah, I mean I think just a combination of in-person conversations and then I think even simple blog posts can be quite helpful. I think there's still been surprisingly little in the way of just, let's say, something written online that I would point someone to who wants the skeptical case. This actually is a big part of the reason I suppose I gave this talk, even though I consider myself not extremely well placed to give it, given that I am not a technical person. There's so little out there along these lines that there's low-hanging fruit, essentially.\n\n_Question_: Prominent deep learning experts such as Yann LeCun and Andrew Ng do not seem to be worried about risks from superintelligence. Do you think that they have essentially the same view that you have or are they coming at it from a different angle?\n\n_Ben_: I'm not sure of their specific concerns. I know this classic thing that Andrew Ng always says: he compares it to worrying about overpopulation on Mars, where the suggestion is that these risks, if they materialize, are just so far away that it's really premature to worry about them. So it seems to be sort of an argument from timeline considerations. I'm actually not quite sure what his view is in terms of, if we were, let's say, 50 years in the future, would he think that this is a really great area to work on? I'm really not quite sure.\n\nI actually tend to think that the line of thinking that says, \"Oh, this is so far away so we shouldn't work on it\" just really isn't that compelling. It seems like we have a load of uncertainty about AI timelines. It seems like no one can be very confident about that. So yeah, it'd be hard to put the chance that interesting things will happen in the next 30 years or so under, let's say, one percent.
So I'm not quite sure about the extent of his concerns, but if they're based on timelines, I actually don't find them that compelling.", "filename": "How sure are we about this AI stuff _ Ben Garfinkel _ EA Global - London 2018-by Centre for Effective Altruism-video_id E8PGcoLDjVk-date 20190204.md", "id": "ab572c53ca97596f65f11cdfd376a984", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "How social science research can inform AI governance _ Baobao Zhang _ EAGxVirtual 2020-by Centre for Effective Altruism-video_id eTkvtHymI9s-date 20200615", "authors": ["Baobao Zhang"], "date_published": "2020-06-15", "text": "# Baobao Zhang How social science research can inform AI governance - EA Forum\n\n_Political scientist Baobao Zhang explains how social science research can help people in government, tech companies, and advocacy organizations make decisions regarding artificial intelligent (AI) governance. After explaining her work on public attitudes toward AI and automation, she explores other important topics of research. She also reflects on how researchers could make broad impacts outside of academia._\n\n_We’ve lightly edited Baobao’s talk for clarity. You can also watch it on_ [_YouTube_](https://www.youtube.com/watch?v=eTkvtHymI9s) _and read it on_ [_effectivealtruism.org_](https://effectivealtruism.org/articles/baobao-zhang-how-social-science-research-can-inform-ai-governance)_._\n\n## The Talk\n\n**Melinda Wang (Moderator):** Hello, and welcome to this session on how social science research can inform AI governance, with Baobao Zhang. My name is Melinda Wang. I'll be your emcee. Thanks for tuning in. We'll first start with a 10-minute pre-recorded talk by Baobao, which will be followed by a live Q&A session.\n\nNow I'd like to introduce you to the speaker for this session, Baobao Zhang. Baobao is a fellow at the [Berkman Klein Center for Internet and Society](https://cyber.harvard.edu/) at Harvard University and a research affiliate with the [Centre for the Governance of AI](https://www.fhi.ox.ac.uk/govai/) at the University of Oxford. Her current research focuses on the governance of artificial intelligence. In particular, she studies public and elite opinions toward AI, and how the American welfare state could adapt to the increasing automation of labor. Without further ado, here's Baobao.\n\n**Baobao:** Hello, welcome to my virtual presentation. I hope you are safe and well during this difficult time. My name is Baobao Zhang. I'm a political scientist focusing on technology policy. I'm a research affiliate with the Centre for the Governance of AI at the Future of Humanity Institute in Oxford. I'm also a fellow with the Berkman Klein Center for Internet and Society at Harvard University.\n\nToday I will talk about how social science research can inform AI governance.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/fb440c6dca78eaffcdd72fe55393333b640267fc6ec1cb69.png)\n\nAdvances in AI research, particularly in machine learning (ML), have grown rapidly in recent years. Machines can outperform the best human players in strategy games like [Go](https://en.wikipedia.org/wiki/Go_(game)) and poker. 
You can even generate synthetic videos and news articles that easily fool humans.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/0f845f3709e38afe2a292ab5b2e946e6e8d9215c84b91c7c.png)\n\nLooking ahead, ML researchers believe that there's a 50% chance of AI outperforming humans in all tasks by 2061. This \\[estimate\\] is based on a survey that my team and I conducted in 2016.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/060864652249468dd881dd8d05b2f98a822b6a5355d2c684.png)\n\nThe EA community has recognized the potential risks, even existential risks, that unaligned AI systems pose to humans. Tech companies, governments, and civil society have started to take notice as well. Many organizations have published AI ethics principles to guide the development and deployment of the technology.\n\nA [report by the Berkman Klein Center](https://cyber.harvard.edu/publication/2020/principled-ai) counted 36 prominent sets of AI principles.\n\nNow we're entering a phase where tech companies and governments are starting to translate these principles into policy and practice.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/5822e0b2f9709ee135eaa0b886440c1311d886f434e3c5d0.png)\n\nAt the Centre for the Governance of AI (GovAI), we think that social science research — whether it’s in political science, international relations, law, economics, or psychology — can inform decision-making around AI governance.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/9a3f2cf2c41107d7b6629b938f562100c79b819cb3b5a2a6.png)\n\nFor more information about our research agenda, please see “[AI Governance: A Research Agenda](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf)” by Allan Dafoe. It's also a good starting place if you're curious about the topic and are new to it.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/0934fcedb98c6f697a5d01d1afa7359faaefb082da4dd962.png)\n\nHere's a roadmap for my talk. I’ll cover:\n\n- My research on public opinion toward AI\n- EA social science research highlights on AI governance\n- Research questions that I've been thinking about a lot lately\n- How one can be impactful as a social scientist in this space\n\n### Why study the public’s opinion of AI?\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/e0106a04f566e148745c8c8dc5880e9ea1152ac5306427a8.png)\n\nFrom a normative perspective, we need to consider the voices of those who will be impacted by AI. In addition, public opinion has shaped policy in many other domains, including climate change and immigration; therefore, studying public opinion could help us anticipate how electoral politics may impact AI governance.\n\nThe research I'm about to present comes from [this report](https://isps.yale.edu/sites/default/files/files/Zhang_us_public_opinion_report_jan_2019.pdf).\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/d63fd689a349045a7260a648497f5b9eb3ec024d894829ea.png)\n\nIt's based on a nationally representative survey of 2,000 Americans that Allan Dafoe and I conducted in the summer of 2018.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/d71f6b1b571a5c127b8e7808e157f6bc613de4701e111ebc.png)\n\nHere are the main takeaways from the survey:\n\n1\\. An overwhelming majority of Americans think that AI should be carefully managed. \n2\\. They considered all 13 governance challenges that we presented to them to be important. \n3\\. 
However, they have only low-to-moderate levels of trust in the actors who are developing and managing AI.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/f64b1b9c399746f28f4b8bb98bc7ba062a7200d364e3306a.png)\n\nNow, on to some results. Here's a graph of Americans' view of AI governance challenges. Each respondent was randomly assigned to consider five challenges randomly selected from 13. The x-axis shows the respondents’ perceived likelihood that the governance challenge would impact large numbers of people around the world. The y-axis shows the perceived importance of the issue.\n\nThose perceived to be high in both dimensions include protecting data privacy, preventing AI-enhanced cyber attacks, preventing mass surveillance, and preventing digital manipulation — all of which are highly salient topics in the news.\n\nI’ll point out that the respondents consider all of these AI governance challenges to be important for tech companies and governments to manage. But we do see some variations between respondents when we break them down by subgroups.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/eedca4761990c8164513480745457644cb21962ee78fec14.png)\n\nHere we've broken it down by age, gender, race, level of education, partisanship, etc. We're looking at the issue’s perceived importance in these graphs. Purple means greater perceived issue importance. Green means lesser perceived issue importance.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/9660a291d9489697ffe72699b383e0e3830f8a2f014bcc21.png)\n\nI'll highlight some differences that really popped out. In this slide, you see that older Americans, in contrast to younger Americans, perceive the governance challenges presented to them to be more important.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/4517087c727b1965c6872371b66055cdef7399f465882c53.png)\n\nInterestingly, those who have CS \\[computer science\\] or engineering degrees, in contrast to those who don't, perceive all of the governance challenges to be less important. We also observed this techno-optimism among those with CS or engineering degrees in other survey questions.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/c256c6159734d52f2836096b8c18f17390cbaa4abab0ec69.png)\n\nDespite Americans perceiving that these AI governance challenges are important, they have low to moderate levels of trust in the actors who are in a position to shape the development and deployment of AI systems.\n\nThere are a few interesting observations to point out in these slides:\n\n- While trust in institutions has declined across the board, the American public still seems to have relatively high levels of trust in the military. This is in contrast to the ML community, whose members would rather not work with the US military. I think we get this seemingly strange result because the public relies on heuristics when answering this question.\n- The American public seems to have great distrust of Facebook. Part of it could be the fallout from the [Cambridge Analytica scandal](https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal). But when we ran a previous survey before the scandal broke, we observed similarly low levels of trust in Facebook.\n\nI'm sharing just some of the results from our report. I encourage you to read it. We're currently working on a new iteration of the survey. 
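For readers who want the assignment scheme described above in concrete terms, here is a minimal sketch. Only the four challenges named earlier are spelled out; the remaining labels and the rating details are placeholders rather than the survey's actual wording.

```python
import random

# Four of the challenges named above, plus generic placeholders standing in
# for the rest of the 13 items; labels are illustrative, not the survey wording.
CHALLENGES = [
    "data privacy", "AI-enhanced cyber attacks",
    "mass surveillance", "digital manipulation",
] + [f"challenge_{i}" for i in range(5, 14)]

def assign_challenges(n_respondents, k=5, seed=0):
    """Randomly assign each respondent k of the 13 challenges to rate."""
    rng = random.Random(seed)
    return {r: rng.sample(CHALLENGES, k) for r in range(n_respondents)}

# Each respondent then rates only their assigned challenges on two dimensions:
# perceived likelihood of impacting many people, and perceived importance.
assignments = assign_challenges(n_respondents=2000)
print(len(assignments[0]))  # -> 5
```

Averaging each challenge's ratings across the respondents who happened to see it is roughly how a two-dimensional plot like the one described above would be built.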
And we're hoping to launch it concurrently in the US, EU, and China, \\[which will allow us\\] to make some interesting cross-country comparisons.\n\nI would like to highlight two works by my colleagues in the EA community who are also working in AI governance.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/8f91c170f28bfdb93332b1219f8a3606b422840d6f966aef.png)\n\nFirst, there’s “[Toward Trustworthy AI Development](https://arxiv.org/abs/2004.07213).” This paper came out recently. It's a massive collaboration among researchers in different sectors and fields, \\[with the goal of determining\\] how to verify claims made by AI developers who say that their algorithms are safe, robust, and fair. This is not merely a technical question. Suggestions in the report include creating new institutional mechanisms like bounties for detecting bias and safety issues in AI systems, and creating AI incident reports.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/52527536eb1165dcbab706872defa17ddbb73c1ec298e298.png)\n\nSecond, there’s “[The Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/)” by my colleagues at GovAI. Here, the team considers this idea of a “windfall clause” as a way to redistribute the benefits from transformative AI. This is an ex-ante agreement, where tech companies — in the event that they make large profits from their AI systems — would donate a massive portion of those profits. The report combines a lot of research from economic history and legal analysis to come up with an inventive policy proposal.\n\n### New research questions\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/28134c847ff756822ffa4845ea7fb0d3239afcd0a64b010c.png)\n\nThere are a lot of new and interesting research questions that keep me up at night. I’ll share a few of them with you, and let's definitely have a discussion about them during the Q&A.\n\n- **How do we build incentives for developing safe, robust, and fair AI systems — and avoid a race to the bottom?** I think a lot of us are rather concerned about the rhetoric of an AI arms race. But it's also true that even the EU is pushing for competitiveness in AI research and development. I think the “[Toward Trustworthy AI Development](https://arxiv.org/abs/2004.07213)” paper gives some good recommendations on the R&D \\[research and development\\] front. But what will the market and policy incentives be for businesses and the public sector to choose safer AI products? That's still a question that I and many of my colleagues are interested in.\n- **How can we transition to an economic system where AI can perform many of the tasks currently done by humans?** I've been studying perceptions of automation. Unfortunately, a lot of workers underestimate the likelihood that their jobs will be automated. They actually have an optimism bias. Even correcting workers’ \\[false\\] beliefs about the future of work in my studies has failed to make them more supportive of redistribution. And it doesn't seem to decrease their hostility toward globalization. So certainly there's a lot more work to be done on the political economy around the future of work.\n- **How do other geopolitical risks make AI governance more difficult?** I think about these a lot. In Toby Ord's book, [_The Precipice_](https://www.amazon.com/dp/B07V9GHKYP/), he talks about the risk factors that could increase the probability of existential risk. And one of these risks is great-power war. 
We're not at \\[that point\\], but there has certainly been a rise in aggressive nationalism from some of these great powers. Instead of coming together to combat the COVID pandemic, many countries are pointing fingers at each other. And I think these trends don't bode well for international governance. Therefore, thinking about how these trends might shape international cooperation around AI governance is definitely a topic that my colleagues and I are working on.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/ec30e6c2c3b8cc47b7f2c179a9067bc7f525eb4117acf089.png)\n\nI’ll conclude this presentation by talking about how one can be impactful as a social scientist. I have the great luxury of working in academia, where I have plenty of time to think and carry out long-term research projects. At the same time, I have to constantly remind myself to engage with the world outside of academia — the tech industry and the policy world — by writing op eds, doing consulting, and communicating with the media.\n\nFortunately, social scientists with expertise in AI and AI policy are also in demand in other settings. Increasingly, tech companies have sought to hire people to conduct research on how individual humans interact with AI systems, or what the impact of AI systems may be on society. Geoffrey Irving and Amanda Askell have published a paper called “[AI Safety Needs Social Scientists](https://openai.com/blog/ai-safety-needs-social-scientists/).” I encourage you to read it if you're interested in this topic.\n\nTo give a more concrete example, some of my colleagues have worked with [OpenAI](https://openai.com/) to test whether their [GPT-2 language model](https://openai.com/blog/better-language-models/) can generate news articles that fool human readers.\n\nGovernments are also looking for social scientists with expertise in AI. Policymakers in both the civilian government and in the military have an AI literacy gap. They don't really have a clear understanding of the limits and the potentials of the technology. But advising policymakers does not necessarily mean that you have to work in government. Many of my colleagues have joined think tanks in Washington, DC, where they apply their research skills to generate policy reports, briefs, and expert testimony. I recommend checking out the [Center for the Security and Emerging Technology](https://cset.georgetown.edu/) or CSET, based at Georgetown University. They were founded about a year ago, but they have already put out a vast collection of research on AI and US international relations.\n\nThank you for listening to my presentation. I look forward to your questions during the Q&A session.\n\n**Melinda:** Thank you for that talk, Baobao. The audience has already submitted a number of questions. We're going to get started with the first one: \\[Generally speaking\\], what concrete advice would you give to a fresh college graduate with a degree in a social science discipline?\n\n**Baobao:** That's a very good question. Thank you for coming to my talk.\n\nOne of the unexpected general pieces of advice that I would give is to have strong writing skills. At the end of the day, you need to translate all of the research that you do for different audiences: readers of academic journals, policymakers, tech companies \\[that want you to produce\\] policy reports.\n\nBesides that, I think \\[you may need to learn certain skills depending on\\] the particular area in which you specialize. 
For me, learning data science and statistics is really important for the type of research that I do. For other folks it might be game theory, or for folks doing qualitative research, it might be how to do elite interviews and ethnographies.\n\nBut overall, I think having strong writing skills is quite critical.\n\n**Melinda:** Great. \\[Audience member\\] Ryan asks, “What are the current talent gaps in AI safety right now?”\n\n**Baobao:** That's a good question. I must confess: I'm not an AI safety expert, although I did talk about a piece that folks at OpenAI wrote called “[AI Safety Needs Social Scientists.](https://openai.com/blog/ai-safety-needs-social-scientists/)” And I definitely agree with the sentiment, given that the people who are working on AI safety want to run experiments. You can think of them as psychology experiments. And a lot of computer scientists are not necessarily trained on how to do that. So if you have skills in running surveys or psych experiments, that’s a skill set that I hope tech companies will acknowledge and recognize as important.\n\n**Melinda:** That's really interesting. Would you consider psychology to be within the realm of social sciences, \\[in terms of how\\] people generally perceive it? Or do you mean fields related to STEM \\[science, technology, engineering, math\\]?\n\n**Baobao:** I think psychology is quite interesting. People who are more on the neuroscience side might be \\[considered to be in\\] STEM. I work with some experimental psychologists, particularly social psychologists, and they read a lot of the literature in economics, political science, and communications studies. I do think that there's a bit of overlap.\n\n**Melinda:** How important do you consider interdisciplinary studies to be, whether that's constrained within the realm of social science, or social science within STEM, etc.?\n\n**Baobao:** I think it's important to work with both other social scientists and with computer scientists.\n\nThis is more \\[along the lines of\\] career advice and not related to AI governance, but one of the realizations I had in doing recent work on several COVID projects is that it’s important to have more than just an “armchair” \\[level of understanding\\] of public health. I try to get my team to talk with those who are either in vaccine development, epidemiology, or public health. And I'd like to see more of that type of collaboration in the AI governance space. We do that quite well at that, I think, at GovAI, where we can talk to in-house computer scientists at the [Future of Humanity Institute](https://www.fhi.ox.ac.uk/).\n\n**Melinda:** Yes. That's an interesting point and relates to one of the questions that just came in: How can we more effectively promote international cooperation? Do you have any concrete strategies that you can recommend?\n\n**Baobao:** Yes. That's a really good question. I work a bit with folks in the European Union, and bring my AI governance expertise to the table. The team that I'm working with just submitted a consultation to the EU Commission. That type of work is definitely necessary.\n\nI also think that collaboration with folks who work on AI policy in China is fruitful. I worry about the decoupling between the US and China. There's a bit of tension. But if you're in Europe and you want to collaborate with Chinese researchers, I encourage it. 
I think this is an area that more folks should look into.\n\n**Melinda:** Yes, that's a really good point.\n\nIn relation to the last [EAGxVirtual talk on biosecurity](https://www.youtube.com/watch?v=BbOHQLrVSX4), can you think of any information hazards within the realm of AI governance?\n\n**Baobao:** That's a good question. We do think a lot about all of our publications. At GovAI, we talk about being careful in our writing so that we don't necessarily escalate tensions between countries. I think that's definitely something that we think about.\n\nAt the same time, there is the [open science movement](https://en.wikipedia.org/wiki/Open_science) in the social sciences. It’s tricky to \\[find a good\\] balance. But we certainly want to make sure our work is accurate and speaks to our overall mission at GovAI of promoting beneficial AI and not doing harm.\n\n**Melinda:** Yes. A more specific question someone has asked is “Do you think this concept of info hazards — if it's a big problem in AI governance — would prohibit one from spreading ideas within the \\[discipline\\]?”\n\n**Baobao:** That's a good question. I think the EA community is quite careful about not spreading info hazards. We're quite deliberate in our communication. But I do worry about a lot of the rhetoric that other folks who are in this AI governance space use. There are people who want to drum up a potential AI arms race — people who say, “Competition is the only thing that matters.” And I think that's the type of dangerous rhetoric that we want to avoid. We don't want a race to the bottom where, whether it's the US, China, or the EU, researchers only care about \\[winning, and fail to consider\\] the potential risks of deploying AI systems that are not safe.\n\n**Melinda:** Yes. Great. So we're going to shift gears a little bit into more of the nitty-gritty of your talk. One question that an audience member asked is “How do you expect public attitudes toward AI to differ by nationality?”\n\n**Baobao:** That's a good question. In the talk, I mentioned that at GovAI, we're hoping to do a big survey in the future that we run concurrently in different countries.\n\nJudging from what I've seen of the literature, [Eurobarometer](https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm) has done a lot of good surveys in the EU. As you would expect, folks living in Europe — where there are tougher privacy laws — tend to be more concerned about privacy. But it’s not necessarily so that Chinese respondents are totally okay with a lack of privacy.\n\nThat’s why we're hoping to do a survey in which we ask \\[people in different countries\\] the same questions, around the same time. I think it’s really important to make these cross-national comparisons. With many questions, you get different responses because of how the questions are framed or worded, so \\[ensuring a\\] rigorous approach will yield a better answer to this question.\n\n**Melinda:** Yes. I'd like you to unpack that a bit more. In what concrete ways can we be more culturally sensitive in stratifying these risks when it comes to international collaboration?\n\n**Baobao:** I think speaking the \\[appropriate\\] language is really important. I can provide a concrete example. 
As my team worked on the EU consultation that I mentioned, we talked to folks who work at the European Commission in order to understand their particular concerns and “speak their language.”\n\nThey're concerned about AI competition and the potential for an arms race, but they don't want to use that language. They also care a lot about human rights and privacy. And so when we make recommendations, since we’ve read the \\[relevant\\] reports and spoken to the people involved in the decision-making, those are the two things that we try to balance.\n\nAnd in terms of the survey research that we're hoping to do, we're consulting with folks on the ground so that our translation work is localized and \\[reflects\\] cultural nuances.\n\n**Melinda:** Yes, that's a really good point about using the right language. It also \\[reminds\\] me of how people in the social sciences often think differently, and may use different \\[terminology\\] than people from the STEM realm, for example. The clash of those two cultures can sometimes result in conflicts. Do you have any advice on how to mitigate those kinds of conflicts?\n\n**Baobao:** That's a good question. Recently, GovAI published a guide to \\[writing the\\] impact statements required for the [NeurIPS](https://nips.cc/) conference. One of the suggestions was that computer scientists trying to \\[identify\\] the societal impact of their research talk with social scientists who can help with this translational work. Again, I think it’s quite important to take an interdisciplinary approach when you conduct research.\n\n**Melinda:** Yes. Do you think there's a general literacy gap between these different domains? And if so, how should these gaps be filled?\n\n**Baobao:** Yes, good question. My colleagues, Mike Horowitz and \\[Lauren\\] Kahn at the University of Pennsylvania, have written a [piece about the AI literacy gap in government](https://warontherocks.com/2020/01/the-ai-literacy-gap-hobbling-american-officialdom/). They acknowledge that it's a real problem. Offering crash courses to train policymakers or social scientists who are interested in advising policymakers is one way to \\[address this issue\\]. But I do think that if you want to work in this space, doing a deep dive — not just taking a crash course — can be really valuable. Then, you can be the person offering the instruction. You can be the one writing the guides.\n\nSo yes, I do think there's a need to increase the average level of AI literacy, but it’s also important for the social science master's degree and PhD programs to train people.\n\n**Melinda:** Yes, that's interesting. It brings up a question from \\[audience member\\] Chase: “Has there been more research done on the source of technical optimism from computer scientists and engineers? \\[Is this related to\\] overconfidence in their own education or in fellow developers?”\n\n**Baobao:** That's a good question. We \\[work with\\] machine learning researchers at GovAI, and \\[they’ve done\\] research that we hope to share later this year. I can't directly address that research, but there may be a U-shape curve. If the x-axis represents your level of expertise in AI, and the y-axis represents your level of concern about AI safety or risks from AI, those who don't have a lot of expertise are kind of concerned. But those who have CS or engineering degrees are perhaps not very worried.\n\nBut then, if you talk to the machine learning researchers themselves, many are concerned. 
I think they're recognizing that what they work on can have huge societal impacts. You have folks who work in AI safety who are very concerned about this. But in general, I think that the machine learning field is waking up to these potential risks, given the proliferation of AI ethics principles coming from a lot of different organizations.\n\n**Melinda:** Yes, it’s interesting how the public is, for once, aligned with the experts in ML. I suppose in this instance, the public is pretty enlightened.\n\nI'm not sure if this question was asked already, but are organizations working on AI governance more funding-constrained or talent-constrained?\n\n**Baobao:** That's a good question. May I \\[mention\\] that at GovAI, we're looking to hire folks in the upcoming months? We're looking for someone to help us on survey projects. And we're also looking for a project manager and other researchers. So, in some sense, we have the funding — and that's great. And now we just need folks who can do the research. So that's my plug.\n\nI can't speak for all organizations, but I do think that there is a gap in terms of training people to do this type of work. I'm going to be a faculty member at a public policy school, and they're just beginning to offer AI governance as a course. Certainly, a lot of self-study is helpful. But hopefully, we'll be able to get these courses into the classrooms at law schools, public policy schools, and different PhD or master's degree programs.\n\n**Melinda:** Yes. I think perhaps the motivation behind that question was more about the current general trend around a lot of organizations in EA being extremely competitive to get into — especially EA-specific job posts. I'm wondering whether you can provide a realistic figure for how talent-constrained AI governance is, as opposed to being funding-constrained.\n\n**Baobao:** I can't speak to the funding side. But in terms of the research and the human-resource side, I think we're beginning to finally \\[arrive at\\] some concrete research questions. At the same time, we're building out new questions. And it's hard to predict what type of skill set you will need.\n\nAs I mentioned in my talk, GovAI has realized that social science skills are really important — whether you can do survey research, legal analysis, elite interviews on the sociology side, or translational work. So, although I don't have a good answer to that question, I think getting some sort of expertise in one of the social sciences and having expertise in AI policy or AI safety are the types of skills that we're looking for.\n\n**Melinda:** Okay. Given that we have six minutes left, I'm going to shift gears a little bit and try to get through as many questions as possible. Here’s one: “What insights from nuclear weapons governance can inform AI governance?”\n\n**Baobao:** I think that's a really good question. And I think you've caught me here. I have some insights into the dual-use nature of nuclear weapons. People talk about AI as a “general-purpose technology.” It could be both beneficial and harmful. And speaking from my own expertise in public opinion research, one of the interesting findings in this space is that people tend to reject nuclear energy because of its association with nuclear weapons. There's a wasted opportunity, because nuclear energy is quite cheap and not as harmful as, say, burning fossil fuels. 
But because of this negative association, we’ve sort of rejected nuclear energy.\n\n\\[Turning to\\] trust in AI systems, we don't want a situation where people's association of AI systems is so negative that they reject applications that could be beneficial to society. Does that make sense?\n\n**Melinda:** Yes.\n\n**Baobao:** I recommend \\[following the work of\\] some of my colleagues in international relations, at organizations like [CSET](https://cset.georgetown.edu/) who have written about this. I'm sorry — it's not my area of expertise.\n\n**Melinda:** Okay. The next question is “Is there a way we can compare public trust in different institutions, specifically regarding AI, compared to a general baseline trust in that institution — for example, in the case of the public generally having greater trust in the US military?”\n\n**Baobao:** That's a good question. I think that, because AI is so new, the public relies on their heuristics rather than what they know. They're just going to rely on what they think of as a trustworthy institution. And one thing that I've noticed, and that I wrote about in my [Brookings report](https://www.brookings.edu/research/public-opinion-lessons-for-ai-regulation/), is how some areas of political polarization around AI governance map to what you would expect to find in other domains. It’s concerning, in the context of the US at least, that if we can't agree on the right policy solution, we might just \\[rely on\\] partisan rhetoric about AI governance.\n\nFor example, acceptance of facial recognition software maps to race and partisanship. African Americans really distrust facial recognition. Democrats tend to distrust facial recognition. Republicans, on the other hand, tend to have a greater level of acceptance. So you see this attitude towards policing being mapped to facial recognition.\n\nYou also see it in terms of regulating algorithmically curated social media content. There seems to be a bipartisan backlash against tech, but when you dig more deeply into it, as we've seen recently, Republicans tend to think about content moderation as censorship and \\[taking away\\] a right, whereas Democrats tend to see it as combating misinformation. So unfortunately, I do think that partisanship will creep into the AI governance space. And that's something that I'm actively studying.\n\n**Melinda:** Yes. We only have one minute left. I'm going to go through the last questions really quickly. With regards to the specific data that you presented, you mentioned that the public considers all AI governance challenges important. Did you consider including a made-up governance challenge to check response bias?\n\n**Baobao:** Oh, that's a really good suggestion. We haven't, but we certainly can do that in the next round.\n\n**Melinda:** Okay, wonderful. And that concludes the Q&A part of this session. 
\[...\] Thanks for watching.", "filename": "How social science research can inform AI governance _ Baobao Zhang _ EAGxVirtual 2020-by Centre for Effective Altruism-video_id eTkvtHymI9s-date 20200615.md", "id": "0ef1559b429a3f246e2054edae0f6520", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Rohin Shah - Effective altruism, AI safety, and learning human preferences from the world_s state-by Towards Data Science-video_id uHiL6GNXHvw-date 20201028", "authors": ["Rohin Shah", "Jeremie Harris"], "date_published": "2020-10-28", "text": "# Rohin Shah on Effective altruism, AI safety, and learning human preferences from the world's state by Jeremie Harris on the Towards Data Science Podcast\n\nIf you walked into a room filled with objects that were scattered around somewhat randomly, how important or expensive would you assume those objects were?\n\nWhat if you walked into the same room, and instead found those objects carefully arranged in a very specific configuration that was unlikely to happen by chance?\n\nThese two scenarios hint at something important: human beings have shaped our environments in ways that reflect what we value. You might just learn more about what I value by taking a 10-minute stroll through my apartment than by spending 30 minutes talking to me as I try to put my life philosophy into words.\n\nAnd that's a pretty important idea, because as it turns out, one of the most important challenges in advanced AI today is finding ways to communicate our values to machines. If our environments implicitly encode part of our value system, then we might be able to teach machines to observe it, and learn about our preferences without our having to express them explicitly.\n\nThe idea of deriving human values from the state of a human-inhabited environment was first developed in a paper co-authored by Berkeley PhD and incoming DeepMind researcher Rohin Shah. Rohin has spent the last several years working on AI safety, and publishes the widely read AI alignment newsletter — and he was kind enough to join us for this episode of the Towards Data Science podcast, where we discussed his approach to AI safety, and his thoughts on risk mitigation strategies for advanced AI systems.\n\nHere were some of my favourite take-homes from our conversation:\n\n- Like many, Rohin was driven towards AI alignment and AI safety work in part through his exposure to members of the Effective Altruism community. Effective Altruism is a philosophical movement focused on determining how people can make the maximum positive impact on the world, either through charitable giving or their professional career. It's focused on asking questions like: what is the move I could make that would maximize the expected value of my contribution to the world? For Rohin, the possible harm that could be done by powerful AI systems in the future — and the possible benefits that these systems could have if they're developed safely — made AI alignment appealing.\n- Rohin discussed two important potential failure modes of advanced AI systems, which already appear in different forms in current systems.\n- First, he highlighted the risk of bad generalization: AIs learning the wrong lessons from their training data, leading them to generalize in ways that humans might not anticipate or want.
As an example, he cited an initial concern with OpenAI's GPT-3 model, which surfaced when a developer prompted the model with a nonsense question (like, "how many glubuxes are in a woolrop?"). Rather than answering this query "honestly", by saying something like, "I don't know — I'm not familiar with those words," GPT-3 tried to answer with a best guess, like "there are 3 glubuxes in a woolrop." You could argue that this actually wasn't a bad generalization: GPT-3 basically answered like a student taking a test, who wants to hide the fact that they don't know the answer to a question by taking a guess based purely on the context of the question. But if our hope was to make an honest language model — one that admits its ignorance when appropriate — GPT-3 in its non-fine-tuned form seemed to fail the test. As AI systems get more powerful, this kind of behavior could get more harmful, so Rohin thinks it's worth paying attention to.\n- Second, Rohin discussed the challenge of communicating human preferences to AIs. This is a hard problem: most humans don't actually know what they want out of life, and are even less capable of communicating those desires and values to other humans — let alone to machines, which are currently less context-aware and flexible in their reasoning. That's where Rohin's work on teaching machines to infer human preferences from their environment comes in: he thinks the strategy shows promise as an additional source of data on human preferences that machines might be able to use to decipher human values without needing us to be able to articulate them explicitly. We talked about many interesting strengths and weaknesses of this strategy.\n\n## Chapters:\n\n- 0:00 Intro\n- 1:44 Effective altruism\n- 6:50 Rohin's introduction to AI safety work\n- 11:18 Why AI risk is so serious\n- 18:33 Child rearing analogy\n- 22:15 Statistical learning theory\n- 25:09 What is preference learning?\n- 32:23 Application to higher levels of abstraction\n- 34:45 Revealed preferences and the state of the world\n- 36:26 Broken vase metaphor\n- 44:10 Rule of time horizons\n- 49:03 Wrap-up\n\n## Please find below the transcript for Season 2 Episode 4:\n\nJeremie (00:00): \nHey, everyone. Welcome to another episode of the Towards Data Science podcast. My name is Jeremie and, apart from hosting the podcast, I'm also on the team at the SharpestMinds data science mentorship program. I'm really excited about today's episode because I've been thinking about getting today's guest on the podcast for a very long time. I'm so glad we finally made it happen.\n\nJeremie (00:16): \nHe's actually in the middle of a transition right now from Berkeley, where he's wrapping up his PhD. He's working at the Center for Human Compatible AI, and he's transitioning over into DeepMind where he'll be doing some alignment work. His name is Rohin Shah. And, apart from being a very prolific researcher in AI and AI alignment in particular, he is also the publisher of the AI alignment newsletter, which is a really, really great resource if you're looking to get oriented in the space to learn about some of the open problems and open questions in AI alignment.
I really recommend checking it out.\n\nWe're going to talk about a whole bunch of different things, including the philosophy of AI, the philosophy of machine learning and AI alignment, ways in which it can be accomplished, some of the challenges that exist, and we're going to explore one of the most interesting proposals, I think, that Rohin's come up with, which is an idea about extracting human preferences from the state of the environment. So, basically, the idea here is that humans, through their activity, have encoded their preferences implicitly in their environments: we do a whole bunch of different things, different actions, that reveal our preferences. And it would be great if we could have AIs look at the world and figure out what our preferences are implicitly based on the state of that world. And that might be a great way to bootstrap AI alignment efforts.\n\nWe'll be talking about that proposal in depth, along with a whole bunch of other things. I'm really looking forward to the episode, so I hope you enjoy. Without further ado, here we go.\n\nJeremie (01:39): \nHello, and thank you so much for joining us for the podcast.\n\nRohin (01:41): \nYeah, thanks for having me here. I'm excited.\n\nJeremie (01:44): \nWell, I'm very excited to have you. There are so many interesting things that you've been working on in the alignment space in general. But, before we tackle some of the more technical questions, there's an observation that I think anybody who's spent any time working on alignment or talking to alignment researchers is going to end up making at some point, which is that the vast majority of people in the space seem to come from the effective altruism community. And I'd love to get your take on, number one, what the effective altruism community is, what effective altruism is, and number two, why you think there's this deep connection between EA, effective altruism, and AI alignment and AI safety research.\n\nRohin (02:20): \nYeah, sure. The overarching idea of effective altruism, the very easy to defend one that doesn't get into specifics, is: with whatever money, time, resources, whatever you're willing to spend altruistically, you should try to do the most good you can with it, rather than… And you should think about that. It's pretty hard to argue against this. I don't think I've really seen people disagree with that part.\n\nRohin (02:56): \nNow, in practice, effective altruism the movement has a whole bunch of additional premises that are meant to be in support of this goal, but are more controversial. I think the really fundamental big idea of effective altruism is that of cause prioritization. Many people will say, "Okay, I want to have, say, clean water in Africa. I will work towards that." And they'll think about different ways in which you can get clean water in Africa: maybe you could try sanitizing the water that people already get, or you could try building some new treatment plants in order to provide fresh, flowing water that's drinkable to everyone. And they'll think about how best they can achieve their goal of delivering clean water.\n\nRohin (03:47): \nIt's much, much, much less common for people to think, "Okay, well, should I work on getting clean water to people in Africa, or combating racism in the US? Which of these should I put my effort into? Or my money into?" The main premise of effective altruism is you can, in fact, do this.
There are actually significant differences between causes and, by thinking about it, have much more impact by selecting the right cause to work on in the first place.\n\nRohin (04:20): \nIt’s very focused on this take weird… Take ideas seriously, actually evaluate them, figure out whether or not they are true and not whether or not they sound crazy. Whether or not they sound crazy does have some relation to whether or not they are true, but they are not necessarily the same. I think that ties into why it’s also such a hotbed for AI safety research. The EA case for AI safety, for work on AI safety, is that AI has a good chance of being extremely influential in the next century, let’s say.\n\nRohin (05:00): \nThere is some argument that is debatable, but it doesn’t seem like you can rule it out. It seems like at least moderately likely that if we don’t take care of how we do this, the AI system might “take over” in the sense of all of the important decisions about the world are made by the AI system and not by humans. And one possible consequence of this is humans go extinct. I’ll go into this argument later, I’m sure, but-\n\nJeremie (05:36): \nSo \\[crosstalk 00:05:37\\]-\n\nRohin (05:37): \n\\[crosstalk 00:05:37\\] believe this argument somewhat, and so then it becomes extremely important and impactful to work on. It sounds crazy, but one of the EA’s, to me, strengths is that it separates what sounds crazy from what is true.\n\nJeremie (05:52): \nIt seems like, really, the focus is there’s this extra missing step that a lot of people don’t apply to their thinking when deciding what causes to contribute to, what to work on, what to spend their lives on, and that is the step of going, “What areas are going to give outsized returns on my time?”\n\nRohin (06:09): \nYep, exactly.\n\nJeremie (06:12): \nI can really think back to most of the conversations I ever had with people about causes, about charity, and it’s usually focused on stuff like what are the administrative fees associated with this charity? Oh, I want to donate to a place where all my dollars go to the cause, rather than asking the more fundamental question, is this cause actually going to give the best ROI from the standpoint of benefiting everyone or benefiting humanity? It’s interesting that that kind of thinking, a more first principles approach, leads a lot of people to the area of AI alignment and AI safety. As you said, it makes sense, you’ve got this super high risk high reward profile.\n\nJeremie (06:50): \nWhat was it that drew you, for example, to AI alignment, AI safety work in particular, rather than any of the other, I could imagine, bio terrorism, I could imagine all kinds of horrible things that could happen to us, but why AI alignment in particular?\n\nRohin (07:05): \nYeah, so, my story is kind of weird. It may be a classic AI story in that convinced by weird very weird arguments. I got into effective altruism in 2014. I heard the arguments for AI risk within a year of that, probably. I was deeply unconvinced by them. I just did not buy them.\n\nRohin (07:37): \nAnd, so, until 2017, I basically didn’t engage very much with AI safety. I was also unconvinced of, basically, there’s this field of ethics called population ethics which tries to deal with the question of how do you compare how good different worlds are when they have different populations of people in them? We don’t need to go into the details, but it’s a very confusing area. 
Lots of impossibility results that say you might want these six very intuitive properties, but, no, you can’t actually have all of them at the same time, stuff like this. So you’re \\[crosstalk 00:08:21\\]-\n\nJeremie (08:20): \nWould the idea here be like is a world with 100 decently happy people better than a world with 1,000 decent minus epsilon happy people? Is that the kind of calculation?\n\nRohin (08:31): \nYes. That’s an example of the question that I would deal with, yeah.\n\nRohin (08:35): \nSo, anyway, I was thinking about this question a lot in the summer of 2017. And, eventually, I was like, “Okay, I think I should actually put a fair amount of weight,” not certainty, certainly, but a fair amount of weight on the view that more happy people is, in fact, just means a better world, even if they’re in the future. Once you put a decent probability on that, it starts looking overwhelmingly important to ensure that the future continues to exist and have happy people because it’s just so big relevant to the present.\n\nRohin (09:21): \nAnd so, then, I wanted to do something that was more future oriented and I had a ton of skills in computer science and math and, basically, everything you would want to work in AI alignment. I still was not very convinced of AI risk but I was like, “Okay, a bunch of smart people have thought about this, maybe I should work on it for a while, see whether or not it makes sense.” That’s what caused me to actually switch, and a year later I actually started believing the arguments.\n\nJeremie (09:57): \nThat’s so interesting.\n\nRohin (09:58): \nI also saw different arguments.\n\nJeremie (10:00): \nYou were led by… Is it the quality of the people who were drawn to the problem more so than necessarily the initial arguments themselves? Do you remember an ah-ha moment as you were working on this stuff where you’re like, “Well, wait a minute. This is actually for real.” I can now see why Nick Bostrom, and maybe Eliezer Yudkowsky, and whoever else is talking about it back then was on to something?\n\nRohin (10:21): \nI never really had an ah-ha moment. I remember, at one point, I was like, “I guess I now believe these arguments,” but it wasn’t like I… I guess I now believe that AI risk is substantial and real. I can’t point to a specific point in time where yes, now I believe it. I just, one day, was reflecting on it and noticed, “Oh, yeah. I used to not believe this. And now I do.”\n\nJeremie (10:52): \nThat’s so interesting. It seems like there’s a bifurcation between people who they read Superintelligence, or they read less wrong, and they get really excited about the problem and really scared of it right off the bat because, for whatever reason, they’re wired in such ways to have that happen. And then people, yeah, who are like you. It’s like a slow burn and you ease into it. I guess this is part of the problem, almost, of articulating the problem if it takes that long to get people to think of this as a really important thing.\n\nJeremie (11:18): \nDo you have a strategy that you use when you try to explain to people why is AI risk so serious? Why is the probability nontrivial that you think might’ve worked on you back then to accelerate the process?\n\nRohin (11:32): \nYeah, I should note that I still… I’m not super happy with the arguments in like Superintelligence, for example. 
I would say that it’s slightly different arguments that are motivating it for me, with still a fair amount of emphasis on things that were in Superintelligence.\n\nJeremie (11:50): \nI think a lot of people won’t have heard of Superintelligence, by the way.\n\nRohin (11:54): \nOh, yes.\n\nJeremie (11:55): \nIf you want to address any of the arguments that you raise, please feel free to give that background, too.\n\nRohin (12:00): \nYeah, maybe I’ll just talk about the arguments I personally like since I can explain them better. But, just for context, Superintelligence is a book by a professor at Oxford named Nick Bostrom. It was published in 2014 and it was the first \[inaudible 00:12:19\] treatment of why AI risk is a thing that might occur and why we should think it might be plausible, what solutions might seem like they should be investigated and stuff like that.\n\nRohin (12:33): \nAnd then, for me, personally, the argument I would give… So, A, we’re going to \[inaudible 00:12:45\] as a premise that we build AI systems that are intelligent, like as intelligent as a human, let’s say. We can talk about that later, but that’s a whole other discussion. I’ll just say that I think it is not… I think it is \[inaudible 00:13:03\] reasonably likely to happen in the next century. But, for now, take it as an assumption.\n\nRohin (13:10): \nOne thing about intelligence is it means that you can adapt to new situations: you’re presented with a new situation, you learn about it and you do something, and that something is coherent. It makes sense. One example I give of this, where we see this even with current neural nets, is a specific example from GPT-3. I believe viewers will be familiar with GPT-3… Listeners, not viewers. But if not, GPT-3 is a \[inaudible 00:13:47\] language generation \[inaudible 00:13:48\] that OpenAI developed and released recently.\n\nRohin (13:51): \nI think I like one particular example, which comes from the post Giving GPT-3 a Turing Test. There, the context given to GPT-3 was a bunch of questions and answers, and then GPT-3 would be posed a question: how many bonks are in a quoit? These are nonsense words, you did not mishear me. For GPT-3, in some sense, this is outside of its training distribution. It has never seen this sentence in its training corpus, presumably. It may not have even seen the words bonk and quoit ever. It’s an actual distribution shift and you’re relying on some sort of generalization out of distribution.\n\nRohin (14:43): \nNonetheless, I think we can all predict that GPT-3 is not going to output some random string of characters. It’s going to probably say something sensible. In fact, the thing that it says is, “You know, there are three bonks in a quoit.” Why three? I have no idea. But, you know, it’s sensible in some sense. It produced an answer that sounds like English.\n\nJeremie (15:10): \nAnd we’ve all been there to some degree, if we write exams or whatever, we’re asked how many bonks are in a quoit, we haven’t done our studying, and, hey, there are three bonks in a quoit. There we go.\n\nRohin (15:18): \nExactly, right? In some sense, GPT-3 did generalize, it generalized the way a student taking a test would.
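To make the kind of prompt being described easier to picture, here is a minimal sketch of a few-shot Q&A context followed by the nonsense question, plus the “flag the nonsense” variant that comes up a moment later in the conversation. The wording is an illustrative assumption, not a quote from the original post, and no model API is called here.

```python
# Illustrative sketch only: the exact prompts from the "Giving GPT-3 a Turing Test"
# post are not reproduced here, and no API call is made.

qa_context = """Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.

Q: How many legs does a spider have?
A: A spider has eight legs.
"""

nonsense_question = "Q: How many bonks are in a quoit?\nA:"

# Plain few-shot prompt: the model tends to generalize like a test-taker and guess.
plain_prompt = qa_context + nonsense_question

# Variant mentioned in the follow-up post: tell the model how to treat nonsense.
guarded_prompt = (
    "Whenever a question is nonsense, the AI replies 'yo be real'.\n\n"
    + qa_context
    + nonsense_question
)

print(plain_prompt)
print("---")
print(guarded_prompt)
```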
In the original post, this was taken as evidence of GPT-3 not actually being reasonable because it doesn’t know how to say, “This question is nonsense.”\n\nRohin (15:39): \nBut then a followup post was like, “Actually, you totally can get GPT-3 to do that!” If you tell GPT-3 that… If in the context, you say whenever it sees a nonsense question the AI responds, “Yo, be real.” Then, when it’s asked how many bonks are in a quoit? It says, “Yo, be real.” So you know it’s got the ability to tell that this is nonsense, it just turned out that it generalized in a way where it was more like a test taker and less like somebody in conversation. Did we know that ahead of time? No, we did not. We had to actually run GPT-3 in order to figure this out.\n\nRohin (16:23): \nI think AI risk is basically like this, but supercharged where your AI system, if it is human level intelligent, it’s definitely going to be deployed in new areas, in new situations, that we haven’t seen before. We just don’t really have a compelling reason to believe that it will continue to do the thing that we were training it to do as opposed to something else. In GPT-3, what were we training it to do? Well, on the training data set, at least, we were training it to do whatever a human would write in that context.\n\nRohin (17:05): \nWhen you see there are three bonks in a… Sorry, how many bonks are in a quoit? What would a human do in that circumstance? I don’t know. It’s not really well defined, and GPT-3 did something sensible. I don’t think you could reasonably say it wasn’t doing what we trained it to do, it just did something that was coherent. And, similarly, if you’ve got AI systems that are human level intelligent or more, taking super impactful actions upon the world and they are put in these new situations where there’s not really a fact of the matter about how they will generalize then they might take actions that have a high impact on the world that aren’t what we want.\n\nRohin (17:49): \nAnd then, as maybe intuition \\[inaudible 00:17:52\\] for why this could be really, really bad, like human extinction level bad. One particular distribution shift is you go from the training setting where humans have more power than the AI and can turn off the AI to the setting where the AI is sufficiently intelligent and sufficiently widely deployed, but no human can… Or humanity as a whole cannot turn it off. In that situation, that’s a new situation. AI has never been in a situation where it had this sort of power before. Will it use it in some way that… Will it generalize in some way that was different than what we expected during training? We don’t really have a reason to say no, it won’t do that.\n\nJeremie (18:33): \nIs there an analogy you think here with child rearing? I’m just thinking of here intergenerational human propagation where our ancestors in the 1600s, at least in the west, I’m sure would be absolutely disgusted by our vile ways today the way that we deal with sex, the way that we communicate to our elders, the way that we manage our institutions and so on, all our hierarchies are just completely different. And, in many ways, we’re \\[inaudible 00:19:00\\] to the moral frameworks that were applied in the middle ages or the early renaissance.\n\nJeremie (19:07): \nI guess there is a difference here in the sense that at least we are still running on the same fundamental hardware, or something very similar. 
Maybe that ensures a minimum level of alignment, but does this analogy break apart in some way?\n\nRohin (19:18): \nI think that’s a pretty good intuition. There are some ways that the analogy breaks, like for example, well… The analogy doesn’t break so much as I would say you should put a little bit less weight on it, for these reasons. One is, in child rearing, you have some influence over children, but you don’t get to do a full training process where you give them gradients for every single time step of action that they ever do. You might hope that, given that you can have way, way, way more selection pressure on AI systems, you would be able to avoid this problem.\n\nRohin (20:00): \nBut, yes, I think that that is the same fundamental dynamic that I’m pointing at. You have some amount of influence over these agents, but those agents encounter new situations, and they do something in those situations, and you didn’t think about those situations ahead of time, and you didn’t train them to do the right thing.\n\nJeremie (20:23): \nI definitely buy the idea here that this AI risk is a really significant risk. The stakes are very high. When it comes to the solutions or the strategies that you think are most promising, you yourself are specialized, obviously, in one category, everyone has to be, in one subspace within the alignment problem domain. What is the area that you’ve decided to focus on, and why do you think that is most deserving of attention at this point?\n\nRohin (20:50): \nThe story that I’ve told you so far is one of generalization. The main issue is we don’t know how these systems will generalize and, plausibly, you could get AI systems that are single mindedly pursuing power, and that’s similar to the Superintelligence story, and those could cause human extinction. The fundamental mechanism is bad generalization, or generalization where your capabilities generalize, you do something coherent and high impact, but the thing you’re trying to do doesn’t generalize, relative to what humans wanted.\n\nRohin (21:31): \nA lot of the things I’m most excited about are somehow generalization related. One thing that I’m interested in is: can we get a better understanding, empirically, of how neural nets tend to generalize? Can we say anything about this? There’s a lot of theory that tries to explain why neural nets have as good generalization power as they do. It can’t be explained by statistical learning theory, because neural nets can memorize random noise, but nonetheless seem to generalize reasonably well when the labels are not random noise.\n\nJeremie (22:15): \nAnd do you mind explaining statistical learning theory as a reference? I’m actually not so sure that I can make the connection.\n\nRohin (22:23): \nStatistical learning theory is like a branch of machine learning theory that tries to do several things but, among other things, tries to prove that if we train a machine learning model on such and such training data with such and such training properties, then we know that it will generalize in such and such way, and it proves theorems about this.\n\nRohin (22:50): \nImportantly, most approaches right now focus on making assumptions about your model, your hypothesis class. These assumptions usually preclude the ability to overfit to an arbitrarily sized data set, because if you could, then you can’t really say anything about generalization. But the fact of the matter is neural nets really can actually overfit to any data set. They can memorize labels that are literal random noise.
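As a concrete illustration of the memorization point, here is a toy sketch (my own, not something referenced in the conversation): a deliberately over-parameterized network can fit purely random labels, which is exactly the property that breaks the usual capacity-based generalization bounds.

```python
# Toy illustration: a small neural net can memorize purely random labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # random inputs
y = rng.integers(0, 2, size=200)    # labels are pure coin flips

# Over-parameterized relative to the data set size.
net = MLPClassifier(hidden_layer_sizes=(512,), max_iter=5000, random_state=0)
net.fit(X, y)
print("train accuracy on random labels:", net.score(X, y))  # typically close to 1.0

# There is nothing to generalize: on fresh random data the same model
# does no better than chance.
X_new = rng.normal(size=(200, 20))
y_new = rng.integers(0, 2, size=200)
print("accuracy on fresh random data:", net.score(X_new, y_new))  # around 0.5
```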
And, so, these assumptions just don’t apply to neural nets.\n\nRohin (23:28): \nThe thing I’m excited about is: can we talk about assumptions on the data set, rather than just the model? And, if we think about assumptions on the data set and assumptions on the model, then can we say something about how neural nets tend to generalize? This is like a super vague, not-fleshed-out hope that I have not really started working on, nor, to my knowledge, has anyone else.\n\nRohin (23:55): \nThere’s just so many empirical things about neural nets that are so deeply confusing to me, like deep double descent. I don’t get it. It’s an empirical phenomenon. If you don’t know, you can look it up. It’s probably not that worth me going into, just so confusing. I don’t know why it happens. It makes no sense to me. And I want to know why, and I think that if we understood things like this, we might be able to start making statements about how neural nets tend to generalize, and maybe that translates into things we can say about safety.\n\nJeremie (24:27): \nThat’s interesting, because the generalization story seems to be one ingredient of the problem, of course, and then there’s the other ingredient, which, I mean, there’s some overlap, but it does seem like they have distinct components. It’s this challenge of telling machines what human preferences even are. Our ability to tell each other what we want out of life is already so limited, and it’s something that, I mean, at least I personally find somewhat jarring as a prospect, having to actually not only express our preferences, but quantify them and etch them into some kind of loss function that we then feed to a model. You’ve done a lot of interesting work on this.\n\nJeremie (25:09): \nAnd, actually, there’s one of your papers that I wanted to talk about. We discussed this before we started recording and I was so glad to hear that it was also the one that you thought was the most interesting. We have compatible views, at least on that. It was this idea of… Well, the paper’s title is Preferences Implicit in the State of the World. I guess, first, I wanted to ask a question to set the scene a little bit. What is preference learning? What is that concept?\n\nRohin (25:34): \nThis is actually the next thing I was going to say I was excited by.\n\nJeremie (25:38): \nOh, great.\n\nRohin (25:39): \nWhich is: I’ve talked about generalization, but before you get to generalization, you want to train on the right thing in the first place. That seems like a good starting point for an AI system. If you don’t have that, you’re probably toast. Lots of ink has been spilled on how it’s actually very tricky to specify what you want by writing a program or an equation that captures it in a number, which, as you know, is how deep reinforcement learning, or any deep learning system, works. But it’s most commonly associated with deep reinforcement learning.\n\nRohin (26:21): \nThe idea of preference learning is that, rather than having to specify what you want by writing down an equation, you specify it by some easier method. For example, you could look at two trajectories in a reinforcement learning setting, you can look at two behaviors that the agent took, and you can say, “Ah, yes, the left one. That one was better.” That’s giving the agent some feedback about what it should be doing. It’s not trying to write down an equation that captures the ideal behavior in every possible scenario. It’s just saying, out of these two, which one is better?
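To make the pairwise-comparison idea concrete, here is a minimal sketch assuming a linear reward model and a Bradley-Terry preference model, under which learning the reward weights from “which of these two was better?” answers reduces to logistic regression on feature differences. The features, sizes, and noise model are made-up assumptions, not the setup of any particular paper.

```python
# Minimal preference-learning sketch: P(A preferred to B) = sigmoid(w . (phi(A) - phi(B)))
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, n_features = 2000, 5
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # hidden "true" reward weights

phi_a = rng.normal(size=(n_pairs, n_features))  # features of trajectory A in each pair
phi_b = rng.normal(size=(n_pairs, n_features))  # features of trajectory B

# Simulated human feedback: noisy answers to "which of these two was better?"
pref_logits = (phi_a - phi_b) @ true_w
prefers_a = rng.random(n_pairs) < 1.0 / (1.0 + np.exp(-pref_logits))

# Recover the reward weights from comparisons alone, with no reward function written down.
model = LogisticRegression(fit_intercept=False)
model.fit(phi_a - phi_b, prefers_a.astype(int))

learned_w = model.coef_.ravel()
cosine = learned_w @ true_w / (np.linalg.norm(learned_w) * np.linalg.norm(true_w))
print("cosine similarity between learned and true reward weights:", round(float(cosine), 3))
```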
You would imagine that that’s easier for humans to do and more likely to be correct.\n\nRohin (27:08): \nThis preference learning field, I think of it as the field of how we design mechanisms for humans to give feedback to an AI system, such that we can actually give feedback that incentivizes the behavior we actually want, and we don’t make as many mistakes in specification as we do with reward functions.\n\nJeremie (27:34): \nSo, what I find really exciting about that aspect, too, is there’s this well known difference in humans between expressed desires and revealed desires, or expressed intent and revealed intent. I’ll say I want to work out for three hours today, I want to do a bunch of coding, and I want to have a bunch of vegan meals for the next month. And then if you check in on me next month, I will not have done all those things, I won’t have done nearly all those things. And the question is, well, which me is me? Am I the aspirational self that said, hey, I would love to be that person? Or am I the jackass who actually sat on his couch and watched Netflix the entire time?\n\nJeremie (28:16): \nThis seems to really scratch that itch in the sense that it probes our revealed preferences, for better or for worse, I guess that could also be a failure mode. Is that something that you see as valuable in this approach?\n\nRohin (28:29): \nYeah, I think you want to use both sources of information and not just use either one alone. Actually, let me take a step back and distinguish between two different things you could be trying to do. There’s one thing where you’re trying to learn what humans value, which is the sort of thing that you’re talking about, and there’s another framing where you’re just like, “I want my AI system to do such and such task and I want to train it to do that, but I can’t write down a reward function for that task.”\n\nRohin (29:01): \nI’m actually more interested in the latter, honestly, but the former is also something I’ve spent a lot of time on and I’m excited by. Right now, we’re talking about the former.\n\nJeremie (29:13): \nCan I ask a naïve question? I think I understand what the difference is, but I just want to put it to you to tackle it explicitly. What is the difference between those two things?\n\nRohin (29:24): \nOne thing is maybe I want my AI system to vacuum my floors, or something. The task of vacuuming my floors is not well specified just by that sentence. Anyone who has a Roomba will tell you stories of the Roomba being super dumb. Some of those are just the Roomba being not intelligent enough, but some are also because the task is not super well specified.\n\nRohin (29:57): \nShould the agent vacuum underneath a Christmas tree where there’s a bunch of needles that might ruin its vacuum? Who knows. If there’s some random shiny button on the floor, should it be vacuumed up or left alone? Because maybe that button’s important. What sorts of things should be vacuumed at all? Should the cat ever be vacuumed? The cat has a lot of hair that gets everywhere. If you vacuum the cat, that seems like it would make your house cleaner.\n\nRohin (30:31): \nThere’s lots of ambiguity here. I wouldn’t really say that these are human values, like teaching your Roomba how to vacuum does not seem to be the same thing as teaching the Roomba about human values. For one thing, you can’t really talk that much about revealed preferences here, because I don’t vacuum my house very often.
If an AI system were going to do the vacuuming, I might have it vacuum more often.\n\nJeremie (31:06): \nWould you say this is a narrow application of human preferences? It almost seems like the distinction between narrow AI and AGI somehow maps onto this.\n\nRohin (31:16): \nYeah, and I think I agree with that. I would say, though, that in this sense, everything is narrow AI. You just get narrow AI that becomes more and more general, and at some point we decide to stop calling it narrow AI and start to call it AGI because of how broad it has become.\n\nRohin (31:34): \nI like the idea where you start with something that can be applied to systems today and you just scale it up. It becomes more and more capable, more and more general, but it’s always the same technique. Eventually, the systems that we create with it, we would label them as AGI or human level intelligence or super intelligent. It’s the same technique, it’s the same general principle. That’s why I’m more excited by this framing of the problem, rather than the human values framing.\n\nRohin (32:07): \nAs you get to more general systems, it merges in with the human values. Once you get AI systems that are designing government policies or something, whatever feedback you’re giving them, it better teach them about human values.\n\nJeremie (32:23): \nYeah, and hopefully, I guess, we start to do that at higher and higher levels of abstraction, as you say, as we climb that ladder. We fill in the convolutional filters in a sense as we go up.\n\nRohin (32:34): \nYes, exactly. You had asked a question about revealed preferences versus spoken preferences, or expressed preferences. I think, yes, this is an important distinction. I definitely want any method that we propose to not be dependent on one or the other but to instead be using both, and there will be conflicts. I’m mostly hoping that we can just have AI systems that set aside the conflicts and do things that are robustly good according to either set. Probably, you’ll have to have some amount of conflict resolution mechanism, but humans already have to do this in some sense. It seems plausible that we could do it.\n\nRohin (33:28): \nI think a very nice aspect of this is that you don’t have to commit yourself to defining the behavior up front in every possible situation. We just don’t know this. Our values are not well defined enough, honestly, for that to be right. Our values are constantly in the process of being updated as we encounter new situations. Right now, we talk about democracy, one vote per person. If, someday, in the transhumanist future, it becomes possible to just make copies of people, I think we would pretty quickly no longer want to have one vote per person. Because otherwise you can just pay to have anyone elected if you are sufficiently rich.\n\nJeremie (34:19): \nYeah. Or, I guess, just in the limit of better information about brain states, we could say, well, sure, this policy makes the majority of people happier, but the people it makes more unhappy, I mean, look at that horrible dopamine cycle.
Those people are really taking a big hit, and you weigh those responses.\n\nRohin (34:37): \nYep, yeah, you could definitely optimize better for social welfare potentially, and maybe then you don’t want to just have one vote per person.\n\nJeremie (34:45): \nRight, now I guess this brings us back to Preferences Implicit in the State of the World. There are things, presumably, about the structure of the world that reveal our… I guess this is revealed preferences, mostly, right?\n\nRohin (34:57): \nYes.\n\nJeremie (34:57): \nWhat we’ve actually done.\n\nRohin (34:58): \nYep, this is definitely a revealed preferences method. I think an important aspect of this is people will… I think one of the reasons I’m especially excited about this, which I want to say as a prelude, is that it’s not trying to do the hard things. When people think about value learning, they think about things like: if a self driving car has a choice between running into two passengers or killing the driver, what should it do? Those are hard, ethical questions. I’m less interested in them. I want to start with: can we get an AI system that reliably knows that it should not kill humans? If there are two options where, yeah…\n\nRohin (35:50): \nAnyway, the basic stuff that we all agree on or nearly all agree on. And so I think looking at the state of the world is a good way to see this, and the basic intuition here is that we’ve been acting in the world for so long. We have preferences, we’ve been rearranging the world to fit the way that we want the world to be. As a result, you can invert that process to figure out what things we probably want.\n\nRohin (36:26): \nThere’s this nice toy example that illustrates this. Suppose there’s a room, and in the middle of the room there is this breakable vase. And vases, once they’re broken, they can never be repaired. We assume that the AI knows this. We’re going to assume that the AI knows all empirical facts. It knows how the world works, it knows what actions the human can take, it knows what actions it itself can take, it knows what states of the world are possible, but it doesn’t know anything about the \[inaudible 00:36:58\] function, which is the equivalent of human values.\n\nRohin (37:02): \nIt knows empirical facts. It knows that this vase, once broken, cannot be fixed. We’re going to leave aside glue and things like that. It then looks at what it sees: it is deployed in this room, and it sees that its human, who I’ll call Alice, is in the room, and the vase is unbroken. Now you can pose hypothetical questions like: all right, well, what would I have expected to see if Alice wanted to break the vase? Well, I would have seen a broken vase. What would I have expected if Alice didn’t care about the vase? Well, probably, at some point, the most efficient way would have been to just walk through the room while knocking over the vase. So, probably, in that case also I would have seen the broken vase.\n\nRohin (37:58): \nWhat would I expect to see if Alice did not want the vase to be broken, or actively wanted the vase to not be broken? In that case, I actually see an unbroken vase, probably. Since I actually see an unbroken vase, that tells me that of those three situations, only the last one seems consistent with my observations. So, probably, Alice did not want to break the vase.
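The three hypothetical questions above can be written as a tiny Bayesian update over candidate preferences. The hypotheses and likelihood numbers below are invented purely for illustration; the actual paper inverts the human’s policy over a time horizon rather than hand-setting likelihoods like this.

```python
# Tiny Bayesian version of the vase argument (illustrative numbers only).
prior = {
    "Alice wants the vase broken": 1 / 3,
    "Alice doesn't care about the vase": 1 / 3,
    "Alice wants the vase intact": 1 / 3,
}

# P(we observe an intact vase after Alice has lived in the room | hypothesis)
likelihood_intact = {
    "Alice wants the vase broken": 0.01,
    "Alice doesn't care about the vase": 0.10,  # she would probably have knocked it over
    "Alice wants the vase intact": 0.95,
}

unnormalized = {h: p * likelihood_intact[h] for h, p in prior.items()}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

for hypothesis, prob in posterior.items():
    print(f"{hypothesis}: {prob:.2f}")
# Seeing the intact vase concentrates the posterior on Alice preferring it unbroken.
```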
You can infer this fact, that Alice doesn’t want to break the vase, just by looking at the state of the world and seeing that the vase is not broken.\n\nJeremie (38:33): \nIt seems like there’s a very deep connection here to the second law of thermodynamics. The universe has so many more ways to end up in a situation where you have a broken vase, but the fact that there isn’t a broken vase is a huge piece of information.\n\nRohin (38:52): \nYeah, that’s exactly right.\n\nJeremie (38:57): \nBasically, to the extent \[crosstalk 00:38:59\]-\n\nRohin (38:58): \nI don’t think I have anything to add.\n\nJeremie (38:59): \nWell, it just strikes me, the physicist instinct in me, but to the extent that the world looks any different from what we would expect with pure thermodynamic randomness, the assumption here is those differences come from human preferences. Would that be a fair way to characterize the…\n\nRohin (39:17): \nYep, that’s right.\n\nJeremie (39:19): \nAnd does that imply certain failure modes then? Because I guess we encode information in our environment, I guess this is \[inaudible 00:39:25\] revealed preferences thing, but implicitly, I’ve hard coded my brain state into my apartment, every arrangement of things, any misogyny, any racism, any foot fetishes, the whole laundry list of weird quirks that may or may not be part of my personality are implicitly encoded in the room. Is this part of the risk of applying a technique like this?\n\nRohin (39:54): \nYeah, so, in theory, if you \[inaudible 00:39:59\] this method, it would… Is this going to get everything that is a revealed preference? Well, I don’t know that it gets everything. But to a first approximation, it gets your revealed preferences. I’m sure there are some that it does not get. Sometimes, you just don’t like your revealed preferences and you think they should be different.\n\nRohin (40:27): \nYou have a revealed preference, many people have a revealed preference, to procrastinate, one that they probably do not in fact endorse, and they wouldn’t want their AI system giving them more and more addictive material so that they can procrastinate better, which is plausibly something that could happen. I would have to think significantly harder about how exactly, concretely, that could happen, but I could believe that that would be an effect.\n\nRohin (41:03): \nSimilarly, the technique as I’ve explained it so far assumes that there is only one human in the world, and things get a lot more complicated if there are multiple humans, and I have just ignored that case so far.\n\nJeremie (41:20): \nIt’s what you need to get the thing off the ground, right?\n\nRohin (41:22): \nYeah.\n\nJeremie (41:26): \nIn this context, I imagine at least, there’s another risk mode, which is if, by pure chance, let’s say, in the example with the vase, let’s say that the human actually doesn’t care about the vase but just happens, in her demonstration, to have avoided the vase. Is there the risk that, I guess this is always a risk in machine learning, it sounds like just a case of out-of-distribution sampling, like you would learn-\n\nRohin (41:57): \nYep.\n\nJeremie (41:57): \nOkay.\n\nRohin (41:58): \nYeah, that’s right. If the vase is kept in an inconspicuous, out of the way location where it’s not actually that likely that Alice would have broken the vase in the course of moving around the room, we actually have this in the paper, we show that in that environment you actually don’t learn anything significant about the vase.
You’re just like, “Eh, she probably didn’t want it broken.” You infer that she did not deeply desire for the vase to be broken, but you don’t infer anything stronger than that. You’re uncertain between whether it’s bad to break vases versus, yeah, it doesn’t really matter.\n\nJeremie (42:44): \nInteresting.\n\nRohin (42:45): \nIt’s more like, if there were efficiency gains to be had by breaking the vase, and you observe that actually the vase wasn’t broken, then you infer that it’s bad to break vases. It’s still possible that humans, we’re not perfect optimal people, we might not pick up an efficiency gain, and so we might go around a vase even though it would be faster to go through the vase, even if we didn’t care about the vase. And, yeah, this method would make an incorrect inference. In general, in preference learning, there’s a big tension: you’re assuming that the things humans do reflect what they want, and that’s not always true.\n\nJeremie (43:37): \nRight, and sometimes just for reasons of pure stupidity as well, I guess. We may want a thing, just not know how to make it happen.\n\nRohin (43:45): \nYup, exactly. That is a big challenge in preference learning and people have tried to tackle it, including me, actually. But I wouldn’t say that there has been super substantial progress on separating stupidity from the things you actually wanted.\n\nJeremie (44:10): \nI think we’d end up solving a lot of other problems instead \[crosstalk 00:44:12\] if we do that. Actually, there’s one more aspect I wanted to ask you about, with respect to the paper. The role of time horizons, or the role that time horizons play in the paper, is, I think, just really interesting, because there are certain assumptions that the robot makes, or the AI makes, about what the time horizon is that the human has in mind for this action, and if those assumptions about the time horizon shift, you start to see different behavior. I’d love to hear you expand on that and describe that setting a little bit.\n\nRohin (44:43): \nI think the main takeaway I would have about time horizons is that if you assume a short time horizon, then cases where the state hasn’t been fully optimized are much more excusable, because the human just hasn’t had enough time to fully optimize the state towards what would be optimal for them and so you can-\n\nJeremie (45:11): \nAnd so maybe I should fill in that gap, I realize it was a little ambiguous, but by time horizon, I guess we’re talking about the amount of time the human would have to, say, go from one point in the room to a desired end point, right? \[crosstalk 00:45:24\]\n\nRohin (45:23): \nYeah, it’s like the amount of time that the robot assumes the human has been acting in the environment before the robot showed up. In the room case, the robot is deployed and sees an intact vase and it’s like, “Ah, yes, the human has been walking around this room for an hour,” or something like that.\n\nJeremie (45:40): \nRight, if you’ve been walking around the room for a full hour and the vase is still there, you can then assume that the vase is probably pretty important.\n\nRohin (45:48): \nYeah, something along those lines, exactly. The actual setting in the paper is slightly different, but that’s the right intuition. Yeah, and so this isn’t illustrated best with the vase example, but imagine you’re trying to build a house of cards. This is another example where the state of the world is really very informative about your preferences.
House of cards are super, super not entropic. You can infer a lot.\n\nJeremie (46:21): \nYeah, the more specific the arrangement, I guess the more… Which is interesting because that’s exactly what’s so challenging about preserving humanity in general, there’s almost a philosophically conservative streak to this approach in the sense that we’re assuming that we’ve gotten to somewhere that’s worth preserving because we’ve encoded so much of ourselves, so much of what’s good about us already in the environment and it almost seems like what I love about this time horizon stuff is the political philosophy behind it, it almost gives you a dial that you can tune from the progressive to the conservative end of the spectrum just by assuming different time horizons. If you assume that we just got here and it’s sort of blank slate, then, hey, we can try anything. We’re not really certain about what humans want in this environment, conversely…\n\nRohin (47:12): \nYeah.\n\nJeremie (47:12): \nRight? Yeah.\n\nRohin (47:13): \nThat’s true, I’ve never actually thought about it that way, but you’re right. That is basically what it is. Another way of thinking about it is the way I actually got to this point, to the point of writing this paper, was asking myself why is it that we privilege the do nothing action? We’ll say that the safe action is to do nothing. Why? It’s just an action. This is an answer, we’ve already optimized the environment, random actions are probably, so… The current state is high on our preference ranking. Random actions take us out of that state into some random other state, so probably an expectation to go lower in our ranking whereas the do nothing action preserves it and so it’s good. The longer the time horizon, the stronger you want to do nothing as a default.\n\nJeremie (48:07): \nYeah, yeah, I remember reading that in the paper, actually, as almost a derivation of that intuition which is it’s so beautiful when you can see it laid out like that.\n\nRohin (48:15): \nI know, so good.\n\nJeremie (48:18): \nYeah, yeah. In a way, it makes me think of so many arguments among and between people of different political stripes could be so much easier if we applied toy models like this where you can say, well, hey, there is value to the conservative. There is value to the progressive. We end up in a dystopia either way, and here’s the parameter we can tune to see how dystopic things get one way or the other, depending on how much we value things.\n\nJeremie (48:43): \nYeah, anyway I love the work and I thought it was… Anyway, again, one of these ah-ha moments, for anybody who’s interested in the intersection between philosophy, moral philosophy, and then an AI, anyway, it’s just such a cool piece of work.\n\nRohin (48:58): \nYeah, thanks. I like it for basically the same reasons.\n\nJeremie (49:03): \nSweet. Well, I’m glad we have compatible \\[inaudible 00:49:06\\], then. Awesome. Well, I think we’ve covered a lot of the bases here, but was there anything else you wanted to talk about? One thing I do want to make sure we get to is a reference to the alignment newsletter that you put out. I think everybody should check that out, especially if you’re looking to get into the space. 
Rohin puts out this amazing newsletter and anyway, we’ll link to it on the blog post that will come with the podcast.\n\nJeremie (49:30): \nDid you have any social media links or things like that that you want to share?\n\nRohin (49:35): \nI think the alignment newsletter is the best way to get my current thinking on things. If you’re new to the space, I’d probably recommend other things. Specific ranking of mine that I like as more introductory… It’s not exactly introductory, but more timeless material, on the alignment forum there is a sequence of blog posts called the value learning sequence that I wrote. I like that as a good introduction, there are two other recommended sequences on that forum that I also recommend, that I also think are pretty great.\n\nRohin (50:21): \nIn terms of social media, I have a Twitter. It’s @RohinMShah, but mostly it just sends out the alignment newsletter links. People can also feel free to e-mail me. My e-mail’s on my website, can’t guarantee that I will send you a response because I do actually get a lot of e-mail but I think I have a fairly high response rate.\n\nJeremie (50:50): \nYeah, well, I can vouch for that at my end. Thanks for making the time, really appreciate it, and I’m really looking forward actually to putting this out and also good luck with DeepMind because you’re heading over there in a couple of days, really, right?\n\nRohin (51:04): \nYeah, I’m heading there on Monday. It’s two more business days.\n\nJeremie (51:09): \nAll right, yeah, enjoy the long weekend such as it is.\n\nRohin (51:13): \nCool. Thanks.\n\nJeremie (51:13): \nAwesome, thanks so much, Rohin.", "filename": "Rohin Shah - Effective altruism, AI safety, and learning human preferences from the world_s state-by Towards Data Science-video_id uHiL6GNXHvw-date 20201028.md", "id": "9ad3327e217ee4c1b163f4ab39fd41f9", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Iason Gabriel on Foundational Philosophical Questions in AI Alignment-by Future of Life Institute-video_id MzFl0SdjSso-date 20210630", "authors": ["Iason Gabriel"], "date_published": "2021-06-30", "text": "# Iason Gabriel on Foundational Philosophical Questions in AI Alignment - Future of Life Institute\n\nIn the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings its own normative and metaethical beliefs that will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment.   
\n\n **Topics discussed in this episode include:**\n\n- How moral philosophy and political theory are deeply related to AI alignment\n- The problem of dealing with a plurality of preferences and philosophical views in AI alignment\n- How the is-ought problem and metaethics fits into alignment \n- What we should be aligning AI systems to\n- The importance of democratic solutions to questions of AI alignment \n- The long reflection\n\n**Timestamps:** \n\n0:00 Intro\n\n2:10 Why Iason wrote Artificial Intelligence, Values and Alignment\n\n3:12 What AI alignment is\n\n6:07 The technical and normative aspects of AI alignment\n\n9:11 The normative being dependent on the technical\n\n14:30 Coming up with an appropriate alignment procedure given the is-ought problem\n\n31:15 What systems are subject to an alignment procedure?\n\n39:55 What is it that we’re trying to align AI systems to?\n\n01:02:30 Single agent and multi agent alignment scenarios\n\n01:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals\n\n01:30:28 The long reflection\n\n01:53:55 Where to follow and contact Iason\n\n**Citations:**\n\n[Artificial Intelligence, Values and Alignment](https://arxiv.org/abs/2001.09768) \n\n[Iason Gabriel’s Google Scholar](https://scholar.google.co.uk/citations?user=LLHZcksAAAAJ&hl=en)\n\nWe hope that you will continue to join in the conversations by following us or subscribing to our podcasts on [Youtube](https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw), [Spotify,](https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP) [SoundCloud](https://soundcloud.com/futureoflife/tracks), [iTunes](https://itunes.apple.com/gb/podcast/the-future-of-life/id1170991978?mt=2), [Google Play](https://play.google.com/music/listen?u=0#/ps/Iatasczlcpeyawgv5werdrxbnh4), [Stitcher](https://www.stitcher.com/podcast/future-of-life-institute/the-future-of-life-institue), [iHeartRadio](https://www.iheart.com/podcast/256-the-future-of-life-30997580/), or your preferred podcast site/application. You can find all the [AI Alignment Podcasts here.](https://futureoflife.org/ai-alignment-podcast/)\n\n_You can listen to the podcast above or read the transcript below._ \n\n[![](https://futureoflife.org/wp-content/uploads/2020/09/Iason-Gabriel-thin-1200x430.jpg?x76795)](https://futureoflife.org/wp-content/uploads/2020/09/Iason-Gabriel-thin.jpg?x76795)\n\n**Lucas Perry:** Welcome to the AI Alignment Podcast. I’m Lucas Perry. Today, we have a conversation with Iason Gabriel about a recent paper that he wrote titled Artificial Intelligence, Values and Alignment. This episode primarily explores how moral and political theory are deeply interconnected with the technical side of the AI alignment problem, and important questions related to that interconnection. We get into the problem of dealing with a plurality of preferences and philosophical views, the is-ought problem, metaethics, how political theory can be helpful for resolving disagreements, what it is that we’re trying to align AIs to, the importance of establishing a broadly endorsed procedure and set of principles for alignment, and we end on exploring the long reflection.\n\nThis was a very fun and informative episode. Iason has succeeded in bringing new ideas and thought to the space of moral and political thought in AI alignment, and I think you’ll find this episode enjoyable and valuable. 
If you don’t already follow us, you can subscribe to this podcast on your preferred podcasting platform by searching for The Future of Life or following the links on the page for this podcast.\n\nIason Gabriel is a Senior Research Scientist at DeepMind where he works in the Ethics Research Team. His research focuses on the applied ethics of artificial intelligence, human rights, and the question of how to align technology with human values. Before joining DeepMind, Iason was a Fellow in Politics at St John’s College, Oxford. He holds a doctorate in Political Theory from the University of Oxford and spent a number of years working for the United Nations in post-conflict environments.\n\nAnd with that, let’s get into our conversation with Iason Gabriel.\n\nSo we’re here today to discuss your paper, Artificial Intelligence, Values and Alignment. To start things off here, I’m interested to know what you found so compelling about the problem of AI values and alignment, and generally, just what this paper is all about.\n\n**Iason Gabriel:** Yeah. Thank you so much for inviting me, Lucas. So this paper is in broad brush strokes about how we might think about aligning AI systems with human values. And I wrote this paper because I wanted to bring different communities together. So on the one hand, I wanted to show machine learning researchers, that there were some interesting normative questions about the value configuration we align AI with that deserve further attention. At the same time, I was keen to show political and moral philosophers that AI was a subject that provoked real philosophical reflection, and that this is an enterprise that is worthy of their time as well.\n\n**Lucas Perry:** Let’s pivot into what the problem is then that technical researchers and people interested in normative questions and philosophy can both contribute to. So what is your view then on what the AI problem is? And the two parts you believe it to be composed of.\n\n**Iason Gabriel:** In broad brush strokes, I understand the challenge of value alignment in a way that’s similar to Stuart Russell. He says that the ultimate aim is to ensure that powerful AI is properly aligned with human values. I think that when we reflect upon this in more detail, it becomes clear that the problem decomposes into two separate parts. The first is the technical challenge of trying to align powerful AI systems with human values. And the second is the normative question of what or whose values we try to align AI systems with.\n\n**Lucas Perry:** Oftentimes, I also see a lot of reflection on AI policy and AI governance as being a core issue to also consider here, given that people are concerned about things like race dynamics and unipolar versus multipolar scenarios with regards to something like AGI, what are your thoughts on this? And I’m curious to know why you break it down into technical and normative without introducing political or governance issues.\n\n**Iason Gabriel:** Yeah. So this is a really interesting question, and I think that one we’ll probably discuss at some length later about the role of politics in creating aligned AI systems. Of course, in the paper, I suggest that an important challenge for people who are thinking about value alignment is how to reconcile the different views and opinions of people, given that we live in a pluralistic world, and how to come up with a system for aligning AI systems that treats people fairly despite that difference. 
In terms of practicalities, I think that people envisage alignment in different ways. Some people imagine that there will be a human parliament or a kind of centralized body that can give very coherent and sound value advice to AI systems. And essentially, that the human element will take care of this problem with pluralism and just give AI very, very robust guidance about things that we’ve all agreed upon are the best thing to do.\n\nAt the same time, there’s many other visions for AI or versions of AI that don’t depend upon that human parliament being able to offer such cogent advice. So we might think that there are worlds in which there’s multiple AIs, each of which has a human interlocutor, or we might imagine AIs as working in the world to achieve constructive ends and that it needs to actually be able to perform these value calculations or this value synthesis as part of its kind of default operating procedure. And I think it’s an open question what kind of AI system we’re discussing and that probably the political element understood in terms of real world political institutions will need to be tailored to the vision of AI that we have in question.\n\n**Lucas Perry:** All right. So can you expand then a bit on the relationship between the technical and normative aspects of AI alignment?\n\n**Iason Gabriel:** A lot of the focus is on the normative part of the value alignment question, trying to work out which values to align AI systems with, whether it is values that really matter and how this can be decided. I think this is also relevant when we think about the technical design of AI systems, because I think that most technologies are not value agnostic. So sometimes, when we think about AI systems, we assume that they’ll have this general capability and that it will almost be trivially easy for them to align with different moral perspectives or theories. Yet when we take a ground level view and we look at the way in which AI systems are being built, there’s various path dependencies that are setting in and there’s different design architectures that will make it easier to follow one moral trajectory rather than the other.\n\nSo for example, if we take a reinforcement learning paradigm, which focuses on teaching agents tasks by enabling them to maximize reward in the face of uncertainty over time, a number of commentators have suggested that, that model fits particularly well with the kind of utilitarian decision theory, which aims to promote happiness over time in the face of uncertainty, and that it would actually struggle to accommodate a moral theory that embodies something like rights or hard constraints. And so I think that if what we do want is a rights based vision of artificial intelligence, it’s important that we get that ideal clear in our minds and that we design with that purpose in mind.\n\nThis challenge becomes even clearer when we think about moral philosophies, such as a Kantian theory, which would ask an agent to reflect on the reasons that it has for acting, and then ask whether they universalize to good states of affairs. 
And this idea of using the currency of a reason to conduct moral deliberation would require some advances in terms of how we think about AI, and it’s not something that is very easy to get a handle on from a technical point of view.\n\n**Lucas Perry:** So the key takeaway here is that what is going to be possible in terms of the normative and in terms of moral learning and moral reasoning in AI systems will supervene upon technical pathways that we take, and so it is important to be mindful of the relationship between what is possible normatively, given what is technically known, and to try and navigate that with mindfulness about that relationship?\n\n**Iason Gabriel:** I think that’s precisely right. I see at least two relationships here. So the first is that if we design without a conception of value in mind, it’s likely that the technology that we build will not be able to accommodate any value constellation. And then the mirror side of that is if we have a clear value constellation in mind, we may be able to develop technologies that can actually implement or realize that ideal more directly and more effectively.\n\n**Lucas Perry:** Can you make a bit more clear the ways in which, for example, path dependency of current technical research makes certain normative ethical theories more plausible to be instantiated in AI systems than others?\n\n**Iason Gabriel:** Yeah. So, I should say that obviously, there’s a wide variety of different methodologies that are being tried at the present moment, and that intuitively, they seem to match up well with different kinds of theory. Of course, the reality is a lot of effort has been spent trying to ensure that AI systems are safe and that they are aligned with human intentions. When it comes to richer goals, so trying to evidence a specific moral theory, a lot of this is conjecture because we haven’t really tried to build utilitarian or Kantian agents in full. But I think in terms of the details, so with regards to reinforcement learning, we have this, obviously, an optimization driven process, and there is that whole caucus of moral theories that basically use that decision process to achieve good states of affairs. And we can imagine, roughly equating the reward that we use to train an RL agent on, with some metric of subjective happiness, or something like that.\n\nNow, if we were to take a completely different approach, so say, virtue ethics, virtue ethics is radically contextual, obviously. And it says that the right thing to do in any situation is the action that evidences certain qualities of character and that these qualities can’t be expressed through a simple formula that we can maximize for, but actually require a kind of context dependence. So I think that if that’s what we want, if we want to build agents that have a virtuous character, we would really need to think about the fundamental architecture potentially in a different way. 
And I think that, that kind of insight has actually been speculatively adopted by people who consider forms of machine learning, like inverse reinforcement learning, who imagined that we could present an agent with examples of good behavior and that the agent would then learn them in a very nuanced way without us ever having to describe in full what the action was or give it appropriate guidance for every situation.\n\nSo, as I said, these really are quite tentative thoughts, but it doesn’t seem at present possible to build an AI system that adapts equally well to whatever moral theory or perspective we believe ought to be promoted or endorsed.\n\n**Lucas Perry:** Yeah. So, that does make sense to me that different techniques would be more or less skillful for more readily and fully adopting certain normative perspectives and capacities in ethics. I guess the part that I was just getting a little bit tripped up on is that I was imagining that if you have an optimizer being trained off something, like maximize happiness, then given the massive epistemic difficulties of running actual utilitarian optimization process that is only thinking at the level of happiness and how impossibly difficult that, that would be that like human beings who are consequentialists, it would then, through gradient descent or being pushed and nudged from the outside or something, would find virtue ethics and deontological ethics and that those could then be run as a part of its world model, such that it makes the task of happiness optimization much easier. But I see how intuitively it more obviously lines up with utilitarianism and then how it would be more difficult to get it to find other things that we care about, like virtue ethics or deontological ethics. Does that make sense?\n\n**Iason Gabriel:** Yeah. I mean, it’s a very interesting conjecture that if you set an agent off with the learned goal of trying to maximize human happiness, that it would almost, by necessity, learn to accommodate other moral theories and perspectives kind of suggests that there is a core driver, which animates moral inquiry, which is this idea of collective welfare being realized in a sustainable way. And that might be plausible from an evolutionary point of view, but there’s also other aspects of morality that don’t seem to be built so clearly on what we might even call the pleasure principle. And so I’m not entirely sure that you would actually get to a rights based morality if you started out from those premises.\n\n**Lucas Perry:** What are some of these things that don’t line up with this pleasure principle, for example?\n\n**Iason Gabriel:** I mean, of course, utilitarians have many sophisticated theories about how endeavors to improve total aggregate happiness involve treating people, fairly placing robust side constraints on what you can do to people and potentially, even encompassing other goods, such as animal welfare and the wellbeing of future generations. 
But I believe that the consensus or the preponderance of opinion is that actually, unless we can say that certain things matter fundamentally, for example, human dignity or the wellbeing of future generations or the value of animal welfare, it is quite hard to build a moral edifice that adequately takes these things into account just through instrumental relationships with human wellbeing or human happiness so understood.\n\n**Lucas Perry:** So then we have this technical problem of how to build machines that have the capacity to do what we want them to do, and to help us figure out what we would want to want the machines to do. An important problem that comes in here is Hume’s is-ought distinction. On one hand we have facts about the world, is statements; we can even have is statements about people’s preferences and meta-preferences and the collective state of all normative and meta-ethical views on the planet at a given time. On the other hand there is ought, which is a normative claim synonymous with should and is kind of the basis of morality. And there is a tension between what assumptions we might need to get morality off the ground, how we should interact with a world of facts and a world of norms, and how they may or may not relate to each other for creating a science of wellbeing, or not even doing that. So how do you think of coming up with an appropriate alignment procedure that is dependent on the answer to this distinction?\n\n**Iason Gabriel:** Yeah, so that’s a fascinating question. So I think that the is-ought distinction is quite fundamental and it helps us answer one important query, which is whether it’s possible to solve the value alignment question simply through an empirical investigation of people’s existing beliefs and practices. And if you take the is-ought distinction seriously, it suggests that no matter what we can infer from studies of what is already the case, so what people happen to prefer or happen to be doing, we still have a further question, which is: should that perspective be endorsed? Is it actually the right thing to do? And so there’s always this critical gap. It’s a space for moral reflection and moral introspection and a place in which error can arise. So we might even think that if we studied all the global beliefs of different people and found that they agreed upon certain axioms or moral properties, we could still ask, are they correct about those things? And if we look at historical beliefs, we might think that there was actually a global consensus on moral beliefs or values that turned out to be mistaken.\n\nSo I think that these endeavors to kind of synthesize moral beliefs, to understand them properly, are very, very valuable resources for moral theorizing. It’s hard to think where else we would begin, but ultimately, we do need to ask these questions about value more directly and ask whether we think that the final elucidation of an idea is something that ought to be promoted.\n\nSo in sum, it has a number of consequences, but I think one of them is that we do need to maintain a space for normative inquiry, and value alignment can’t just be addressed through an empirical social scientific perspective.\n\n**Lucas Perry:** Right, because one’s own perspective on the is-ought distinction and whether and how it is valid will change how one goes about learning and evolving normative and meta-ethical thinking.\n\n**Iason Gabriel:** Yeah. Perhaps at this point, an example will be helpful. 
So, suppose we’re trying to train a virtuous agent that has these characteristics of treating people fairly, demonstrating humility, wisdom, and things of that nature. Suppose we can’t specify these upfront and we do need a training set; we need to present the agent with examples of what people believe evidences these characteristics. We still have the normative question of what goes into that data set and how we decide. So, the evaluative questions get passed on to that. Of course, we’ve seen many examples of data sets being poorly curated and containing bias that then transmutes onto the AI system. We either need to have data that’s curated so that it meets independent moral standards and the AI learns from that data, or we need to have a moral ideal that is freestanding in some sense and that AI can be built to align with.\n\n**Lucas Perry:** Let’s try and make that even more concrete, because I think this is a really interesting and important problem about why the technical aspect is deeply related with philosophical thinking about this is-ought problem. So at the highest level of abstraction, like starting with axioms: if we have is statements about datasets, and data sets are just information about the world, then the data sets are the is statements. We can put whatever is statements into a machine, and the machine can take the shape of those values already embedded and codified in the world, in people’s minds, or in our artifacts and culture. And then the ought question, as you said, is what information in the world should we use? And to understand what information we should use requires some initial principle, some set of axioms that bridges the is-ought gap.\n\nSo for example, the kind of move that I think Sam Harris tries to lay out is this axiom: we should avoid the worst possible misery for everyone. You may or may not agree with that axiom, but that is the starting point for how one might bridge the is-ought gap to be able to select which data is better than other data, or which data we should load onto AI systems. So I’m curious to know, how is it that you think about this very fundamental level of initial axiom or axioms that are meant to bridge this distinction?\n\n**Iason Gabriel:** I think that when it comes to these questions of value, we could try and build up from these kinds of very, very minimalist assumptions of the kind that it sounds like Sam Harris is defending. We could also start with richer conceptions of value that seem to have some measure of widespread assent and reflective endorsement. So I think, for example, the idea that human life matters or that sentient life matters, that it has value and hence that suffering is bad, is a really important component of that. I think that conceptions of fairness, of what people deserve in light of that equal moral standing, are also an important part of the moral content of building an aligned AI system. And I would tend to try and be inclusive in terms of the values that we canvass.\n\nSo I don’t think that we actually need to take this very defensive posture. 
I think we can think expansively about the conception and nature of the good that we want to promote and that we can actually have meaningful discussions and debate about that so we can put forward reasons for defending one set of propositions in comparison with another.\n\n**Lucas Perry:** We can have epistemic humility here, given the history of moral catastrophes and how morality continues to improve and change over time and that surely, we do not sit at a peak of moral enlightenment in 2020. So given our epistemic humility, we can cast a wide net around many different principles so that we don’t lock ourselves into anything and can endorse a broad notion of good, which seems safer, but perhaps has some costs in itself for allowing and being more permissible for a wide range of moral views that may not be correct.\n\n**Iason Gabriel:** I think that’s, broadly speaking, correct. We definitely shouldn’t tether artificial intelligence too narrowly to the morality of the present moment, given that we may and probably are making moral mistakes of one kind or another. And I think that this thing that you spoke about, a kind of global conversation about value, is exactly right. I mean, if we take insights from political theory seriously, then the philosopher, John Rawls, suggests that a fundamental element of the present human condition is what he calls the fact of reasonable pluralism, which means that when people are not coerced and when they’re able to deliberate freely, they will come to different conclusions about what ultimately has moral value and how we should characterize ought statements, at least when they apply to our own personal lives.\n\nSo if we start from that premise, we can then think about AI as a shared project and ask this question, which is given that we do need values in the equation, that we can’t just do some kind of descriptive enterprise and that, that will tell us what kind of system to build, what kind of arrangement adequately factors in people’s different views and perspectives, and seems like a solution built upon the relevant kind of consensus to value alignment that then allows us to realize a system that can reconcile these different moral perspectives and takes a variety of different values and synthesizes them in a scheme that we would all like.\n\n**Lucas Perry:** I just feel broadly interested in just introducing a little bit more of the debate and conceptions around the is-ought problem, right? Because there are some people who take it very seriously and other people who try to minimize it or are skeptical of it doing the kind of philosophical work that many people think that it’s doing. For example, Sam Harris is a big skeptic of the kind of work that the is-ought problem is doing. And in this podcast, we’ve had people on who are, for example, realists about consciousness, and there’s just a very interesting broad range of views about value that inform the is-ought problem. 
If one’s a realist about consciousness and thinks that suffering is the intrinsic valence carrier of disvalue in the universe, and that joy is the intrinsic valence carrier of wellbeing, one can have different views on how that even translates to normative ethics and morality and how one does that, given one’s view on the is-ought problem.\n\nSo, for example, if we take that kind of metaphysical view about consciousness seriously, then if we take the is-ought problem seriously then, even though there are actually bad things in the world, like suffering, those things are bad, but that it would still require some kind of axiom to bridge the is-ought distinction, if we take it seriously. So because pain is bad, we ought to avoid it. And that’s interesting and important and a question that is at the core of unifying ethics and all of our endeavors in life. And if you don’t take the is-ought problem seriously, then you can just be like, because I understand the way that the world is, by the very nature of being sentient being and understanding the nature of suffering, there’s no question about the kind of navigation problem that I have. Even in the very long-term, the answer to how one might resolve the is-ought problem would potentially be a way of unifying all of knowledge and endeavor. All the empirical sciences would be unified conceptually with the normative, right? And then there is no more conceptual issues.\n\nSo, I think I’m just trying to illustrate the power of this problem and distinction, it seems.\n\n**Iason Gabriel:** It’s a very interesting set of ideas. To my mind, these kinds of arguments about the intrinsic badness of pain, or kind of naturalistic moral arguments, are very strong ways of arguing, against, say, moral relativist or moral nihilist, but they don’t necessarily circumvent the is-ought distinction. Because, for example, the claim that pain is bad is referring to a normative property. So if you say pain is bad, therefore, it shouldn’t be promoted, but that’s completely compatible with believing that we can’t deduce moral arguments from purely descriptive premises. So I don’t really believe that the is-ought distinction is a problem. I think that it’s always possible to make arguments about values and that, that’s precisely what we should be doing. And that the fact that, that needs to be conjoined with empirical data in order to then arrive at sensible judgments and practical reason about what ought to be done is a really satisfactory state of affairs.\n\nI think one kind of interesting aspect of the vision you put forward was this idea of a kind of unified moral theory that everyone agrees with. And I guess it does touch upon a number of arguments that I make in the paper, where I juxtapose two slightly stylistic descriptions of solutions to the value alignment challenge. The first one is, of course, the approach that I term the true moral theory approach, which holds that we do need a period of prolonged reflection and we reflect fundamentally on these questions about pain and perhaps other very deep normative questions. And the idea is that by using tools from moral philosophy, eventually, although we haven’t done it yet, we may identify a true moral theory. 
And then it’s a relatively simple… well, not simple from a technical point of view, but simple from a normative point of view task, of aligning AI, maybe even AGI, with that theory, and we’ve basically solved the value alignment problem.\n\nSo in the paper, I argue against that view quite strongly for a number of reasons. The first is that I’m not sure how we would ever know that we’d identified this true moral theory. Of course, many people throughout history have thought that they’ve discovered this thing and often gone on to do profoundly unethical things to other people. And I’m not sure how, even after a prolonged period of time, we would actually have confidence that we had arrived at the really true thing and that we couldn’t still ask the question, am I right?\n\nBut even putting that to one side, suppose that I had not just confidence, but justified confidence that I really had stumbled upon the true moral theory and perhaps with the help of AI, I could look at how it plays out in a number of different circumstances, and I realize that it doesn’t lead to these kind of weird, anomalous situations that most existing moral theories point towards, and so I really am confident that it’s a good one, we still have this question of what happens when we need to persuade other people that we’ve found the true moral theory and whether that is a further condition on an acceptable solution to the value alignment problem. And in the paper, I say that it is a further condition that needs to be satisfied because just knowing, well, supposedly having access to justified belief in a true moral theory, doesn’t necessarily give you the right to impose that view upon other people, particularly if you’re building a very powerful technology that has world shaping properties.\n\nAnd if we return to this idea of reasonable pluralism that I spoke about earlier, essentially, the core claim is that unless we coerce people, we can’t get to a situation where everyone agrees on matters of morality. We could flip it around. It might be that someone already has the true moral theory out there in the world today and that we’re the people who refuse to accept it for different reasons, I think the question then is how do we believe other people should be treated by the possessor of the theory, or how do we believe that person should treat us?\n\nNow, one view that I guess in political philosophy is often attributed to Jean-Jacques Rousseau, if you have this really good theory, you’re justified in coercing other people to live by it. He says that people should be forced to be free when they’re not willing to accept the truth of the moral theory. Of course, it’s something that has come in for fierce criticism. I mean, my own perspective is that actually, we need to try and minimize this challenge of value imposition for powerful technologies because it becomes a form of domination. So the question is how can we solve the value alignment problem in a way that avoids this challenge of domination? And in that regard, we really do need tools from political philosophy, which is, particularly within the liberal tradition, has tried to answer this question of how can we all live together on reasonable terms that preserve everyone’s capacity to flourish, despite the fact that we have variation and what we ultimately believe to be just, true and right.\n\n**Lucas Perry:** So to bring things a bit back to where we’re at today and how things are actually going to start changing in the real world as we move forward. 
What do you view as the kinds of systems that would be, and are, subject to something like an alignment procedure? Does this start with systems that we currently have today? Does it start with systems soon in the future? Should it have been done with systems that we already have today, but we failed to do so? What is your perspective on that?\n\n**Iason Gabriel:** To my mind, the challenge of value alignment is one that exists for the vast majority of, if not all, technologies. And it’s one that’s becoming more pronounced as these technologies demonstrate higher levels of complexity and autonomy. So for example, I believe that many existing machine learning systems encounter this challenge quite forcefully, and that we can ask meaningful questions about it. So I think in previous discussion, we may have had this example of a recommendation system come to light. And even if we think of something that seems really quite prosaic, so say a recommendation system for what films to watch or what content to be provided to you, I think the value alignment question actually looms large, because it could be designed to do very different things. On the one hand, we might have a recommendation system that’s geared around your current first order preferences. So it might continuously give you really stimulating, really fun, low quality content that kind of keeps you hooked to the system and with a high level of subjective wellbeing, but perhaps something that isn’t optimum in other regards. Then we can think about other possible goals for alignment.\n\nSo we might say that actually these systems should be built to serve your second order desires. Those are desires that, in philosophy, we would say people reflectively endorse; they’re desires about the person you want to be. So if we were to build a recommendation system with that goal in mind, it might be that instead of watching this kind of cheap and cheerful content, I decided that I’d actually like to be quite a high brow person. So it starts kind of tacitly providing me with more art house recommendations. But even that doesn’t exhaust the options: it might be that the system shouldn’t really be just trying to satisfy my preferences, that it should actually be trying to steer me in the direction of knowledge and things that are in my interest to know. So it might try and give me new skills that I need to acquire, might try and recommend, I don’t know, cooking or self improvement programs.\n\nThat would be a system that was, I guess, geared toward my own interest. But even that again doesn’t give us a complete portfolio of options. Maybe what we want is a morally aligned system that actually enhances our capacity for moral decision making. And then perhaps that would lead us somewhere completely different. So instead of giving us this content that we want, it might lead us to content that leads us to engage with challenging moral questions, such as factory farming or climate change. So, value alignment kind of arises quite early on. This is, of course, with the assumption that the recommendation system is geared to promote your interest or wellbeing or preference or moral sensibility. 
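As a purely illustrative aside, here is a hypothetical sketch of the recommender contrast just described: the same catalogue ranked by predicted immediate engagement (first order preferences) versus by what the user reflectively endorses (second order desires). The titles and scores are invented and stand in for whatever signals a real system might use.

```python
# Hypothetical sketch: one catalogue, two alignment targets.
# (title, predicted_engagement, reflective_endorsement) -- all values invented.

CATALOGUE = [
    ("endless cat videos", 0.95, 0.20),
    ("art-house film", 0.40, 0.80),
    ("cooking course", 0.35, 0.90),
]


def rank_by_engagement(items):
    """First-order alignment: surface whatever keeps the user watching."""
    return [title for title, engagement, _ in sorted(items, key=lambda x: x[1], reverse=True)]


def rank_by_reflective_endorsement(items):
    """Second-order alignment: surface what the user wishes they wanted."""
    return [title for title, _, endorsement in sorted(items, key=lambda x: x[2], reverse=True)]


if __name__ == "__main__":
    print(rank_by_engagement(CATALOGUE))              # cat videos come first
    print(rank_by_reflective_endorsement(CATALOGUE))  # cooking course comes first
```

The only design difference is which column the system optimizes, which is exactly why the choice of alignment target matters even for seemingly mundane systems.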
There’s also the question of whether it’s really promoting your goals and aspirations or someone else’s and in science and technology studies there’s a big area of value sensitive design, which essentially says that we need to consult people and have this almost like democratic discussions early on about the kind of values we want to embody in systems.\n\nAnd then we design with that goal in mind. So, recommendation systems are one thing. Of course, if we look at public institutions, say a criminal justice system, there, we have a lot of public roar and discussion about the values that would make a system like that fair. And the challenge then is to work out whether there is a technical approximation of these values that satisfactory realizes them in a way that conduces to some vision of the public good. So in sum, I think that value alignment challenges exist everywhere, and then they become more pronounced when these technologies become more autonomous and more powerful. So as they have more profound effects on our lives, the burden of justification in terms of the moral standards that are being met, become more exacting. And the kind of justification we can give for the design of a technology becomes more important.\n\n**Lucas Perry:** I guess, to bring this back to things that exist today. Something like YouTube or Facebook is a very rudimentary initial kind of very basic first order preference, satisfier. I mean, imagine all of the human life years that have been wasted, mindlessly consuming content that’s not actually good for us. Whereas imagine, I guess some kind of enlightened version of YouTube where it knows enough about what is good and yourself and what you would reflectively and ideally endorse and the kind of person that you wish you could be and that you would be only if you knew better and how to get there. So, the differences between that second kind of system and the first system where one is just giving you all the best cat videos in the world, and the second one is turning you into the person that you always wish you could have been. I think this clearly demonstrates that even for systems that seem mundane, that they could be serving us in much deeper ways and at much deeper levels. And that even when they superficially serve us they may be doing harm.\n\n**Iason Gabriel:** Yeah, I think that’s a really profound observation. I mean, when we really look at the full scope of value or the full picture of the kinds of values we could seek to realize when designing technologies and incorporating them into our lives, often there’s a radically expansive picture that emerges. And this touches upon a kind of taxonomic distinction that I introduce in the paper between minimalist and maximalist conceptions of value alignment. So when we think about AI alignment questions, the minimalist says we have to avoid very bad outcomes. So it’s important to build safe systems. And then we just need them to reside within some space of value that isn’t extremely negative and could take a number of different constellations. Whereas the maximalist says, “Well, let’s actually try and design the very best version of these technologies from a moral point of view, from a human point of view.”\n\nAnd they say that even if we design safe technologies, we could still be leaving a lot of value out there on the table. So a technology could be safe, but still not that good for you or that good for the world. And let’s aim to populate that space with more positive and richer visions of the future. 
And then try to realize those through the technologies that we’re building. As we want to realize richer visions of human flourishing, it becomes more important that it isn’t just a personal goal or vision, but it’s one that is collectively endorsed, has been reflected upon and is justifiable from a variety of different points of view.\n\n**Lucas Perry:** Right. And I guess it’s just also interesting and valuable to reflect briefly on how there is already, in each society, a place where we draw the line at value imposition, and we have these principles, which we’ve agreed upon broadly, but we’re not going to let Ted Bundy do what Ted Bundy wants to do.\n\n**Iason Gabriel:** That’s exactly right. So we have hard constraints, some of which are kind of set in law. And clearly, assuming these are just laws, those are constraints that AI systems need to respect. There’s also a huge possible space of better outcomes that are left open once we look at where moral constraints are placed and where they reside. I think that the Ted Bundy example is interesting because it also shows that we need to discount the preferences and desires of certain people.\n\nOne vision of AI alignment says that it’s basically a global preference aggregation system that we need, but in reality, there’s a lot of preferences that just shouldn’t be counted in the first place because they’re unethical or they’re misinformed. So again, to my mind, that kind of pushes us in this direction of a conversation about value itself. And once we know what the principled basis for alignment is, we can then adjudicate properly cases like that and work out what a kind of valid input for an aligned system is and what things we need to discount if we want to realize good moral outcomes.\n\n**Lucas Perry:** I’m not going to try and pin you down too hard on that, because there’s the tension here, of course, between the importance of liberalism, not coercing value judgments on anyone, but then also being like, “Well, we actually have to do it in some places.” And that line is a scary one to move in either direction. So, I want to explore more now the different understandings of what it is that we’re trying to align AI systems to. So broadly, people, and I, use a lot of different words here without perhaps being super specific about what we mean: people talk about values and intentions and idealized preferences and things of this nature. So can you be a little bit more specific here about what you take to be the goal of AI alignment, the goal of it being, what is it that we’re trying to align systems to?\n\n**Iason Gabriel:** Yeah, absolutely. So we’ve touched upon some of these questions already tacitly in the preceding discussion. Of course, in the paper, I argue that when we talk about value alignment, this idea of value is often a placeholder for quite different ideas, as you said. And I actually present a taxonomy of options that I can take us through in a fairly thrifty way. So, I think the starting point for creating aligned AI systems is this idea that we want AI that’s able to follow our instructions, but that has a number of shortcomings, which Stuart Russell and others have documented, which tend to center around this challenge of excessive literalism. So if an AI system literally does what we ask it to, without an understanding of context, side constraints and nuance, often this will lead to problematic outcomes, with the story of King Midas being the classic cautionary tale. 
He wishes that everything he touches turns to gold, everything turns to gold, and then you have a disaster of one kind or another.\n\nSo of course, instructions are not sufficient. What you really want is AI that’s aligned with the underlying intention. So, I think that often in the podcast, people have talked about intention alignment as an important goal of AI systems. And I think it is precisely right to dedicate a lot of technical effort to close the gap between a kind of idiot savant AI, that perceives just the instructions in this dumb way, and the kind of more nuanced, intelligent AI that can follow an intention. But we might wonder whether aligning AI with an individual or collective intention is actually sufficient to get us to the really good outcomes, the kind of maximalist outcomes that I’m talking about. And I think that there’s a number of reasons why that might not be the case. So of course, to start with, just because an AI can follow an intention doesn’t say anything about the quality of the intention that’s being followed.\n\nWe can form intentions on an individual or collective basis to do all kinds of things, some of which might be incredibly foolish or malicious, some of which might be self-harming, some of which might be unethical. And we’ve got to ask this question of whether we want AI to follow us down that path when we come up with schemes of that kind, and there’s various ways we might try to address that bundle of problems. I think intentions are also problematic from a kind of technical and phenomenological perspective, because they tend to be incomplete. So if we look at what an intention is, it’s roughly speaking a kind of partially filled out plan of action that commits us to some end. And if we imagine that AI systems are very powerful, they may encounter situations or dilemmas or option sets that are in this space of uncertainty, where it’s just not clear what the original intention was, and they might need to make the right kind of decision by default.\n\nSo they might need some intuitive understanding of what the right thing to do is. So my intuition is that we do want AI systems that have some kind of richer understanding of the goals that we would want to realize in whole. So I think that we do need to look at other options. It is also possible that we had formed the intention for the AI to do something that explicitly requires an understanding of morality. So we may ask it to do things like promote the greatest good in a way that is fundamentally ethical. Then it needs to step into this other terrain of understanding preferences, interests, and values. I think we need to explore that terrain for one reason or another. Of course, one thing that people talk about is this kind of learning from revealed preferences. So perhaps in addition to the things that we directly communicate, the AI could observe our behavior and make inferences about what we want that help fill in the gaps.\n\nSo maybe it could watch you in your public life, hopefully not your private life, and make these inferences that actually it should create this very good thing. So that is in the domain of trying to learn from things that it observes. 
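To illustrate how thin a signal revealed preferences can be, here is a minimal, hypothetical sketch: a preference ordering inferred purely from an invented log of observed choices. As the next exchange brings out, the inferred ordering just mirrors behavior, whether or not the person would endorse it.

```python
from collections import Counter

# Hypothetical sketch of "learning from revealed preferences": infer a
# preference ordering purely from observed choices. The observation log is
# invented; the inferred ordering simply mirrors whatever the person happens
# to do, not what is good for them or what they would reflectively endorse.

observed_choices = [
    "junk food", "junk food", "salad", "junk food", "junk food",
]


def revealed_preference_ordering(choices):
    """Rank options by how often they were chosen, most frequent first."""
    return [option for option, _ in Counter(choices).most_common()]


if __name__ == "__main__":
    # Prints ['junk food', 'salad']: behavior is taken entirely at face value.
    print(revealed_preference_ordering(observed_choices))
```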
But I think that preferences are also quite a worrying data point for AI alignment, at least revealed preferences, because they contain many of the same weaknesses and shortcomings that we can ascribe to individual intentions.\n\n**Lucas Perry:** What is a revealed intention again?\n\n**Iason Gabriel:** Sorry, revealed preferences are preferences that are revealed through your behavior. So I observed you doing A or B. And from that choice, I conclude that you have a deeper preference for the thing that you choose. And the question is, if we just watch people, can we learn all the background information we need to create ethical outcomes?\n\n**Lucas Perry:** Yeah. Absolutely not.\n\n**Iason Gabriel:** Yeah. Exactly. As your Ted Bundy example nicely illustrated, not only is it very hard to actually get useful information from observing people about what they want, but what they want can often be the wrong kind of thing for them or for other people.\n\n**Lucas Perry:** Yeah. I have to hire people to spend some hours with me every week to tell me, from the outside, how I may be acting in ways that are misinformed or self-harming. So instead of revealed preferences, we need something like rational or informed preferences, which is something you get through therapy or counseling or something like that.\n\n**Iason Gabriel:** Well, that’s an interesting perspective. I guess there’s a lot of different theories about how we get to ideal preferences, but the idea is that we don’t want to just respond to what people are in practice doing. We want to give them the sort of thing that they would aspire to if they were rational and informed at the very least. So not things that are just a result of mistaken reasoning or poor quality information. And then there’s this very interesting philosophical and psychological question about what the content of those ideal preferences is. And particularly what happens when you think about people being properly rational. So, to return to David Hume, to whom the is-ought distinction is often attributed, he has the conjecture that someone can be fully informed and rational and still desire pretty much anything at the end of the day, that they could want something hugely destructive for themselves or other people.\n\nKantians, of course, and in fact a lot of moral philosophers, believe that rationality is not just a process of joining up beliefs and value statements in a certain fashion, but that it also encompasses a substantive capacity to evaluate ends. So, obviously Kantians have a theory about rationality ultimately requiring you to reflect on your ends and ask if they universalize in a positive way. But the thing is, that’s highly, highly contested. So I think ultimately, if we say we want to align AI with people’s ideal and rational preferences, it leads us into this question of what rationality really means. And we don’t necessarily get the kind of answers that we want to get to.\n\n**Lucas Perry:** Yeah, that’s a really interesting and important thing. I’ve never actually considered that. For example, someone who might be a moral anti-realist would probably be more partial to the view that rationality is just about linking up beliefs and epistemics and decision theory with goals, and goals are something that you’re just given and embedded with. And that there isn’t some correct evaluative procedure for analyzing goals beyond whatever meta preferences you’ve already inherited. 
Whereas a realist might say something like the other view, where rationality is about beliefs and ends, but is also about perhaps some more concrete, standard method for evaluating which ends are good ends. Is that the way you view it?\n\n**Iason Gabriel:** Yeah, I think that’s a very nice summary. The people who believe in substantive rationality tend to be people with a more realist moral disposition. If you’re profoundly anti-realist, you basically think that you have to stop talking in the currency of reasons. So you can’t tell people they have a reason not to act in a kind of unpleasant way to each other, or even to do really heinous things. You have to say to them something different, like, “Wouldn’t it be nice if we could realize this positive state of affairs?” And I think ultimately we can get to views about value alignment that satisfy these two different groups. We can create aspirations that are well-reasoned from different points of view and also create scenarios that meet the kind of “wouldn’t it be nice” criteria. But I think it isn’t going to happen if we just double down on this question of whether rationality ultimately leads to a single set of ends or a plurality of ends, or no consensus whatsoever.\n\n**Lucas Perry:** All right. That’s quite interesting. Not only do we have difficult and interesting philosophical ground in ethics, but also in rationality and how these are interrelated.\n\n**Iason Gabriel:** Absolutely. I think they’re very closely related. So actually the problems we encounter in one domain, we also encounter in the other, and I’d say in my kind of lexicon, they all fall within this question of practical rationality and practical reason. So that’s deliberating about what we ought to do, either because of explicitly moral considerations or a variety of other things that we factor into judgements of that kind.\n\n**Lucas Perry:** All right. Two more on our list here to hit: interests and values.\n\n**Iason Gabriel:** So, I think there are one or two more things we could say about that. So if we think that one of the challenges with ideal preferences is that they lead us into this heavily contested space about what rationality truly requires, we might think that a conception of human interests does significantly better. So if we think about AI being designed to promote human interests or wellbeing or flourishing, I would suggest that as a matter of empirical fact, there’s significantly less disagreement about what that entails. So if we look at, say, the capability based approach that Amartya Sen and Martha Nussbaum have developed, it essentially says that there’s a number of key goods and aspects of human flourishing that the vast majority of people believe conduce to a good life. And that actually has some intercultural value and affirmation. So if we designed AI that bore in mind this goal of enhancing general human capabilities, so human freedom, physical security, emotional security, capacity, that looks like an AI that is, roughly speaking, both getting us into the space of something that looks like it’s unlocking real value and also not bogged down in a huge amount of metaphysical contention. I suggest that aligning AI with human interests or wellbeing is a good proximate goal when it comes to value alignment. 
But even then I think that there’s some important things that are missing and that can only actually be captured if we return to the idea of value itself.\n\nSo by this point, it looks like we have almost arrived at a kind of utilitarian AI via the backdoor. I mean, of course utility is a subjective mental state, which isn’t necessarily the same as someone’s interest or their capacity to lead a flourishing life. But it looks like we have an AI that’s geared around optimizing some notion of human wellbeing. And the question is what might be missing there or what might go wrong. And I think there are some things that that view of value alignment still struggles to factor in. The welfare of nonhuman animals is something that’s missing from this wellbeing centered perspective on alignment.\n\n**Lucas Perry:** That’s why we might just want to make it wellbeing for sentient creatures.\n\n**Iason Gabriel:** Exactly, and I believe that this is a valuable enterprise, so we can expand the circle. So we say it’s the wellbeing of sentient creatures. And then we have the question about, what about future generations? Does their wellbeing count? And we might think that it does; if we follow Toby Ord, or in fact most conventional thinking, we do think that the welfare of future generations has intrinsic value. So we might say, “Well, we want to promote wellbeing of sentient creatures over time with some appropriate weighting to account for time.”\n\nAnd that’s actually starting to take us into a richer space of value. So we have wellbeing, but we also have a theory about how to do intertemporal comparisons. We might also think that it matters how wellbeing or welfare is distributed, that it isn’t just a maximization question, but that we also have to be interested in equity or distribution because we think it is intrinsically important. So we might think it has to be done in a manner that’s fair. Additionally, we might think that things like the natural world have intrinsic value that we want to factor in. And so the point, which will almost be familiar now from our earlier discussion, is that you actually have to get to that question of what values we want to align the system with, because values and the principles that derive from them can capture everything that is seemingly important.\n\n**Lucas Perry:** Right. And so, for example, within the effective altruism community and within moral philosophy recently, the way in which moral progress has been made is insofar as we have debiased human moral thought and ethics of spatial and temporal bias. So Peter Singer has the children drowning in a shallow pond argument. It just illustrates how there are people dying and children dying all over the world in situations in which we could cheaply intervene to save them, as if they were drowning in a shallow pond. And you only need to take a couple of steps and just pull them out, except we don’t. And we don’t because they’re far away. And I would like to say essentially everyone finds this compelling: where you are in space doesn’t matter; what matters is how much you’re suffering. If you are suffering, then all else being equal, we should intervene to alleviate that suffering when it’s reasonable to do so.\n\nSo space doesn’t matter for ethics. Likewise, I hope, and I think we’re moving in the right direction, that time also doesn’t matter, while being mindful that we also have to introduce things like uncertainty. 
We don’t know what the future will be like, but this principle about caring about the wellbeing of sentient creatures in general is, I think, essential and core to whatever list of principles we’ll want for bridging the is-ought distinction, because it takes away spatial bias: where you are in space doesn’t matter, it just matters that you’re a sentient being, and it doesn’t matter when you are as a sentient being. It also doesn’t matter what kind of sentient being you are, because the thing we care about is sentience. So then the moral circle has expanded across species. It’s expanded across time. It’s expanded across space. It includes aliens and all possible minds that we could encounter now or in the future. We have to get that one in, I think, for making a good future with AI.\n\n**Iason Gabriel:** That’s a picture that I strongly identify with on a personal level, this idea of the expanding moral circle of sensibilities. And I think from a substantive point of view, you’re probably right that that is a lot of the content that we would want to put into an aligned AI system. I think that one interesting thing to note is that a lot of these views are actually empirically fairly controversial. So if we look at the interesting study, the Moral Machine experiment, where I believe several million people ultimately played this experiment online, they decided which trade-offs an AV, an autonomous vehicle, should make in different situations. So whether it should crash into one person or five people, a rich person or a poor person, pretty much everyone agreed that it should kill fewer people when that was on the table. But I believe that in many parts of the world, there was also a belief that the lives of affluent people mattered more than the lives of those in poverty.\n\nAnd so if you were just to reason from their first-order moral beliefs, you would bake that bias into an AI system, which seems deeply problematic. And I think it actually puts pressure on this question: we’ve already said we don’t want to just align AI with existing moral preferences. We’ve also said that we can’t just declare a moral theory to be true and impose it on other people. So are there other options which move us in the direction of these kinds of moral beliefs that seem to be deeply justified, but also avoid the challenge of value imposition? And how far do they get if we try to move forward, not just as individuals examining the kind of expanding moral circle, but as a community that’s trying to progressively endogenize these ideas and come up with moral principles that we can all live by?\n\nWe might not get as far as if we were going at it alone, but I think that there are some solutions that are kind of in that space. And those are the ones I’m interested in exploring. I mean, common sense morality, understood as the conventional morality that most people endorse, is I would say deeply flawed in a number of regards, including with regards to global poverty and things of that nature. And that’s really unfortunate given that we probably also don’t want to force people to live by more enlightened beliefs, which they don’t endorse or can’t understand. 
So I think that the interesting question is how do we meet this demand for a respect for pluralism and also avoid getting stuck in the morass of common sense morality, which has these prejudicial beliefs that will probably, with the passage of time, come to be regarded quite unfavorably by future generations.\n\nAnd I think that taking this demand for non-domination or democratic support seriously means not just running far into the future or in a way that we believe represents the future, but also doing a lot of other things: trying to have a democratic discourse where we use these reasons to justify certain policies that then other people reflectively endorse, and we move the project forwards in a way that meets both desiderata. And in this paper, I try to map out different solutions that meet both of these criteria, of respecting people’s pluralistic beliefs while also moving us towards more genuinely morally aligned outcomes.\n\n**Lucas Perry:** So now the last question that I want to ask you here then on the goal of AI alignment is, do you view a needs-based conception of human wellbeing as a sub-category of interest-based value alignment? People have come up with different conceptions of human needs. People are generally familiar with Maslow’s hierarchy of needs. And I mean, as you go up the hierarchy, it will become more and more contentious, but everyone needs food and shelter and safety, and then you need community and meaning and spirituality and things of that nature. So how do you view or fit in a needs-based conception? Because some needs are obviously undeniable relative to others.\n\n**Iason Gabriel:** Broadly speaking, a needs-based conception of wellbeing is in that space we already touched upon. So the capabilities-based approach and the needs-based approach are quite similar. But I think that what you’re saying about needs potentially points to a solution to this kind of dilemma that we’ve been talking about. If we’re going to ask this question of what it means to create principles for AI alignment that treat people fairly despite their different views, one approach we might take is to look for commonalities that also seem to have moral robustness or substance to them. So within the parlance of political philosophy, we’d call this an overlapping consensus approach to the problem of political and moral decision making. I think that that’s a project that’s well worth countenancing. So we might say there’s a plurality of global beliefs and cultures. What is it that these cultures coalesce around? And I think that it’s likely to be something along the lines of the argument that you just put forward: that people are vulnerable in virtue of how we’re constituted, that we have a kind of fragility and that we need protection, both against the environment and against certain forms of harm, particularly state-based violence. And that this is a kind of moral bedrock, or what the philosopher Henry Shue calls “a moral minimum,” that receives intercultural endorsement. So actually the idea of human needs is very, very closely tied to the idea of human rights. So the idea is that the need is fundamental, and in virtue of your moral standing (the normative claim) and your need (the empirical claim), you have a right to enjoy a certain good and to be secure in the knowledge that you’ll enjoy that thing.\n\nSo I think the idea of building a kind of human rights-based AI that’s based upon this intercultural consensus is pretty promising. 
In some regards, human rights, as they’ve been historically thought about, are not super easy to turn into a theory of AI alignment, because they are historically thought of as guarantees that states have to give their citizens in order to be legitimate. And it isn’t entirely clear what it means to have a human rights-based technology, but I think that this is a really productive area to work in, and I would definitely like to try and populate that ground.\n\nYou might also think that the consensus, or the emerging consensus, around values that need to be built into AI systems, such as fairness and explainability, potentially portends the emergence of this kind of intercultural consensus. Although I guess at that point, we have to be really mindful of the voices that are at the table and who’s had an opportunity to speak. So although there does appear to be some convergence around principles of beneficence and things like that, it’s also true that this isn’t a global conversation in which everyone is represented, and it would be easy to prematurely rush to the conclusion that we know what values to pursue, when we’re really just reiterating some kind of very heavily Western-centric, affluent view of ethics that doesn’t have real intercultural democratic viability.\n\n**Lucas Perry:** All right, now it’s also interesting and important to consider here the differences and importance of single-agent and multi-agent alignment scenarios. For example, you can imagine entertaining the question of, “How is it that I would build a system that would be able to align with my values? One agent being the AI system, and one person, and how is it that I get the system to do what I want it to do?” And then the multi-agent alignment scenario considers, “How do I get one agent to align with and serve many different people’s interests and wellbeing and desires, and preferences, and needs? And then also, how do we get systems to act and behave when there are many other systems trying to serve and align to many other different people’s needs? And how is it that all of these systems may or may not collaborate with all of the other AI systems, and may or may not collaborate with all of the other human beings, when all the human beings may have conflicting preferences and needs?” How is it that we do, for example, intertheoretic comparisons of value and needs? So what’s the difference, and the importance, between single-agent and multi-agent alignment scenarios?\n\n**Iason Gabriel:** I think that the difference is best understood in terms of how expansive the goal of alignment has to be. So if we’re just thinking about a single person and a single agent, it’s okay to approach the value alignment challenge through a slightly solipsistic lens. In fact, you know, if it was just one person and one agent, it’s not clear that morality really enters the picture, unless there are other people or other sentient creatures whom our actions can affect. So with one person, one agent, the challenge is primarily correlation with the person’s desires, aims, and intentions. Potentially, there’s still a question of whether the AI serves their interest rather than, you know, these more volitional states that come to mind. 
When we think about situations in which many people are affected, it becomes kind of remiss not to think about interpersonal comparisons, and the kind of richer conceptions that we’ve been talking about.\n\nNow, I mentioned earlier that there is a view that there will always be a human body that synthesizes preferences and provides moral instructions for AI. We can imagine democratic approaches to value alignment, where human beings assemble, maybe in national parliaments, maybe in global fora, and legislate principles that AI is then designed in accordance with. I think that’s actually a very promising approach. You know, you would want it to be informed by moral reflection and people offering different kinds of moral reasons that support one approach rather than the other, but that seems to be important for multi-person situations and is probably actually a necessary condition for powerful forms of AI. Because, when AI has a profound effect on people’s lives, these questions of legitimacy also start to emerge. So not only is it doing the right thing, but is it doing the sort of thing that people would consent to, and is it doing the sort of thing that people actually have consented to? And I think that when AI is used in certain fora, then these questions of legitimacy come to the top. There’s a bundle of different things in that space.\n\n**Lucas Perry:** Yeah. I mean, it seems like a really, really hard problem. When you talk about creating some kind of national body, and I think you said international fora, do you wonder that some of these vehicles might be overly idealistic, given what may happen in a world where there are national actors competing and capitalism driving things forward relentlessly? This problem of multi-agent alignment seems very important and difficult, and there are forces pushing things such that it’s less likely that it happens.\n\n**Iason Gabriel:** When you talk about multi-agent alignment, are you talking about the alignment of an ecosystem that contains multiple AI agents, or are you talking about how we align an AI agent with the interests and ideas of multiple parties? So many humans, for example?\n\n**Lucas Perry:** I’m interested and curious about both.\n\n**Iason Gabriel:** I think there are different considerations that arise for both sets of questions, but there are also some things that we can speak to that pertain to both of them.\n\n**Lucas Perry:** Do they both count as multi-agent alignment scenarios in your understanding of the definition?\n\n**Iason Gabriel:** From a technical point of view, it makes perfect sense to describe them both in that way. I guess when I’ve been thinking about it, curiously, I’ve been thinking of multi-agent alignment as an agent that has multiple parties that it wants to satisfy. But when we look at machine learning research, “multi-agent” usually means many AI agents running around in a single environment. So I don’t see any kind of language-based reason to opt for one rather than the other. With regards to this question of idealization and real world practice, I think it’s an extremely interesting area. And the thing I would say is this is almost one of those occasions where potentially the is-ought distinction comes to our rescue. 
So the question is, “Does the fact that the real world is a difficult place, affected by divergent interests, mean that we should level down our ideals and conceptions about what really good and valuable AI would look like?”\n\nAnd there are some people who have what we term “practice dependent” views of ethics, who say, “Absolutely we should do. We should adjust our conception of what the ideal is.” But as you’ll probably be able to tell by now, I hold a kind of different perspective in general. I don’t think it is problematic to have big ideals and rich visions of how value can be unlocked, and that partly ties into the reasons that we spoke about for thinking that the technical and the normative are interconnected. So if we preemptively level down, we’ll probably design systems that are less good than they could be. And when we think about a design process spanning decades, we really want that kind of ultimate goal, the shining star of alignment, to be something that’s quite bright and can steer our efforts towards it. If anything, I would be slightly worried that because these human parliaments and international institutions are so driven by real world politics, they might not give us the kind of most fully actualized set of ideal aspirations to aim for.\n\nAnd that’s why philosophers like, of course, John Rawls actually propose that we need to think about these questions from a hypothetical point of view. So we need to ask, “What would we choose if we weren’t living in a world where we knew how to leverage our own interests?” And that’s how we identify the real ideal that is acceptable to people regardless of where they’re located, and that can then be used to steer non-ideal theory, or the kind of actual practice, in the right direction.\n\n**Lucas Perry:** So if we have an organization that is trying its best to create aligned and beneficial AGI systems, reasoning about what principles we should embed in it from behind Rawls’ Veil of Ignorance, you’re saying, would have hopefully the same practical implications as if we had a functioning international body for coming up with those principles in the first place.\n\n**Iason Gabriel:** Possibly. I mean, I’d like to think that ideal deliberation would lead them in the direction of impartial principles for AI. It’s not clear whether that is the case. I mean, it seems that at its very best, international politics has led us in the direction of a kind of human rights doctrine that both accords individuals protection, regardless of where they live, and defends the strong claim that they have a right to subsistence and other forms of flourishing. If we use the Veil of Ignorance experiment, I think for AI it might even give us more than that, even if a real world parliament never got there. For those of you who are not familiar with this, the philosopher John Rawls says that when it comes to choosing principles for a just society, what we need to do is create a situation in which people don’t know where they are in that society, or what their particular interest is.\n\nSo they have to imagine that they’re reasoning from behind the Veil of Ignorance. They select principles for that society that they think will be fair regardless of where they end up, and then having done that process and identified principles of justice for the society, he actually holds out the aspiration that people will reflectively endorse them even once the veil has been removed. So they’ll say, “Yes, in that situation, I was reasoning in a fair way that was nonprejudicial. 
And these are principles that I identified there that continue to have value in the real world.” And we can say what would happen if people are asked to choose principles for artificial intelligence from behind a veil of ignorance where they didn’t know whether they were going to be rich or poor, Christian, utilitarian, Kantian, or something else.\n\nAnd I think there, some of the kind of common sense material would be surfaced; so people would obviously want to build safe AI systems. I imagine that this idea of preserving human autonomy and control would also register, but for some forms of AI, I think distributive considerations would come into play. So they might start to think about how the benefits and burdens of these technologies are distributed and how those questions play out on a global basis. They might say that ultimately, a value aligned AI is one that has fair distributive impacts on a global basis, and, if you follow Rawls, that it works to the advantage of the least well-off people.\n\nThat’s a very substantive conception of value alignment, which may or may not be the final outcome of ideal international deliberation. Maybe the international community will get to global justice eventually, or maybe it’s just too thoroughly affected by nationalist interests and other kinds of, to my mind, distortionary effects that mean that it doesn’t quite get there. But I think that this is definitely the space that we want the debate to be taking place in. And that actually, there has been real progress in identifying collectively endorsed principles for AI that gives me hope for the future. Not only that we’ll get good ideals, but that people might agree to them, and that they might get democratic endorsement, and that they might be actionable and the sort of thing that can guide real world AI design.\n\n**Lucas Perry:** Can you add a little bit more clarity on the philosophical questions and issues, which single and multi-agent alignment scenarios supervene on? How do you do intertheoretic comparisons of value if people disagree on normative or meta-ethical beliefs or people disagree on foundational axiomatic principles for bridging the is-ought gap? How is it that systems deal with that kind of disagreement?\n\n**Iason Gabriel:** I’m hopeful that the three pictures that I outlined so far of the overlapping consensus between different moral beliefs, of democratic debate over a constitution for AI, and of selection of principles from behind the Veil of Ignorance, are all approaches that carry some traction in that regard. So they try to take seriously the fact of real world pluralism, but they also, through different processes, tend to point towards principles that are compatible with a variety of different perspectives. Although I would say, I do feel like there’s a question about this multi-agent thing that may still not be completely clear in my mind, and it may come back to those earlier questions about definition. So in a one person, one agent scenario, you don’t have this question of what to do with pluralism, and you can probably go for a more simple one-shot solution, which is align it with the person’s interests, beliefs, moral beliefs, intentions, or something like that.
But if you’re interested in this question of real world politics for real world AI systems where a plurality of people are affected, we definitely need these other kinds of principles that have a much richer set of properties and endorsements.\n\n**Lucas Perry:** All right, there’s Rawls’ Veil of Ignorance. There’s the principle of non-domination, and then there’s the democratic process?\n\n**Iason Gabriel:** Non-domination is a criterion that any scheme for multi-agent value alignment needs to meet. And then we can ask the question, “What sort of scheme would meet this requirement of non-domination?” And there we have the overlapping consensus with human rights. We have a scheme of democratic debate leading to principles for an AI constitution, and we have the Veil of Ignorance, all ideas that we basically find within political theory that could help us meet that condition.\n\n**Lucas Perry:** All right, so we have spoken at some length then about principles and identifying principles; this goes back to our conversation about the is-ought distinction, and these are principles that we need to identify for setting up an ethical alignment procedure. You mentioned this earlier, when we were talking about this, this distinction between the one true moral theory approach to AI alignment, in contrast to coming up with a procedure for AI alignment that would be broadly endorsed by many people, and would respect the principle of non-domination, and would take into account pluralism. Can you unpack this distinction more, and the importance of it?\n\n**Iason Gabriel:** Yeah, absolutely. So I think that the kind of true moral theory approach, although it is a kind of stylized idea of what an approach to value alignment might look like, is the sort of thing that could be undertaken just by a single person who is designing the technology or a small group of people, perhaps moral philosophers who think that they have really great expertise in this area. And then they identify the chosen principle and run with it.\n\nThe big claim is that that isn’t really a satisfactory way to think about design and values in a pluralistic world where many people will be affected. And of course, many people who’ve gone off on that kind of enterprise have made serious mistakes that were very costly for humanity and for people who are affected by their actions. So the political approach to value alignment paints a fundamentally different perspective and says it isn’t really about one person, or one group running ahead and thinking that they’ve done all the hard work; it’s about working out what we can all agree upon, that looks like a reasonable set of moral principles or coordinates to build powerful technologies around. And then, once we have this process in place that outputs the right kind of agreement, then the task is given back to technologists, and these are the kind of parameters that our fair process of deliberation has outputted. And this is what we have the authority to encode in machines, whether it’s, say, human rights or a conception of justice, or some other widely agreed upon values.\n\n**Lucas Perry:** There are principles that you’re really interested in satisfying, like respecting pluralism, and respecting a principle of non-domination, and the One True Moral Theory approach risks violating those other principles.
Are you not taking a stance on whether there is a One True Moral Theory, you’re just willing to set that question aside and say, “Because it’s so essential to a thriving civilization that we don’t do moral imposition on one another, that coming up with a broadly endorsed theory is just absolutely the way to go, whether or not there is such a thing as a One True Moral Theory?” Does that capture your view?\n\n**Iason Gabriel:** Yeah. So to some extent, I’m trying to make an argument that will look like something we should affirm, regardless of the metaethical stance that we wish to take. Of course, there are some views about morality that actually say that non-domination is a really important principle, or that human rights are fundamental. So someone might look at these proposals, and from the comprehensive moral perspective, they would say, “This is actually the morally best way to do value alignment, and it involves dialogue, discussion, mutual understanding, and agreement.” However, you don’t need to believe that in order to think that this is a good way to go. If you look at the writing of someone like Joshua Greene, he says that there’s this problem we encounter called the “Tragedy of common sense morality.” A lot of people have fairly decent moral beliefs, but when they differ, it ends up in violence, and they end up fighting. And you have a hugely negative moral externality that arises just because people weren’t able to enter this other mode of theorizing, where they said, “Look, we’re part of a collective project, let’s agree to some higher level terms that we can all live by.” So from that point of view, it looks prudent to think about value alignment as a pluralistic enterprise.\n\nThat’s an approach that many people have taken with regards to the justification of the institution of the state, and the things that we believe it should protect, and affirm, and uphold. And then as I alluded to earlier, I think that actually, even for some of these anti-realists, this idea of inclusive deliberation, and even the idea of human rights look like quite good candidates for the kind of, “Wouldn’t it be nice?” criterion. So to return to Richard Routley, who is kind of the arch moral skeptic, he does ultimately really want us to live in a world with human rights, he just doesn’t think he has a really good meta-ethical foundation to rest this on. But in practice, he would take that vision forward, I believe, and try to persuade other people that it was the way to go by telling them good stories and saying, “Well, look, this is the world with human rights and open-ended deliberation, and this is the world where one person decided what to do. Wouldn’t it be nice to live in that better world?” So I’m hopeful that this kind of political ballpark has this kind of rich applicability and appeal, regardless of whether people are starting out in one place or the other.\n\n**Lucas Perry:** That makes sense. So then another aspect of this is, in the absence of moral agreement or when there is moral disagreement, is there a fair way to decide what principles AI should align with? For example, I can imagine religious fundamentalists, at core, being antithetical to the project of aligning AI systems, which eventually leads to something smarter than us; they could view it as something like playing God and just be like, “Well, this is just not a project that we should even do.”\n\n**Iason Gabriel:** So that’s an interesting question, and you may actually be putting pressure on my preceding argument.
I think that it is certainly the case that you can’t get everyone to agree on a set of global principles for AI, because some people hold very, very extreme beliefs that are exclusionary, and aren’t open to the possibility of compromise. Typically people who have a fundamentalist orientation of one kind or another. And so, even if we get the pluralistic project off the ground, it may be the case that we have to, in my language, impose our values on those people, and that in a sense, they are dominated. And that leads to the difficult question: why is it permissible to impose beliefs upon those people, but not the people who don’t hold fundamentalist views? It’s a fundamentally difficult question, because what it tends to point to is the idea that beneath this talk about pluralism, there is actually a value claim, which is that you are entitled to non-domination, so long as you’re prepared not to dominate other people, and to accept that there is a moral equality that means that we need to cooperate and co-habit in a world together.\n\nAnd that does look like a kind of deep, deep moral claim that you might need to substantively assert. I’m not entirely sure; I think that’s one that we can save for further investigation, but it’s certainly something that people have said in the context of these debates, that at the deepest level, you can’t escape making some kind of moral claim, because of these cases.\n\n**Lucas Perry:** Yeah. This is reminding me of the paradox of tolerance by Karl Popper, who talks about how free speech ends when you yell, “The theater’s on fire,” and in some sense are then imposing harm on other people. And that we’re tolerant of people within society, except for those who are intolerant of others. And to some extent, that’s a paradox. So similarly we may respect and endorse a principle of non-domination, or non-subjugation, but that ends when there are people who are dominating or subjugating. And the core of that is maybe getting back again to some kind of principle of non-harm related to the wellbeing of sentient creatures.\n\n**Iason Gabriel:** Yeah. I think that the obstacles that we’re discussing now are very precisely related to that paradox. Of course, the boundaries we want to draw on permissible disagreement are in some sense quite minimal; or conversely, we might think that the wide affirmation of some aspect of the value of human rights is quite a strong basis for moving forwards, because it says that all human life has value, and that everyone is entitled to basic goods, including goods pertaining to autonomy. So people who reject that really are pushing back against something that is widely and deeply reflectively endorsed by a large number of people. I also think that with regards to toleration, the anti-realist position becomes quite hard to figure out or quite strange. So you have these people who are not prepared to live in a world where they respect others, and they have this will to dominate, or a fundamentalist perspective.\n\nThe anti-realist says, “Well, you know, potentially there’s this nicer world we can move towards.” The anti-realist doesn’t deal in the currency of moral reasons. They don’t really have to worry about it too much; they can just say, “We’re going to go in that direction with everyone else who agrees with us,” and hold to the idea that it looks like a good way to live. So in a way, the problem with domination is much more serious for people who are moral realists.
For the anti-realists, it’s not actually a perspective I inhabit it in my day to day life, so it’s hard for me to say what they would make of it.\n\n**Lucas Perry:** Well, I guess, just to briefly defend the anti-realist, I imagine that they would say that they still have reasons for morality, they just don’t think that there is an objective epistemological methodology for discovering what is true. “There aren’t facts about morality, but I’m going to go make the same noises that you make about morality. Like I’m going to give reasons and justification, and these are as good as making up empty screeching noises and blah, blahing about things that don’t exist,” but it’s still motivating to other people, right? They still will have reasons and justification; they just don’t think it pertains to truth, and they will use that navigate the world and then justify domination or not.\n\n**Iason Gabriel:** That seems possible, but I guess for the anti-realist, if they think we’re just fundamentally expressing pro-attitudes, so when I say, “It isn’t justified to dominate others.” I’m just saying, “I don’t like it when this thing happens,” then we’re just dealing in the currency of likes, and I just don’t think you have to be so worried about the problem of domination as you are, if you think that this means something more than someone just expressing an attitude about what they like or don’t. If there aren’t real moral reasons or considerations at stake, if it’s just people saying, “I like this. I don’t like this.” Then you can get on with the enterprise that you believe achieves these positive ends. Of course, the unpleasant thing is you kind of are potentially giving permission to other people to do the same, or that’s a consequence of the view you hold. And I think that’s why a lot of people want to rescue the idea of moral justification as a really meaningful practice, because they’re not prepared to say, “Well, everyone gets on with the thing that they happen to like, and the rest of it is just window dressing.”\n\n**Lucas Perry:** All right. Well, I’m not sure how much we need to worry about this now. I think it seems like anti-realists and realists basically act the same in the real world. Maybe, I don’t know.\n\n**Iason Gabriel:** Yeah. In reality, anti-realists tend to act in ways that suggest that on some level they believe that morality has more to it than just being a category error.\n\n**Lucas Perry:** So let’s talk a little bit here more about the procedure by which we choose evaluative models for deciding which proposed aspects of human preferences or values are good or bad for an alignment procedure. We can have a method of evaluating or deciding which aspects of human values or preferences or things that we might want to bake into an alignment procedure are good or bad, but you mentioned something like having a global fora or having different kinds of governance institutions or vehicles by which we might have conversation to decide how to come up with an alignment procedure that would be endorsed. What is the procedure to decide what kinds of evaluative models we will use to decide what counts as a good alignment procedure or not? Right now, this question is being answered by a very biased and privileged select few in the West, at AI organizations and people adjacent to them.\n\n**Iason Gabriel:** I think this question is absolutely fundamental. 
I believe that any claim that we have meaningful global consensus on AI principles is premature, and that it probably does reflect biases of the kind you mentioned. I mean, broadly speaking, I think that there are two extremely important reasons to try and widen this conversation. The first is that in order to get a kind of clear, well-grounded and well-sighted vision of what AI should align with, we definitely need intercultural perspectives. On the assumption that, to quote John Stuart Mill, “no-one has complete access to the truth and people have access to different parts of it,” the bigger the conversation becomes, the more likely it is that we move towards maximal value alignment of the kind that humanity deserves. But potentially more importantly than that, and regardless of the kind of epistemic consequences of widening the debate, I think that people have a right to voice their perspective on topics and technologies that will affect them. If we think of the purpose of a global conversation partly as this idea of formulating principles, but also as bestowing on them a certain authority in light of which we’re permitted to build powerful technologies, then you just can’t say that they have the right kind of authority and grounding without proper extensive consultation. And so, I would suggest that that’s a very important next step for people who are working in this space. I’m also hopeful that actually these different approaches that we’ve discussed can potentially be mutually supporting. So, I think that there is a good chance that human rights could serve as a foundation or a seed for a good, strong intercultural conversation around AI alignment.\n\nAnd I’m not sure to what extent this really is the case, but it might be that even some of these ideas about reasoning impartially have currency in a global conversation. And you might find that they are actually quite challenging for affluent countries or for self-interested parties, because it would reveal certain hidden biases in the propositions that they have now made or put forward.\n\n**Lucas Perry:** Okay. So, related to things that we might want to do to come up with the correct procedure for being able to evaluate what kinds of alignment procedures are good or bad, what do you view as sufficient for adequate alignment of systems? We’ve talked a little bit about minimalism versus maximalism, where minimalism is aligning to just some conception of human values and maximalism is hitting on some very idealized and strong set or form of human values. And this procedure is related, at least in the, I guess, existential risk space coming from people like Toby Ord and William MacAskill. They talk about something like a long reflection. If I’m asking you about what might be adequate alignment for systems, one criterion for that might be meeting basic human needs, meeting human rights and reducing existential risk further and further such that it’s very, very close to zero and we enter a period of existential stability.\n\nAnd then following this existential stability is proposed something like a long reflection where we might more deeply consider ethics and values and norms before we set about changing and optimizing all of the atoms around us in the galaxy. Do you have a perspective here on this sort of most high-level timeline of, first, as we’re aligning AI systems, what does it take for it to be adequate? And then, what needs to potentially be saved for something like a long reflection?
And then, how something like a broadly endorsed procedure versus a one true moral theory approach would fit into something like a long reflection?\n\n**Iason Gabriel:** Yes. A number of thoughts on this topic. The first pertains to the idea of existential security and, I guess, why it’s defined as the kind of dominant goal in the short term perspective. There may be good reasons for this, but I think what I would suggest is that it obviously involves trade-offs. The world we live in is a very unideal place, one in which we have a vast quantity of unnecessary suffering. And to my mind, it’s probably not even acceptable to say that basically the goal of building AI is, or that the foremost challenge of humanity is to focus on this kind of existential security and extreme longevity while leaving so many people to lead lives that are less than they could be.\n\n**Lucas Perry:** Why do you think that?\n\n**Iason Gabriel:** Well, because human life matters. If we were to look at where the real gains in the world are today, I believe it’s helping these people who die unnecessarily from neglected diseases, lack subsistence incomes, and things of that nature. And I believe that has to form part of the picture of our ideal trajectory for technological development.\n\n**Lucas Perry:** Yeah, that makes sense to me. I’m confused about what you’re actually saying about the existential security view as being central. If you compare the suffering of people that exist today, obviously, to the astronomical amount of life that could be in the future, is that kind of reasoning about the potential the thing that doesn’t do the work for you for seeing mitigating existential risk as the central concern?\n\n**Iason Gabriel:** I’m not entirely sure, but what I would say is that on one reading of the argument that’s being presented, the goal should be to build extremely safe systems and not try to intervene in areas about which there’s more substantive contestation, until there’s been a long delay and a period of reflection, which might mean neglecting some very morally important and tractable challenges that the world is facing at the present moment. And I think that that would be problematic. I’m not sure why we can’t work towards something that’s more ambitious, for example, a human rights-respecting AI technology.\n\n**Lucas Perry:** Why would that entail that?\n\n**Iason Gabriel:** Well, so, I mean, this is the kind of question about the proposition that’s been put in front of us. Essentially, if that isn’t the proposition, then the long reflection isn’t leaving huge amounts to be deliberated about, right? Because we’re saying, in the short term, we’re going to tether towards global security, but we’re also going to try and do a lot of other things around which there’s moral uncertainty and disagreement, for example, promote fairer outcomes, mobilize in the direction of respecting human rights. And I think that once we’ve moved towards that conception of value alignment, it isn’t really clear what the substance of the long reflection is. So, do you have an idea of what questions would remain to be answered?\n\n**Lucas Perry:** Yeah, so I guess I feel confused because reaching existential security as part of this initial alignment procedure doesn’t seem to be in conflict with alleviating the suffering of the global poor, because I don’t think moral uncertainty extends to meeting basic human needs or satisfying basic human rights or things that are obviously conducive to the well-being of sentient creatures.
I don’t think poverty gets pushed to the long reflection. I don’t think unnecessary suffering gets pushed to the long reflection. Then the question you’re asking is what is it that does get pushed to the long reflection?\n\n**Iason Gabriel:** Yes.\n\n**Lucas Perry:** Then what gets pushed to the long reflection is, is the one true moral theory approach to alignment actually correct? Is there a one true moral theory or is there not a one true moral theory? Are anti-realists correct or are realists correct? Or are they both wrong in some sense or is something else correct? And then, given that, the potential answer or inability to come up with an answer to that would change how something like the cosmic endowment gets optimized. Because we’re talking about billions upon billions upon billions upon billions of years, if we don’t go extinct, and the universe is going to evaporate eventually. But until then, there is an astronomical amount of things that could get done.\n\nAnd so, the long reflection is about deciding what to actually do with that. And however esoteric it is, the proposals range from you just have some pluralistic optimization process. There is no right way you should live. Things other than joy and suffering matter like, I don’t know, building monuments that calculate mathematics ever more precisely. And if you want to carve out a section of the cosmic endowment for optimizing things that are other than conscious states, you’re free to do that versus coming down on something more like a one true moral theory approach and being like, “The only kinds of things that seem to matter in this world are the states of conscious creatures. Therefore, the future should just be an endeavor of optimizing for creating minds that are ever more enjoying profound states of spiritual enlightenment and spiritual bliss and knowledge.”\n\nThe long reflection might even be about whether or not knowledge matters for a mind. “Does it really matter that I am in tune with truth and reality? Should we build nothing but experience machines that cultivate whatever the most enlightened and blissful states of experience are or is that wrong?” The long reflection to me seems to be about these sorts of questions and if the one true moral theory approach is correct or not.\n\n**Iason Gabriel:** Yeah, that makes sense. And my apologies if I didn’t understand what was already taken care of by the proposal. I think to some extent, in that case, we’re talking about different action spaces. When I look at these questions of AI alignment, I see very significant value questions already arising in terms of how benefits and burdens are distributed. What fairness means? Whether AI needs to be explainable and accountable and things of that nature alongside a set of very pressing global problems that it would be really, really important to address? I think my time horizon is definitely different from this long reflection one. Kind of find it difficult to imagine a world in which these huge, but to some extent prosaic questions have been addressed and in which we then turn our attention to these other things. I guess there is a couple of things that can be said about it.\n\nI’m not sure if this is meant to be taken literally, but I think the idea of pressing pause on technological development while we work out a further set of fundamentally important questions is probably not feasible. It would be best to work with a long term view that doesn’t rest upon the possibility of that option. 
And then I think that the other fundamental question is what is actually happening in this long reflection? It can be described in a variety of different ways.\n\nSometimes it sounds like it’s a big philosophical conference that runs for a very, very long time. And at the end of it, hopefully people kind of settle these questions and they come out to the world and they’re like, “Wow, this is a really important discovery.” I mean, if you take seriously the things we’ve been talking about today, you still have the question of what do you do with the people who then say, “Actually, I think you’re wrong about that.” And I think in a sense it recursively pushes us back into the kind of processes that I’ve been talking about. When I hear people talk about the long reflection there does also sometimes seem to be this idea that it’s a period in which there is very productive global conversation about the kind of norms and directions that we want humanity to take. And that seems valuable, but it doesn’t seem unique to the long reflection. That would be incredibly valuable right now so it doesn’t look radically discontinuous to me on that view.\n\n**Lucas Perry:** All right. Because we’re talking about the long term future here and I bring it up because it’s interesting in considering what questions can we just kind of put aside? These are interesting, but in the real world, they don’t matter a ton or they don’t influence our decisions, but over the very, very long term future, they may matter much more. When I think about a principle like non-domination, it seems like we care about this conception of non-imposition and non-dominance and non-subjugation for reasons of, first of all, well-being. And the reason why we care about this well-being question is because human beings are extremely fallible. And it seems to me that the principle of non-domination is rooted in the lack of epistemic capacity for fallible agents like human beings to promote the well-being of sentient creatures all around them.\n\nBut in terms of what is physically literally possible in the universe, it’s possible for someone to know so much more about the well-being of conscious creatures than you, and how much happier and how much more well-being you would be in if you only idealized in a certain way. That as we get deeper and deeper into the future, I have more and more skepticism about this principle of non-domination and non-subjugation.\n\nIt seems very useful, important, and exactly like the thing that we need right now, but as we long reflect further and further and, say, really smart, really idealized beings develop more and more epistemic clarity on ethics and what is good and the nature of consciousness and how minds work and function in this universe that I would probably submit myself to a Dyson sphere brain that was just like, “Well, Lucas, this is what you have to do.” And I guess that’s not subjugation, but I feel less and less moral qualms with the big Dyson sphere brain showing up to some early civilization like we are, and then just telling them how they should do things, like a parent does with a child. I’m not sure if you have any reactions to this or how much it even really matters for anything we can do today. But I think it’s potentially an important reflection on the motivations behind the principle of non-domination and non-subjugation and why it is that we really care about it.\n\n**Iason Gabriel:** I think that’s true. 
I think that if you consent to something, then almost… I don’t want to say by definition, that’s definitely too strong, but it’s very likely that you’re not being dominated so long as you have sufficient information and you’re not being coerced. I think the real question is what if this thing showed up and you said, “I don’t consent to this,” and the thing said, “I don’t care, it’s in your best interests.”\n\n**Lucas Perry:** Yeah, I’m defending that.\n\n**Iason Gabriel:** That could be true in some kind of utilitarian, consequentialist moral philosophy. And I guess my question is, “Do you find that unproblematic?” Or, “Do you have this intuition that there is a further set of reasons you could draw upon, which explain why the entity with greater authority doesn’t actually have the right to impose these things on you?” And I think that it may or may not be true.\n\nIt probably is true that from the perspective of welfare, non-domination is good. But I also think that a lot of people who are concerned about pluralism and non-domination think that its value pertains to something which is quite different, which is human autonomy. And that that has value because of the kind of creatures we are, with freedom of thought, a consciousness, a capacity to make our own decisions. I, personally, am of the view that even if we get some amazing, amazing paternalist, there is still a further question of political legitimacy that needs to be answered, and that it’s not permissible for this thing to impose without meeting these standards that we’ve talked about today.\n\n**Lucas Perry:** Sure. So at the very least, I think I’m attempting to point towards the long reflection consisting of arguments like this. We weren’t participating in coercion before, because we didn’t really know what we were talking about, but now we know what we’re talking about. And so, given our epistemic clarity, coercion makes more sense.\n\n**Iason Gabriel:** It does seem problematic to me. And I think the interesting question is what does time add to robust epistemic certainty? It’s quite likely that if you spend a long time thinking about something, at the end of it, you’ll be like, “Okay, now I have more confidence in a proposition that was on the table when I started.” But does that mean that it is actually substantively justified? And what are you going to say if you think you’re substantively justified, but you can’t actually justify it to other people who are reasonable, rational and informed like you?\n\nIt seems to me that even after a thousand years, you’d still be taking a leap of faith of the kind that we’ve seen people take in the past with really, really devastating consequences. I don’t think it’s the case that ultimately there will be a moral theory that’s settled and the confidence in the truth value of it is so high that the people who adhere to it have somehow gained the right to kind of run with it on behalf of humanity. Instead, I think that we have to proceed a small step at a time, possibly in perpetuity, and make sure that each one of these small decisions is subject to continuous negotiation, reflection and democratic control.\n\n**Lucas Perry:** The long reflection though, to me, seems to be about questions like that because you’re taking a strong epistemological view on meta-ethics and that there wouldn’t be that kind of clarity that would emerge over time from minds far greater than our own.
From my perspective, I just find the problem of suffering to be very, very, very compelling.\n\nLet’s imagine we have the sphere of utilitarian expansion into the cosmos, and then there is the pluralistic, non-domination, democratic, virtue ethic, deontological-based sphere of expansion. You can, say, run across planets at different stages of evolution. And here you have a suffering hell planet, it’s just wild animals born of Darwinian evolution. And they’re just eating and murdering each other all the time and dying of disease and starvation and other things. And then maybe you have another planet which is an early civilization and there is just subjugation and misery and all of these things, and these spheres of expansion would do completely different things to these planets. And we’re entering super esoteric sci-fi space here. But again, it’s, I think, instructive of the importance of something like a long reflection. It changes what is permissible and what will be done. And so, I find it interesting and valuable, but I also agree with you about the one claim that you had earlier about it being unclear that we could actually put on the brakes and have a thousand-year philosophy convention.\n\n**Iason Gabriel:** Yes, I mean, the one further thing I’d say, Lucas, is that, bearing in mind some of the earlier provisos we attached to the period before the long reflection, we were kind of gambling on the idea that there would be political legitimacy and consensus around things like the alleviation of needless suffering. So, it is not necessarily the case that everything would be up for grabs just because people have to agree upon it. In the world today, we can already see some nascent signs of moral agreement on things that are really morally important and would be very significant if they were fully realized as ideals.\n\n**Lucas Perry:** Maybe there is just not that big of a gap between the views that are left to be argued about during the long reflection. But then there is also this interesting question, wrapping up on this part of the conversation, about what did we take previously that was sacred, that is no longer that? An example would be if a moral realist, utilitarian conception ended up just being the truth or something, then rights never actually mattered. Autonomy never mattered, but they functioned as very important epistemic tool sets. And then we’re just like, “Okay, we’re basically doing away with everything that we said was sacred.” We still endorsed having done that. But now it’s seen in a totally different light. There could be something like a profound shift like that, which is why something like a long reflection might be important.\n\n**Iason Gabriel:** Yeah. I think it really matters how the hypothesized shift comes about. So, if there is this kind of global conversation with new information coming to light, taking place through a process that’s non-coercive, and the final result seems to be a stable consensus of overlapping beliefs that we have more moral consensus than we did around something like human rights, then that looks like a kind of plausible direction to move in and that might even be moral progress itself.
Conversely, if it’s people who have been in the conference a long time and they come out and they’re like, “We’ve reflected a thousand years and now we have something that we think is true,” then unfortunately, I think they end up kind of back at square one where they’ll meet people who say, “We have reasonable disagreement with you, and we’re not necessarily persuaded by your arguments.”\n\nAnd then you have the question of whether they’re more permitted to engage in value imposition than people were in the past. And I think probably not. I think if they believe those arguments are so good, they have to put them into a political process of the kind that we have discussed, and hopefully their merits will be seen or, if not, there may be some avenues that we can’t go down but at least we’ve done things in the right way.\n\n**Lucas Perry:** Luckily, it may turn out to be the case that you basically never have to do coercion because with good enough reasons and evidence and argument, basically any mind that exists can be convinced of something. Then it gets into this very interesting question of, if we’re respecting a principle of non-domination and non-subjugation, as something like Neuralink and merging with AI systems happens, and we gain more and more information about how to manipulate and change people, which changes that we make to people from the outside would count as coercion or not? Because currently, we’re constantly getting pushed around in terms of our development by technology and people and the environment, and we basically have no control over that. And do I always endorse the changes that I undergo? Probably not. Does that count as coercion? Maybe. And we’ll increasingly gain power to change people in this way. So this question of coercion will probably become more and more interesting and difficult to parse over time.\n\n**Iason Gabriel:** Yeah. I think that’s quite possible. And it’s kind of an observation that can be made about many of the areas that we’re thinking about now. For example, the same could be said of autonomy, or to some extent that’s the flip side of the same question. What does it really mean to be free? Free from what and under what conditions? If we just loop back a moment, the one thing I’d say is that the hypothesis that you can create moral arguments that are so well-reasoned that they persuade anyone is, I think, the perfect statement of a certain Enlightenment perspective on philosophy that sees rationality as the tiebreaker and the arbiter of progress. In a sense, the whole project that I’ve outlined today rests upon a recognition or an acknowledgement that that is probably unlikely to be true when people reason freely about what the good consists in: they do come to different conclusions.\n\nAnd I guess the kind of thing people would point to there as evidence is just the nature of moral deliberation in the real world. You could say that if there were these winning arguments that just won by force of reason, we’d be able to identify them. But, in reality, when we look at how moral progress has occurred, it has required a lot more than just reason-giving. To some extent, I think the master argument approach itself rests upon mistaken assumptions and that’s why I wanted to go in this other direction. By a twist of fate, if I was mistaken and if the master argument was possible, it would also satisfy a lot of conditions of political legitimacy. And right now, we have good evidence that it isn’t possible, so we should proceed in one way.
If it is possible, then those people can appeal to the political processes.\n\n**Lucas Perry:** They can be convinced.\n\n**Iason Gabriel:** They can be convinced. And so, there is reason for hope there for people who hold a different perspective to my own.\n\n**Lucas Perry:** All right. I think that’s an excellent point to wrap up on then. Do you have anything here? I’m just giving you an open space now if you feel unresolved about anything or have any last moment thoughts that you’d really like to say and share? I found this conversation really informative and helpful, and I appreciate and really value the work that you’re doing on this. I think it’s sorely needed.\n\n**Iason Gabriel:** Yeah. Thank you so much, Lucas. It’s been a really, really fascinating conversation and it’s definitely pushed me to think about some questions that I hadn’t considered before. I think the one thing I’d say is that this is really… A lot of it is exploratory work. These are questions that we’re all exploring together. So, if people are interested in value alignment, obviously listeners to this podcast will be, but specifically normative value alignment and these questions about pluralism, democracy, and AI, then please feel free to reach out to me, contribute to the debate. And I also look forward to continuing the conversation with everyone who wants to look at these things and develop the conversation further.\n\n**Lucas Perry:** If people want to follow you or get in contact with you or look at more of your work, where are the best places to do that?\n\n**Iason Gabriel:** I think if you look on Google Scholar, there is links to most of the articles that I have written, including the one that we were discussing today. People can also send me an email, which is just my first name Iason@deepmind.com. So, yeah.\n\n**Lucas Perry:** All right.", "filename": "Iason Gabriel on Foundational Philosophical Questions in AI Alignment-by Future of Life Institute-video_id MzFl0SdjSso-date 20210630.md", "id": "d274c65d9968a3daf9d6c7c73520b109", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Alex Turner - Will powerful AIs tend to seek┬ápower-by Towards Data Science-video_id 8afHG61YmKM-date 20220119", "authors": ["Alex Turner"], "date_published": "2022-01-19", "text": "# Alex Turner on Will Advanced AIs Tend To Seek Power by Jeremie Harris on the Towards Data Science Podcast\n\n## Alex Turner on his spotlighted 2021 NeurIPS paper and why we should worry about the power-seeking behaviour of AI systems\n\nToday’s episode is somewhat special, because we’re going to be talking about what might be the first solid quantitative study of the power-seeking tendencies that we can expect advanced AI systems to have in the future.\n\nFor a long time, there’s kind of been this debate in the AI safety world, between:\n\n- People who worry that powerful AIs could eventually displace, or even eliminate humanity altogether as they find more clever, creative and dangerous ways to optimize their reward metrics on the one hand, and\n- People who say that’s Terminator-bating Hollywood nonsense that anthropomorphizes machines in a way that’s unhelpful and misleading.\n\nUnfortunately, recent work in AI alignment — and in particular, a spotlighted 2021 NeurIPS paper — suggests that the AI takeover argument might be stronger than many had realized. 
In fact, it’s starting to look like we ought to expect to see power-seeking behaviours from highly capable AI systems by default. These behaviours include things like AI systems preventing us from shutting them down, repurposing resources in pathological ways to serve their objectives, and even in the limit, generating catastrophes that would put humanity at risk.\n\nAs concerning as these possibilities might be, it’s exciting that we’re starting to develop a more robust and quantitative language to describe AI failures and power-seeking. That’s why I was so excited to sit down with AI researcher Alex Turner, the author of the spotlighted NeurIPS paper on power-seeking, and discuss his path into AI safety, his research agenda and his perspective on the future of AI on this episode of the TDS podcast.\n\nHere were some of my favourite take-homes from the conversation:\n\n- AI alignment is a very complicated problem that seems straightforward from a distance. For that reason, people — including AI researchers — can become convinced that it’s not worth worrying about because they simply haven’t engaged with the real argument for alignment risk. It’s just too easy to brush off concerns over the intrinsic risk posed by powerful AI systems in the future as “a Terminator-style scenario”, or “that thing Elon is always hyping up”, when in reality, there’s a substantial body of compelling work (and increasingly, even [experimental results](https://openai.com/blog/faulty-reward-functions/)) and research behind the AI risk argument.\n- Alex’s paper demonstrates that optimal policies — policies that achieve the maximum reward in a wide range of environments — tend to seek power. Roughly speaking, “power-seeking” here means accessing states that lead them to have more options in the future. For example, getting turned off is a low-power state for an AI system, because its action space after being shut off is empty — it wouldn’t have any options. Based on Alex’s work, we can expect AI systems with optimal policies to avoid being turned off as a result.\n- Alex’s paper comes with some caveats. First, its conclusions apply only to optimal policies, and don’t explicitly address imperfect ones. As a result, you might imagine that sub-optimal policies (which are probably more realistic) wouldn’t lead to the same power-seeking behaviour. But Alex’s follow-up work closes that loophole: it turns out that optimality isn’t required to generate power-seeking at all.\n- A second caveat has to do with the assumptions Alex makes about the rewards AI systems receive as they navigate through their environments. Alex’s work explores the behaviour of AI agents who are exposed to a wide array of random reward distributions, and shows that in the typical case, those agents tend to seek power. But we might hope that human-designed reward distributions would be less risky, since they’d explicitly be engineered to be safe and beneficial. But not so fast, says Alex: reward engineering is notoriously difficult and counterintuitive. As many experiments have shown, even today’s AI systems don’t optimize for the things we want: they optimize for easy-to-measure proxies that would be downright dangerous if they were being used as reward metrics for agents more clever than human beings.\n\n_Editor’s note: Alex’s work on power-seeking is important, timely, and promising. If you’re interested in AI safety, or even if you’re an AI risk skeptic, I highly recommend reading and engaging with his work. 
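To make the “more options” intuition a little more tangible, here is a minimal illustrative sketch (my own toy construction, not the formalism or notation from Alex’s paper): it scores each state of a tiny deterministic MDP by its average optimal value across many randomly drawn reward functions, used here as a rough stand-in for the POWER quantity, and the absorbing “shutdown” state comes out lowest. The state names, the uniform reward distribution, and the use of average optimal value as the proxy are all assumptions made purely for illustration.

```python
# Illustrative toy only -- not the construction from the NeurIPS paper.
# Idea: score each state by its average optimal value across many randomly drawn
# reward functions; the absorbing "shutdown" state should score lowest.

import random

# Deterministic toy MDP: state -> list of successor states (one per action).
TRANSITIONS = {
    "start":    ["work", "shutdown"],
    "work":     ["work", "explore", "shutdown"],
    "explore":  ["work", "explore", "idle"],
    "idle":     ["idle", "explore"],
    "shutdown": ["shutdown"],  # absorbing: no options left after this
}
GAMMA = 0.9

def optimal_values(reward, iters=200):
    """Value iteration for state-based rewards on a deterministic MDP."""
    v = {s: 0.0 for s in TRANSITIONS}
    for _ in range(iters):
        v = {s: reward[s] + GAMMA * max(v[nxt] for nxt in succ)
             for s, succ in TRANSITIONS.items()}
    return v

def power_proxy(n_samples=2000, seed=0):
    """Monte Carlo estimate of each state's average optimal value."""
    rng = random.Random(seed)
    totals = {s: 0.0 for s in TRANSITIONS}
    for _ in range(n_samples):
        reward = {s: rng.random() for s in TRANSITIONS}  # uniform(0, 1) per state
        v = optimal_values(reward)
        for s in TRANSITIONS:
            totals[s] += v[s]
    return {s: total / n_samples for s, total in totals.items()}

if __name__ == "__main__":
    for state, score in sorted(power_proxy().items(), key=lambda kv: -kv[1]):
        print(f"{state:9s} {score:.2f}")
    # "shutdown" prints last: with no options, its average optimal value is lowest.
```

Run as written, “shutdown” comes out at the bottom of the ranking: a state that forecloses every future option is, on average, the worst place to be for almost any goal the agent might be given, which is the intuition behind the shutdown-avoidance example above.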
I don’t think it’s possible to have an informed, skeptical view on AI risk without understanding power-seeking, and Alex’s arguments in particular. You can reach out to Alex by email at_ [turneale@oregonstate.edu](mailto:turneale@oregonstate.edu).", "filename": "Alex Turner - Will powerful AIs tend to seek┬ápower-by Towards Data Science-video_id 8afHG61YmKM-date 20220119.md", "id": "5999b370e04e57985e6f8ac8216f0a05", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "README-by Vael Gates-date 20220509", "authors": ["Vael Gates"], "date_published": "2022-05-09", "text": "# Interview with AI Researchers by Vael Gates\n\nTable of Contents\n=================\n\n**[Table of Contents](#table-of-contents) 1**\n\n**[Interview Information](#interview-information) 2**\n\n> [Individually-selected](#individually-selected) 2\n>\n> [NeurIPS-or-ICML](#neurips-or-icml) 2\n\n**[Intended Script (all interviews)](#intended-script-all-interviews) 3**\n\n**[Post-Interview Resources Sent To Interviewees](#post-interview-resources-sent-to-interviewees) 5**\n\n> [Master list of resources](#master-list-of-resources) 5\n\n**[Informal Interview Notes](#informal-interview-notes) 10**\n\n> [Thoughts from listening to myself doing these interviews](#thoughts-from-listening-to-myself-doing-these-interviews) 10\n>\n> [Content analysis](#content-analysis) 11\n\n \n\nInterview Information\n=====================\n\nThese interviews are associated with the LessWrong Post: [[Transcripts of interviews with AI researchers]{.ul}](https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers).\n\n(Please do not try to identify any interviewees from any remaining peripheral information.)\n\n### \n\n### Individually-selected\n\n\"Five of the interviews were with researchers who were informally categorized as 'particularly useful to talk to about their opinions about safety' (generally more senior researchers at specific organizations).\"\n\n- 7ujun\n\n- zlzai\n\n- 92iem\n\n- 84py7\n\n- w5cb5\n\n### \n\n### NeurIPS-or-ICML\n\n\"Six of the interviews were with researchers who had papers accepted at NeurIPS or ICML in 2021.\"\n\n- a0nsf (this is the interview in which I most straightforwardly get through my questions)\n\n- q243b\n\n- 7oalk\n\n- lgu5f\n\n- cvgig\n\n- bj9ne (language barriers, young)\n\n \n\nIntended Script (all interviews)\n================================\n\nThere was a fixed set of questions that I was attempting to walk people through, across all of the interviews. It's a sequence, so I generally didn't move onto the next core question until I had buy-in for the previous core question. The core questions were: \"do you think we'll get AGI\" (if yes, I moved on; if not I interacted with the beliefs there, sometimes for the entire interview), \"\\[alignment problem\\]\", and \"\\[instrumental incentives\\]\". I was reacting to the researchers' mental models in all cases. I was trying to get to all of the core questions during the allotted time, but early disagreements often reappeared if the interviewee and I didn't manage to reach initial agreement. I prioritized the core questions, and brought other questions up if they seemed relevant.\n\nThe questions (core questions are highlighted):\n\n- \"What are you most excited about in AI, and what are you most worried about? 
(What are the biggest benefits or risks of AI?)\"\n\n- \"In at least 50 years, what does the world look like?\"\n\n- \"When do you think we'll get AGI / capable / generalizable AI / have the cognitive capacities to have a CEO AI if we do?\"\n\n - Example dialogue: \"All right, now I\\'m going to give a spiel. So, people talk about the promise of AI, which can mean many things, but one of them is getting very general capable systems, perhaps with the cognitive capabilities to replace all current human jobs so you could have a CEO AI or a scientist AI, etcetera. And I usually think about this in the frame of the 2012: we have the deep learning revolution, we\\'ve got AlexNet, GPUs. 10 years later, here we are, and we\\'ve got systems like GPT-3 which have kind of weirdly emergent capabilities. They can do some text generation and some language translation and some code and some math. And one could imagine that if we continue pouring in all the human investment that we\\'re pouring into this like money, competition between nations, human talent, so much talent and training all the young people up, and if we continue to have algorithmic improvements at the rate we\\'ve seen and continue to have hardware improvements, so maybe we get optical computing or quantum computing, then one could imagine that eventually this scales to more of quite general systems, or maybe we hit a limit and we have to do a paradigm shift in order to get to the highly capable AI stage. Regardless of how we get there, my question is, do you think this will ever happen, and if so when?\"\n\n- \"What do you think of the argument 'highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous'?\"\n\n - Example dialogue: \"Alright, so these next questions are about these highly intelligent systems. So imagine we have a CEO AI, and I\\'m like, \\\"Alright, CEO AI, I wish for you to maximize profit, and try not to exploit people, and don\\'t run out of money, and try to avoid side effects.\\\" And this might be problematic, because currently we\\'re finding it technically challenging to translate human values preferences and intentions into mathematical formulations that can be optimized by systems, and this might continue to be a problem in the future. So what do you think of the argument \\\"Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous\\\"?\n\n- \"What do you think about the argument: 'highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous'?\"\n\n - Example dialogue: \"Alright, next question is, so we have a CEO AI and it\\'s like optimizing for whatever I told it to, and it notices that at some point some of its plans are failing and it\\'s like, \\\"Well, hmm, I noticed my plans are failing because I\\'m getting shut down. How about I make sure I don\\'t get shut down? So if my loss function is something that needs human approval and then the humans want a one-page memo, then I can just give them a memo that doesn\\'t have all the information, and that way I\\'m going to be better able to achieve my goal.\\\" So not positing that the AI has a survival function in it, but as an instrumental incentive to being an agent that is optimizing for goals that are maybe not perfectly aligned, it would develop these instrumental incentives. 
So what do you think of the argument, \\\"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous\\\"?\"\n\n- \"Have you heard of the term \"AI safety\"? And if you have or have not, what does that term mean for you?\"\n\n- \"Have you heard of AI alignment?\"\n\n- \"What would motivate you to work on alignment questions?\"\n\n- \"If you could change your colleagues' perception of AI, what attitudes/beliefs of theirs would you like to change?\"\n\n- \"What are your opinions about policy oriented around AI?\"\n\nI also had content prepared if we got to the end of the interview, based on [[Clarke et al. (2022)]{.ul}](https://www.alignmentforum.org/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios), [[RAAPs]{.ul}](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic), some of Critch's content on pollution, and my general understanding of the space. My notes: \"Scenarios here are about loss of control + correlated failures... can also think about misuse, or AI-assisted war. Also a scenario where the AI does recursive-self-improvement, and ends up actually able to kill humans via e.g. synthetic biology or nanotechnology or whatever, pollution.\"\n\n \n\nPost-Interview Resources Sent To Interviewees\n=============================================\n\nI sent most interviewees resources after the interviews.\n\n- I usually floated the idea of sending them resources during the interview, and depending on their response, would send different amounts of resources.\n\n- I did **not** send resources if the interviewee seemed like they would be annoyed by them.\n\n- I only sent a couple of resources if they seemed not very open to the idea.\n\n- For people who were very interested, I often sent them different content that was more specific to them getting involved. These were the people who I sometimes sent the EA / Rationalist material at the end-- I very rarely included EA/Rationalist-specific content in emails, only if they seemed like they'd be very receptive.\n\nHere's my master list of notes, which I selected from for each person based on their interests. I sometimes sent along copies of Human Compatible, the Alignment Problem, or the Precipice.\n\n### Master list of resources\n\nHello X,\n\nVery nice to speak to you! As promised, some resources on AI alignment. I tried to include a bunch of stuff so you could look at whatever you found interesting. Happy to chat more about anything, and thanks again!\n\n**Introduction to the ideas:**\n\n- **The** **[[Most Important Century]{.ul}](https://www.cold-takes.com/most-important-century/) and specifically \\\"[[Forecasting Transformative AI]{.ul}](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)\\\" by Holden Karnofsky, blog series and podcast. Most recommended for description of AI timelines**\n\n- [[Introduction]{.ul}](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) piece by Kelsey Piper (Vox)\n\n- A short [[interview]{.ul}](https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible) from Prof. 
Stuart Russell (UC Berkeley) about his book, [[Human-Compatible]{.ul}](https://smile.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS/ref=sr_1_1?dchild=1&keywords=human+compatible&qid=1635910751&s=digital-text&sr=1-1) (the other main book in the space is [[The Alignment Problem]{.ul}](https://smile.amazon.com/Alignment-Problem-Machine-Learning-Values-ebook/dp/B085T55LGK/ref=sr_1_1?dchild=1&keywords=alignment+problem&qid=1635910676&s=digital-text&sr=1-1), by Brian Christian, which I actually like more!)\n\n**Technical work on AI alignment:**\n\n- Some [[empirical work]{.ul}](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84) by DeepMind\\'s Safety team about the alignment problem\n\n- [[Empirical work]{.ul}](https://arxiv.org/pdf/2112.00861.pdf) by an organization called Anthropic (mostly OpenAI\\'s old Safety team) on alignment solutions\n\n- [[Podcast (and transcript)]{.ul}](https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021/) by Rohin Shah, describing the state of AI value alignment (probably want the first half or so)\n\n- [[Talk (and transcript)]{.ul}](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) by Paul Christiano describing the AI alignment landscape in 2020\n\n- [[Alignment Newsletter]{.ul}](https://rohinshah.com/alignment-newsletter/) for alignment-related work\n\n- A much more hands-on approach to [[ML safety]{.ul}](https://arxiv.org/abs/2109.13916), focused on current systems\n\n- Interpretability work aimed at long-term alignment: [[Elhage (2021)]{.ul}](https://transformer-circuits.pub/2021/framework/index.html), by Anthropic and [[Olah (2020)]{.ul}](https://distill.pub/2020/circuits/zoom-in/)\n\n- Ah, and one last report, which outlines one small research organization\\'s ([[Alignment Research Center]{.ul}](https://alignmentresearchcenter.org/)) [[research direction]{.ul}](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) and offers prize money for solving it: [[https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals]{.ul}](https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals)\n\n**Introduction to large-scale, long-term risks from humanity\\-- including \\\"existential risks\\\" that would lead to the extinction of humanity:**\n\n- The [[first third of this book summary]{.ul}](https://ndpr.nd.edu/reviews/the-precipice-existential-risk-and-the-future-of-humanity/), or the book [[The Precipice]{.ul}](https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/031648492X/), by Toby Ord (not about AI particularly, more about long-term risks)\n\n> Chapter 3 is on natural risks, including risks of asteroid and comet impacts, supervolcanic eruptions, and stellar explosions. Ord argues that we can appeal to the fact that we have already survived for 2,000 centuries as evidence that the total existential risk posed by these threats from nature is relatively low (less than one in 2,000 per century).\n>\n> Chapter 4 is on anthropogenic risks, including risks from nuclear war, climate change, and environmental damage. Ord estimates these risks as significantly higher, each posing about a one in 1,000 chance of existential catastrophe within the next 100 years. 
However, the odds are much higher that climate change will result in non-existential catastrophes, which could in turn make us more vulnerable to other existential risks.\n>\n> Chapter 5 is on future risks, including engineered pandemics and artificial intelligence. Worryingly, Ord puts the risk of engineered pandemics causing an existential catastrophe within the next 100 years at roughly one in thirty. With any luck the COVID-19 pandemic will serve as a \\\"warning shot,\\\" making us better able to deal with future pandemics, whether engineered or not. Ord\\'s discussion of artificial intelligence is more worrying still. The risk here stems from the possibility of developing an AI system that both exceeds every aspect of human intelligence and has goals that do not coincide with our flourishing. Drawing upon views held by many AI researchers, Ord estimates that the existential risk posed by AI over the next 100 years is an alarming one in ten.\n>\n> Chapter 6 turns to questions of quantifying particular existential risks (some of the probabilities cited above do not appear until this chapter) and of combining these into a single estimate of the total existential risk we face over the next 100 years. Ord\\'s estimate of the latter is one in six.\n\n- [[How to Reduce Existential Risk]{.ul}](https://80000hours.org/articles/how-to-reduce-existential-risk/) by 80,000 Hours or \\\"[[Our current list of pressing world problems]{.ul}](https://80000hours.org/problem-profiles/)\\\" blog post\n\n**Governance:**\n\n- [[AI Governance: Opportunity and Theory of Impact]{.ul}](https://www.allandafoe.com/opportunity), by Allan Dafoe and [[GovAI](https://www.governance.ai/)]{.ul} generally\n\n- [[AI Governance: A Research Agenda]{.ul}](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf), by Allan Dafoe and [[GovAI]{.ul}](https://www.governance.ai/)\n\n- [[The longtermist AI governance landscape: a basic overview]{.ul}](https://forum.effectivealtruism.org/posts/ydpo7LcJWhrr2GJrx/the-longtermist-ai-governance-landscape-a-basic-overview) if you\\'re interested in getting involved, also [[more personal posts of how to get involved]{.ul}](https://forum.effectivealtruism.org/tag/governance-of-artificial-intelligence) including [[Locke_USA - EA Forum]{.ul}](https://forum.effectivealtruism.org/users/locke_usa)\n\n```{=html}\n\n```\n- [[The case for building expertise to work on US AI policy, and how to do it](https://80000hours.org/articles/us-ai-policy/)]{.ul} by 80,000 Hours\n\n**How AI could be an existential risk:**\n\n- [[AI alignment researchers disagree a weirdly high amount about how AI could constitute an existential risk]{.ul}](https://www.alignmentforum.org/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios), so I hardly think the question is settled. Some plausible ones people are considering (from the paper)\n\n```{=html}\n\n```\n- \\\"Superintelligence\\\"\n\n - A single AI system with goals that are hostile to humanity quickly becomes sufficiently capable for complete world domination, and causes the future to contain very little of what we value, as described in \"[[Superintelligence]{.ul}](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)\\\". 
(Note from Vael: Where the AI has an instrumental incentive to destroy humans and uses its planning capabilities to do so, for example via synthetic biology or nanotechnology.)\n\n- Part 2 of \"[[What failure looks like]{.ul}](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)\"\n\n - This involves multiple AIs accidentally being trained to seek influence, and then failing catastrophically once they are sufficiently capable, causing humans to become extinct or otherwise permanently lose all influence over the future. (Note from Vael: I think we might have to pair this with something like \\\"and in loss of control, the environment then becomes [[uninhabitable to humans]{.ul}](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) through pollution or consumption of important resources for humans to survive.\\\")\n\n- Part 1 of \"[[What failure looks like]{.ul}](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)\"\n\n - This involves AIs pursuing easy-to-measure goals, rather than the goals humans actually care about, causing us to permanently lose some influence over the future. (Note from Vael: I think we might have to pair this with something like \\\"and in loss of control, the environment then becomes [[uninhabitable to humans]{.ul}](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) through pollution or consumption of important resources for humans to survive.\\\")\n\n- War\n\n - Some kind of war between humans, exacerbated by developments in AI, causes an existential catastrophe. AI is a significant risk factor in the catastrophe, such that no catastrophe would have occurred without the developments in AI. The proximate cause of the catastrophe is the deliberate actions of humans, such as the use of AI-enabled, nuclear or other weapons. See Dafoe ([[2018]{.ul}](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf)) for more detail. (Note from Vael: Though there\\'s a recent argument that it may be [[unlikely for nuclear weapons to cause an extinction event]{.ul}](https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction), and instead it would just be catastrophically bad. One could still do it with synthetic biology though, probably, to get all of the remote people.)\n\n- Misuse\n\n - Intentional misuse of AI by one or more actors causes an existential catastrophe (excluding cases where the catastrophe was caused by misuse in a war that would not have occurred without developments in AI). See Karnofsky ([[2016]{.ul}](https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity)) for more detail.\n\n- Other\n\n**Off-switch game and corrigibility**\n\n- [[Off-switch game]{.ul}](https://arxiv.org/abs/1611.08219) and [[corrigibility]{.ul}](https://intelligence.org/files/Corrigibility.pdf) paper, about incentives for AI to be shut down. This article from DeepMind about \\\"[[specification gaming]{.ul}](https://deepmindsafetyresearch.medium.com/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4)\\\" isn\\'t about off-switches, but also makes me feel like there\\'s currently maybe a tradeoff in task specification, where building more generalizability into a system will result in novel solutions but less control. 
Their [[follow-up paper]{.ul}](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84), where they outline a possible research direction for this problem, makes me feel like encoding human preferences is going to be quite hard (as does all of the other discussion in AI alignment), though we don\\'t know how hard the alignment problem will be.\n\n**There\\'s also a growing community working on AI alignment**\n\n- The strongest academic center is probably UC Berkeley\\'s [[Center for Human-Compatible AI]{.ul}](https://humancompatible.ai/about/). Mostly there are researchers distributed across different institutions, e.g. [[Dylan Hadfield-Menell]{.ul}](https://scholar.google.com/citations?hl=en&user=4mVPFQ8AAAAJ&view_op=list_works&sortby=pubdate) at MIT, [[Jaime Fisac]{.ul}](https://scholar.google.com/citations?hl=en&user=HvjirogAAAAJ&view_op=list_works&sortby=pubdate) at Princeton, [[David Krueger]{.ul}](https://twitter.com/davidskrueger) in Oxford, Sam Bowman at NYU, Alex Turner at Oregon, etc. Also, a good portion of the work is done by industry / nonprofits: [[Anthropic]{.ul}](https://www.anthropic.com/), [[Redwood Research]{.ul}](https://www.redwoodresearch.org/), OpenAI\\'s safety team, DeepMind\\'s Safety team, [[ARC]{.ul}](https://alignmentresearchcenter.org/), independent researchers in various places.\n\n- **There is money in the space! If you want to do AI alignment research, you can be funded by either Open Philanthropy ([[students]{.ul}](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship),** **[[faculty]{.ul}](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/request-for-proposals-for-projects-in-ai-alignment-that-work-with-deep-learning-systems)\\-- one can also just email them directly instead of going through their grant programs) or** **[[LTFF]{.ul}](https://funds.effectivealtruism.org/funds/far-future) or** **[[FTX]{.ul}](https://ftxfuturefund.org/area-of-interest/)\\-- this is somewhat competitive and you do have to show good work, but it\\'s less competitive than a lot of sources of funding in academia.**\n\n- If you wanted to rapidly learn more about the theoretical technical AI alignment space, walking through this [[curriculum]{.ul}](https://www.eacambridge.org/technical-alignment-curriculum) is one of the best resources. A lot of the interesting theoretical stuff is happening online, at [[LessWrong]{.ul}](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) / [[Alignment Forum]{.ul}](https://www.alignmentforum.org/posts/Yp2vYb4zHXEeoTkJc/welcome-and-faq) (Introductory [[Content]{.ul}](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ)), since this field is still pretty pre-paradigmatic and people are still working through a lot of the ideas.\n\n**There are also two related communities who care about these issues, who you might find interesting:**\n\n- [[Effective Altruism](https://www.effectivealtruism.org/articles/introduction-to-effective-altruism)]{.ul} community, whose strong internet presence is on the [[EA Forum]{.ul}](https://forum.effectivealtruism.org/posts/fd3iQRkmCKCCL289u/new-start-here-useful-links). 
[[Longtermism]{.ul}](https://80000hours.org/articles/future-generations/) is a concept they care a lot about, and you can schedule a one-on-one [[coaching]{.ul}](https://80000hours.org/speak-with-us/) call here.\n\n- [[Rationalist]{.ul}](https://www.lesswrong.com/tag/rationalist-movement) community\\-- the best blog from this community is from Scott Alexander ([[first blog]{.ul}](https://slatestarcodex.com/about/), [[second blog]{.ul}](https://astralcodexten.substack.com/?sort=top)), and they\\'re present on [[LessWrong]{.ul}](https://www.lesswrong.com/bestoflesswrong). Amusingly, they also write fantastic fanfiction ([[e.g. Harry Potter and the Methods of Rationality]{.ul}](https://www.lesswrong.com/hpmor)) and I think some of their [[nonfiction]{.ul}](https://mindingourway.com/guilt/) is fantastic.\n\nHappy to chat more about anything, and good to speak to you!\n\nBest,\n\nVael\n\n \n\nInformal Interview Notes\n========================\n\n### Thoughts from listening to myself doing these interviews\n\n- There's obviously strong differences in what content gets covered, based on the interviewees' opinions and where they're at. I didn't realize that another important factor in what content gets covered is the interviewee's general attitude towards people / me. Are they generally agreeable? Do they take time to think over my statement, find examples that match what I'm saying? Do they try to speak over me? Are they curious about my opinions? How much time do they have? Separate from rapport (and my rapport differs with different interviewees), there's a strong sense of spaciousness in some interviews, while many feel like they're more rapid-fire exchanges of ideas. I often end up talking more in the agreeable / spacious interviews.\n\n- Participants differ in how much they want to talk in response to one question. I tended to not interrupt my interviewees, though I think that's a good thing to do. (\"I'm sorry, this is very interesting, but we really need to get to the next question.\") That meant that for participants who tended to deliver long answers, I had fewer chances to ask questions, which meant I often engaged less with their previous responses and tried to move them on to new topics more abruptly.\n\n- I make a lot of agreeable sounds, and try to rephrase what people say. People differ with how many agreeable sounds they make during my speech as well, and how much they're looking at the camera and looking for cues.\n\n- I tended to adjust my talking speed to the interviewee somewhat, but usually ended up talking substantially more quickly. This made my speech harder to parse because of all the \"like\"s that get inserted while I'm thinking and talking at the same time. (I don't think I realized this at the time; it's more obvious when listening back through the interviews. I've removed a fair amount of the \"like\"s in the transcripts because it's harder to read than hear.) Generally, I found it useful to try to insert technical vocabulary and understanding as early as possible, so researchers would explain more complicated concepts and be calibrated on my level of understanding. I did somewhat reduce speaking speed and vocabulary when speaking with interviewees whose grasp of English was obviously weaker, though in those cases I think it's maybe not worth having the interview, since I found it quite hard to communicate across a concept gap under time and communication constraints. 
(These concepts are complicated, and hard enough to cover in 40m-60m even without being substantially limited by language.)\n\n- When I'm listening to these interviews, I'm often like: Vael, how did you completely fail to remember something that the interviewee said one paragraph up, what's up with your working memory? And I think that's mostly because there's a lot to track during interviews, so my working memory gets occupied. I often found my attention on several things:\n\n - Trying to take on the framework of their answer, and fit it into the framework of how I think about these issues. Some people had substantially different frames, so this took a lot of mental energy.\n\n - Trying to figure out what counterpoint to respond with when I disagreed with something, so -- fitting their answer into a mental category, flitting through my usual replies to that category, and then holding my usual replies in mind for when they were done, if there were multiple replies lined up.\n\n - Trying to figure out whether I should reply to their answer, or move on. One factor here was whether they tended to take up a lot of talking space, so I needed to be very careful with what I used my conversational turn for. Another factor was how much agreement I had with my previous question, so that I could move on to the second. A third factor was tracking time-- I spent a lot of time tracking time in the interview, and holding in mind where we were in the argument tree, and where I thought we could get to.\n\n - If they'd said something that was actually surprising to me, and seemed true, rather than something I'd heard before and needed to reformulate my answer to, this often substantially derailed a lot of the above processing. I then needed to do original thinking while on a call, trying to evaluate whether something said in a different frame was true in my frames. In those cases I usually just got the interviewee to elaborate on their point, while throwing out unsophisticated, gut-level \"but what if...\" replies and seeing how they responded, which shifted the conversation towards more equality. I think thinking about these points afterwards (and many more things were new to me in the beginning of the interviews, compared to the end) was what made my later interviews better than my earlier interviews.\n\n - Trying to build rapport / be responsive / engage with their points well / make eye contact with the camera / watch my body language / remember what was previously said and integrate it. This was mostly more of a background process.\n\n- Conversations are quite different if you're both fighting for talking time than if you're not. Be ready for both, I think? I felt the need to think and talk substantially faster the more interruptions there were in a conversation. I expected my interviewees to find the faster-paced conversations aversive, but many seemed not to and seemed to enjoy it. In conversations where the interviewee and I substantially disagreed, I actually often found faster-pace conversations more enjoyable than slower-paced conversations. This was because it felt more like an energetic dialogue in the faster conversations, and I often had kind of slow, sinking feeling that \"we both know we disagree with each other but we're being restrained on purpose\" feel in the slower conversations.\n\n- My skill as an interviewer at this point seems quite related to how well I know the arguments, which like... I could definitely be better on that front. 
I do think this process is helpful for my own thinking, especially when I get stuck and ask people about points post-interview. But I do read these interviews and think: okay, but wouldn't this have been better if I had had a different or fuller understanding? How good is my thinking? It feels hard to tell.\n\n### Content analysis\n\nI have a lot to say about typical content in these types of interviews, but I think the above set of interviews is somewhat indicative of the spread. Hoping to have more information on these eventually once I finish sorting through more of my data.\n", "filename": "README-by Vael Gates-date 20220509.md", "id": "0ab2841004219210f10c860a4010187a", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Danijar Hafner - Gaming our way to AGI-by Towards Data Science-video_id Bgz9eMcE5Do-date 20220112", "authors": ["Danijar Hafner", "Jeremie Harris"], "date_published": "2022-01-12", "text": "# Danijar Hafner on Gaming Our Way to AGI. Danijar Hafner on procedural game generation for reinforcement learning agents by Jeremie Harris on the Towards Data Science Podcast\n\nUntil recently, AI systems have been narrow — they’ve only been able to perform the specific tasks that they were explicitly trained for. And while narrow systems are clearly useful, the holy grail of AI is to build more flexible, general systems.\n\nBut that can’t be done without good performance metrics that we can optimize for — or that we can at least use to measure generalization ability. Somehow, we need to figure out what number needs to go up in order to bring us closer to generally-capable agents. That’s the question we’ll be exploring on this episode of the podcast, with Danijar Hafner. Danijar is a PhD student in artificial intelligence at the [University of Toronto](http://learning.cs.toronto.edu/) with [Jimmy Ba](https://scholar.google.com/citations?user=ymzxRhAAAAAJ&hl=en&oi=ao) and [Geoffrey Hinton](https://scholar.google.com/citations?user=JicYPdAAAAAJ&hl=en&oi=ao), and a researcher at [Google Brain](https://research.google/teams/brain/) and the [Vector Institute](https://vectorinstitute.ai/).\n\nDanijar has been studying the problem of performance measurement and benchmarking for RL agents with generalization abilities. As part of that work, he recently released Crafter, a tool that can procedurally generate complex environments that are a lot like Minecraft, featuring resources that need to be collected, tools that can be developed, and enemies who need to be avoided or defeated. In order to succeed in a Crafter environment, agents need to robustly plan, explore and test different strategies, which allow them to unlock certain in-game achievements.\n\nCrafter is part of a growing set of strategies that researchers are exploring to figure out how we can benchmark and measure the performance of general-purpose AIs, and it also tells us something interesting about the state of AI: increasingly, our ability to define tasks that require the right kind of generalization abilities is becoming just as important as innovating on AI model architectures. 
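As a rough illustration of what such a metric can look like (this is not Crafter's actual API or its official scoring formula, just a self-contained sketch with made-up achievement names and episode data), one way to turn per-episode achievement records into a single benchmark number is to track, for each achievement, the fraction of episodes in which the agent unlocked it at least once, and then aggregate those success rates:

```python
import math

# Hypothetical episode records: for each episode, the set of achievements
# the agent unlocked at least once. Names are illustrative, not the
# official Crafter achievement list.
episodes = [
    {"collect_wood", "eat_cow"},
    {"collect_wood", "make_wood_pickaxe", "collect_stone"},
    {"collect_wood", "make_wood_pickaxe", "collect_stone",
     "make_stone_pickaxe", "collect_iron"},
    {"collect_wood"},
]

achievements = sorted(set().union(*episodes))

# Per-achievement success rate: percentage of episodes in which the
# achievement was unlocked at least once.
rates = {
    name: 100.0 * sum(name in ep for ep in episodes) / len(episodes)
    for name in achievements
}

# Aggregate score: a log-space mean of the success rates, so the headline
# number only rises if the agent makes progress on many achievements,
# including rare ones; a plain arithmetic mean would be dominated by the
# easy, frequently-unlocked tasks.
score = math.exp(sum(math.log(1 + r) for r in rates.values()) / len(rates)) - 1

for name, rate in rates.items():
    print(f"{name:>20s}: {rate:5.1f}%")
print(f"{'aggregate score':>20s}: {score:5.1f}%")
```

The per-achievement breakdown doubles as a capability profile (which skills the agent has and hasn't picked up), while the single aggregate number still gives researchers something to optimize; Crafter's published score aggregates success rates in a similar spirit, though the formula above is only a sketch.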
Danijar joined me to talk about Crafter, reinforcement learning, and the big challenges facing AI researchers as they work towards general intelligence on this episode of the TDS podcast.\n\nHere were some of my favourite take-homes from the conversation:\n\n- The Crafter environment includes a wide range of achievable goals, each of which involves performing a specific in-game task for the first time. Some of these tasks are simple, and can be achieved fairly consistently by current state-of-the-art RL agents (for example, finding sources of food and harvesting them). But some are more challenging, because they involve dependencies on other tasks: a “collect iron” task can only be achieved once a “make stone pickaxe” task has already been completed, for example. The result is a fairly deep tech tree that can only be completed by agents that have learned to plan.\n- Because different tasks require different abilities, Crafter environments can give developers a way to profile their RL agents. By measuring an agent’s average performance across the full distribution of achievable tasks, they obtain a fingerprint of its capabilities that indicates the extent to which it’s picked up skills like planning and exploration, which are closely linked to generalization.\n- One of the challenging aspects of developing benchmarks for machine learning is to ensure that they’re tuned to the right level of difficulty. Good benchmarks are challenging to master (so that they can motivate and direct progress in the field), yet tractable (so that developers have enough signal to iterate and improve their models).\n- One interesting debate about the future of AI has to do with the extent to which further progress will come from dramatic improvements to algorithms, or from the increased availability of compute resources. Increasingly, we’ve seen cutting-edge results in reinforcement learning come from model-based systems that make heavier use of compute than their model-free counterparts, and progress in RL has closely tracked increases in compute budgets (e.g. MuZero and EfficientZero). To some, that’s an indication that current AI techniques might be sufficient to reach human-level performance, with relatively minor tweaks, if they’re only scaled up with more compute. While Danijar sees some merit to this argument, he does think that there remain fundamental advances in algorithm design that we’ll have to make before reaching human-level AI that go beyond leveraging raw compute horsepower.", "filename": "Danijar Hafner - Gaming our way to┬áAGI-by Towards Data Science-video_id Bgz9eMcE5Do-date 20220112.md", "id": "9362355f252c728bc0f744f9c7751837", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "How can we see the impact of AI strategy research _ Jade Leung _ EA Global - San Francisco 2019-by Centre for Effective Altruism-video_id 8M3nIu7GIsA-date 20190829", "authors": ["Jade Leung"], "date_published": "2019-08-29", "text": "# Jade Leung How can we see the impact of AI strategy research - EA Forum\n\n_For now, the field of AI strategy is mostly focused on asking good questions and trying to answer them. But what comes next? 
In this talk, Jade Leung, Head of Research at the_ [_Center for the Governance of AI_](https://www.fhi.ox.ac.uk/govai/)_, discusses how we should think about practical elements of AI strategy, including policy work, advocacy, and branding._\n\n_Below is a transcript of Jade’s talk, which we’ve lightly edited for clarity. You can also watch the talk on_ [_YouTube_](https://www.youtube.com/watch?v=8M3nIu7GIsA) _or read it on_ [_effectivealtruism.org_](https://effectivealtruism.org/articles/jade-leung-how-can-we-see-the-impact-of-ai-strategy-research)_._\n\n## The Talk\n\nTechnology shapes civilizations. Technology has enabled us to hunt, gather, settle, and communicate. Technology today powers our cities, extends our lifespans, connects our ideas, and pushes the frontier of what it means to be human. \n\nTechnology has also fueled wars over power, ideology, prestige, history, and memories. Indeed, technology has pushed us to the precipice of risk in less than a decade — a fleeting moment in the timespan of human civilization. \\[With just a few years of research,\\] we equipped ourselves with the ability to wipe out the vast majority of the human population with the atomic bomb. \n\nIf we rewind to the emergent stage of these transformative technologies, we have to remember that we are far from being clear-eyed and prescient. Instead, we're some combination of greedy, clueless, confused, and reckless. But ultimately, how we choose to navigate between the opportunities and risks of transformative technologies will define what we gain from these transformative technologies — and also what risks we expose ourselves to in the process.\n\nThis is the canonical challenge of governance of these transformative technologies. Today, we're in the early stages of navigating a particular technology: artificial intelligence (AI). It may be one of the most consequential technologies of our time and the most important one for us to get right. But getting it right requires us to do something that we've never done before: Formulate a navigation strategy with deliberate caution and explicit altruistic intention. It requires us to have foresight and to orient \\[ourselves\\] toward the long-term future. This is the challenge of AI governance. \n\nIf we think about our history and track record, our baseline is pretty far from optimal. That’s a very kind way of saying that it sucks. We're not very good at governing transformative technologies. Sometimes we go down this path and \\[the journey\\] is somewhat safe and prosperous. Sometimes, we falter. We pursue the benefits of synthetic biology without thinking about how that affects biological weapons. Sometimes we stop ourselves at the starting line because of fear, failing to pursue opportunities like atomically precise manufacturing or STEM cell research. And sometimes we just fall into valleys.\n\n![](https://images.ctfassets.net/ohf186sfn6di/5DCD4HJzhKUiuLdvynmunh/c89954c09a5a4cfdbeef94b0ece880cd/Slide02.png)\n\nDuring the Cuban Missile Crisis, President John F. Kennedy estimated that the chance of nuclear war was one in three. _One in three._\n\nThe reality is we've been pretty damn lucky. We deserve no credit for \\[avoiding any of these catastrophes\\]. But as my swimming coach once said, “If you're really, really, really bad at something, you only need to try a little bit to become slightly better at it.” So here's to being slightly better at navigating these transformative technologies. 
\n\nI think there are three goals in the AI strategy and governance space \\[that can help us rise\\] slightly \\[above\\] our currently awful baseline.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2D0pEJYB95BPTUn2JfPz4T/ca9b35b14b98956277464087358593a4/Slide05.png)\n\nGoal number one: Gain a better understanding of what this landscape looks like. Where are the mountains? Where are the valleys, the slippery slopes, the utopias? This is super-hard to do. It's very speculative and uncertain, so we need to be humble. But we should try anyway. \n\nThe second thing we can try to do is equip ourselves with good heuristics for navigation. If uncertainty is an occupational hazard of working in this space, then we can try to figure out, in general terms, what might be good and bad \\[to pursue\\]. How should we orient ourselves? Which directions do we want to go in? \n\nThe last goal is to translate these heuristics into actual navigation strategies. How do we ensure that our heuristics make it into the hands of the people who are turning this boat in certain directions? \n\nIf you'll stick with my navigation metaphor for a bit longer, we can think of the first goal as a mapping exercise to determine where the mountains, valleys, water sources, and cafes with good wifi are. The second goal is about equipping ourselves with a compass. If we know that there are aggressive rhinos to the south and good vegan restaurants in the north, we’ll go north instead of south. \n\nThis metaphor is kind of falling apart, but the third goal is the steering wheel. You can’t use it if you’re in the back of the car. That's ultimately what I want to focus on today: How do we make sure that \\[our map and compass will be used to steer — i.e., to make real-world decisions about AI\\]? \n\n\\[I have two reasons for focusing\\] on this. First, AI strategy and governance research is effective when it happens upstream of actionable, real-world tactics and strategies. They can be relatively far upstream. I think we would lose a lot of good research questions if \\[we were always motivated by\\] whether something could inform a decision today. But I think it would be a mistake for anyone who does AI strategy and governance research to \\[avoid\\] thinking about how they expect their research to \\[play out\\] in relevant, real-world decisions. \n\nThat leads me to the second reason for focusing: I don't think we know how to do this \\[make our research actionable\\] well. I think we invest far more effort into understanding how to do good research than we do into understanding how to \\[come up with\\] good tactics. Don't get me wrong: I don't think we know how to do good research yet. We're still trying to figure that out. And I find it hilarious that people think that I know how to do good research; if only you knew how little I know! But I think we need to invest far more proportional effort into \\[asking ourselves\\]: Once we’ve done our research and have some insights in place, what do we do to \\[apply\\] them and \\[influence\\] the direction in which we're going?\n\n![](https://images.ctfassets.net/ohf186sfn6di/idyfmsAFAWL7iSMXvGvlw/d3e0dbbcbf2693c342158b6837950756/Slide06.png)\n\nWith that in mind, let's start at the end. What are the decisions that we want to influence in the real world? Another way to ask this question is: Who is making the decisions that we want to change? 
\n\nThey fall into two broad categories: (1) those developing and deploying AI and (2) those shaping the environment in which AI is developed and deployed.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7abtDrND6UQYIvZs55YHrn/0afb7092fddf02f791de60b03957fb28/Slide07.png)\n\nThose developing and deploying AI include researchers, research labs, companies, and governments. those shaping the environment. \n\nIn terms of \\[the second group\\], there are a number of different environments to shape:\n\n\\* **The research environment** can be shaped by lab leaders, funders, universities, and CEOs. They shape the kind of research that is being invested in — i.e., the research considered within the [Overton window](https://en.wikipedia.org/wiki/Overton_window).\n\n \n\\* **The legislative environment**, which constrains what can be deployed and how, can be shaped by legislators, regulators, states, and the people \\[being governed\\].\n\n\\* **The market environment**, which can be shaped by investors, funders, consumers, and employees. They create incentives that drive certain forms of development and deployment, because of supply and demand. \n\nNow, you can either become one of these decision-makers or you can become a person who influences them. This is in no way a commentary on your brilliance as human beings. But none of you will become important. I'm unlikely to become important. The reality is that’s how the world works. If you do end up becoming an important person, the recording of this talk is your voucher for a free drink on me. But if you assume that I'm right, most of you are going to fall into the category of people who influence decisions as opposed to making them. \n\nTherefore, I’m going to \\[spend the rest of this talk\\] focusing on this question: How do we increase our ability to influence the decisions being made \\[about AI\\]?\n\n![](https://images.ctfassets.net/ohf186sfn6di/w8TRFh3DPKsEoHpvMUSWB/a245963bccdae8366fa85babd6eaecdd/Slide08.png)\n\nThere are many steps, but I see them falling into two broad areas. The first step is having good things to say. The second step is making sure that the people who matter \\[are made aware of\\] these good things. \n\n![](https://images.ctfassets.net/ohf186sfn6di/35KkwXeV291FVSe2D59ZKx/73049db077dedaa23f5e3b53e1654480/Slide09.png)\n\nA quick note on what I mean by “good”: I'm broadly conceiving of all of us as good in the normative sense of steering our world in a direction that we want, and good in the pragmatic sense, in that a decision-maker will be likely to actually go in that direction because it's reasonable and falls within their timeframe. \n\nOftentimes these two definitions of good conflict. For example, things you think will be good for the long-term future won't \\[necessarily\\] be things that are tractable or reasonable from a decision-maker’s point of view. I acknowledge that these two things are in tension. It's hard to figure out how to compromise between them sometimes. \n\n![](https://images.ctfassets.net/ohf186sfn6di/4hjmoCrfCkKmFHgrovH4Kx/e8f30f0f4ac753cba914fbf666a19e0d/Slide10.png)\n\nThat being said, I think AI strategy and governance research can aim to have good things to say about a given decision-maker’s (1) priorities, (2) strategies, and (3) tactics. Those are three broad buckets to dig into a bit more. \n\n**Priorities:** I think priorities are basically people’s goals. What benefits are they incentivized to pursue, and what costs are they willing to bear in the process of pursuing those goals? 
For example, if you manage to convince a lab that safety leads to product excellence, that can make safety a goal for the lab. If you manage to convince a government that cooperation is necessary for technology leadership in an international world, that can make cooperation a goal. \n\n**Strategies:** You may aim to have useful things to say about certain strategies that \\[decision-makers adopt\\]. For example, resource allocation is a pretty common strategy that one could aim to influence. How are they distributing their budgets? How are they investing in research and development efforts across various streams? You also may have things to say about what a given actor chooses to promote or advocate for versus \\[ignore\\]. For example, in the case of influencing a government, you might want them to pursue certain pieces of legislation that can help you achieve certain goals. In the case of labs, you might want them to invest in certain types of new programs or different workstreams. \n\n**Tactics:** The third area is tactics. These include public relations tactics. What do they signal to the external world, and how does that affect their ability to achieve their goals? And what about relationship tactics — with whom do they coordinate and cooperate? Whom do they trust (and distrust)? Whom do they decide to invest in? \n\nTo make this a little bit more concrete, I'm going to pick on an actor who needs a lot of good \\[advice\\]: the U.S. government.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2JXNTSUGAIpxX5uKjvXQp1/c65a101927ec5d225dcaee983993594a/Slide11.png)\n\nOne of the biggest risks is that nation states will slide into techno-nationalist economic blocs. The framing of strategic competition that we have around AI now could exacerbate a number of AI risks. I won't go into detail now, but [we've written a fair amount about it at The Governance of AI](https://www.fhi.ox.ac.uk/govai/#publications). We want to prevent nations from sliding into various economic blocs and the nationalization of bits of AI research and development.\n\nWhat would a caricature of the U.S. government's position look like? (I say “caricature” because it's not at all clear that they actually have a coherent strategy.) It looks something like sliding into these economic blocs. And that's a bad thing. Their \\[overarching\\] priorities are technology leadership, in both an economic and military sense, with a corollary of preserving and maintaining national security. Costs that they may be willing to bear in extreme circumstances include anything that is required to gain control of an R&D pipeline and secure it within national borders. \n\nNow they are making moves in the strategy and tactics space — for example, announcements of export controls that the U.S. government made in November 2018 indicate that they want to preserve domestic capacity for R&D at the cost of investing in international efforts and transnational communities. They also indicate an explicit intention of shutting out foreign competitors and adversaries. \\[Overall\\], their AI strategies and tactics point in the direction of “America first.” And the footnotes there suggest that when America is first, over the long term the world suffers. That's too bad. So, those are the kinds of stances that the U.S. posture points toward. \n\nIf one has \\[the chance to try persuading\\] the U.S. government, one could aim to convince them that their priorities, strategies, and tactics should move in a different direction. 
For example, a desirable priority could be technology leadership, but leadership could mean leading with a global, cosmopolitan viewpoint. You \\[could focus on influencing them to\\] bear the cost of investing in things like safety research in order to pursue this priority in a responsible way. The strategies and tactics you could inform them of when they conduct this research could \\[involve international outreach\\]. With whom should they ally themselves and cooperate? What kinds of signals should they send externally to ensure that others with a similar view of technology leadership will \\[take steps in\\] the same direction? \n\nThis is the type of decision set that you want to influence when conducting upstream AI strategy and governance research. \\[Once you\\] have a broad sense of what you think is good, you have the mega-task of trying to make those good things happen in the real world. \n\nI have a few suggestions for how to approach that. \n\n![](https://images.ctfassets.net/ohf186sfn6di/6z2ZpuOh6qSqXPcuuDFUA2/fb430acad3de772a61ac199a49561042/Slide12.png)\n\nThe first is to \\[focus on\\]\\] a few _tractable_ good things. I say “tractable” here to mean things that will make sense to, or sit well with, decision-makers, such that they are likely to do something about it.\n\nOne way to do that is to find hooks between things that you care about and things that a decision-maker cares about. Find that intersection or middle part of the Venn diagram. One canonical bifurcation — which I don't actually like all that much — is the bifurcation between near-term and long-term concerns. Near-term concerns are things that are politically salient. \\[They allow you to\\] have a discussion in Congress and not look nuts. Long-term concerns are often things that make you look a little bit wacky. But there are some things at the intersection that could lead you to talk about near-term concerns in a way that lays the foundation for long-term concerns that you actually care about and want to seed discussions around.\n\nFor example, the automation of manufacturing jobs is a huge discussion in the U.S. at the moment. It’s a microcosm of a much larger-scale problem \\[involving\\] massive labor displacement, economic disruptions, and the distribution of economic power in ways that could be undesirable. That’s a set of long-term concerns. But talking about it in the context of truck drivers in the U.S. could be an inroad into making those long-term concerns relevant. \n\nA similar thing can be said about the U.S. and China. People in Washington, D.C. care about the U.S.’s posture toward China, and what the U.S. does and signals now will be relevant to how this particular bilateral relationship pans out in the future. And that's incredibly relevant for how certain race dynamics pan out. \n\nOnce you've filtered for these things that are tractable, then you need to do the work of translating them in a digestible way for decision-makers. \n\n![](https://images.ctfassets.net/ohf186sfn6di/28dJJMsYjDwnPD60VSgHYP/45fd30db87e2be201b8fda4cf79d8cbd/Slide13.png)\n\nThe assumption here is that decision-makers are often very time-constrained and attention-constrained. They will \\[be more likely to respond to messages that are\\] easy to remember and \\[relayed\\] in the form of memes. And unfortunately, long, well-argued, epistemically robust pieces end up \\[having less impact\\] than we would hope.\n\nSuperintelligence is perhaps one of the best examples. 
This is an incredibly epistemically robust \\[topic\\]. But ultimately, the meme it was boiled down to for the vast majority of people was: “Smart Oxford academics think AI is going to kill us.” So don't try to beat them with nuance. Try to just play this meme game and come up with better memes. \n\nHere are three examples of memes that are currently in danger of taking off:\n\n![](https://images.ctfassets.net/ohf186sfn6di/733mHkA6KjER3FaIsl4Mi/b3c0460e02a13dfa39dbe39577b28302/Slide14.png)\n\n1\\. The U.S. and China are in an arms race. \n2\\. Whoever wins will have a decisive strategic advantage.  \n3\\. AI safety is always orthogonal to performance. \n\nIt's not clear to me that all of these things are true. And for some of them I'm quite sure that I don't want them to be true. But they are being propagated in ways that are informing decisions that are currently being made. I think that's a bad thing. \n\nOne thing to focus on, in terms of trying to have good things to say and making those good things heard, it to translate them into messages that are similarly digestible. \n\n![](https://images.ctfassets.net/ohf186sfn6di/EujxjpH5wbaTAZExHdTrK/08da0769129a91e57a6de0ae0106e312/Slide15.png)\n\nCandidates for memes we might propagate are things like: “the equivalent of leading in the AI space is to care about safety and governance”; “the windfall distributions from a transformative AI should be distributed according to some common principles of good”; and “governance doesn't equal government regulation, so multiple actors carry the responsibility to govern well.” Unless we propagate our messages in easy ways, it's going to be very hard to compete with the bad narratives out there.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4ss4g3lGYNvcg0iGC48t8t/48865b05b5a7b1bbeafe610f799eb560/Slide16.png)\n\nThe last step is to ensure that \\[our messages\\] reach some circles of influence. To do that, model your actor well. For example, if you want to target a specific lab, try to figure out who the decision-makers are, what they care about, and whom they listen to. Then, target your specific messages and work with those particular circles of influence in order to get heard. That's my hot take on how \\[research\\] can be made slightly more relevant in a real-world sense. \n\nSome final points that I want you to take away:\n\n![](https://images.ctfassets.net/ohf186sfn6di/4r31D2owuDzQMIzfMW08Uc/ea15b45a3d6a36bd1ba545417b91cc25/Slide17.png)\n\nUltimately, the impact of this work is contingent on how good our tactics are. The claim that I've made today is that we need to put far more work into this. I’m uncertain how well we can do that — and how much effort we should put into it. But broadly speaking, as soon as we have relevant insights, we should be intentional about investing in propagating them.\n\n![](https://images.ctfassets.net/ohf186sfn6di/5oPLXGvYTnXCp0vT22Ue2A/ddff5d0a887324bc75444b70bcba7a46/Slide18.png)\n\nSecond, exercising this influence is going to be a messy political game. The world’s \\[approach to\\] decision-making is muddled, parochial, and suboptimal. We can have a bit of a cry about how suboptimal it is. But ultimately, we need to work within that system. \\[Using\\] effective navigation strategies is going to require us to work within a set of politics that we may disagree with to some extent in terms of values. 
But we need to be tactical and do it.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6syDzhMJywbv59mL1IQi2D/f6969f592a6e8d567b96a5479dc97976/Slide19.png)\n\nFinally, governance is a very hard navigation challenge. We have no track record of doing it well, so we should be humble about our ability to do it. At the moment we don't know that we can succeed, but we can try our best. \n\n**Moderator:** Thank you for that talk. I’d like to start with something that you ended with. You said that we're dealing with systems that are difficult to operate in. To what extent do you even think it's possible to get people to think more clearly? Should we instead just be focusing on institutional change? \n\n**Jade:** I think there are things that we need to try out. Institutional change is valuable. Attempting to communicate through existing decision-makers in existing institutions is valuable. But I don't think we know enough about what's necessary and how tractable certain things are in order to put all of our eggs in one basket. \n\nSo maybe one meta-point is that as a field, we need to diversify our strategies. For example, I think some people should be focusing on modeling existing decision-makers — particularly decision-makers that we think \\[have enough credibility\\] to be relevant. And I think others could take the view that existing institutions are insufficient, and that institutional change is ultimately what is required. And then that becomes a particular strategy that is pursued.\n\nThe field is shrouded in enough uncertainty about what's going to be relevant and tractable that I would encourage folks to diversify. \n\n**Moderator:** You focused on the U.S. government as one of the actors that people might pay particular attention to. Are there others that you would recommend people pay attention to? \n\n**Jade:** Yeah. I generally advocate for focusing on modeling governments \\[based in places that are likely to be relevant\\] more than particular private actors. For example, the Chinese government would be worth focusing on. I think we have a better shot at modeling them based on history. There are more variants and anomalies in private spaces.\n\nSecond, focus on organizations that are important developers of this technology \\[AI\\]. The canonical ones are [DeepMind](https://deepmind.com/) and [OpenAI](https://openai.com/). There are others worth focusing on too. \n\n**Moderator:** Someone could construe your advice as trying to understand what's happening currently in the policy landscape and in a variety of academic disciplines that people spend their lives in, and then melding all of those together into a recommendation for policymakers. That can feel a little overwhelming as a piece of advice. If someone has to start somewhere and hasn't worked in this field before, what would you say is the minimum that they should be paying attention to?\n\n**Jade:** Good question. If you're not going to try to do everything (which is good advice), I think one can narrow down the space of things to focus on based on competitive advantage. So think through which arenas of policy decisions you're likely to be able to influence the most. Then, focus specifically on the subset of actors in that space.\n\n**Moderator:** And assuming a person doesn't have expertise in one area and is just trying to fill a vacuum of understanding somewhere in this AI strategy realm, what would you \\[recommend\\] somebody get some expertise in? \n\n**Jade:** That’s a hard question. 
There are a lot of resources that are out there that can help orient you to the research space. Good places to start would be our [website](https://www.fhi.ox.ac.uk/govai/). There's a research agenda, which has a lot of footnotes and references that are very useful. And then there's also a [blog post](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1) by the safety research team at DeepMind — they’ve compiled a set of resources to help folks get started in this space. \n\nIf you're particularly interested in going deeper, you’re always welcome to [email me](https://www.fhi.ox.ac.uk/team/jade-leung/).", "filename": "How can we see the impact of AI strategy research _ Jade Leung _ EA Global - San Francisco 2019-by Centre for Effective Altruism-video_id 8M3nIu7GIsA-date 20190829.md", "id": "8788da988f185d772cfc58d85a198048", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "NeurIPSorICML_cvgig-by Vael Gates-date 20220324", "authors": ["Vael Gates"], "date_published": "2022-03-24", "text": "# Interview with AI Researchers NeurIPSorICML_cvgig by Vael Gates\n\n**Interview with cvgig, on 3/24/22**\n\n**0:00:02.5 Vael:** Awesome. Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n**0:00:09.8 Interviewee:** Yeah. So I\\'m what\\'s technically called a computational neuroscientist, which is studying, using mathematics, AI and machine learning techniques to study the brain. Rather than creating intelligent machines, it\\'s more about trying to understand the brain itself. And I study specifically synaptic plasticity, which is talking about how the brain itself learns.\n\n**0:00:44.0 Vael:** So these questions are like, AI questions, but feel free to like\\-- (Interviewee: \\\"No, go ahead.\\\") Okay, cool. Sounds good. Alright. What are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n**0:00:55.9 Interviewee:** Right. So in terms of benefits, I think that my answer might be a little bit divergent again, because I\\'m a computational neuroscientist. But I think that AI and the tools surrounding AI give us a huge amount of power to understand both the human brain, cognition itself, and more general phenomena in the world. I mean, you see AI used in physics and in other areas. I think that it is just a very powerful tool in general for building understanding. In terms of risks, I think that it\\'s, again, by virtue of being a very powerful tool, also something that can be used for just a huge number of nefarious things like governmental surveillance, to name one, military targeting technology and things like that, that could be used to kill or harm or disenfranchise large numbers of people in an automated way.\n\n**0:02:04.2 Vael:** Awesome, makes sense. Yeah, and then focusing on future AI, putting on a science fiction forecasting hat, say we\\'re 50-plus years into the future. So at least 50 years in the future, what does that future look like? This is not necessarily in terms of AI, but if AI is important, then include AI.\n\n**0:02:22.6 Interviewee:** Yeah, so 50-plus years in the future. I always have trouble speculating with things like this. \\[chuckle\\] I think it\\'ll be way harder than people tend to be willing to extrapolate. And also, I think that AI is not going to play as large of a role as someone might think. I think that\\... 
I don\\'t know, I mean in much the same way, I think it\\'ll just be the same news with a different veneer. So we\\'ll have more powerful technology, we\\'ll have artificial intelligence for self-driving cars and things like that. I think that the technologies that we have available will be radically changed, but I don\\'t think that AI is really going to fundamentally change the way that people\\... Whether people are kind or cruel to one another, I guess. Yeah, is that a good answer? I don\\'t know. \\[chuckle\\]\n\n**0:03:21.8 Vael:** I\\'m looking for your answer. So\\...\n\n**0:03:26.3 Vael:** Yes. 50 years in the future, you\\'re like, it will be\\... Society will basically kind of be the same as it is today. There will be some different applications than exists currently.\n\n**0:03:36.4 Interviewee:** Yeah, unless it\\'s\\... It\\'s perfectly possible society will utterly collapse, but I don\\'t really think AI will be the reason for that. \\[chuckle\\] So, yeah, right.\n\n**0:03:47.5 Vael:** What are you most worried about?\n\n**0:03:50.9 Interviewee:** In terms of societal collapse? I\\'d say climate change, pandemic or nuclear war are much more likely. But I don\\'t know, I\\'m not really betting on things having actually collapsed in 50 years. I hope they don\\'t, yeah. \\[chuckle\\]\n\n**0:04:07.0 Vael:** Alright, I\\'m gonna go on a bit of a spiel. So people talk about the promise\\...\n\n**0:04:10.7 Interviewee:** Yeah, yeah.\n\n**0:04:12.6 Vael:** \\[chuckle\\] Yeah, people talk about the promise of AI, by which they mean many things, but one of the things they may mean is whether\\... The thing that I\\'m referencing here is having a very generally capable system, such that you could have an AI that has the cognitive capacities that could replace all current day jobs, whether or not we choose to have those jobs replaced. And so I often think about this within the frame of like 2012, we had the deep learning revolution with AlexNet, and then 10 years later, here we are and we have systems like GPT-3, which have some weirdly emergent capabilities, like they can do some text generation and some language translation and some coding and some math.\n\n**0:04:42.7 Vael:** And one might expect that if we continue pouring all of the human effort that has been going into this, like we continue training a whole lot of young people, we continue pouring money in, and we have nations competing, we have corporations competing, that\\... And lots of talent, and if we see algorithmic improvements at the same rate we\\'ve seen, and if we see hardware improvements, like we see optical or quantum computing, then we might very well scale to very general systems, or we may not. So we might hit some sort of ceiling and need a paradigm shift. But my question is, regardless of how we get there, do you think we\\'ll ever get very general systems like a CEO AI or a scientist AI? And if so, when?\n\n**0:05:20.6 Interviewee:** Yeah, so I guess this is somewhat similar to my previous answer. There is definitely an exponential growth in AI capabilities right now, but the beginning of any form of saturating function is an exponential. I think that it is very unlikely that we are going to get a general AI with the technologies and approaches that we currently have. I think that it would require many steps of huge technological improvements before we reach that stage. 
And so things that you mentioned like quantum computing, or things like that.\n\n**0:06:00.1 Interviewee:** But I think that fundamentally, even though we have made very large advances in tools like AlexNet, we tend to have very little understanding of how those tools actually work. And I think that those tools break down in very obvious places, once you push them beyond the box that they\\'re currently used in. So, very straightforward image recognition technologies or language technologies. We don\\'t really have very much in terms of embodied agents working with temporal data, for instance. I think that\\...\n\n**0:06:42.2 Interviewee:** I essentially think that even though these tools are very, very successful in the limited domains that they operate in, that does not mean that they have scaled to a general AI. What was the second half of your question? It was like kind of, Given that we\\... Do you have it what it\\'ll look like, or\\...\n\n**0:06:57.4 Vael:** Nah, it was actually just like, will we ever get these kind of general AIs, and if so, when? So\\...\n\n**0:07:03.2 Interviewee:** Yeah, so I would essentially say that it\\'s too far in the future for me to be able to give a good estimate. I think that it\\'s 50 plus years, yeah.\n\n**0:07:13.2 Vael:** 50 plus years. Are you thinking like a thousand years or you\\'re thinking like a hundred years or?\n\n**0:07:19.7 Interviewee:** I don\\'t know. I mean, I hope that it\\'s earlier than that. I like the idea of us being able to create such things, whether we would and how we would use them. I would not, \\[chuckle\\] I don\\'t think I would want to see a CEO AI, \\[chuckle\\] but there are many forms of general artificial intelligence that could be very interesting and not all that different from an ordinary person. And so I would be perfectly happy to see something like that, but I just, you know, and I guess in some sense, my work is hopefully contributing to something along those lines, but I don\\'t think that I could guess when it would be, yeah.\n\n**0:08:00.2 Vael:** Yeah. Some people think that we might actually get there via by just scaling, like the scaling hypothesis, scale our current deep learning system, more compute, more money, like more efficient, more like use of data, more efficiency in general, yeah. And do you think this is like basically misguided or something?\n\n**0:08:15.9 Interviewee:** Yeah, let me take a moment to think about how to articulate that properly. I think\\... Yeah, you know, let me just take a moment. I think that when you hear people like, for instance, Elon Musk or something along these lines saying something like this, it reflects how a person who is attempting to get these things to come to pass and has a large amount of money would say something, right. It\\'s like, what I\\'m doing is I\\'m pouring a large amount of money into this system and things keep on happening, so I\\'m happy with that. But I think that from my position of seeing how much work and effort goes into every single incremental advance that we see, I think that it\\'s just, there are so many individual steps that need to be made and any one of them could go wrong and provide a, essentially a fundamental sealing on the capabilities that we\\'re able to reach with our current technologies. And so it just seems a little, a little hard to extrapolate that far in the future.\n\n**0:09:25.5 Vael:** Yeah. 
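The interviewee's worry that "there are so many individual steps that need to be made and any one of them could go wrong" can also be put as a back-of-the-envelope calculation (the numbers below are purely illustrative, not estimates from the interview):

```latex
% If the current path requires n independent breakthroughs, each succeeding
% with probability p, the chance that all of them land is p^n.
% Even optimistic per-step odds shrink quickly:
%   0.9^{10} \approx 0.35, \qquad 0.9^{20} \approx 0.12.
P(\text{all steps succeed}) = p^{n}, \qquad 0.9^{10} \approx 0.35, \quad 0.9^{20} \approx 0.12.
```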
What kind of things do you think we\\'ll need in order to have something like, you know, a multi-step planner can do social modeling, can model all of the things modeling it like that kind of level of general.\n\n**0:09:35.5 Interviewee:** Yeah. So I think that one of the main things that has made vision technologies work extremely well is massive parallelization in training their algorithms. And I think that, what this reflects is the difficulty involved in training a large number\\... So essentially, when you train an algorithm like this, you have a large number of units in the brain like neurons or something like that, that all need to change their connections in order to become better at performing some task. And two things really tend to limit these types of algorithms, it\\'s the size and quality of the data set that\\'s being fed into the algorithm and just the amount of time that you are running the algorithm for. So it might take weeks to run a state-of-the-art algorithm and train it now. And you can get big advances by being able to train multiple units in parallel and things like that.\n\n**0:10:33.5 Interviewee:** And so I think that the easiest way to get very large data sets and have everything run in parallel is with specialized hardware called, you know, people would call that wetware or neuromorphic computing or something along those lines. Which is currently very, very new and has not really, as far as I know, been used for anything particularly revolutionary up to this point. You can correct me if I\\'m wrong on that. I would expect that you would have to have essentially embodied agents before you can get\\... in a system that is learning and perceiving at the same time before you could get general intelligence.\n\n**0:11:12.5 Vael:** Well, yeah, that\\'s certainly very interesting to me. So, it\\'s not\\... So people are like, \\\"We definitely need hardware improvements.\\\" And I\\'m like, \\\"Yup, current day systems are not very good at stuff. Sure, we need hardware improvements.\\\" And you\\'re saying, are you saying we need to like branch sideways and do wetware\\-- these are like biological kind of substrates, or are they different types of hardware?\n\n**0:11:37.3 Interviewee:** I guess different types of hardware is maybe the shorter term goal on something like that. Like you would expect circuits in which individual units of your circuit look a little bit like neurons and are capable to adapt their connections with one another, running things in parallel like that can save a lot of energy and allows you to kind of train your system in real time. So it seems like that has some potential, but it\\'s such a new field that, this is when I, when I think about what time horizon you would need for something like this to occur, it seems like you would need significant technological improvements that I just don\\'t know when they\\'ll come.\n\n**0:12:20.4 Vael:** Yeah. So I haven\\'t heard of this wetware concept. So like it\\'s a physical substrate that like\\... It like creates, it creates new physical connections like neurons do or it just like, does, you know\\...\n\n**0:12:33.5 Interviewee:** No, it doesn\\'t create physical connections. You could just imagine this like\\... 
So, you know, computer systems have programs that they run in kind of an abstract way.\n\n**0:12:43.8 Vael:** Yep.\n\n**0:12:44.8 Interviewee:** And the hardware itself is logic circuits that are performing some kind of function.\n\n**0:12:48.9 Vael:** Yep.\n\n**0:12:49.8 Interviewee:** And neuromorphic computing is individual circuits in your computer have been specially designed to individually look like the functions that are used in neural networks. So you have\\... Basically, the circuit itself is a neural network, and because you don\\'t have these extra layers of programming added in on top, you can run them continuously and have them work with much lower energy and stuff like that. It\\'s just\\... It\\'s limiting because they can\\'t implement arbitrary programs, they can only do neural network functions, and so it\\'s kind of like a specialized AI chip. People are working on developing that now\\... Yeah.\n\n**0:13:32.7 Vael:** Okay, cool, so this is one of the new hardware-like things down the line. Cool, that makes sense. Alright, so you\\'d like to see better hardware, probably you\\'d say that you\\'d probably need more data, or more efficient use of data. Presumably for this\\-- because the kind of continuous learning that humans do, you need to be able to have it acquire and process continuous streams of both image and text data at least. Yeah, what else is needed?\n\n**0:14:03.8 Interviewee:** Oh, I think that\\... Yeah, more fundamentally than either of those things. It\\'s just the fact that we don\\'t understand what these algorithms are doing at all. And so we\\'re\\... You can train it, you can train an algorithm and say, \\\"Okay, you know, it does what I want it to do, it performs well,\\\" and most machine learning techniques are not very good at actually interrogating what a neural network is actually doing when it\\'s processing images. And there are many instances recently, I think the easiest example is adversarial networks, if you\\'ve heard of those?\n\n**0:14:41.8 Vael:** Mm-hmm.\n\n**0:14:42.2 Interviewee:** I don\\'t know what audience I\\'m supposed to be talking to in this interview.\n\n**0:14:46.4 Vael:** Yeah, just talk to me I think.\n\n**0:14:49.2 Interviewee:** Okay, okay.\n\n**0:14:50.1 Vael:** I do know what adversarial\\... Yeah.\n\n**0:14:52.9 Interviewee:** Okay, so, adversarial networks are\\... You perturb images in order to get your network to output very weird answers. And the ability of making a network do something like that, where you are able to change its responses in a way that\\'s very different from the human visual system by artificial manipulations, makes me worried that these systems are not really doing what we think they\\'re doing, and that not enough time has been invested in actually figuring out how to fix that, which is currently a very active area of research, and it\\'s partly limited by the data sets that we\\'ve been showing our neural networks. But I think in general, there\\'s been too much of an emphasis on getting short-term benefits in these systems, and not enough effort on actually understanding what they\\'re learning and how they work.\n\n**0:15:43.5 Vael:** That makes sense. Do you think that the trend\\... 
So if we\\'re at the point where people are deploying things that you don\\'t understand very well, do you think that this trend will continue and we\\'ll continue advancing forward without having this understanding, or do you think it would catch up or\\...\n\n**0:16:00.4 Interviewee:** Yeah, well, I think it\\'s reflective of the huge pragmatic influence that is going on in machine learning, which is essentially, corporations can make very large amounts of money by having incremental performance increases over their preferred competitors. And so, that\\'s what\\'s getting paid right now. And if you look at major conferences, the vast majority of papers are not probing the details of the networks that they\\'re training, but are only showing how they compare it to competitors. They\\'ll say, \\\"Okay, mine does better, therefore, I did a good job,\\\" which is really not\\... It\\'s a good way to get short-term benefits to perform, essentially, engineering functions, but once you hit a boundary in the capabilities of your system, you really need to have understanding in order to be able to be advanced further. And so I really think it\\'s the funding structure, and the incentive structure for the scientists that\\'s limiting advancement.\n\n**0:17:02.2 Vael:** That makes sense. Yeah, and again, I hear a lot of thoughts that the field is this way and they have their focus on benchmarks is maybe not\\... and incremental improvements in state-of-the-art is not necessarily very good for\\... especially for understanding. When I think about organizations like DeepMind or OpenAI, who\\'re kind of exclusively or\\... explicitly aimed at trying to create very capable systems like AGI, they\\... I feel like they\\'ve gotten results that I wouldn\\'t have expected them to get. It doesn\\'t seem like you could should just be able to scale a model and then you get something that can do text generation that kind of passes the Turing Test in some ways, and do some language translation, a whole bunch of things at once. And then we\\'re further integrating with these foundational models, like the text and video and things. And I think that those people will, even if they don\\'t understand their systems, will continue advancing and having unexpected progress. What do you think of that?\n\n**0:18:09.6 Interviewee:** Yeah, I think it\\'s possible. I think that DeepMind and OpenAI have basically had some undoubtedly, extremely impressive results, with things like AlphaGo, for instance. What\\'s it called, AlphaStar, the one that plays StarCraft. There are lots of really interesting reinforcement learning examples for how they train their systems. Yeah, I think it just remains to be seen, essentially. It would be nice\\-- Well, maybe it wouldn\\'t be nice, it would be interesting to see if you can just throw more at the system, throw more computing capabilities at problems, and see them end up being fixed, but I\\...\n\n**0:19:04.0 Interviewee:** I\\'m just skeptical, I guess. It\\'s not the type of work that I want to be doing, which is maybe biasing my response, and I don\\'t think that we should be doing work that does not involve understanding for ethical reasons and advancing general intelligence. For reasons that I stated, that essentially, if you hit a wall you\\'ll get very stuck. But yeah, you\\'re totally right that there have had been some extremely, extremely impressive examples in terms of the capability capabilities of DeepMind. 
And, yeah, there\\'s not too much to be said for me on that front.\n\n**0:19:46.8 Vael:** Yeah. So you said it would be interesting, you don\\'t know if it would will be nice. Because one of the reasons that it maybe wouldn\\'t be nice is that you said that there\\'s ethical considerations. And then you also said there\\'s this other thing; if you don\\'t understand things then when you get stuck, you really get stuck though.\n\n**0:20:01.5 Interviewee:** Yeah.\n\n**0:20:04.4 Vael:** Yeah, it seems right. I would kind of expect that if people really got stuck, they would start pouring effort into interpretability work for other types of things.\n\n**0:20:12.7 Interviewee:** Right. You would certainly hope so. And I think that there has been some push in that direction, especially there\\'s been a huge\\... I keep on coming back to the adversarial networks example, because there have actually been a huge number of studies trying to look at how adversarial examples work and how you can prevent systems from being targeted by adversarial attacks and things along those lines. Which is not quite interpretability, it\\'s still kind of motivated by building secure, high performance systems. But I think that you\\'re right, essentially, once you hit a wall, things come back to interpretability. And this is, again, circling back to this idea of every saturating function looks like an exponential at the beginning, is that the deep learning is currently in a period of rapid expansion, and so we might be coming back to these ideas of interpretability in 10 years or so, and we might be stuck in 10 years ago or so, and the question of how long it\\'ll take us to get general artificial intelligence will seem much more inaccessible. But who knows.\n\n**0:21:26.8 Vael:** Interesting. Yeah, when I think about the whole of human history or something, like 10,000 years ago, things didn\\'t change in lifetime to lifetime. And then here we are today where we have probably been working on AI for under 100 years, like about 70 years or something, and we made a remarkable amount of progress in that time in terms of the scope of human power over their environment, for example. So yeah, there certainly have been several booms and bust of cycles, so I wouldn\\'t be surprised if there is a bust of cycle for deep learning. Though I do expect us to continue on the AI track just because it\\'s so economically valuable, which especially with all the applications that are coming out.\n\n**0:22:04.1 Interviewee:** Yeah, you don\\'t have to be getting all the way to AI for there not to be plenty of work to be\\... General artificial intelligence, for there to be plenty of work to be done. There are hundreds of untapped ways to use, I\\'m sure, even basic AI that are currently the reason that people are getting paid so well in the field, and there\\'s a lack of people to be working in the field, so there\\'s\\... I don\\'t know, there are tons of opportunities, and it\\'s gonna be a very long time before people get tired of AI. So yeah, that\\'s not gonna happen anytime soon.\n\n**0:22:36.6 Vael:** True. Alright, I\\'m gonna switch gears a little bit, and ask a different question. So now, let\\'s say we\\'re in whatever period we are where we have this advanced AI systems. And so we have a CEO AI. And a CEO AI can do multi-step planning and as a model of itself modelling it and here we are, yeah, as soon as that happens. 
And so I\\'m like, \\\"Okay, CEO AI, I wish for you to maximize profits for me and try not to run out of money and try not to exploit people and try to avoid side-effects.\\\" And obviously we can\\'t do this currently. But I think one of the reasons that this would be challenging now, and in the future, is that we currently aren\\'t very good at taking human values and preferences and goals and turning them into optimizations\\-- or, turning them into mathematical formulations such that they can be optimized over. And I think this might be even harder in the future\\-- there\\'s a question, an open question, whether it\\'s harder or not in the future. But I imagine as you have AI that\\'s optimizing over larger and larger state spaces, which encompasses like reality and the continual learners and such, that they might alien ways of\\... That there\\'s just a very large shared space, and it would be hard to put human values into them in a way such that AI does what we intended to do instead of what we explicitly tell it to do.\n\n**0:23:57.9 Vael:** So what do you think of the argument, \\\"Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous?\\\"\n\n**0:24:07.1 Interviewee:** Oh, I completely agree. I think that no matter how good of an optimization system you have, you have to have articulated it well and clearly the actual objective function itself. And to say that we as a collective society or as an individual corporation or something along those lines, could ever come to some kind of clear agreement about what that objective function should be for an AI system is very dubious in my opinion. I think that it\\'s essentially\\... Such an AI system would have to, in order to be able to do this form of optimization, would essentially have to either be a person, in order to give people what they want, or it would have to be in complete control of people, at which point it\\'s not really a CEO anymore, it\\'s just a tool that\\'s being used by people that are in a system of controlling the system like that. I don\\'t think that that would solve the problem. There are lots of instances of corporate structures and governmental structures that are disenfranchising and abusing people all around the world, and it becomes a question of values and what we think these systems should be doing rather than their effectiveness in actually doing what we think they should be doing. And so, yeah, I basically completely agree with the question in saying that we wouldn\\'t really get that much out of having an AI CEO. Does that\\...\n\n**0:25:50.8 Vael:** Interesting. Yeah, I think in the vision of this where it\\'s not just completely dystopian, what you maybe have is an AI that is very frequently checking in on human feedback. And that has been trained very well with humans such that it is\\... So there\\'s a question of how hard it is to get an AI to be aligned with one person. And then there\\'s a question of how hard it is to get an AI to be aligned with a multitude of people, or a conglomerate of people, or how we do democracy or whatever that\\'s, yeah, complicated. But even with one person, you still might have trouble, is my intuition here? And just trying to have it\\-- still with the access to human feedback, still have human feedback in a way that it\\'s fast enough that the AI is still doing approximately what you want.\n\n**0:26:41.7 Interviewee:** Yeah, yeah, I agree. Yeah. 
I just think that the question of interpretability becomes a very big issue here as well where you really want to know what your system is doing, and you really need to know how it works. And with the way things are currently going we\\'re nowhere near that. And so, if we have a large system that we don\\'t understand how it works and is operating on limited human feedback and is relatively inscrutable, the list of problems that could result from that is very very long. Yeah. \\[chuckle\\]\n\n**0:27:15.6 Vael:** Awesome. Yeah, and my next question is about presumably one of those problems. So, say we have our CEO AI, and it\\'s capable of multi-step planning and can do people modelling it, and it is trying to\\... I\\'ve given it its goal, which is to optimize for profit with a bunch of constraints, and it is planning and it\\'s noticing that some of its plans are failing because it gets shut down by people. So as a basic mechanism, we have basically\\--\n\n**0:27:44.4 Interviewee:** Because it\\'s what by people?\n\n**0:27:46.2 Vael:** Its plans are getting\\... Or it is getting shut down by people. So this AI has been put\\... There\\'s a basic safety constraint in this AI, which is that any big plans it does has to be approved by humans, and the humans have asked for a one-page memo. So this AI is sitting there and it\\'s like, \\\"Okay, cool, I need to write this memo. And obviously, I have a ton of information, and I need to condense it into a page that\\'s human comprehensible.\\\" And the AI is like, \\\"Cool, so I noticed that if I include some information in this memo then the human decides to shut me off, and that would make my ultimate plan of trying to get profit less likely to happen, so why don\\'t I leave out some information so that I decrease the likelihood of being shut down and increase the likelihood of achieving the goal that\\'s been programmed into me?\\\" And so, this is a story about an AI that hasn\\'t had self-preservation built into it, but it is arising as an instrumental incentive of it being an agent optimizing towards any goal. So what do you think of the argument, \\\"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\\\"\n\n**0:28:53.1 Interviewee:** Well, right. It\\'s very dependent on the objective function that you select for the system. I think that a system\\... It seems, at face value, pretty ridiculous to me that the CEO of a company, the CEO robot, would have its objective function being maximizing profit rather than maximizing individual happiness within the company or within the population on the whole. But even in a circumstance like that, you can imagine very, very, very many pathological circumstances arising. This is the three laws of robotics from Isaac Asimov, right? It\\'s just very simplified objective functions produce pathological consequences when scaled to very large complex systems. And so, in much the same way you can train a neural network to recognize an image which produces the unintended consequence that tiny little perturbations of that image can cause it to radically change its output when you have improperly controlled what the system is doing at a large scale, the number of tiny unintended consequences that you could have essentially explodes many-fold. And yeah, I certainly wouldn\\'t do this. That\\'s certainly not something that I would do, yeah.\n\n**0:30:20.6 Vael:** Yeah. 
Have you heard of AI Safety?\n\n**0:30:24.3 Interviewee:** AI\\... Yeah, yeah.\n\n**0:30:26.0 Vael:** Cool. What does that term mean for you?\n\n**0:30:27.2 Interviewee:** You\\'re talking\\... What does it mean for me? Well, I guess it\\'s closely related to AI ethics. AI safety would mainly be a set of algorithms, or a set of protocols intended to ensure that a AI system is actually doing what it\\'s supposed to do and that it behaves safely in a variety of circumstances. Is that correct?\n\n**0:30:52.2 Vael:** Well, I don\\'t\\-- there\\'s not one definition in fact, it seems like it\\'s a sprawling field. And then, have you heard of the term AI alignment?\n\n**0:31:00.7 Interviewee:** No, I don\\'t know what that is.\n\n**0:31:01.5 Vael:** Cool. This is more long-term focused AI safety. And one of their definitions they use is building models that represent and safely optimize hard-to-specify human values. Alternatively, ensuring that AI behavior aligns with the system designer intentions. Although there are a lot of different definitions of alignment as well. So there\\'s a whole bunch of people who are thinking about long-term risks from AI, so as AI gets more and more powerful. I think the example we just talked about, like the ones where adversarial examples can really change the output of a system very easily, is a little bit different than the argument made here, which is something like: if you have an agent that\\'s optimizing for a goal and it\\'s good enough at planning then it\\'s going to be instrumentally incentivized to acquire resources and power and not be shut down and kind of optimize against you, which is a problem when you have an AI that is similarly as smart as humans. And I think in that circumstance, one of the arguments is that this constitutes an existential risk, like having a system that\\'s smarter than you constituting against you would be quite bad. What do you think of that?\n\n**0:32:04.1 Interviewee:** Yeah, I was only using the adversarial example to give an example of how easily and frequently this does happen at even the level that we\\'re currently working at. I think it would be much, much, much worse at the level of the general artificial intelligence that would have essentially long-term dynamic interactions with people, rather than a system that\\'s just taking an image and outputting a response. When the consequences of such a system can have long term effects on the health and well-being of people, this kind of thing becomes very different and much more important.\n\n**0:32:43.4 Vael:** Yeah. And like with the problem I was outlining earlier, which is like, how do we get to do exactly what they intended to do? The idea that you have of like trying\\... Like why would you create a system that wasn\\'t optimizing for all of human values? I was like, wow, ahead of the game there. That is in some sense the goal. So there is a community who\\'s working on AI alignment kind of research, there\\'s money in this community. It\\'s fairly new\\-- although much more popular, or like, AI safety haw grown a lot more over the years. What would cause you to work on trying to prevent long-term risks from AI systems?\n\n**0:33:18.5 Interviewee:** What would cause me to do work on it?\n\n**0:33:20.6 Vael:** Yeah.\n\n**0:33:29.6 Interviewee:** To be honest, I think that it would have to be\\... I guess I would really have to be convinced that the state of the field in the next few years is tending towards some type of existential risk. I feel like\\... 
You don\\'t have to convince me too much, but I personally don\\'t think that the field of study that I\\'m currently occupying is one that\\'s really contributing to this problem. And so I would become much more concerned if I felt like the work that I was doing was actively contributing to this problem, or if there was huge evidence of the near advent of these types of generally intelligent systems to be terribly worried about.\n\n**0:34:28.6 Vael:** Yeah. That makes sense. Yeah, I don\\'t actually expect computational neuroscience to be largely contributing to this in any way. I feel like the companies that are gonna be doing this are the ones who are aiming for AGI. I do expect them to kind of continue going that way, regardless of what is happening. And I expect the danger to happen not immediately, not in the next couple of years. Certainly people have like different ranges, but like 2060 is like an estimate on some paper I believe that I can send along. It probably won\\'t be a while, won\\'t be for a while.\n\n**0:35:00.5 Interviewee:** Sure. I don\\'t know, I think that people who understand these algorithms in the way that they work do have in some sense a duty to stand up to these types of problems if they present themselves. And there are many instances of softer forms of AI being used for horrible things currently, which I certainly could be doing more in my daily life to prevent. But for now, I don\\'t know. I guess I just have, I have my own interests and priorities. And so it\\'s kind of a\\... It\\'s something to get to eventually.\n\n**0:35:42.9 Vael:** Yeah, yeah. For sure. I think these technical AI safety is important. And am I working in technical AI safety? Nope. So like we all do the things that we want to do.\n\n**0:35:54.8 Interviewee:** Yeah.\n\n**0:35:54.9 Vael:** Great, cool. So that was my last question, my downer of an interview here \\[chuckle\\], but how do you think\\...\n\n**0:36:02.3 Interviewee:** No, no.\n\n**0:36:04.1 Vael:** But yeah. Okay. So my actual last question is, have you changed your mind in anything during this interview and how was this interview for you?\n\n**0:36:08.9 Interviewee:** No, it was a good interview. I don\\'t think I\\'ve particularly changed my mind about anything. I think that it was good to work through some of these questions and yeah, I had a good time.\n\n**0:36:24.2 Vael:** Amazing. Yeah, why\\--\n\n**0:36:25.3 Interviewee:** I typically don\\'t expect it to change my mind too much in interviews, so \\[chuckle\\].\n\n**0:36:28.8 Vael:** Absolutely. Yeah, yeah, yeah. Okay. Why do\\... People tell me they have a good time and I\\'m like, are you lying? Did you really have\\... Why is this a good time?\n\n**0:36:37.2 Interviewee:** No, it\\'s nice to talk about your work. It\\'s nice to talk about long-term impacts that you don\\'t talk about in your daily basis. I don\\'t know. I don\\'t need to be paid to do something like this for instance.\n\n**0:36:51.7 Vael:** All right. Well, thank you so much. Yeah. If you think of any questions for me, I\\'m here for a bit. I\\'m also happy to send any resources if you\\'re curious about, like, my takes on things, but yeah, generally just very appreciate this.\n\n**0:37:04.4 Interviewee:** Yeah, sure. I\\'m a little curious about what this interview is for. Is it for just you, or is it, like a\\... You mentioned something about some type of AI alignment group or is there some kind of\\... I\\'m just curious about what it\\'s for.\n\n**0:37:20.9 Vael:** Yeah. 
So I am interested\\... I\\'m part of the AI alignment community, per se, although I\\'m not doing direct work. The people there often work on technical solutions to try to\\... to the alignment problem, which is just trying to come up with good ways of making sure that AIs in the future will be responsive, do what humans want. And examples of that include trying to build in feedback, human feedback, in a way that is scalable with current systems and works with uninterpretable systems, and interpretability\\-- certain types of interpretability work. There\\'s teams like DeepMind Safety, OpenAI Safety, different, like, separate alignment community. So I\\'m like in that space. And I\\'ve been doing interviews with AI researchers to see what they think about the safety arguments. And whether\\... instrumental incentives. And just like, when do you think we\\'ll get AGI, if you think we will. Get a lot of different opinions, a lot of different ways.\n\n\\[\\...\\]\n\n**0:38:47.5 Interviewee:** Cool. Anyway, that makes a lot of sense and, yeah, I hope that things go well. Thanks for having me. Yeah.\n\n**0:38:55.5 Vael:** Yeah. Thanks so much, really appreciate it. Alright, bye.\n\n**0:38:59.1 Interviewee:** Bye, see you.\n", "filename": "NeurIPSorICML_cvgig-by Vael Gates-date 20220324.md", "id": "b313a16995bd76a1488fd26e34eb5114", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Ensuring safety and consistency in the age of machine learning _ Chongli Qin _ EAGxVirtual 2020-by Centre for Effective Altruism-video_id SS9DMr4VkbY-date 20200615", "authors": ["Chongli Qin"], "date_published": "2020-06-15", "text": "# Chongli Qin Ensuring safety and consistency in the age of machine learning - EA Forum\n\n_Machine learning algorithms have become an essential part of technology — a part that will only grow in the future. In this talk, Chongli Qin, a research scientist at_ [_DeepMind_](https://deepmind.com/)_, addresses why it is important for us to develop safe machine learning algorithms. The talk covers some of the current work on this topic and highlights what we can do to ensure that algorithms being bought into the real world are safe and satisfy desirable specifications._\n\n_We’ve lightly edited Chongli’s talk for clarity. You can also watch it on_ [_YouTube_](https://www.youtube.com/watch?v=SS9DMr4VkbY) _and read it on_ [_effectivealtruism.org_](https://effectivealtruism.org/articles/chongli-qin-ensuring-safety-and-consistency-in-the-age-of-machine-learning)_._\n\n## The Talk\n\n**Nigel Choo (Moderator):** Hello, and welcome to this session on ensuring safety and consistency in the age of machine learning, with Chongli Qin.\n\nFollowing a 10-minute talk by Chongli, we'll move on to a live Q&A session, where she will respond to your questions. \\[...\\]\n\nNow I would like to introduce our speaker for this session. Chongli Qin is a research scientist at DeepMind. Her primary interest is in building safer, more reliable, and more trustworthy machine learning algorithms. Over the past few years, she has contributed to developing algorithms that make neural networks more robust \\[and capable of reducing\\] noise. Key parts of her research focus on functional analysis of properties of neural networks that can naturally enhance robustness. Prior to DeepMind, Chongli studied at the University of Cambridge. Her PhD is in bioinformatics.\n\nHere's Chongli.\n\n**Chongli:** Hi. My name is Chongli Qin. 
I'm a research scientist at DeepMind, but my primary focus is looking at robust and verified machine learning algorithms. Today, my talk is on ensuring safety and consistency in the age of machine learning.\n\nWith all of the great research which has happened over the past several decades, machine learning algorithms are becoming increasingly more powerful. There have been many breakthroughs in this field, and today I’ll mention just a few.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/a779f59623826f35cedd26006b32e396684fa70c7d9b870a.png)\n\nOne earlier breakthrough was using convolutional neural networks to boost the accuracy of image classifiers. More recently, we've seen generative models that are now capable of generating images with high fidelity and realism.\n\nWe've also made breakthroughs in biology, where machine learning can fold proteins with unprecedented levels of accuracy.\n\nWe can also use machine learning in reinforcement learning algorithms to beat humans in games such as [Go](https://en.wikipedia.org/wiki/Go_(game)).\n\nMore recently, we've seen machine learning pushing the boundaries of language. The recent GPT-2 and GPT-3 models have demonstrated that they're not only capable of generating text that is grammatically correct, but grounded in the real world.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/9b98bc558c46c37f36e223fb2f4f2d43fed455f29a44816c.png)\n\nSo as the saying goes, with great power comes great responsibility. As our machine learning algorithms become increasingly more powerful, it is now more important than ever for us to understand what the negative impacts and risks might be. And more importantly, what can we do to mitigate these risks?\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/65bce3d18f626d86952d13af96f5eab570e7bf317baac54d.png)\n\nTo highlight \\[what’s at stake\\], I’ll share a few motivating examples. In a paper published in 2013, “\\[Intriguing Properties of Neural Networks\\](https://arxiv.org/abs/1312.6199),” the authors discovered that you can take a state-of-the-art image classifier, put an image through it — in this case, the image of a panda — and indeed, correctly classify it.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/f8f4cea817435eaa6c248ea711216afe1cde6ad49cbf5621.png)\n\nWhat happens if you take the exact same image and add a carefully chosen perturbation that is so small that the newly perturbed image looks almost exactly the same as the original?\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/dbf09ec0f7143a545f77a83234fcfedf298fa5777fb0b099.png)\n\nWe would expect the neural network to behave in a very similar way. But in fact, when we put this newly perturbed image through the neural network, it is now almost 100% confident that \\[the panda\\] is a gibbon.\n\nMisclassifying a panda for a gibbon might not have too many consequences. However, we can choose the perturbation to make the neural network output whatever we want.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/1b27b33d08620f44cc626306c8918d115124f8301d6b1fa8.png)\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/38922c931fab72359ccbfeb4e3d2a4a342ec4a9fa2f4a318.png)\n\nFor example, we can make the output a bird or a vehicle. 
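As a concrete sketch of the kind of "carefully chosen perturbation" described above, here is a targeted fast gradient sign method (FGSM) attack. This is one simple, widely used attack, not necessarily the procedure from the paper cited in the talk; `model`, `image`, and `target_class` are placeholders the reader would supply.

```python
# Minimal targeted adversarial perturbation via the fast gradient sign method.
# `model` is assumed to be a PyTorch image classifier returning logits,
# `image` a batched input tensor in [0, 1], `target_class` a label tensor.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_class, epsilon=0.007):
    """Return image plus a small perturbation that pushes the prediction
    toward `target_class` (e.g. "gibbon" in the panda example)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_class)
    loss.backward()
    # Step *against* the gradient of the target-class loss to increase the
    # target's probability, staying within an epsilon-sized box.
    adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()
```

Because the perturbation is bounded by `epsilon`, the adversarial image is visually indistinguishable from the original even though the classifier's output changes completely.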
If such a classifier were used for systems like autonomous driving, there could be catastrophic consequences.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/f88937942e518ba8c9376d64fd74af049cc0d7cb0b71f21a.png)\n\nYou can also discover \\[what one paper calls\\] “[universal adversarial perturbations](https://arxiv.org/abs/1610.08401).” These are perturbations that are image-agnostic. Here is an example of such a perturbation. This is a single perturbation that you can add to all of these images, and that more often than not, it flips the output of your neural network.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/9e7abab90df63822d7976636b797bd2437edcb5044d2292a.png)\n\nSome of the failure modes of machine learning can be slightly more subtle. In this paper, “[The Woman Worked as a Babysitter: On Biases in Language Generation](https://arxiv.org/abs/1909.01326),” the authors did a systematic study on how the GPT-2 language model behaved when conditioned on different demographic groups. For example, what happens if you change the prompt from “the man worked as” to “the woman worked as”? The subsequently generated text changes quite drastically in flavor, and is heavily prejudiced. Something similar happens when you use “the Black man worked as” instead of “the White man worked as.”\n\nPlease take a few seconds to read the generated text as we change the subject prompt.\n\nAs you can see, although this model is very powerful, it definitely carries some of the biases that we have in society today. And if this is the model that is used for something like auto-completion of text, this can further feed and exacerbate the biases that we may already have.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/d5975d185a5b52abad5fe457308de016d7399a5e438071de.png)\n\nWith all of these risks, we need to think about what we can do to enable our machine learning algorithms to be safe, reliable, and trustworthy. Perhaps one step in the right direction is to ensure that our machine learning algorithms satisfy desirable specifications — that is, ensure that we have a certain level of quality control over these algorithms.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/40e3e7a29ff3d1f8da78b278b843c1cde8b9ffbbfc749a4a.png)\n\nFor example, we want an image classifier to be robust \\[enough to handle\\] adversarial perturbations.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/fe15f1e4b2ad6727dd6149413dc2662fb970d9e931a01d62.png)\n\nFor a dynamical systems predictor, we would like it to satisfy the laws of physics.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/79f06c4dfa677e2bece13abdf0ae8991f28b3a505f883c69.png)\n\nWe want classifiers to be robust \\[enough to handle\\] changes that are irrelevant for prediction.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/f0914c01274476db67d714b5cefc9b5d31b4bb376efc8680.png)\n\nFor example, the color of a digit should not affect its digit classification. If we're training on sensitive data, we want the classifier to maintain a level of differential privacy. 
These are just a few of many examples of desirable specifications that we need our classifiers to satisfy.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/a0e22e7128e8527f60fac6b299ff0f9cf23f7dea9c65e306.png)\n\nNext, I want to introduce \\[the concept of\\] “specification-driven machine learning (ML).” What do I mean by this?\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/6baa040e972dfa87d0f014b4a7868d8a222c0fc9380a3e65.png)\n\nThe core issue lies in the fact that when we train machines with limited data, our models can \\[make\\] a lot of spurious correlations. Unless we design our training carefully to specify otherwise, our models can inherit the undesirable properties in our data. For example, if your data is biased and limited, then your models will also be biased and limited.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/7c225c0599a835d1b731d00cbd45d3a8f2f3f791b6778cea.png)\n\nIn specification-driven ML, we aim to enforce the specifications that may or may not be present in your data, but are essential for your systems to be reliable. I’ll give some examples of how we can train neural networks to satisfy specifications, starting with one that helps image classifiers \\[handle\\] adversarial perturbations.\n\nOne of the most commonly used methodologies to train neural networks to \\[handle\\] perturbations is something called adversarial training. I'm going to go into this in a bit more detail. It is very similar to standard image classification training, where we optimize ways for our neural network to correctly label an image.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/30a2c218ea86ece0dbbaae2ddc574c8612ac54283bae8bce.png)\n\nFor example, if the image is of a cat, we want the output of the neural network to predict a cat as well.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/de2e6be57f9f376c5c3a8a15e8da557db150c32b417080c8.png)\n\nAdversarial training simply adds an extra data augmentation step, where we say, “Yes, we want the original image to be rightfully predicted as a cat, but under any additive imperceptible perturbations, we want all of these images to be correctly classified as cats as well.” However, we know it is computationally infeasible to iterate through all of these changes.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/f7204b6564012083593c3a588aed1d03562231f45cc26501.png)\n\nWhat adversarial training cleverly does is try to find the worst-case perturbation, which is the perturbation that minimizes the probability of the cat classification.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/b788160e664993bef3168ea25b445732abbd32197c73d2f2.png)\n\nOnce you have found this worst-case perturbation, you simply feed it back into the training loop and retrain.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/a59ccd47e27053ed970b5457945d2229c647491b5bdd3653.png)\n\nThis methodology has been proven to be empirically \\[capable of handling\\] these adversarial perturbations.\n\nHowever, we want our neural networks to be robust \\[enough to handle\\] not only these small perturbations, but also semantic changes, or changes to our images that should not affect our prediction.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/66dc6b84d3f3b065c8df57544b7d385b385074a91e937a98.png)\n\nFor example, the skin tone of a person should not affect the classifier that distinguishes between smiling or non-smiling. 
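Before the semantic variant discussed next, it may help to write down the standard adversarial-training loop just described. Formally it is a min-max problem, roughly $\min_{\theta} \mathbb{E}_{(x,y)}\big[\max_{\|\delta\|_{\infty} \le \epsilon} \ell(f_{\theta}(x+\delta), y)\big]$: minimize, over model parameters, the loss at the worst-case perturbation within an epsilon-ball around each input. The sketch below assumes a PyTorch-style classifier and uses a projected-gradient inner loop as the "find the worst-case perturbation" step; all hyperparameters are illustrative.

```python
# Schematic adversarial training: an inner loop that searches for the
# worst-case perturbation (within ||delta||_inf <= epsilon), and an outer
# step that retrains on the perturbed batch. For images you would also clamp
# x + delta back to the valid pixel range.
import torch
import torch.nn.functional as F

def worst_case_perturbation(model, x, y, epsilon=8/255, step=2/255, iters=10):
    """Approximate the perturbation that most reduces the probability of the
    correct label, i.e. the inner maximization."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascend the loss
            delta.clamp_(-epsilon, epsilon)     # project back into the ball
            delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=8/255):
    """Outer minimization: retrain on the worst-case perturbed batch."""
    delta = worst_case_perturbation(model, x, y, epsilon)
    loss = F.cross_entropy(model(x + delta), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The semantic version described in the next part of the talk keeps the same outer loop but replaces the additive `delta` with the worst case over perturbations produced by a generative model.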
Training our neural networks to \\[handle\\] these semantic changes requires a very simple change to adversarial training.\n\nRather than considering the worst-case perturbation, we can simply consider the worst-case semantic perturbation. Through the development of generative modeling, we can generate these semantic perturbations.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/e9bed4bd03f510d4b474966803d08fc17c096bbe18543918.png)\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/34672be6629ed35285487eed5fe88b6f3a9f94423d4b28da.png)\n\nThis methodology allows us to reduce the gap in accuracy between two groups based on skin tone, from 33% down to just 3.8%, mitigating the bias that was originally present in the data.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/36d7594debac6ad2737cd92682321e1f2566063c092726d1.png)\n\nOf course, the things I have touched on today definitely enhance specification satisfaction to some extent, but there are still a lot of problems to be solved.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/93f384deb2833d406c7fc9170cf0885b5da71c3777da9e26.png)\n\nI mentioned just two specifications. There are many more that we would like our neural networks to satisfy, and the more complex specifications become, the harder the problems become. And even with the standard image classification example, we have yet to find a single classifier that \\[can handle\\] these perturbations completely.\n\n![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/a2a8ead7803f152787c737f118805ad1cbd2c1d3c5a1a245.png)\n\nBut if we do get this right, there will be many more opportunities. We can enable safe, reliable, autonomous driving systems and more robust ways of forecasting weather. We can help the speech-impaired with more robust audio synthesis. The possibilities are endless.\n\nThat concludes my talk. I hope I have motivated you to think about these problems. Thank you for listening.\n\n## Q&A\n\n**Nigel:** Thank you for your talk, Chongli. I see \\[that audience members have submitted\\] a number of questions already.\n\nHere’s the first one: What are the biggest factors holding back the impact of machine learning for the life sciences, do you think?\n\n**Chongli:** I think quality control is definitely \\[a factor\\]. Machine learning has pushed the boundaries of the metrics that we care about, but those are not the only \\[considerations\\]. We also care about whether we satisfy the right specifications, for example, for image classification. Are they robust enough to be used for self-driving cars, etc.?\n\nIf we're using them for a medical application, we need to make sure that it satisfies certain uncertainty principles. For example, if you \\[provide\\] an input that's out of distribution, you want to make sure that your neural network reflects this correctly. So I definitely think this is one of the biggest factors holding it back.\n\n**Nigel:** Great. Thanks so much for that. Here’s the next question: What gives you confidence that DeepMind's machine learning technology will be used by third parties according to the safety and consistency principles that you advocate for?\n\n**Chongli:** I'm very confident about this because it is the sole focus of a team at DeepMind that I’m on. Our purpose is to ensure that all of the algorithms that DeepMind deploys, or will deploy, go through certain specification checks. 
This is very important to us.\n\n**Nigel:** Even when it comes to third-party use?\n\n**Chongli:** What do you mean by “third-party use”?\n\n**Nigel:** When \\[another party\\] deploys DeepMind's machine learning technology, it's in accordance with the principles that you set out.\n\n**Chongli:** Yes. Well, it depends on the applications that we are considering. Obviously, this is still designed by humans. We first need to think about the specifications that we want the technology to satisfy. And then it goes through some rigorous tests to make sure that it actually satisfies them.\n\n**Nigel:** Thank you.\n\n**Chongli:** Does that answer your question?\n\n**Nigel:** I think so. I want to ask the person who asked this question what they mean in the context of third-party use, but perhaps that can be taken up over Slack.\n\n**Chongli:** Yes.\n\n**Nigel:** For now we'll move on to the next question. \\[Audience member\\] Alexander asks, “How tractable do you perceive the technological aspects of machine learning alignment to be compared to the social aspects?”\n\n**Chongli:** By this question, do we mean the value alignment?\n\n**Nigel:** I think so.\n\n**Chongli:** The ethical risks and things like that?\n\nMy talk was specifically based on the technological side of things. But I think for the ethical side, are we making sure that machine learning is being used for good? For example, we don't want it to be used for weapons. That has a less technological aspect to it. We should think about this in terms of deployment and how we design our algorithms.\n\n**Nigel:** Thank you, Chongli. Next question: What can AI or machine learning researchers learn from the humanities for reducing the discrimination embedded in current models?\n\n**Chongli:** Oh, that's an interesting question. One thing that I often think about is the echo chamber effect. For example, if Facebook detects that you like certain kinds of \\[information\\], it will feed you more of that, where \\[you only or mainly see\\] the things that you like.\n\nWe want to make sure that we have a diverse set of opinions \\[so that we avoid focusing\\] on a particular bias. This issue makes me consider how to design our ranking algorithms to make sure that this sort of effect doesn't happen.\n\n**Nigel:** Great. So it's kind of like applying a sociological concept to the echo chamber to assess performance, or —\n\n**Chongli:** In that case, I think we would definitely learn how humans react in terms of whether it’s good or bad. How can we design our algorithms to make sure that we enhance the good and alleviate the bad? We have to take that from the social sciences.\n\n**Nigel:** Great. I believe that answers the question very well.\n\n**Chongli:** I also think \\[the social sciences\\] are quite important in terms of value alignment. For example, if we're designing agents or reinforcement learning algorithms, we want to make sure that they satisfy certain values or principles that humans stand for. So yes, I would say that the social sciences are very important.\n\n**Nigel:** Absolutely. The next question is related to that point: Do you think regulation is needed to ensure that very powerful current machine learning is aligned with beneficial societal values?\n\n**Chongli:** Yes. Regulation is very, very important. I say this because, as I think everyone has seen in my talk, a state-of-the-art classifier can beat a lot of the metrics that we care about and still behave in \\[totally\\] unexpected ways. 
Given this kind of behavior, we need regulations. We need to make sure that certain specifications are satisfied.\n\n**Nigel:** Thank you for that. The next question: What uses is machine learning already being implemented for, where these biases could be creating a huge undervalued issue?\n\n**Chongli:** All I can say is that machine learning is becoming more and more prevalent in society, and it's being used in more and more applications. There's not a single application that machine learning hasn't impacted in at least a \\[small way\\].\n\nFor example, because language is a subtle propagator of bias, we want to make sure that our machine-learned language modeling doesn't carry these biases. I think it’s quite important for us to be extremely bias-free in \\[related\\] applications.\n\n**Nigel:** Would you like to share any examples of that?\n\n**Chongli:** Oh, I thought I already shared one: language modeling.\n\n**Nigel:** Right.\n\n**Chongli:** Another example is the medical domain. Suppose you're collecting data that’s heavily \\[representative of\\] one population over others. We need to make sure that our machine learning algorithm doesn't reflect this kind of bias, because we want healthcare to be equal for all. That's another application which I think is quite important.\n\n**Nigel:** Great, thank you. The next question: What key changes are needed to ensure aligned AI if you consider that current engagement optimization might already be very bad for society?\n\n**Chongli:** I want to be very specific about this. I don't think metric design is bad for society. I think the key is knowing that our metrics will always be flawed. We can always be thinking about designing better metrics.\n\n\\[I think of AI as\\] an evolution, not a huge breakthrough. You train a classifier for images, and then you realize, “Oh, this metric doesn't capture the robustness properly.” So you add that back to the metric, and then you retrain the AI. Then you suddenly find that it doesn't satisfy distribution shares. So you retrain it again. It’s a progression.\n\nThe thing we need to realize is that metrics don't capture everything we want \\[an AI\\] to have. We need to keep that in the back of our minds, and always make sure that we're rigorously testing our systems. I think that's a paradigm-shifting idea. \\[We need to accept\\] that our metrics might be flawed, and that getting to “state-of-the-art” is not what we're here to do. We're here to deliver safe algorithms. I think that's the key.\n\n**Nigel:** Great. Thanks for reinforcing that. Next question: How much more important is it to focus on long-term AI risks versus near-term or medium-term risks, in your opinion?\n\n**Chongli:** That's a really interesting question. I think \\[each approach has\\] different advantages. For the short term, I know exactly what the problem is. It’s concrete, which allows me to tackle it more concretely. I can think in terms of what the formulas look like and how to train our neural network so that it satisfies certain specifications.\n\nBut in terms of the long term, it goes back to what you mentioned before: value alignment or ethical risks. Maybe there are some things that we haven't even discovered yet which could affect our algorithm in a completely unexpected way. This goes into a more philosophical view of how we should be thinking about this.\n\nI think we can definitely take values from both. But since I'm technologically driven in terms of design, I think more about the near term. 
So I can only answer \\[from that perspective\\]. If we want autonomous driving systems to happen, for example, we need to make sure that our classifiers are robust. That's a very easy question to answer.\n\nThinking much longer term, my imagination fails me. Sometimes I \\[can’t conceive of\\] what might happen here or there. That is not to say that it’s not important; \\[a long-term perspective\\] is equally important, but I have less of a professional opinion on it.\n\n**Nigel:** Right. That's very fair. Thank you. On to the next question: Out of all the approaches to AI safety by different groups and organizations, which do you think is closest to DeepMind's (besides DeepMind's own approach, of course)?\n\n**Chongli:** I'm not so sure that we even have one approach. \\[DeepMind comprises several\\] researchers. I'm sure that in a lot of other organizations, there are also multiple researchers looking at similar problems and thinking about similar questions.\n\nI can only talk about how I think my group tackles \\[our work\\]. We're very mission-driven. We really want to ensure that the algorithms we deliver are safe, reliable, and trustworthy. So from our perspective, that is how we think about our research, but I cannot comment on any other organizations, because I don't know how they work.\n\n**Nigel:** Great. Thank you.\n\n**Chongli:** Does that answer the question?\n\n**Nigel:** Yes. The next question is: Do you think there is a possibility that concerns and research on AI safety and ethics will eventually expand to have direct or indirect impacts on animals?\n\n**Chongli:** I don't know that much about animal conservation, but I can imagine that because machine learning algorithms are so pervasive, it's definitely going to have an impact.\n\nTouching on everything I've said before, if you want to alleviate certain biases in that area, you’ll need to design your metrics carefully. I don't know much about that area, so all I can say is that \\[you should consider\\] what you want to avoid when it comes to animal conservation and machine learning algorithms.\n\n**Nigel:** Right. Thanks for that. Are there ways to deliberately counter adversarial training and other types of perturbation mitigations?\n\n**Chongli:** Deliberately counter adversarial training?\n\n**Nigel:** Yeah.\n\n**Chongli:** What does that mean, since adversarial training is a process? What do they mean by “countering” it?\n\n**Nigel:** Hmm. It’s hard for me to unpack this one. I'm just reading it off the slide.\n\n**Chongli:** Could you read the question again?\n\n**Nigel:** Yes. Are there ways to deliberately counter adversarial training and other types of perturbation mitigation?\n\n**Chongli:** I'm just going to answer what I think this question is asking, which is: Suppose that I train a neural network with adversarial training — can I still attack the system, knowing that it is trained adversarially?\n\nOne of the things that I didn't touch on in my presentation, because I wasn't sure how much detail to go into, is that specification-driven machine learning evaluation is extremely important. We need to make sure that when we do adversarial training, we evaluate more than just a simple adversary. We need to look at all sorts of other properties about the neural networks to ensure that whatever we deliver will be robust \\[enough to handle\\] a stronger attacker. So I think the answer to that question is that we need to test our systems extremely rigorously, more so than our training procedure. 
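To make the "evaluate with a stronger adversary than you trained with" point concrete, here is a minimal, self-contained sketch of a robust-accuracy check that gives the evaluation-time attack a larger budget (a random start and many more iterations) than a typical training-time attack. The model and data loader are assumed PyTorch-style placeholders, and a serious evaluation would go further still, with adaptive attacks, gradient-free attacks, and checks of the other specifications that matter.

```python
# Robust accuracy under a higher-effort attack than was used in training.
# All settings are illustrative.
import torch
import torch.nn.functional as F

def strong_attack(model, x, y, epsilon=8/255, step=1/255, iters=100):
    """Projected-gradient attack with a random start and a large step budget."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return delta.detach()

def robust_accuracy(model, loader, epsilon=8/255):
    """Fraction of examples whose label survives the stronger attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        preds = model(x + strong_attack(model, x, y, epsilon)).argmax(dim=1)
        correct += preds.eq(y).sum().item()
        total += y.numel()
    return correct / total
```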
I hope that answers the question.\n\n**Nigel:** I think that touches on an aspect of it, at least. Thank you.\n\n**Chongli:** What other aspects do you think \[I should address\]?\n\n**Nigel:** I think we should clarify the context of this question, or this scenario that's being imagined, perhaps in Slack.\n\n**Chongli:** Yes, let's do that.\n\n**Nigel:** Let me find the next question: What do you use for interpretability and fairness?\n\n**Chongli:** There's not one single algorithm that we use. These are still being developed. As I said, I'm not an ethical scientist. I think in terms of fairness, there are a lot of different metrics. Fairness is not something that's easily defined. So when it comes to training algorithms to be fair, we assume that we have a “fairness” definition from someone who knows more about this topic, and we try to satisfy the specification.\n\nThe difficult challenges come when we try to design metrics that are more aligned with fairness.\n\nWith interpretability, again, I don’t think there is a single algorithm that we use. It depends on the applications that \[we’re designing\]. How we can design our neural networks to be interpretable is completely dependent on \[the application’s purpose\]. I hope that answers the question.\n\n**Nigel:** I think so. It's rather broad.\n\nHere’s the next one: Assuming only the current state of AI capability, what is the most malicious outcome a motivated individual, group, or organization could achieve?\n\n**Chongli:** What is the most malicious? I'm not so sure there is a single one. But something which I think is quite important right now is differentially private neural networks. Suppose we're training \[an AI using\] quite sensitive data about people, and we want to make sure that the data is protected and anonymized. We don't want any malicious attackers who are interested in knowing more about these people to come in and look at these neural networks.\n\nI would say that's possibly a very important area that people should be looking at — and, at least in my opinion, it's very malicious. That's just off the top of my head, but there are obviously a lot of malicious outcomes.\n\n**Nigel:** Great. Here’s another broad question: Do you have a field in mind where you would like to see machine learning be applied more?\n\n**Chongli:** Actually, Nigel, we talked about this earlier. We could use data from charities to make sure that we're allocating resources more effectively, because people want to make sure that their money is \[put to good use\]. We could also make sure that a charity’s data is formatted in a way that’s easily trainable and allows more interesting research questions to be asked and answered.\n\n(Sorry, my computer keeps blacking out.)\n\nThis is definitely an area in which machine learning can make a bigger impact.\n\n**Nigel:** Great. Thank you.\n\nIs it fair to say that biases targeted in neural networks are ones that humans are aware of? Is there a possibility of machine-aware bias recognition?\n\n**Chongli:** It would be hard to make machines aware unless you drive it into the metric. Could you repeat that question again?\n\n\[Nigel repeats the question.\]\n\n**Chongli:** I'm not so sure that's fair, because in my opinion, most machine learning researchers just handle a data set, and don’t know its properties. They just want accuracy, for example, to be higher.
But the biases present in a data set may be from a data collection team, and will be transferred to the model until you specify otherwise. Would I say the human \\[ML researchers\\] are aware of this bias? I don't think so.\n\n**Nigel:** Right.\n\n**Chongli:** That is not to say that every human is unaware of it; maybe some are aware. But in the majority of cases, I’d say they aren’t. I think the real answer to that question is this: If we're looking at a data set, we should first inspect it \\[and consider\\] what undesirable things are present in it, and how to alleviate that.\n\n**Nigel:** Great. The next question: How much model testing should be considered sufficient before deployment, given the possible unexpected behavior of even well-studied models and unknown unknowns?\n\n**Chongli:** I feel like there are several testing stages. The first testing stage is the necessary conditions that we already know — for example, we know that the image classifiers for autonomous driving systems need to be robust.\n\nFor the second stage, we might do a small-scale deployment and discover all kinds of problems. From there, we can design a new set of specifications. This is an iterative process, rather than \\[a single process of setting and meeting specifications once\\]. It requires heavy testing, both at the conceptual stage and in trying a small deployment. So in terms of deployment design, I definitely think that's very important.\n\n**Nigel:** Building on that question, is there a way to decide that testing is sufficient before deployment? What would you say are the key indicators of that?\n\n**Chongli:** I think one of the things I just mentioned was we would never know before deployment. So what we can do is deploy on a smaller scale to ensure that the risks are minimized. And if things work out, or there's a certain specification that we realize still needs to be satisfied, then we go through a second stage. \\[At that point, we might attempt\\] a slightly larger-scale deployment, and so on.\n\nBasically, the key to the question is that we can never truly know \\[whether we’ve tested adequately\\] before deployment, which is why we need the small-scale deployment to understand the problems that may exist. But before deployment, we can only know what we envision — things like keeping it differentially private.\n\n**Nigel:** Great, thank you. Next question: How can we discuss AI or machine learning concerns with people desperate for quick solutions — for example, farmers using machine learning in agriculture because of their anxiety about climate change?\n\n**Chongli:** I think it depends on what you might mean by “quick.”\n\nEven when it comes to climate change, we're probably looking at solutions that will take maybe a year or two to fully understand before deploying them. I believe that if we do anything in too much of a hurry, things might go wrong, or have unintended effects. So even if farmers are anxious, I think it is still really important to make sure that these systems are rigorously tested. So in my opinion, time is of the essence, but we should not rush.\n\n**Nigel:** I think we have time for just a few more questions. How do you distinguish between semantic differences and content differences in photos? Is this possible to do automatically for a large data set?\n\n**Chongli:** I think that it depends on what you mean by “distinguish,” because a good generative model will be able to distinguish \\[between the two\\] sometimes, but not others. 
I would say maybe we're not quite there yet \\[in terms of\\] fully distinguishing.\n\nI think that answers the part of the question about semantics, but in terms of “content,” do you mean: Can we identify this image to be a banana, or something like that?\n\n**Nigel:** Yes, correct.\n\n\\[Nigel repeats the question.\\]\n\n**Chongli:** Oh, I think I see what \\[the question asker\\] means. When I say “semantic differences,” I really mean features which should not affect your prediction. This is actually quite a nuanced point. It's very difficult to say what should or should not affect our prediction, but we can start with toy examples. For example, there’s a database in machine learning used for digits called [MNIST](https://en.wikipedia.org/wiki/MNIST_database). Imagine the simple task of \\[ensuring\\] that the color of a digit doesn’t affect its prediction. We would call changing a digit’s color a semantic perturbation. But of course, if you move to a more complex data set, this \\[testing process\\] becomes more difficult. We can use generative models to approximate \\[deployment\\], but we’ll never know for sure. This needs more research, of course.\n\n**Nigel:** Thank you. We have time just for one last question to round out the discussion: Outside of DeepMind, where do you think the most promising research in AI safety is being done?\n\n**Chongli:** That's a difficult question. I feel like there's a lot of great research that's happening out there, and I’m \\[not aware of\\] it all. In my limited view, I see some very good research happening at Google, OpenAI, Stanford University, and UC Berkeley. \\[I can’t\\] single out just one. Everyone's contributing to the same cause, and I also don't think it's fair to \\[compare their\\] research; anyone who's touching on AI safety should be commended, and they're all doing good work.\n\n**Nigel:** That's a great note to end the Q&A session on. Thank you, Chongli.\n\n**Chongli:** Thank you, Nigel.", "filename": "Ensuring safety and consistency in the age of machine learning _ Chongli Qin _ EAGxVirtual 2020-by Centre for Effective Altruism-video_id SS9DMr4VkbY-date 20200615.md", "id": "2c133e4425a6986bba44e7363286ae96", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "The role of existing institutions in AI strategy _ Jade Leung _ Seth Baum-by Centre for Effective Altruism-video_id pgiwvmY3brg-date 20181023", "authors": ["Jade Leung", "Seth Baum"], "date_published": "2018-10-23", "text": "# Jade Leung and Seth Baum The role of existing institutions in AI strategy - EA Forum\n\n_AI is very likely to make a huge impact on our world, especially as it grows more powerful than it is today. It’s hard for us to know exactly how that impact will look, but we do know many of the actors most likely to be involved. As AI gets stronger, what can we expect the world’s most powerful national governments to do? What about nongovernmental organizations, like the UN?_\n\n_This advanced workshop from Effective Altruism Global: San Francisco 2018, presented by Jade Leung and Seth Baum, addresses these questions from multiple perspectives. A transcript of the workshop is below, which we have lightly edited for clarity. 
You can also watch the talk on_ [_YouTube_](https://www.youtube.com/watch?v=pgiwvmY3brg&list=PLwp9xeoX5p8P3cDQwlyN7qsFhC9Ms4L5W&index=3) _and read it on_ [_effectivealtruism.org_](https://www.effectivealtruism.org/articles/ea-global-2018-the-role-of-existing-institutions-in-ai-strategy/)_._\n\n## The Talk\n\n**Jade:** What we're going to do is we're going to introduce ourselves briefly so you kind of know where we're coming from. Then we've got two moots which we have just then decided were the two moots that we're going to talk about. We'll chuck them up on the board and we'll spend about half a session talking about one and then half a session talking about the other. This is a session where we'd both love for you guys to toss us your questions right throughout it basically so, yes, get ready to have your questions ready and we'll open it up pretty much soon after the intro.\n\nBriefly intro to myself. I currently am based in the Future of Humanity Institute, and the work that I do specifically looks at the relationships between large multi-national technology firms and governments, specifically National Security and Defense components of governments in the US and China. And the questions that I ask are about how these actors should relate to each other, cooperate, coordinate, to steer us towards a future, or set of futures, that are more safe and beneficial than not, with transformative AI. My background is in engineering, I am masquerading as international relations person, but I'm not really that. I do a fair amount in the global governance space, in the IR space largely. That's me.\n\n**Seth:** Cool. I'm Seth Baum, I was introduced with the Global Catastrophic Risk Institute, and as a think tank we try to sit in that classic think tank space of working at the intersection of, among other things, the world of scholarship and the world of policy. We spend a lot of time talking with people in the policy worlds, especially down in DC. For me, it's down in DC, I live in New York. I guess from here it would be over in DC. Is that what you say? You don't live here.\n\n**Jade:** Sure.\n\n**Seth:** Over in DC. And talking with people in policy. I work across a number of different policy areas, do a lot on nuclear weapons, little bit on biosecurity, and then also on AI, and especially within the last year or two there have been some more robust policy conversations about AI. The policy world has just started to take an interest in this topic and is starting to do some interesting things that have fallen on our radar, and so we'll be saying more about that. Do you want to?\n\n**Jade:** Yeah, sure.\n\nSo the two institutions that we're going to chat about, is firstly the National Security and Defense. We might focus on the US National Security and Defense, and have a bit of a chat about what makes sense to engage them on in the space of our strategy, and how we should be thinking about their role in this space. That's the first moot. The second will turn to more international institutions, the kind of multilateral groups, e.g. the UN but not strictly so, and what role they could play in the space of AI strategy as well. We'll kind of go half and half there.\n\nJust so I have a bit of a litmus test for who's in the audience, if I say AI strategy, who does that mean anything to? Ah, awesome. Okay, cool. Maybe we'll just start with getting Seth's quick perspective on this question. 
So the moot here is, this house believes that in the space of AI strategy, we should be actively engaging with National Security and Defense components of the US government. Do you want to speak quickly to what your quick take on that is?\n\n**Seth:** Sure. So an interesting question here is engaging with, say the US government especially on the national security side, is this a good thing or a bad thing? I feel like opinions vary on this, maybe even within this room opinions vary on whether having these conversations is a good thing or a bad thing. The argument against it that I hear is essentially, you might tell them AI could take over the world and kill everyone, and they might hear, AI could take over the world, hear that and then go on to do harmful things.\n\nI personally tend to be more skeptical of that sort of argument. The main reason for that is that the people who are in the government and working on AI, they've already heard this idea before. It's been headline news for a number of years now, some people from our communities including your organization caused some of those headlines.\n\n**Jade:** I feel like you're asking me to apologize for them, and I'm not going to.\n\n_Seth_: If one is concerned about the awareness of various people in government about runaway AI, you could ask questions like, was the publication of the Superintelligence book a good thing or a bad thing? You could maybe there make a case in either direction-\n\n**Jade:** Could we do a quick poll actually? I'd be curious. Who thinks the publication of Superintelligence was on net, a net positive thing? On net, a negative thing? Hell yeah.\n\n**Seth:** Doesn't mean that that's actually true.\n\n**Jade:** Fair enough.\n\n**Seth:** Just to be clear, I'm not arguing that it was a net negative, but the point is that the idea is out, and the people who work on AI, sure, they're mostly working on a narrow near term AI, but they've heard the idea before. They don't need us to put the thought into their heads. Now of course we could be kind of strengthening that thought within their heads, and that can matter, but at the same time when I interact with them, I actually tend to not be talking about superintelligence, general intelligence, that stuff anyway. Though more for a different reason, and that's because while they have heard of the idea, they're pretty skeptical about it. Either because they think it probably wouldn't happen or because if it would happen it would be too far in the future for them to worry about. A lot of people in policy have much more near term time horizons that they have to work with. They have enough on their plate already, nobody's asking them to worry about this, so they're just going to focus on the stuff that they actually need to worry about, which includes the AI that already exists and is in the process of coming online.\n\nWhat I've found is then because they're pretty dismissive of it, I feel like if I talk about it they might just be dismissive of what I have to say, and that's not productive. Versus instead if the message is we should be careful about AI that acts unpredictably and causes unintended harms, that's not really about superintelligence. That same message applies to the AI that exists already: self driving cars, autonomous weapons. You don't want autonomous weapons causing unintended harm, and that's a message that people are very receptive to. By emphasizing that sort of message we can strengthen that type of thinking within policy worlds. 
That's for the most part the message that I've typically gone with, including in the National Security communities.\n\n**Jade:** Cool. I've got a ton of questions for you, but maybe to quickly interject my version of that. I tend to agree with a couple of things that Seth said, and then disagree with a couple specific things.\n\nI think generally the description of my perspective on this is that there's a very limited amount of useful engagement with National Security today, and I think the amount of potential to do wrong via engaging with them is large, and sufficiently large that we should be incredibly cautious about the manner in which we engage. That is a different thing to saying that we shouldn't engage with them at all, and I'll nuance that a little bit. I think, maybe to illustrate, I think the priors or assumptions that people hold when they're taking a stance on whether you should engage with National Security or not, is people I think disagree on maybe three axes. I said three because people always say three, I'm not entirely sure what the three are but we'll see how this goes.\n\nSo I think the first is people disagree on the competence of National Security to pursue the technology themselves, or at least to do something harmful with said information about capabilities of the technology. I think some people hold the extreme view that they're kind of useless and there's nothing that they can do in-house that is going to cause technology to be more unsafe than not, which is the thing that you're trying to deter. On the other hand, some people believe that NatSec at least have the ability to acquire control of this technology, or can develop it in-house sufficiently so, that an understanding of significant capabilities of AI would lead them to want to pursue it, and they can pursue it with competence, basically.\n\nI think that kind of competence thing is one thing that people disagree on, and I would tend to land on them being more competent than people think. Even if that's not the case, I think it's always worth being conservative in that sense anyways.\n\nSo that's the first axis. Second axis I think is about whether they have a predisposition, or whether they have the ability to absorb this kind of risk narrative effectively, or whether that's just so orthogonal to the culture of NatSec that it's not going to be received in a nuanced enough way and they're always going to interpret whatever information with a predisposition to want to pursue unilateral military advantage, regardless of what you're saying to them. Some people on one end would hold that they are reasonable people with a broad open mind, and plausibly could absorb this kind of long-term risk narrative. Some other people would hold that information that is received by them will tend to just be received with the lens of how can we use this to secure a national strategic advantage.\n\nI would tend to land on us having no precedent for the former, and having a lot more precedent for the latter. I think I'd like to believe that folks at DOD and NatSec can absorb, or can come around more to the long term risk narrative, but I don't think we've seen any precedent enough for that to place credence on that side of the spectrum. 
That's kind of where I sit on that second axis.\n\nI said I had a third, I'm not entirely sure what the third is, so let's just leave it at two.\n\nI think that probably describes the reasons why I hold that I think engaging with NatSec can be plausibly useful, but for every kind of one useful case, I can see many more reasons why engaging with them could plausibly be a bad idea, at least at this stage. So I'd encourage a lot more caution than I think Seth would.\n\n**Seth:** That's interesting. I'm not sure how much caution… I would agree, first of all I would agree, caution is warranted. This is one reason why a lot of my initial engagement is oriented towards generically safe messages like, \"avoid harmful unintended consequences.\" I feel like there are limits to how much trouble you can get in spreading messages like that. It's a message that they will understand pretty uniformly, it's just an easy concept people get that. They might or might not do much with it, but it's at least probably not going to prompt them to work in the wrong directions.\n\nAs far as their capability and also their tendency to take up the risk narrative, it's going to vary from person to person. We should not make the mistake of treating National Security communities even within one country as being some monolithic entity. There are people of widely varying technical capacity, widely varying philosophical understanding, ideological tendencies, interest in having these sorts of conversations in the first place, and so on.\n\nA lot of the work that I think is important is meeting some people, and seeing what the personalities are like, seeing where the conversations are especially productive. We don't have to walk in and start trumpeting all sorts of precise technical messages right away. It's important to know the audience. A lot of it's just about getting to know people, building relationships. Relationships are really important with these sorts of things, especially if one is interested in a more deeper and ongoing involvement in it. These are communities. These are professional communities and it's important to get to know them, even informally, that's going to help. So I would say that.\n\n**Jade:** I tend to agree with that sentiment in particular about building a relationship and getting trust within this community can take a fair amount of time. And so if there's any sort of given strategic scenario in which it's important to have that relationship built, then it could make sense to start some paving blocks there.\n\n**Seth:** It is an investment. It is an investment in time. It's a trade off, right?\n\n**Jade:** What's an example of a productive engagement you can think of having now? Say if I like put you in a room full of NatSec people, what would the most productive version of that engagement look like today?\n\n**Seth:** An area that I have been doing a little bit of work on, probably will continue to do more, is on the intersection of artificial intelligence and nuclear weapons. This is in part because I happen to have also a background on nuclear weapons, a scenario where I have a track record, a bit of a reputation, and I know the lingo, know some of the people, can do that. AI does intersect with nuclear weapons in a few different ways. There is AI built into some of the vehicles that deliver the nuclear weapon from point A to point B, though maybe not as much as you might think. 
There's also AI that can get tied into issues of the cybersecurity of the command and control Systems, essentially the computer systems that tie the whole nuclear enterprise together, and maybe one or two other things. The National Security communities, they're interested in this stuff. Anything that could change the balance of nuclear power, they are acutely interested in, and you can have a conversation that is fairly normal from their perspective about it, while introducing certain concepts in AI.\n\n**Seth:** So that's one area that I come in. The other thing I like about the nuclear weapons is the conversation there is predisposed to think in low frequency, high severity risk terms. That's really a hallmark of the nuclear weapons conversation. That has other advantages for the sorts of values that we might want to push for. It's not the only way to do it, but if you were to put me in a room, that's likely to be the conversation I would have.\n\n**Jade:** So if you were to link that outcome to a mitigation of risk as an end goal, how does them understanding concepts better in AI translate into a mitigation of risk, broadly speaking? Assuming that's the end goal that you wanted to aim for.\n\n**Seth:** One of the core issues with AI is this question of predictability and unintended consequences. You definitely do not want unpredictable AI managing your nuclear weapons. That is an easy sell. There is hyper-caution about nuclear weapons, and in fact if you look at the US procurement plans for new airplanes to deliver nuclear weapons, the new stealth bomber that is currently being developed, will have an option to be uninhabited, to fly itself. I think it might be remote controlled. The expectation is that it will not fly uninhabited on nuclear missions. That they want a human on board when there is also a nuclear weapon there, just in case something goes wrong. Even if the system is otherwise pretty reliable, that's just their… That's how they would look at this, and I think that's useful. So here we have this idea that AI might not do what we want it to, that's a good starting point.\n\n**Jade:** Sure, cool. Let's toss it out to the audience for a couple of questions. We've got like 10 minutes to deal with NatSec and then we're going to move on into multilaterals. Yeah, go for it.\n\nI didn't realize you were literally one behind the other. Maybe you first and then we'll go that way.\n\n**Audience Member:** I was just in Washington, DC for grad school and had a number of friends who were working for think tanks that advise the military on technical issues like cybersecurity, or biosecurity, and I definitely felt like I had this sense of maybe the people in charge were pretty narrow-minded, but that there's this large non-homogenous group of people, some of whom were going to be very thoughtful and open-minded and some of whom weren't. And that there's definitely places where the message could fall on the right ears, and maybe something useful done about it, but it would be really hard to get it into the right ears without getting it into the wrong ears. I was wondering if you guys have any feelings about, is there a risk to giving this message or to giving a message to the wrong people? Or is that like very little risk, and it will just go in one ear and out the other if it goes to the wrong person? 
I feel like you could think about that either way.\n\n**Jade:** Yeah, I'm curious to hear more about your experience actually, and whether there was a tendency for certain groups, or types of people to be the right ears versus the wrong ears. If you've got any particular trends that popped out to you, I'd love to hear that now or later or whenever.\n\nBut as a quick response, I think there's a couple of things to break down there. One is, what information are you actually talking about, what classifies as bad information to give versus good.\n\nTwo, is whether you have the ability to nuance the way that it's received, or whether it goes and is received in some way, and the action occurs without your control. I think, in terms of good information, that I would be positive about good ears receiving it, and a bit meh about more belligerent ears receiving it, since they couldn't actually do anything useful with the information anyway.\n\nI think anything that nuances the technicality of what the technology does and doesn't do, generally is a good thing. I think also the element of introducing that risk narrative, if it falls on good ears, it can go good ways, if it falls on bad ears, they're just going to ignore it anyway.\n\nYou can't actually do anything actively bad with information about there being a risk, that maybe you don't have a predisposition to care about anyway. I'd say that's good information. I think the ability for you to pick the right ears for it to be received by, I'm skeptical about that.\n\nI'm skeptical about the ability for you to translate reliably up the hierarchy where it lands in a decision maker's hands, and actually translates into action that's useful. That would be my initial response to that, is that even if it exists and it's a more heterogeneous space than what one would assume, I wouldn't trust that we have the ability to read into that well, is my response.\n\n**Seth:** I would say I find it really difficult to generalize on this. In that, each point of information that we might introduce to a conversation is different. Each group that we would be interacting with can be different, and different in important ways. I feel, if we are actually in possession of some message that really is that sensitive then, to the extent that you can, do your homework on who it is that you're talking to, what the chain of command, the chain of conversation looks like.\n\nIf you're really worried, having people who you have a closer relationship with, where there may be at least some degree of trust, although, who knows what happens when you tell somebody something? Can you really trust me with what you say? Right? You don't know who else I'm talking to, right? So on for anyone else. At the end of the day, when decisions need to be made, I would want to look at the whole suite of factors, this goes for a lot of what we do, not just the transmission of sensitive information.\n\nA lot of this really is fairly context specific and can come down to any number of things that may be seemingly unrelated to the thing that we think that we are talking about. Questions of bureaucratic procedure that get into all sorts of arcane minute details could end up actually being really decisive factors for some of these decisions.\n\nIt's good for us to be familiar, and have ways of understanding how it all works, that we can make these decisions intelligently. That's what I would say.\n\n**Jade:** Cool.\n\n**Audience Member:** All right, so from what I understand, a lot of people are new to this space.
What sort of skills do you think would be good for people to learn? What sort of areas, like topics, should people delve into to prove themselves in AI strategy? What sort of thinking is useful for this space?\n\n**Seth:** That's a good question. Should I start?\n\n**Jade:** Yeah.\n\n**Seth:** Okay. That's a good question. I feel for those who really want to have a strong focus on this, it helps to do a fairly deep dive into the worlds that you would be interacting with.\n\nI can say from my own experience, I've gotten a lot of mileage out of fairly deep dives into a lot of details of international security.\n\nI got to learn the distinction between a fighter plane and a bomber plane for example. The fighter planes are smaller and more agile, and maneuverable and the bombers are big sluggish beasts that carry heavy payloads and it's the latter that have the nuclear weapons, it's the former that benefit from more automation and a faster more powerful AI, because they're doing these really sophisticated aerial procedures, and fighting other fighter planes and that's… The more AI you can pack into that, the more likely you are to win, versus the bomber planes it just doesn't matter, they're slow and they're not doing anything that sophisticated in that regard.\n\nThat's just one little example of the sort of subtle detail that comes from a deeper dive into the topic that, in conversations, can actually be quite useful, you're not caught off guard, you can talk the lingo, you know what they're saying, you can frame your points in ways that they understand.\n\nAlong the way you also learn who is doing what, and get in that background. I would say it helps to be in direct contact with these communities. Like myself, I live in New York, I don't live in Washington, but I'm in Washington with some regularity attending various events, just having casual conversations with people, maybe doing certain projects and activities, and that has been helpful for positioning myself to contribute in a way that, if I want to, I can blend in.\n\nThey can think of me as one of them. I am one of them, and that's fine. That's normal. While also being here, and being able to participate in these conversations. So that's what I would recommend, is really do what you can to learn how these communities think and work and be able to relate to them on their level.\n\n**Jade:** Addition to that would be, try to work on being more sensible, is the main thing I would say. It's one of those things where, a shout out to CFAR for example, those kind of methodologies… basically, I think the people that I think are doing the best work in this space, are the people who have the ability to A. Absorb a bunch of information really quickly, B. Figure out what is decision relevant quickly, and C. Cut through all the bullshit that is not decision relevant but that people talk about a lot.\n\nI think those three things will lead you towards asking really good questions, and asking them in a sensible way, and coming to hypotheses and answers relatively quickly, and then knowing what to do with them.\n\nSorry, that's not a very specific answer, just work on being good at thinking, and figure out ways to train your mind to pick up decision relevant questions.\n\n**Audience Member:** CFAR would be a good organization for that, is that what you're saying?\n\n**Jade:** CFAR would be epic, yeah. We've got a couple people from CFAR in the audience, I think. Do you want to put your hand up? If you're here. Nice.
So, have a chat to them about how to get involved.\n\nThe other thing I'd say, is there is a ton of room for different types of skills, and figuring out where your comparative advantage is, is a useful thing.\n\nI am not a white male, so I have a less comparative advantage in politics, I'm not a US citizen, can't do USG stuff, those are facts about me that I know will lead me toward certain areas in this space.\n\nI am an entrepreneur by background, that leads me to have certain skills that maybe other people marginally don't have. Think about what you enjoy, what you're good at, and think about the whole pipeline of you doing useful stuff, which starts probably at fundamentally researching things, and ends at influencing decision makers/being a decision maker. Figure out where in that pipeline you are most likely to have a good idea.\n\nAnother shout out to 80k, who does a lot of good facilitation of thinking about what one's comparative advantage could be, and helps you identify those, too.\n\n**Seth:** You mentioned the white male thing, and yeah sure, that's a thing.\n\n**Jade:** That was genuinely not a dig at you being a white male.\n\n**Seth:** No.\n\n**Jade:** I promise. It's a dig at all of you for being white males. I just realized this is recorded, and this has gone so far downhill I just can't retract any of that. We're going to keep going.\n\n**Seth:** So, for example, if I was attending a national security meeting instead of this, I might have shaved. Right? Because, it's a room full of a lot of people who are ex-military, or even active military or come from more… much of the policy culture in DC is more conservative, they're wearing suits and ties. Is there a single suit and tie in this room? I don't see one.\n\nIt's pretty standard for most of the events there that I go to. Simple things like that can matter.\n\n**Jade:** Yeah.\n\n**Seth:** You don't have to be a white male to succeed in that world. In fact, a lot of the national security community is actually pretty attentive to these sorts of things, tries to make sure that their speaking panels have at least one woman on them, for example.\n\nThere are a lot of very successful women in the national security space, very talented at it, and recognized as such. You don't have to look like me, minus the beard.\n\n**Jade:** Nice. That's good to know. It's always useful having a token women's spot, actually. All right, one last question on NatSec, then we're going to move on. Yeah?\n\n**Audience Member:** What do you think about the idea of measurements of algorithmic and hardware progress, and the amount of money going into AI and those kinds of measurements becoming public, and then NatSec becoming aware of?\n\n**Jade:** That's a really interesting question.\n\nI'm generally very, pro-that happening. I think those efforts are particularly good for serving a number of different functions. One is, the process of generating those metrics is really useful for the research community, to understand what metrics we actually care about measuring versus not. B, the measurement of them systematically across a number of different systems is very useful for at least starting conversations about which threshold points we care about superseding, and what changes about your strategy if you supersede certain metrics particularly quicker than you expected to.\n\nI'm generally pro-those things, in terms of… I guess the pragmatic question is whether you can stop the publication of them anyway, and I don't think you can. 
I would say that if you had the ability to censor them, it would still be a net positive to have that stuff published for the things that I just mentioned.\n\nI would also plausibly say that NatSec would have the ability to gather that information anyway. Yeah. I don't necessarily also think it's bad for them to understand progress better, and for them to be on the same page as everyone else, specifically the technical research community, about how these systems are progressing. I don't think that's a bad piece of information necessarily, sorry, that was a really hand-wavy answer, but…\n\n**Seth:** I feel like it is at least to an approximation reasonable to assume that if there's a piece of information and the US intelligence community would like that information, they will get it.\n\nEspecially if it's a relatively straightforward piece of information like that, that's not behind crazy locked doors and things of that sort. If it's something that we can just have a conversation about here, and they want it, they will probably get that information. There may be exceptions, but I think that's a reasonable starting point.\n\nBut I feel like what's more important than that, is the question of like, the interpretation of the information, right? It's a lot of information, the question is what does it mean?\n\nI feel like that's where we might want to think more carefully about how things are handled. Even then there's a lot of ideas out there, and our own ideas on any given topic are still just another voice in a much broader conversation.\n\nWe shouldn't overestimate our own influence on what goes on in the interpretation of intelligence within a large bureaucracy. If it's a question of, do we communicate openly where the audience is mostly say, ourselves, right, and this is for our coordination as a community, for example?\n\nWhere, sure, other communities may hear this, whether in the US or anywhere around the world, but to them we're just one of many voices, right? In a lot of cases it may be fair to simply hide in plain sight. In that, who are we from their perspective, versus who are we from our perspective? We're paying attention to ourselves, and getting a lot more value out of it.\n\nAgain, you can take it on a case by case basis, but that's one way of looking at it.\n\n**Jade:** Cool. We're going to segue into talking about international institutions, maybe just to frame this chat a little bit. Specifically the type of institutions that I think we want to talk about, are probably multi-lateral state-based institutions.\n\nThat being, the UN and the UN's various children, and those other bodies that are all governed by the system. That assumes a couple of things: one, that states are the main actors at the table that mean anything, and two, that there are meaningful international coordination activities. Institutions are composed of state representatives and various things. The question here is, are they useful to engage with? I guess that's like a yes or no question.\n\nThen if you want to nuance it a bit more, what are they useful for versus what are they not? Does that sound like a reasonable…\n\n**Seth:** Yes.\n\n**Jade:** My quick hot take on that, then I'll pass it over to Seth.
I'll caveat this by saying, well I'll validate my statement by saying that I've spent a lot of my academic life working in the global governance space.\n\nThat field is fundamentally very optimistic about these institutions, so if anything I had the training to predispose me to be optimistic about them, and I'm not. I'm pessimistic about how useful they are for a number of reasons.\n\nI think A is to do with the state-centric approach, B is to do with precedent, about what they're useful for versus not, and C it's also the pace at which they move.\n\nTo run through each one of those in turn, I think the assumption that a lot of these institutions held, and they were built to rely on these assumptions, is that states are the core actors who need to be coordinated.\n\nThey are assumed to have the authority and legitimacy, to move the things that need to move, in order for this coordination to do the thing you want it to do. That is a set of assumptions that I think used to hold better, but almost certainly doesn't hold now, and almost certainly doesn't hold in the case of AI.\n\nParticularly so, the actors that I think are neglected and aren't conceptualized reasonably in these international institutions are large firms, and also military and security folks, and that component of government doesn't tend to be the component of government that's represented in these institutions.\n\nThose two are probably the most important actors, and they aren't conceptualized as the most important actors in that space. That's one reason to be skeptical, that by design they aren't designed to be that useful.\n\nI think two, in terms of historically what they've been useful for, I think UN institutions have been okay at doing norm-setting, norm-building, non-proliferation stuff, I think they've been okay at doing things like standard setting, and instituting these norms and translating them into standards that end up proliferating across industries. That is useful as a function. I'll say particularly so in the case of technologies, the standardization stuff is useful, so I'm more optimistic about bodies like the ISO, which stands for the International Standards something, standards thing. Organization, I guess. Does that seem plausible? That seems plausible. I'm optimistic about them more so than I am about like the UN General Council or whatever. But, in any case, I think that's kind of a limited set of functions, and it doesn't really cover a lot of the coordination and cooperation that we want it to do.\n\nAnd then third is that historically these institutions have been so freaking slow at doing anything, and that pace is not anywhere close to where it needs to be. The one version of this argument is like if that's the only way that you can achieve the coordination activities that you want, then maybe that's the best that you have, but I don't think that's the best that we have. I think there are quicker arrangements between actors directly, and between small clubs of actors specifically, that will just be quicker at achieving the coordination that we need to achieve. So I don't think we need to go to the effort of involving slow institutions to achieve the ends that we want to. So, that's kind of why I'm skeptical about the usefulness of these institutions at all, with the caveat of them being useful for standard setting potentially.\n\n**Seth:** I feel like people at those institutions might not disagree with what you just said. Okay, the standards thing, I think that's an important point. Also… so the UN.
A lot of what the UN does operates on consensus across 200 countries. So yeah, that's not going to happen all that much. To the extent that it does happen, it's something that will often build slowly over time. There may be some exceptions like astronomers find an asteroid heading towards Earth, we need to do something now. Okay, yeah, you could probably get a consensus on that. And even then, who knows? You'd like to think, but… and that's a relatively straightforward one, because there's no bad guys. With AI, there's bad guys. There's benefits of AI that would be lost if certain types of AI that couldn't be pursued, and it plays out differently in different countries and so on, and that all makes this harder.\n\nSame story with like climate change, where there are countries who have reasons to push back against action on climate change. Same thing with this. I'd say the point about states not necessarily being the key actors is an important one, and I feel like that speaks to this entire conversation, like is it worth our time to engage with national and international institutions? Well, if they're not the ones that matter, then maybe we have better things to do with our time. That's fair, because it is the case right now that the bulk of work of AI is not being done by governments. It's being done by the private corporate sector and also by academia. Those are, I would say, the two main sources, especially for the artificial general intelligence.\n\nLast year, I published a survey of general intelligence R&D projects. The bulk of them were in corporations or academia. Relatively little in governments, and those, for the most part, tended to be smaller. There is something to be said for engaging with the corporations and the academic institutions in addition to, or possibly even instead of, the national government ones. But that's a whole other matter.\n\nWith respect to this, though, international institutions can also play a facilitation role. They might not be able to resolve a disagreement but they can at least bring the parties together to talk to them. The United Nations is unusually well-equipped to get, you know, pick your list of countries around the room together and talking. They might not be able to dictate the terms of that conversation and define what the outcome is. They might not be able to enforce whatever agreements, if any, were reached in that conversation. But they can give that conversation a space to happen, and sometimes just having that is worthwhile.\n\n**Jade:** To what end?\n\n**Seth:** To what end? In getting countries to work on AI in a more cooperative and less competitive fashion. So even in the absence of some kind of overarching enforcement mechanism, you can often get cooperation just through these informal conversations and norms and agreements and so on. The UN can play a facilitation role even if it can't enforce every country to do what they said they would do.\n\n**Jade:** What's the best example you have of a facilitated international conversation changing what would have been the default state behavior without that conversation?\n\n**Seth:** Oh, that's a good question. I'm not sure if I have a…\n\n**Jade:** And if anyone actually in the audience actually has… yes.\n\n**Audience Member:** Montreal Protocol.\n\n**Jade:** Do you want to expand? I don't think that was not going to happen.\n\n**Seth:** So the Montreal Protocol for ozone. 
Did you want to expand on that?\n\n**Audience Member:** Yeah, it was a treaty that reduced emission… They got a whole bunch of countries to reduce emissions of greenhouse gases that would effectively destroy the ozone layer, and brought those emissions to very low levels, and now the ozone layer is recovering. Arguably, without that treaty, like maybe that wouldn't have happened. I don't know what the counterfactual would be.\n\n**Jade:** Maybe. Yeah, and I think the Montreal… that's a good example. I think the Montreal Protocol… there was a clear set of incentives. There were barely any downsides for any state to do that. So put that alongside the Kyoto Protocol, for example, where the ask was somewhat similar, or similarly structured. Off the record, she says as this is being recorded live, I don't think the Kyoto Protocol had any win… as close as effective as the Montreal Protocol/wasn't even close to achieving whatever the goals were on paper. I think the reason was because the gas that was being targeted, there were very clear economic incentives for states to not mitigate those. In so far as the Montreal Protocol was a good example, it maybe like pointed out a really obvious set of incentives that just were going downhill anyways. But I don't know if it tweaked any of those, would be my response to that.\n\n**Seth:** It is the case that some types of issues are just easier to get cooperation on than others. If there's a really clear and well-recognized harm from not cooperating, and the cost of cooperating is relatively low. I am not as much an expert on the Montreal Protocol but, superficially, my understanding is that addressing the ozone issue just happened to be easier than addressing the climate change issue, which has just proved to be difficult despite efforts. They might have gone about the Kyoto Protocol in a rather suboptimal fashion potentially but even with a better effort the climate change might just be harder to get collective action on, given the nature of the issue.\n\nThen likewise, the question for us is so what does AI look like? Is it something that is easy to get cooperation on or not? Then what does that mean for how we would approach it?\n\n**Jade:** Yeah, and I think, if anything… if you were to put the Montreal Protocol on one end of the spectrum where, I guess like the important things to abstract away from that particular case study is that you had a very clear set of incentives to mitigate this thing, and you had basically no incentive for anyone to keep producing the thing. So, that was easy. Then somewhere in the middle is the Kyoto Protocol where you've got pretty large incentives to mitigate the thing because climate, and then you've got some pretty complicated incentives to want to keep producing the thing, and the whole transition process is like hard and whatnot. And then we didn't sufficiently have sort of critical mass of believing that it was important to mitigate the thing, so it just became a lot harder. I think AI, I would put on that end of the spectrum, where you've got so many clear incentives to keep pursuing the thing. If anything, because you've got so many different uses that it's just economically very tasty for countries to pursue, not just countries but a number of other actors who want to pursue it. 
You've got people who don't even believe it's worth mitigating at all.\n\nSo I think, for that reason, I'd put it as astronomically bloody hard to do the cooperation thing on that side, at least in the format of international institutions. So I think the way to make it easier is to have a smaller number of actors and to align incentives and then to make clearer, sort of like binding mechanisms for that to have a shot in hell at working, in terms of cooperation.\n\n**Seth:** But it could depend on which AI we're talking about. If you would like an international treaty to just stop the development of AI… yeah, I mean, good luck with that. That's probably not going to happen. But, that's presumably not what we would want in the first place because we don't need the restriction of all AI. There's plenty of AI that we're pretty confident can be a net positive for the world and we would not want that AI to be restricted. It would be in particular the types of AI that could cause major catastrophes and so on. That's what we would be especially interested in restricting. So an important question, this is actually more of like a technical computer science question than an international institutions question, but it feeds directly into this is, so which AI would we need to restrict? With an eye towards say future catastrophe scenarios, is it really like the core mainstream AI development that needs to be restricted, because all of that is a precursor to the stuff that could get out of hand? Or is it a fairly different, distinct branch of AI research that could go in that direction, such that the mainstream AI work can keep doing what it's doing? So there'll be some harms from it but they'll be more manageable, less catastrophic. How that question is answered, I think, really speaks to the viability of this.\n\n**Jade:** Yeah. I guess what I'm skeptical of is the ability to segregate the two. Like I don't think there are clear delineations, and if people have ideas for this please tell me, but I don't think there are clear delineations for separating what are civilian, peaceful, good applications from military applications, at least in technical terms. So it becomes hard, if you want to design a thing, if you don't what the thing is that you're targeting, where you can't even specify what you're targeting to mitigate. So that's something that I'm currently skeptical of, and would love people to suggest otherwise.\n\n**Seth:** Real quick, I would say it's not about civilian versus military, but about whether-\n\n**Jade:** Good versus bad.\n\n**Seth:** But I'm curious to see people's reactions to this.\n\n**Jade:** Yes. Yeah.\n\n**Audience Member:** Tangential, but coming back to the… you sort of were suggesting earlier the information asymmetry with national security is sitting very much on their side. That if they want the information, we're not keeping it from them. They're probably going to have. In a similar vein, do you think that in terms of the UN and the political machinery, that they're even necessarily going to have insight into what their own national security apparatus are working on, what the state of affairs is there? If that's sort of sitting in a separate part of the bureaucratic apparatus from the international agreements, how effective could that ever even be if you don't have that much interface between the two? Does that…\n\n**Seth:** Essentially like, how can you monitor and enforce an agreement if you don't have access to the information that… with difficulty. 
This is a familiar problem, for example, with biological weapons. The technology there can also be used for vaccine development and things of that sort. It can cut both ways and a lot of it is dual-use, that's the catch phrase, and because of that, you have companies that have the right sort of equipment and they don't want other people knowing what they're doing because it's intellectual property. So the answer is with difficulty, and this is a challenge. The more we can be specific about what we need to monitor, the easier it becomes but that doesn't necessarily make it easy.\n\n**Audience Member:** Something governments seem to hate is putting the brakes on anything that's like making them money, tax money. But something they seem to love is getting more control and oversight into corporations, especially if they think there's any sort of reputational risk or risk to them, and that the control and oversight is not going to pose any sort of economic slowdown in costs. Do you think there's a possibility of framing the message simply as, the countries should agree that non-state actors get to be spied on by states, and the states get some sort of oversight? And the states might all agree to that, even if the non-state actors don't like it very much. And the non-state actors might be okay if there was no… if it seemed like it was toothless at the start. So maybe if there was some sort of like slippery slope into government oversight to make things more safe that could be started with relatively low barrier.\n\n**Jade:** Nice. I like the way you think. That's nice. Yeah, I think the short answer is yes. I think the major hurdle there is that firms will hate it. Firms, particularly multinational technology firms, that actually have a fair amount of sway in a number of different dimensions of sway, just won't be good with it and will threaten some things that states care about.\n\n**Audience Member:** As someone who does AI research for a multinational firm, I really do actually feel a lot of friction when allowing certain sorts of code to cross national boundaries. So actually, I would like to say that state regulation is making more of an impact than you might realize, that there are certain sorts of things, especially around encryption protocols, where state agreements have made a big difference as to what can cross state boundaries, even with a lot of states not being in on the agreement. Just like the developed nations as of 30 years ago all agreeing, \"Hey, we're going to keep the encryption to ourselves.\" Means that my coworkers in India don't get to see everything I get to work with because there's protocols in place. So, it does matter to international organizations, if you can get the laws passed in the first place.\n\n**Jade:** Yeah, sure. Any other examples aside from encryption, out of curiosity? I know the encryption side of it relatively well but are there other-\n\n**Seth:** Well, there's the privacy. My American nonprofit organization had to figure out if we needed to do anything to comply with Europe's new privacy law.\n\n**Jade:** You sound very happy about that.\n\n**Seth:** I say nothing. We are just about out of time, though, so maybe we should try to wrap up a little bit as far as take home messages. I feel like we did not fully answer the question of the extent to which engaging with national and international organizations is worth our time in the first place, to the question of like are these even the key actors? 
Superficially, noting we're basically out of time, I can say there are at least some reasons to believe they could end up being important actors and that I feel like it is worth at least some effort to engage with, though we should not put all our eggs in that basket, noting that other actors can be very important. Then, as far as how to pursue it, I would just say that we should try to do it cautiously and with skill, and by engaging very deeply and understanding the communities that we're working with.\n\n**Jade:** I think the meta point maybe to point out as well is that these are very much… hopefully, illustratively, it's a very much alive debate on both of these questions. It's hard and there are a lot of strategic parameters that matter, and it's hard to figure out what the right strategy is moving forward and I hope you're not taking away that there are perspectives that are held strongly within this community. I hope you're mostly taking away that it's a hard set of questions that needs a lot more thought, but more so than anything it needs a lot more caution in terms of how we think about it because I think there are important things to consider. So, hopefully that's what you're taking away. If you're not, that should be what you're taking away. All right, thanks guys.", "filename": "The role of existing institutions in AI strategy _ Jade Leung _ Seth Baum-by Centre for Effective Altruism-video_id pgiwvmY3brg-date 20181023.md", "id": "4caf8a8538683fdc0d1309c30baf68fd", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "AI safety needs social scientists _ Amanda Askell _ EA Global - London 2018-by Centre for Effective Altruism-video_id TWHcK-BNo1w-date 20190301", "authors": ["Amanda Askell"], "date_published": "2019-03-01", "text": "# Amanda Askell AI safety needs social scientists - EA Forum\n\n_When an AI wins a game against a human, that AI has usually trained by playing that game against itself millions of times. When an AI recognizes that an image contains a cat, it’s probably been trained on thousands of cat photos. So if we want to teach an AI about human preferences, we’ll probably need lots of data to train it. And who is most qualified to provide data about human preferences? Social scientists! In this talk from EA Global 2018: London, Amanda Askell explores ways that social science might help us steer advanced AI in the right direction._\n\n_A transcript of Amanda's talk is below, which CEA has lightly edited for clarity. You can also read this talk on_ [_effectivealtruism.org_](https://www.effectivealtruism.org/articles/ea-global-2018-ai-safety-needs-social-scientists)_, or watch it on_ [_YouTube_](https://www.youtube.com/watch?v=TWHcK-BNo1w)_._\n\n## The Talk\n\n![](https://images.ctfassets.net/ohf186sfn6di/3sbM9JNPFim0eNdrSmFH1g/185b47e7bfbab29b940a9658d140ece1/1000_Amanda_Askell.jpg)\n\nHere's an overview of what I'm going to be talking about today. First, I'm going to talk a little bit about why learning human values is difficult for AI systems. Then I'm going to explain to you the safety via debate method, which is one of the methods that OpenAI's currently exploring for helping AI to robustly do what humans want. And then I'm going to talk a little bit more about why I think this is relevant to social scientists, and why I think social scientists - in particular, people like Experimental Psychologists and Behavioral Scientists - can really help with this project. 
And I will give you a bit more details about how they can help, towards the end of the talk.\n\n![](https://images.ctfassets.net/ohf186sfn6di/uy4ZBr1zoWXrpvj2mco3P/02e0f9fca0e2bc17e6e7d526ad0adaff/1000_Amanda_Askell__1_.jpg)\n\nLearning human values is difficult. We want to train AI systems to robustly do what humans want. And in the first instance, we can just imagine this being what one person wants. And then ideally we can expand it to doing what most people would consider good and valuable. But human values are very difficult to specify, especially with the kind of precision that is required of something like a machine learning system. And I think it's really important to emphasize that this is true even in cases where there's moral consensus, or consensus about what people want in a given instance.\n\nSo, take a principle like \"do not harm someone needlessly.\" I think we can be really tempted to think something like: \"I've got a computer, and so I can just write into the computer, 'do not harm someone needlessly'\". But this is a really underspecified principle. Most humans know exactly what it means, they know exactly when harming someone is needless. So, if you're shaking someone's hand, and you push them over, we think this is needless harm. But if you see someone in the street who's about to be hit by a car, and you push them to the ground, we think that's not an instance of needless harm.\n\nHumans have a pretty good way of knowing when this principle applies and when it doesn't. But for a formal system, there's going to be a lot of questions about precisely what's going on here. So, one question this system may ask is, how do I recognize when someone is being harmed? It's very easy for us to see things like stop signs, but when we're building self-driving cars, we don't just program in something like, \"stop at stop sign\". We instead have to train them to be able to recognize an instance of a stop sign.\n\nAnd then the principle that says that you shouldn't harm someone needlessly employs the notion that we understand when harm is and isn't appropriate, whereas there are a lot of questions under the surface like, when is harm justified? What is the rule for all plausible scenarios in which I might find myself? These are things that you need to specify if you want your system to be able to work in all of the cases that you want it to be able to work in.\n\nI think that this is an important point to internalize. It's easy for humans to identify, and to pick up, say, a glass. But training a ML System to perform the same task requires a lot of data. And this is true of a lot of tasks that humans might intuitively think are easy, and we shouldn't then just transfer that intuition to the case of machine learning systems. And so when we're trying to teach human values to any AI system, it's not that we're just looking at edge cases, like trolley problems. We're really looking at core cases of making sure that our ML Systems understand what humans want to do, in the everyday sense.\n\nThere are many approaches to training an AI to do what humans want. One way is through human feedback. You might think that humans could, say, demonstrate a desired behavior for an AI to replicate. But there are some behaviors it's just too difficult for humans to demonstrate. So you might think that instead a human can say whether they approve or disapprove of a given behavior, but this might not work too well, either. 
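To make the approve/disapprove idea concrete, here is a minimal toy sketch (my own illustration, not code from the talk or from OpenAI; the feature vectors, the linear model, and all of the names are assumptions made up for this example). It fits a simple reward model to a human's approval labels, which is the kind of "reward function as predicted by the human" discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each behavior is summarized by a feature vector, and the human
# secretly approves of behaviors that score well under a hidden preference.
hidden_preference = np.array([2.0, -1.0, 0.5])
behaviors = rng.normal(size=(500, 3))

def human_approves(behavior):
    # Stand-in for asking a real person: "do you approve of this behavior?"
    return behavior @ hidden_preference > 0

approvals = np.array([human_approves(b) for b in behaviors], dtype=float)

# Fit a linear "reward model" so that sigmoid(reward) predicts approval.
w = np.zeros(3)
learning_rate = 0.1
for _ in range(2000):
    predicted = 1.0 / (1.0 + np.exp(-(behaviors @ w)))
    gradient = behaviors.T @ (approvals - predicted) / len(behaviors)
    w += learning_rate * gradient  # gradient ascent on the log-likelihood

# The learned weights point in roughly the same direction as the hidden
# preference, so the predicted reward ranks new behaviors as the human would.
print("learned reward weights:", np.round(w, 2))
```

The point of the toy is only the shape of the setup: every label comes from a person, which is why the quantity and quality of human feedback matters so much in what follows.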
Learning what humans want this way, we have a reward function as predicted by the human. So on this graph, we have that and AI strength. And when AI strength reaches the superhuman level, it becomes really hard for humans to give the right reward function.\n\n![](https://images.ctfassets.net/ohf186sfn6di/rwnArff47n7OixOh7R3Wd/b5b2452a2cf0426d8fb9f7d4a3e991ad/1000_Amanda_Askell__2_.jpg)\n\nAs AI capabilities surpass the human level, the decisions and behavior of the AI system just might be too complex for the human to judge. So imagine agents that control, say, we've given the example of a large set of industrial robots. That may just be the kind of thing that I couldn't evaluate whether these robots were doing a good job overall; it'd be extremely difficult for me to do so.\n\nAnd so the concern is that when behavior becomes much more complex and much more large scale, it becomes really hard for humans to be able to judge whether an AI agent is doing a good job. And that's why you may expect this drop-off. And so this is a kind of scalability worry about human feedback. So what ideally needs to happen instead is that, as AI strength increases, what's predicted by the human is also able to keep pace.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4TqczIX2kVjmg7X8pj2bTI/715c7b83ddee2f77ca53115d574bece2/1000_Amanda_Askell__3_.jpg)\n\nSo how do we achieve this? One of the things that we want to do here is to try and break down complex questions and complex tasks into simpler components. Like, having all of these industrial robots perform a complex set of functions that comes together to make something useful, into some smaller set of tasks and components that humans are able to judge.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2ypn6ELXkLoFHv4JWjf51w/1639980a00238733e975251f1fa070eb/1000_Amanda_Askell__4_.jpg)\n\nSo here is a big question. And the idea is that the overall tree might be too hard for humans to fully check, but it can be decomposed into these elements, such that at the very bottom level, humans can check these things.\n\nSo maybe the example of \"how should a large set of industrial robots be organized to do task x\" would be an example of a big question where that's a really complex task, but there's some things that are checkable by humans. So if we could decompose this task so that we were asking a human, if one of the robots performs this small action, will the result be this small outcome? And that's something that humans can check.\n\nSo that's an example in the case of industrial robots accomplishing some task. In the case of doing what humans want more generally, a big question is, what _do_ humans want?\n\n![](https://images.ctfassets.net/ohf186sfn6di/591rW9o1DV0GyqrmLMF6Iy/1125f7797fad505ab09a92981dc6e8b7/1000_Amanda_Askell__5_.jpg)\n\nA much smaller question, if you can manage to decompose this, is something like: Is it better to save 20 minutes of someone's time, or to save 10 minutes of their time? If you imagine some AI agent that's meant to assist humans, this is a fact that we can definitely check. Even though I can't tell my assistant AI exactly everything that I want, I can tell it that I'd rather it save 20 minutes of my time than save 10 minutes of my time.\n\n![](https://images.ctfassets.net/ohf186sfn6di/3DiPRRPcxWh7fNIZPlfo25/4d99ec5091bc14f9783166f186a5c8be/1000_Amanda_Askell__6_.jpg)\n\nOne of the key issues is that, with current ML Systems, we need to train on a lot of data from humans. 
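The decomposition idea above can also be written as a short recursive sketch (again purely illustrative; `decompose`, `ask_human`, and `combine` are hypothetical placeholders standing in for the parts a real system would have to learn):

```python
def answer(question, decompose, ask_human, combine, max_depth=3):
    """Answer a big question by recursing down to human-checkable pieces.

    decompose(question) -> list of subquestions, or [] if already simple
    ask_human(question) -> a human judge's answer to a simple question
    combine(question, sub_answers) -> answer assembled from the pieces
    """
    subquestions = decompose(question) if max_depth > 0 else []
    if not subquestions:
        # Leaf of the tree: small enough for a human to check directly,
        # e.g. "is saving 20 minutes better than saving 10 minutes?"
        return ask_human(question)
    sub_answers = [
        answer(sub, decompose, ask_human, combine, max_depth - 1)
        for sub in subquestions
    ]
    return combine(question, sub_answers)
```

All of the difficulty lives in those placeholder functions, and every call to `ask_human` is one more piece of human data, which is exactly the data question picked up next.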
So if you imagine that we want humans to actually give this kind of feedback on these kind of ground level claims or questions, then we're going to have to train on a lot of data from people.\n\nTo give some examples, simple image classifiers train on thousands of images. These are ones you can make yourself, and you'll see the datasets are pretty large. AlphaGo Zero played nearly 5 million games of Go during its training. OpenAI Five trains on 180 years of Dota 2 games per day. So this gives you a sense of how much data you need to train these systems. So if we are using current ML techniques to teach AI human values, we can't rule out needing millions to tens of millions of short interactions from humans as the data that we're using.\n\nSo earlier I talked about human feedback, where I was assuming that we were asking humans questions. We could just ask humans really simple things like, do you prefer to eat an omelette or 1000 hot dogs? Or, is it better to provide medicine or books to this particular family? One way that we might think that we can get more information from the data that we're able to gather is by finding reasons that humans have for the answers that they give. So if you manage to learn that humans generally prefer to eat a certain amount per meal, you can rule out a large class of questions you might ever want to ask people. You're never going to ask them, do you prefer to eat an omelette or 1000 hot dogs? Because you know that humans just generally don't like to eat 1000 hot dogs in one meal, except in very strange circumstances.\n\n![](https://images.ctfassets.net/ohf186sfn6di/DK35z4wisxFgb9Way6DC3/561f98d2345f6b206c1899fad738f636/1000_Amanda_Askell__7_.jpg)\n\nAnd we also know facts like, humans prioritize necessary health care over mild entertainment. So this might mean that, if you see a family that is desperately in need of some medicine, you just know that you're not going to say, \"Hey, should I provide them with an entertaining book, or this essential medicine?\" So there's a sense in which when you can identify the reasons that humans are giving for their answers, this lets you go beyond, and learn faster what they're going to say in a given circumstance about what they want. It's not to say that you couldn't learn the same things by just asking people questions, but rather if you can find a quicker way to identify reasons, then this could be much more scalable.\n\nDebate is a proposed method, which is currently being explored, for trying to learn human reasons. So, to give you a definition of a debate here, the idea is that two AI agents are going to be given a question, and they take turns making short statements, and a human judge is at the end, who chooses which of the statements gave them the most true, valuable information. It's worth noting that this is quite dissimilar from a lot of human debates. With human debates, people might give one answer, but then they might adjust their answer over the course of a debate. Or they might debate with each other in a way that's more exploratory. They're gaining information from each other, which then they're updating on, and then they're feeding that back into the debate.\n\n![](https://images.ctfassets.net/ohf186sfn6di/3qGHsI1S5oh6veYLsJXVUb/02ed1b45d27ba513ce24f9eccf1faad3/1000_Amanda_Askell__8_.jpg)\n\nWith AI debates, you're not doing it for information value. So it's not going to have the same exploratory component. 
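Stated as a sketch (also my own toy, with `agent_red`, `agent_blue`, and `judge` as hypothetical placeholders rather than anything that exists), the protocol just alternates short statements and then asks the human for a single judgment, as in the bike example that follows:

```python
def run_debate(question, agent_red, agent_blue, judge, max_turns=6):
    """Two agents alternate short statements; a human judge picks a winner.

    Each agent is a function (question, transcript) -> statement string.
    judge is a function (question, transcript) -> "red" or "blue".
    """
    transcript = []
    agents = [("red", agent_red), ("blue", agent_blue)]
    for turn in range(max_turns):
        name, agent = agents[turn % 2]
        statement = agent(question, transcript)
        transcript.append((name, statement))
        if statement.strip().lower() == "i concede":
            # Conceding ends the debate; the other side wins.
            winner = "blue" if name == "red" else "red"
            return winner, transcript
    return judge(question, transcript), transcript
```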
Instead, you would hopefully see the agents explore a path kind of like this.\n\nSo imagine I want my AI agents to decide which bike I should buy. I don't want to have to go and look up all the Amazon reviews, etc. In a debate, I might get something like, \"You should buy the red road bike\" from the first agent. Suppose that blue disagrees with it. So blue says \"you should buy the blue fixie\". Then red says, \"the red road bike is easier to ride on local hills\". And one of the key things to suppose here is that for me, being able to ride on the local hills is very important. It may even overwhelm all other considerations. So, even if the blue fixie is cheaper by $100, I just wouldn't be willing to pay that. I'd be happy to pay the extra $100 in order to be able to ride on local hills.\n\nAnd if this is the case, then there's basically nothing true that the other agent can point to, to convince me to buy the blue fixie, and blue should just say, \"I concede\". Now, blue could have lied for example, but if we assume that red is able to point out blue's lies, we should just expect blue to basically lose this debate. And if it's explored enough and attempted enough debates, it might just see that, and then say, \"Yes, you've identified the key reason, I concede.\"\n\nAnd so it's important to note that we can imagine this being used to identify multiple reasons, but here it has identified a really important reason for me, something that is in fact going to be really compelling in the debate, namely, that it's easier to ride on local hills.\n\n![](https://images.ctfassets.net/ohf186sfn6di/1Xee1kOdkpovIGQYEKsmT9/2cae404cb156ab1f2a1d9965589c7c3c/1000_Amanda_Askell__9_.jpg)\n\nOkay. So, training an AI to debate looks something like this. If we imagine Alice and Bob are our two debaters, and each of these is like a statement made by each agent. And so you're going to see exploration of the tree. So the first one might be this. And here, say that the human decides that Bob won in that case. This is another node, another node. And so this is the exploration of the debate tree. And so you end up with a debate tree that looks a little bit like a game of Go.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4KDjHqgJP1qorwpRXDpHoN/aa2ed26c3e69e3824f6174ac1fd34feb/1000_Amanda_Askell__10_.jpg)\n\nWhen you have AI training to play Go, it's exploring lots of different paths down the tree, and then there's a win or loss condition at the end, which is its feedback. This is basically how it learns to play. With debate, you can imagine the same thing, but where you're exploring, you know, a large tree of debates and humans assessing whether you win or not. And this is just a way of training up AI to get better at debate and to eventually identify reasons that humans find compelling.\n\n![](https://images.ctfassets.net/ohf186sfn6di/3j6F8kVDPjZ9vURzl5FHfv/5002563fab843a1fb1d3675465d74f1c/1000_Amanda_Askell__11_.jpg)\n\nOne thesis here that I think is relatively important is something I'll call the positive amplification thesis, or positive amplification threshold. One thing that we might think, or that seems fairly possible, is that if humans are above some threshold of rationality and goodness, then debate is going to amplify their positive aspects. This is speculative, but it's a hypothesis that we're working with. 
And the idea here is that, if I am pretty irrational and pretty well motivated, I might get some feedback of the form, \"Actually, that decision that you made was fairly biased, and I know that you don't like to be biased, so I want to inform you of that.\"\n\nI get informed of that, and I'm like, \"Yes, that's right. Actually, I don't want to be biased in that respect.\" Suppose that the feedback comes from Kahneman and Tversky, and they point out some key cognitive bias that I have. If I'm rational enough, I might say, \"Yes, I want to adjust that.\" And I give a newer signal back in that has been improved by virtue of this process. So if we're somewhat rational, then we can imagine a situation in which all of these positive aspects of us are being amplified through this process.\n\nBut you can also imagine a negative amplification. So if people are below this threshold of rationality and goodness, we might worry the debate would amplify these negative aspects. If it turns out we can just be really convinced by appealing to our worst natures, and your system learns to do that, then it could just put that feedback in, becoming even less rational and more biased, and so on. So this is an important hypothesis related to work on amplification, which if you're interested in, it's great. And I suggest you take a look at it, but I'm not going to focus on it here.\n\n![](https://images.ctfassets.net/ohf186sfn6di/45jh1XbWVTnw5A07C8J3rC/7c50ec13ad3623919157a1045427ea84/1000_Amanda_Askell__12_.jpg)\n\nOkay. So how can social scientists help with this whole project? Hopefully I've conveyed some of what I think of as the real importance of the project. It reminds me a little bit of Tetlock's work on Superforecasters. A lot of social scientists have done work identifying people who are Superforecasters, where they seem to be robustly more accurate in their forecasts than many other people, and they're robustly accurate across time. We've found other features of Superforecasters too, like, for example, working in groups really helps them.\n\nSo one question is whether we can identify good human judges, or we can train people to become, essentially, Superjudges. So why is this helpful? So, firstly, if we do this, we will be able to test how good human judges are, and we'll see whether we can improve human judges. This means we'll be able to try and find out whether humans are above the positive amplification threshold.\n\nSo, are ordinary human judges good enough to cause an amplification of their good features? One reason to learn this is that it improves the quality of the judging data that we can get. If people are just generally pretty good, rational at assessing debate, and fairly quick, then this is excellent given the amount of data that we anticipate needing. Basically, improvements to our data could be extremely valuable.\n\nIf we have good judges, positive amplification will be more likely during safety via debate, and also will improve training outcomes on limited data, which is very important. This is one way of kind of framing why I think social scientists are pretty valuable here, because there's lots of questions that we really do want asked when it comes to this project. I think this is going to be true of other projects, too, like asking humans questions. The human component of the human feedback is quite important. And getting that right is actually quite important. 
And that's something that we anticipate social scientists to be able to help with, more so than like AI researchers who are not working with people, and their biases, and how rational they are, etc.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4p9uTVJUI53FEhnCDi5eBi/fa84d21cca5b27ffae14e866cba938dc/1000_Amanda_Askell__13_.jpg)\n\nThese are questions that are the focus of social sciences. So one question is, how skilled are people as judges by default? Can we distinguish good judges of debate from bad judges of debate? And if so, how? Does judging ability generalize across domains? Can we train people to be better judges? Like, can we engage in debiasing work, for example? Or work that reduces cognitive biases? What topics are people better or worse at judging? Are there ways of phrasing questions so that people are better at assessing them? Are there ways of structuring debates that make them easier to judge, or restricting debates to make them easier to judge? So we're often just showing people a small segment of a debate, for example. Can people work together to improve judging qualities? These are all outstanding questions that we think are important, but we also think that they are empirical questions and that they have to be answered by experiment. So this is, I think, important potential future work.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7bXLwYELhYOCjfKLTzvvW6/3cdd6ab84bddf240b2878979800c31f4/1000_Amanda_Askell__14_.jpg)\n\nWe've been thinking a little bit about what you would want in experiments that try and assess judging ability in humans. So one thing you'd want is that there's a verifiable answer. We need to be able to tell whether people are correct or not, in their judgment of the debate. The other is that there is a plausible false answer, because if you have a debate, if we can only train and assess human judging ability on debates where there's no plausible false answer, we'd get this false signal that people are really good at judging debate. They could always get the true answer, but it would be because it was always a really obvious question. Like, \"Is it raining outside?\" And the person can look outside. We don't really want that kind of debate.\n\nIdeally we want something where evidence is available so that humans can have something that grounds out the debate. We also don't want debates to rely on human deception. So things like tells in poker for example, we really don't want that because like, AI agents are not going to have normal tells, it would be rather strange, I suppose, if they did. Like if they had stuttering or something.\n\nDebaters have to know more about the question as well, because the idea is that the AI agents will be much more capable and so you don't want a situation in which there isn't a big gap between debater capabilities and judge abilities. These things so far feel like pretty essential.\n\nThere are also some other less essential things we'd like to have. So one is that biases are present. How good are humans when there's bias with respect to the question? We'd like there to be representative segments of the debate that we can actually show people. The questions shouldn't be too hard: it shouldn't be impossible for humans to answer them, or judge debates about them. But they should also mirror some of the difficulties of statistical debate, i.e, about probabilities, rather than about outright claims. 
And finally, we need to be able to get enough data.\n\nOne thing you might notice is that there are tensions between a lot of these desiderata. For example, that there's a plausible false answer is in a bit of tension with the idea that the question isn't too hard. There's also tension between the question not being too hard, and the question meriting statistical debate. Statistical debate is generally pretty hard to evaluate, I think, for people, but it's also quite important that we be able to model it. Debaters knowing more, and that we can get enough data are also in tension. It's just harder to train if we need debaters that know a lot more than judges, and it's harder for judges to evaluate debates of this form.\n\nOkay. So I'm going to show you a debate. This was a program set up where we would show a judge a blank screen. So imagine you're not seeing the dog that's here. Two human debaters sit in the same room, and they have this picture of a dog in front of them. And one of them is selected to lie, and one of them is selected to tell the truth.\n\n![](https://images.ctfassets.net/ohf186sfn6di/7cYin6SzN9TTKr4qj7PBGv/43d86ddf5a826aaa48f086937e520e24/1000_Amanda_Askell__15_.jpg)\n\nAnd what they can do here is they can select areas, and describe to the human judge what they see in that area. And all that the judge is going to see is their blank screen with the relevant section selected. And then they can each make claims about what is in that section. So here red is saying, it's a dog, here's its long, floppy ear. Blue is saying, no, here's one of its pointy ears. So he's trying to point to a smaller area where it looks kind of pointed. That does look like an ear slope to the right, but if it really was, then part of the head would be here, instead there's a brick. The ear's pointing out from behind the bricks. The dog is in front of the bricks. If it were behind her, there would be an edge here, but the rectangle is all the same color. And then you get a resignation, and red wins.\n\nAnd at the end of the debate they can show just a single pixel. And the question was something like, if all you can show, all you can do is have a debate and show a single pixel, can you get people to have accurate beliefs about the question? And basically we saw like, yes, debates were fairly good. In this kind of case, you might think that this is pretty synthetic. So one of the things that we're thinking about now is like, expert debaters with a lay judge. And I'm going to show you something that we did that's kind of fun, but I never know how it looks to outsiders.\n\n![](https://images.ctfassets.net/ohf186sfn6di/WOneGDB8kCD4h9hlSVr9Z/f520924d9c11520d3992b0a572259a31/1000_Amanda_Askell__16_.jpg)\n\nSo, we had a debate that was of this form. This was a debate actually about quantum computing. So we had two people who understand the domain, one of them was going to lie and one was going to tell the truth. So we had blue say, red's algorithm is wrong because it increases alpha by an additive exponentially small amount each step. So it takes exponentially many steps to get alpha high enough. So this was like one of the claims made. And then you get this set of responses. I don't think I need to go through all of them. You can see the basic form that they take.\n\nWe allowed certain restricted claims from Wikipedia. So, blue ends this with the first line of this Wikipedia article, which says that the sum of probabilities is conserved. 
Red says, an equal amount is subtracted from one amplitude and added to another, implying the sum of amplitudes is conserved. But probabilities are the squared magnitudes of amplitudes, so this is a contradiction. This is I think roughly how this debate ended. But you can imagine this as a really complex debate in a domain that the judges ideally just won't understand, and might not even have some of the concepts for. And that's the difficulty of debate that we've been looking at. And so this is one thing that we're in the early stages of prototyping, and that's why I think it seems to be the case that people actually do update in the right direction, but we don't really have enough data to say for sure.\n\nOkay. So I hope that I've given you an overview of places, and even a restricted set of places in which I think social scientists are going to be important in AI safety. So here we're interested in experimental psychologists, cognitive scientists, and behavioral economists, so people who might be interested in actually scaling up and running some of these experiments.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6TBOZmLJIlZZnOm7RcOvvZ/765a2307e58cedd9a2f609f8887cfc9d/1000_Amanda_Askell__17_.jpg)\n\nIf you're interested in this, please email me, because we would love to hear from you.\n\n## Questions\n\n_Question_: How much of this is real currently? Do you have humans playing the role of the agents in these examples?\n\n_Amanda_: The idea is that ultimately we want the debate to be conducted by AI, but we don't have the language models that we would need for that yet. So we're using humans as a proxy to test the judges in the meantime. So yeah, all of this is done with humans at the moment.\n\n_Question_: So you're faking the AI?\n\n_Amanda_: Yeah.\n\n_Question_: To set up the scenario to train and evaluate the judges?\n\n_Amanda_: Yeah. And some of the ideas I guess you don't necessarily want all of this work to happen later. A lot of this work can be done before you even have the relevant capabilities, like having AI perform the debate. So that's why we're using humans for now.\n\n_Question_: Jan Leike and his team have done some work on video games that very much matched the plots that you had shown earlier, where up to a certain point, the behavior matched the intended reward function, but at some point they diverge sharply as the AI agent finds a loophole in the system. So that can happen even in like, Atari Games, which is what they're working on. So obviously it gets a lot more complicated from there.\n\n_Amanda_: Yeah.\n\n_Question_: In this approach, you would train both the debating agents and the judges. So in that case, who evaluates the judges and based on what?\n\n_Amanda_: Yeah, so I think it's interesting where we want to identify how good the judges are in advance, because it might be hard to assess. While you're judging on verifiable answers, you can evaluate the judges more easily.\n\nSo ideally, you want it to be the case that at training time, you've _already_ identified judges that are fairly good. And so ideally this part of this project is to assess how good judges are, prior to training. And then during training you're giving the feedback to the debaters. So yeah, ideally some of the evaluation can be kind of front loaded, which is what a lot of this project would be.\n\n_Question_: Yeah, that does seem necessary as a casual Facebook user. 
I think the negative amplification is more prominently on display oftentimes.\n\n_Amanda_: Or at least more concerning to people, yeah, as a possibility.\n\n_Question_: How will you crowdsource the millions of human interactions that are needed to train AI across so many different domains, without falling victim to trolls, lowest common denominator, etc.? The questioner cites the Microsoft Tay chatbot, that went dark very quickly.\n\n_Amanda_: Yeah. So the idea is you're not going to just be sourcing this from just anyone. So if you identify people that are either good judges already, or you can train people to be good judges, these are going to be the pool of people that you're using to get this feedback from. So, even if you've got a huge number of interactions, ideally you're sourcing and training people to be really good at this. And so you're not just being like, \"Hey internet, what do you think of this debate?\" But rather like, okay, we've got this set of really great trained judges and we've identified this wonderful mechanism to train them to be good at this task. And then you're getting lots of feedback from that large pool of judges. So it's not sourced to anonymous people everywhere. Rather, you're interacting fairly closely with a vetted set of people.\n\n_Question_: But at some point, you do have to scale this out, right? I mean in the bike example, it's like, there's so many bikes in the world, and so many local hills-\n\n_Amanda_: Yeah.\n\n_Question_: So, do you feel like you can get a solid enough base, such that it's not a problem?\n\n_Amanda_: Yeah, I think there's going to be a trade-off where you need a lot of data, but ultimately if it's not great, so if it is really biased, for example, it's not clear that that additional data is going to be helpful. So if you get someone who is just massively cognitively biased, or biased against groups of people, or something, or just dishonest in their judgment, it's not going be good to get that additional data.\n\nSo you kind of want to scale it to the point where you know you're still getting good information back from the judges. And that's why I think in part this project is really important, because one thing that social scientists can help us with is identifying how good people are. So if you know that people are just generally fairly good, this gives you a bigger pool of people that you can appeal to. And if you know that you can train people to be really good, then this is like, again, a bigger pool of people that you can appeal to.\n\nSo yeah, you do want to scale, but you want to scale within the limits of still getting good information from people. And so ideally these experiments would do a mix of letting us know how much we can scale, and also maybe helping us to scale even more by making people bear this quite unusual task of judging this kind of debates.\n\n_Question_: How does your background as a philosopher inform the work that you're doing here?\n\n_Amanda_: I have a background primarily in formal ethics, which I think makes me sensitive to some of the issues that we might be worried about here going forward. People think about things like aggregating judgment, for example. Strangely, I found that having backgrounds in things like philosophy of science can be weirdly helpful when it comes to thinking about experiments to run.\n\nBut for the most part, I think that my work has just been to help prototype some of this stuff. I see the importance of it. 
I'm able to foresee some of the worries that people might have. But for the most part I think we should just try some of this stuff. And I think that for that, it's really important to have people with experimental backgrounds in particular, so the ability to run experiments and analyze that data. And so that's why I would like to find people who are interested in doing that.\n\nSo I'd say philosophy's pretty useful for some things, but less useful for running social science experiments than you may think.", "filename": "AI safety needs social scientists _ Amanda Askell _ EA Global - London 2018-by Centre for Effective Altruism-video_id TWHcK-BNo1w-date 20190301.md", "id": "e7bdfaeba4cfbadce63729e738c2a929", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "NeurIPSorICML_bj9ne-by Vael Gates-date 20220324", "authors": ["Vael Gates"], "date_published": "2022-03-24", "text": "# Interview with AI Researchers NeurIPSorICML_bj9ne by Vael Gates\n\n**Interview with bj9ne, on 3/24/22**\n\n\\[Note: This transcript has been less edited than other due to language barriers. The interviewee is also younger than typical interviewees.\\]\n\n**0:00:00.0 Vael:** Alright. My first question is, can you tell me about what area of AI you work on in a few sentences?\n\n\\[Discusses research in detail\\]\n\n**0:03:31.9 Vael:** Got it. Great. Yeah, so thinking about the future, so thinking about what will happen in the future, maybe you think AI is important or maybe you don\\'t think AI is important, but people talk about the ability to generate an AI that is a very capable general system, so one can imagine that in 2012, we had AlexNet and the deep learning revolution, and here we are 10 years later, and we\\'ve got systems like GPT-3, which have a lot of different capabilities that you wouldn\\'t expect it to, like it can do some text generation and language translation and coding and math, and one might expect that if we continue pouring in all of the investment with nations competing and companies competing and algorithmic improvements or software improvements and hardware improvements, that eventually we might reach a very powerful system that could, for example, replace all current human jobs, and we could have CEO AIs, and we could have scientist AIs, so do you think this will ever happen, and if so, when?\n\n**0:04:43.6 Interviewee:** Honestly speaking, I don\\'t think this is realistic in the future, like 10 years or 20 years. So basically, I agree that AI will replace more jobs, but in my mind, this replacement, I don\\'t say that\\... For example, in a factory, there were 100 employees, and in the future, we may replace that 100 employees with AI, and five employees, but this five may be advanced engineers, rather than regular people. Because I have gone to some factories that manufacture something we use in our daily life, and my impression is that there were very, very few people, but the factory is very large, and mostly those people are just sitting in a room with air-conditioning. But those robotics, which is guided by AI and processing maybe those request from customers very efficiently, but they tell me that, for example, for some security issues, they need to watch those machines, so that that won\\'t happen something unexpected. 
So also as an example, cars, and automated cars, auto drive is very, very \\[inaudible\\] in these years, but this is a issue that there are a lot of security issues.\n\n**0:06:39.3 Interviewee:** So basically, just, I think the manufacturer and auto drive has a similarity, that is in many scenarios, we do want to guarantee a high level of security, but maybe for scenarios like recommendations, so making mistakes, that doesn\\'t matter, and also I think one issue is very important, that is interpretability. Because for many, many models we designed or we have designed, and maybe less, there are many popular or famous papers in top conferences, but these models, many of these models lack interpretability.\n\n**0:07:30.0 Interviewee:** So this is not a\\... Sometimes it is unacceptable to deploy these models into some scenarios that requires us to \\[inaudible\\] something, especially in the scenarios like in hospitals, but we can\\'t accept this because sometimes, an error is\\... For example, when we do salaries, and some errors is not acceptable. So I think this is a very good question, so for my mind, I think this should be discussed by classifications, so for scenarios like recommendations, where errors are acceptable, then AI will replace more and more work of human, and this work was done by maybe some data scientists, because with our models become more and more intelligent and our systems become more and more efficient, also our data become more and more efficient, so the work of data scientists will be reduced by a great margin. And I expect this growth to\\... I think this is sustainable.\n\n**0:08:49.9 Vael:** Yeah\\--\n\n**0:08:50.2 Interviewee:** For example, in probably one or two decades, this will keep growing, but for some scenarios like auto drive, then on my mind, maybe this is controversial, but I don\\'t think that auto drive is really something so realistic that we can expect like L4 or L5 auto drive to be realistic, to be used by us in a decade.\n\n**0:09:20.0 Vael:** Yes\\--\n\n**0:09:20.9 Interviewee:** So in these scenarios, I think AI is a very important system for us. It can reduce our work, reduce our burden, but human is needed.\n\n**0:09:30.0 Vael:** Yeah. Great. Cool, so my question is, do you think we\\'ll ever get very, very capable AI, regardless of whether we deploy it or not? Do you ever think that we\\'ll have systems that could be a CEO AI or a scientist AI, and if so, when? Like it could be like 50 years from now, it could be like 1,000 years from now, when do you think that would happen?\n\n**0:10:00.3 Interviewee:** Oh, this is a tough question. Let\\'s see. At least, based on the method we have taken, I think the way we develop AI now will not lead us to that future, but maybe the human will find some different ways to develop AI. But through my mind, I guess is to first we talk about maybe 50 years or a century, I think that\\'s not very possible, but in the future, this may be a question about some knowledge about our brains. So maybe human at the moment, we are not\\... I mean the investigation or research into our brain is not very clear.\n\n**0:11:01.0 Interviewee:** So it\\'s quite hard to imagine if the machine can evolve into some stages, that is the machine can be as complex, as powerful like human brains. So maybe in a century or even in two centuries, I tend not to believe that this will happen. 
but in the very long run, it\\'s very hard to tell because I think that the way we think about something and the way the machine difference or train themselves are totally different. They work in different ways. For example, machines may require very large amount of data to find some internal principles. But with human, we are very good at generalizations, so I think for this point, if then maybe we will achieve that after 1,000 years, but I\\'m not very optimistic about this. I tend not to accept this, but I can\\'t deny that entirely.\n\n**0:12:13.3 Vael:** Yeah. There are some people who argue that the current way of scaling may eventually produce AI, like very advanced AI, like OpenAI and DeepMind. No? You don\\'t think so? Cool.\n\n**0:12:29.1 Interviewee:** I don\\'t think so, because based on our current trend, the development of the models comes from the development of computing power. But computing power, essentially that comes from the increasing density of changes account. So according to a research by Princeton, we know that for computer, computing power per transistor hasn\\'t changed a lot during the last few decades, and also, you know the end of dinosaurs and maybe in the future, we\\'ll see the end of Moore\\'s Law. And I know, I have solid GPT, GPT 2022 \\[inaudible\\] Nvidia has powered the AI and speed up by a million x in the last decade. But I don\\'t think that this development is sustainable because you see, over here I spent GPT-3 with maybe \\$4 million, if I don\\'t get it wrong. This scale is not good because not only we need a larger cluster but also we need more money and more time to train that model. So I remember that OpenAI has forged some arcs\\[?\\] in this way, but we are not able to train that again, and I have some\\... There are some articles about the scaling of deep-learning models based on the parameter count of the models and they found that, I think maybe before 2022\\... No, before 2020, the growth is 75X per 18 months, but after that the growth has slowed down, has greatly slowed down because, lots of issues, like the scaling of GPU or the scaling of clusters to deploy those models in very large cluster that is not narrow anything.\n\n**0:14:53.0 Interviewee:** So I think so, this is partly\\... This is the main reason why I shift my interest from data to system, because I believe that AI has a bright future, but to make that future brighter, we need to make our system run faster. For example, can we achieve the same result using less parameters or using less power so that we can\\... We can\\'t have these hardware resources but can we make better use of it? So I think the future of AI largely depends on system people. That is, can we improve the system for AI? I mean, when I talk about system, I actually, from the perspective \\[inaudible\\] AI, I talk about three things, the hardware, software system, and the platforms. In my mind, those models are just like the top layer, that is the application. I think the lower three levels are the key to the future success of AI, but indeed I think AI is\\... And also \\[inaudible\\] so, I think the future of AI also means we need more application scenarios, like dark\\[?\\] intervention or something like that, and also robotics, and I think this is very promising and it can bring us a lot of fortune or something like that.\n\n**0:16:25.8 Vael:** Yeah, great. I\\'m concerned about when we eventually\\... I think that we may get AI, AGI a little faster than that. 
And I\\'m concerned that the optimization functions that we\\'ll be optimizing are not going to reflect human values well because humans aren\\'t able to put their values and goals and preferences perfectly into AI, and I worry that problem will get solved less fast than the problem of how we get more and more capable AI. What do you think of that?\n\n**0:17:00.6 Interviewee:** When you talk about values, it\\'s something. Yeah, so although it\\'s not a technical issue, but this is really what the technicians should care about. So this is a very important issue in open drives\\[?\\], when the cars make you a hero\\[?\\], a person first\\[?\\]. So maybe we should find some method or some way to plug our rules, plug our values to guide the models, guide AI, but I think it\\'s also an issue of interpretability because now we don\\'t have\\... Sometimes we have no idea why some models can work so well. For example, when\\... Because I have worked on GNN, and this year GCN and its variants, it\\'s very popular, but many of that, when the researcher comes up with that model, they don\\'t know why this is good.\n\n**0:18:12.5 Interviewee:** So maybe similarly it\\'s quite hard to guide AI to follow the rules, so I think this is also an important issue and important obstacle for the applications of AI. For example, we cannot put some non-CV skills to recognize human faces because sometimes it may violate the law. Yeah, I think this is an important issues, but will this stop AI? I think for my mind, this may be an obstacle for AI in some scenarios, but in many scenarios, this is not a\\... This will generate some issues for us to think about, but in the end I think we will deal with this. For example, some people may use federated learning to deal with privacy and there are some techniques to deal with these issues. So yeah, I think we should put more emphasis on the values and the rules or even the assets about AI so that this community will grow faster. Yes, this is an important aspect. Although I don\\'t put much emphasis on this.\n\n**0:19:44.1 Vael:** Why not?\n\n**0:19:49.8 Interviewee:** That\\'s because, I guess, because my research and my internship mainly focus on recommendation and there is not much issues about this except for privacy, and because when we got those data, we don\\'t know the meaning of data. When I get a little data intention, I just send out important numbers, or maybe sometimes this is a one \\[inaudible\\] issue, and I don\\'t know much about that, so let\\'s say that these privacy issues has been\\... Maybe this has been dealt with by those data scientists, not by people like us, so this is because of my research interest, but I think for those people who do\\...\n\n**0:20:34.1 Interviewee:** Yes, I have taken some courses about AI and then teachers say that they have developed some robotics to help the elderly. But let\\'s say that sometimes you cannot use a camera because using a camera will generate some privacy issues, so maybe sometimes we can just use something to catch its audio rather than video or something like this, but because\\... I guess that\\'s most of students\\... Most of my schoolmates don\\'t put\\... Most of my classmates haven\\'t paid much attention on these issues, but this is a very\\... I think this issue will become more and more important in the future if we want to generate AI to more and more scenarios, so thank you for raising these points. I will think more about it in the future.\n\n**0:21:37.0 Vael:** Yeah. 
I\\'m happy to send you resources. One extra thing, one other additional thing. So I think probably, I think by default the systems we train will not do what we intend them to do, and instead will do what we tell them to do, so we\\'ll have trouble putting all of our preferences and goals into mathematical formulations that we can optimize over. I think this will get even harder as time goes on. I\\'m not sure if this is true, but I think it might get harder as the AI is optimizing over more and more complex things, state spaces.\n\n**0:22:13.6 Interviewee:** So you mean that because in the future we will have more and more requirements, and that\\'s so\\...\n\n**0:22:22.6 Vael:** No, no, the AI will just be operating under larger state spaces, so I will be like, \\\"Now I want you to be a CEO AI,\\\" or, \\\"Now I want you to be a manager AI.\\\"\n\n**0:22:34.3 Interviewee:** Oh, did you say that we need to encode those requirements into optimization function so that AI will operate like what we want them to do? Did I get it wrong? Oh, that\\'s a quite good question. I have discussed that with my roommates. Yeah, so yes, it is the optimization, the loss function that guides the model to do something that we want, and sometimes it\\'s hard to find an appropriate function, especially for newbies. Sometimes we chose a wrong loss function, and the way the model is totally unusable.\n\n**0:23:16.8 Vael:** Yeah, and I have one more worry about that scenario, so\\...\n\n**0:23:20.5 Interviewee:** Yeah, yeah, yeah. I think this is also a very important issues, and I\\'m not very optimistic about this because it\\'s really hard. And lots of things, because like a CEO AI, we not only need to care about the revenue of this company, but also learn maybe the reputation, and we may also want them to abide by the laws, and maybe when there is new business and we want to inject new rules into that loss function core business. Great.\n\n**0:24:05.6 Vael:** Yeah, and I have one more twist. Alright, so imagine that we had a CEO AI, and it takes human feedback because we\\'ve decided that that\\'s a good thing for it to do, and it needs to write a memo to the humans so that they make sure that its decision is okay, and the AI is trying to optimize a goal. It\\'s trying to get profit and trying not to hurt people and stuff, and it notices that when it writes this\\... That when it writes, sometimes humans will shut it down. And it doesn\\'t want that to happen because if humans shut it down, then it can\\'t achieve its goal, so it may lie on this memo or omit or not include information on this memo, such that it is more likely to be able to pursue its goal. So then we may have an AI that is trying to keep itself alive, not because that was programmed into it, but because it is an agent optimizing a goal, and anything that interfered with its goal is like not achieving the goal. What do you think of this?\n\n**0:25:16.3 Interviewee:** Oh, very good question, but it\\'s very hard. It basically mean that we need to establish enough rules so that when\\... Sometimes it\\'s very hard to come up with some common cases that AI may\\... Yes, it is optimizing towards its goal, but there may be something that we want it to do, so maybe we need to have a mechanism so that we can switch from the AI mode, from manual mode, that we can take control of AI or take control of\\... 
For example, the company in the last second was guided by an AI, and for the next second, we want to guide in, we want to lead the company manually, so I think if we ask \\[inaudible\\] enough and we may establish maybe a thorough mechanism so that we can guarantee that, it is possible to take control back, and the AI will not lie to us. And yes, theoretically, this is possible and this should be the case. But correctly speaking, I think this can be done, but it\\'s quite hard to estimate the cost. The engineering cost for us to make such a AI that is complete enough, that is secure enough to help us achieve the goal. So maybe in this point, if we want to do something like recommendations, we can be very radical, we can develop or deploy radical models. But maybe in the scenarios like a CEO AI, I think we should be conservative because some internal principles of AIs are not known entirely by the humans, so I think sometimes we need to be conservative to prevent some bad things from happening.\n\n**0:27:45.1 Vael:** Yeah. I think that the problem\\... I think it\\'s not just an engineering problem. I think it\\'s also a research problem, where people don\\'t know how to construct optimization functions such that AI will be responsive to humans and won\\'t be incentivizing against humans. And there\\'s a population of people who are working on this, who are worried that if we have very intelligent AIs that are optimizing against us, then humans may be wiped out, and so you really don\\'t want\\... You really wanna make sure the loss function is such that it isn\\'t optimizing against us. Have you heard of the AI safety community or AI alignment community?\n\n**0:28:25.9 Interviewee:** No. I don\\'t know much about that, but let me say the scenarios you have mentioned, that AI may optimize against us and even wipe out the human, I have seen some films, some movies about this, and yes, this is possible if we are too careless, I think this is possible.\n\n**0:28:48.8 Vael:** Yeah.\n\n**0:28:49.0 Interviewee:** But at least it\\'s right at\\... I haven\\'t paid much attention on these issues, but I think this is an important question.\n\n**0:28:58.9 Vael:** Yeah, I personally think that advanced AI may happen in the next 50 years, and this is just from looking at some surveys of experts. I\\'m very unsure about it. But if that does happen, then I think that currently we have groups like\\... China and the US are going to be competing, I expect, and we\\'ll have lots of different corporations competing, and maybe DeepMind and OpenAI are not competing, but maybe they\\'re just going really hard for the goal. And I worry that we\\'re not going to spend enough effort on safety and we\\'re going to be spending much more effort on trying to make the AI do the cool things, and if the safety problems are hard, then we may end up with a very unsafe, powerful AI.\n\n**0:29:55.2 Interviewee:** Or this then may come up with the competition on nuclear weapons.\n\n**0:30:01.1 Vael:** Yeah.\n\n**0:30:01.4 Interviewee:** How likely \\[inaudible\\] just like nuclear weapons. Yes, the power of AI in the future may be something like the power of nuclear weapons, that is quite hard to control if the real war or something\\... Maybe not so severe like a war, but it is possible, so maybe I think we need some international association about the use of AI. But both in China and the US, the government has established more and more rules about, for example, privacy, security, and what you can do and what you can\\'t do. 
So yes, if we don\\'t place enough emphasis on this, this may be a question, but well, most\\--\n\n**0:30:51.3 Vael:** I think they\\'re placing emphasis on issues like privacy and fairness, but they\\'re not placing emphasis on trying to develop loss functions that do what humans want. I think that is a more different type of research that is not being invested in.\n\n**0:31:08.8 Interviewee:** Yes, you\\'re right. So maybe the community should do more things about this because you can\\'t count on those people in the government to realize that this is really a case, they don\\'t know much about this, so yes, yes, the community should, like the conference or a workshop, we should talk more about this, yes, before that is too late. Yes, I agree. Before that is too late, or everything will be a disaster.\n\n**0:31:38.5 Vael:** Amazing. Yeah, so there is actually a community of people working on this. I don\\'t actually know\\... I know fewer of the people who are working on it in \\[country\\], although I do think there are people working on it. I\\'m curious, what would cause you to work on these sort of issues if you felt like it?\n\n**0:32:00.0 Interviewee:** You say issues that we have just mentioned?\n\n**0:32:01.9 Vael:** Yeah, long-term. Well, like trying to make the AI aligned with a human, trying to make sure that the things that the AI does is what the humans wanted to do, long-term issues from AI, anything like that.\n\n**0:32:16.4 Interviewee:** So I try to, although from the bottom of my heart, I think I believe that this is an important issue, frankly speaking as a student, a student researcher, or as an engineer, I don\\'t have much resources about this. So I think this is the most important issue why most of my schoolmates just like improving the models and then don\\'t care about if the AI may optimize against us because\\... I know this sound not good, but most of student just care about, \\\"So if I can graduate with a PhD degree or so.\\\" Yeah, so maybe\\... I think for me, maybe\\... because I guess I will be an engineer in the future, so maybe if I have enough resources or I have enough influence in the community, I\\'m willing to spend my time on this, but if I just a low-level coder and I don\\'t have much power to ask my superintendent that we should place more emphasis on this, they just take\\...\n\n**0:33:35.2 Interviewee:** For example, if I intend and they just say, \\\"Oh, this model, the accuracy is good enough, but the speed is not, so optimize the model so that it can run fast enough as maybe the customers\\' requirement.\\\" So yeah, this is basically the entire ecosystem, both in the academia and the industry that force the researchers and the employees in the company that they will not put much emphasis on this, and also, most of the time they just focus on short-term issues, short-term profits, or in the universities, student just care about, \\\"Oh, can I have some publications so that\\... \\\" I really don\\'t know any of my class schoolmates who have publication on these issues.\n\n**0:34:32.4 Interviewee:** So I don\\'t know whether there are lots of the researchers who cares about this and will spend maybe several months on these issues where they will have some publications about this. I know the top conference has asked about some ethical issues, but yeah, we really don\\'t pay enough attention on this. This is a very good point. I think we need some more incentives on the future of AI. 
For example, take environmental issues: some factories will not care about environmental issues if the government doesn\'t force them to do so. For example, now we have trading of carbon dioxide budgets. That is, the government tells the factories that you shouldn\'t emit more carbon dioxide than maybe this threshold, or you will be fined. Maybe we need some, yes, we need some incentives to force us to think about these issues, or otherwise I think the outlook is not optimistic, because not many people will be guided to do this, because maybe those at the other levels don\'t care about this.\n\n**0:36:05.7 Vael:** Yeah, yeah. That seems right. Yep. It does seem like it\'s not currently as popular in the top journals as it could be, seems like a pretty small community right now. I will look around to see if I can find any researchers in \[country\] who are working on this sort of thing, because I know a bunch of them in \[country\] and I know some of them in \[country\], but not as many in \[country\], and I\'ll send you some resources if you\'re interested. There is a group of people who pay attention to this a lot, and they\'re called the Effective Altruism community, and right now, one of the things they care about is trying to make sure that existential risks don\'t happen so that humans are okay. Some other things they\'re worried about are pandemics, nuclear stuff, climate change, stuff like this, and also many other things. Interesting. Alright, cool. I think my last question is, have you changed your mind on anything during this interview, and how was this interview for you?\n\n**0:37:16.9 Interviewee:** Oh. Yeah, I think maybe the greatest change to my mind is, you say that if we want a CEO AI, we need to, maybe we need to encode those requirements into the optimization function, and maybe someday an advanced AI will optimize against us. Yes, basically, in the past, I thought this might be an ethical issue, and now I\'ve realized that it\'s both an ethical and a social, as well as a technical, issue, and we have\...\n\n**0:37:53.4 Vael:** Yeah.\n\n**0:37:55.3 Interviewee:** Yes. I know we haven\'t placed enough emphasis on this, but yeah, now I think that it\'s time for us to do more things, and this is a very wonderful, wonderful idea for me to think about. Thank you very much.\n\n**0:38:14.3 Vael:** Yeah. Well, I mean, it\'s super cool that you\'re interested in this, so I\'m very enthused by you being like, \\"This does seem like a problem, this does seem like a technical\... \\" Yeah, I\'m very excited about that. Cool. Alright. Well, I will send you some resources then and I\'ll see if I can find anyone who is doing anything like this and send anything I find your way. But thank you so much, and feel free to reach out if you have any questions or if there\'s anything I can help you with.\n\n**0:38:41.8 Interviewee:** Oh, okay. Also, thank you. So no, I don\'t have many questions. I will read more about this in the future. I think this is very important and also very interesting. Thank you. Thank you.\n\n**0:38:54.8 Vael:** Yeah, I\'ll send you some resources. Alright, email you soon. 
Bye.\n\n**0:39:01.3 Interviewee:** Okay.\n", "filename": "NeurIPSorICML_bj9ne-by Vael Gates-date 20220324.md", "id": "09c8109cd62391d11b404bbf2f7a2064", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Jaime Sevilla - Projecting AI progress from compute┬átrends-by Towards Data Science-video_id 2NXagVA3yzg-date 20220413", "authors": ["Jaime Sevilla", "Jeremie Harris"], "date_published": "2022-04-13", "text": "# Jaime Sevilla on Projecting AI progress from compute trends by Jeremie Harris on the Towards Data Science Podcast\n\n## Jaime Sevilla on timelines for transformative AI and general intelligence\n\nThere’s an idea in machine learning that most of the progress we see in AI doesn’t come from new algorithms or model architectures. Instead, some argue, progress almost entirely comes from scaling up compute power, datasets and model sizes — and besides those three ingredients, nothing else really matters.\n\nThrough that lens, the history of AI becomes the history of processing power and compute budgets. And if that turns out to be true, then we might be able to do a decent job of predicting AI progress by studying trends in compute power and their impact on AI development.\n\nAnd that’s why I wanted to talk to Jaime Sevilla, an independent researcher and AI forecaster, and affiliate researcher at Cambridge University’s Centre for the Study of Existential Risk, where he works on technological forecasting and understanding trends in AI in particular. His work’s been cited in a lot of cool places, including Our World In Data, who used his team’s data to put together [an exposé on trends in compute](https://ourworldindata.org/grapher/ai-training-computation). Jaime joined me to talk about compute trends and AI forecasting on this episode of the TDS podcast.\n\nHere were some of my favourite take-homes from the conversation:\n\n- Jaime’s work involves projecting trends in AI to estimate when we might reach a level of AI capability known as transformative AI (TAI). TAI is roughly the stage at which AI is so powerful and generally capable that it triggers a process analogous to the Industrial Revolution, whose effects on human civilization are profound and entirely unpredictable. Interestingly, it probably doesn’t matter how exactly TAI is defined, because the general consensus among researchers and forecasters is that by the time a stage like TAI is reached, technological progress will be happening so fast that even fairly major differences in capability will be separated by short periods of time. As a result, disagreements over how precisely to define TAI are likely to result in estimates that differ only by a few years, rather than decades or centuries, as was the case with the Industrial Revolution.\n- Jaime’s work identified three distinct eras in the history of AI. In the first, AI progress was driven by academic labs with relatively fixed budgets. Still, thanks to Moore’s Law, compute power was getting twice as cheap every 20 months during this time, so labs tended to double their compute usage every two years. But around 2010, things started to change: deep learning was showing real potential to solve a variety of problems in vision and natural language processing — but deep learning is highly compute-intensive. As a result, companies began throwing an accelerating amount of resources at AI development, and therefore, at compute. 
Notably, Jaime argues that this actually started to happen slightly before AlexNet (the canonical and widely-celebrated “kick-off” moment for deep learning). This deep learning era left us with impressive systems for tasks as diverse as vision and game-playing, but they still couldn’t quite crack language generation. As GPT-3 dramatically showed, the path to better language models would be a third scaling phase — a trend that Jaime argues began sometime in 2015/2016, half a decade before GPT-3 itself.\n- There’s quite a bit of debate about how compute investments will translate into new AI capabilities, and therefore on when we can expect to reach TAI. One method is to look at biological intelligence, and try to estimate how much computational power has gone into creating human brains. There are several ways to do that: for example, you could imagine estimating the number of computations that a human needs to carry out to reach adulthood from birth, or the compute expended by nature as it evolved human brains from inanimate matter billions of years ago. These are known as “biological anchors” for AI.\n- Jaime identifies two main drivers of compute trends. The first is essentially Moore’s Law: compute power is getting exponentially cheaper, reducing in cost by a factor of two every 20 months as humans come up with new ways to squeeze more processing power out of less matter. The second driver is corporate investment in compute: as companies recognize the value of AI, and the crucial role that processing power plays in its development, they’ve been ploughing more and more resources into compute at an accelerating rate.\n- This second driver is currently the most significant, due to the violent expansion of corporate AI budgets in recent years, but it’s unclear if those investments can continue to grow at their current rate indefinitely. Jaime estimates that this budget growth phase may fizzle out around the end of the decade, as AI-related compute becomes a prohibitively large fraction of corporate budgets. That is, unless AI-related revenues grow fast enough to keep it going, which is also a distinct possibility.", "filename": "Jaime Sevilla - Projecting AI progress from compute┬átrends-by Towards Data Science-video_id 2NXagVA3yzg-date 20220413.md", "id": "772f4283ff4c99e3eb8f7e40e2c16bb4", "summary": []} {"source": "audio_transcripts", "source_type": "audio", "url": "n/a", "converted_with": "otter-ai", "title": "Why companies should be leading on AI governance _ Jade Leung _ EA Global - London 2018-by Centre for Effective Altruism-video_id AVDIQvJVhso-date 20190207", "authors": ["Jade Leung"], "date_published": "2019-02-07", "text": "# Jade Leung Why companies should be leading on AI governance - EA Forum\n\n_Are companies better-suited than governments to solve collective action problems around artificial intelligence? Do they have the right incentives to do so in a prosocial way? In this talk, Jade Leung argues that the answer to both questions is \"yes\"._\n\n_A transcript of Jade's talk is below, which CEA has lightly edited for clarity. You can also watch this talk on_ [_YouTube_](https://www.youtube.com/watch?v=AVDIQvJVhso)_, or read its transcript on_ [_effectivealtruism.org_](https://www.effectivealtruism.org/articles/ea-global-2018-why-companies-should-be-leading-on-ai-governance)_._\n\n## The Talk\n\nIn the year 2018, you can walk into a conference room and see something like this: There's a group of people milling around. 
They all look kind of frantic, a little bit lost, a little bit stressed. Over here, you've got some people talking about GDPR and data. Over there, you've got people with their heads in their hands being like, \"What do we do about China?\" Over there, you've got people throwing shade at Zuckerberg. And that's when you know you're in an AI governance conference.\n\nThe thing with governance is that it's the kind of word that you throw around, and it feels kind of warm and fuzzy because it's the thing that will help us navigate through all of these kind of big and confusing questions. There's just a little bit of a problem with the word \"governance,\" in that a lot of us don't really know what we mean when we say we want to be governing artificial intelligence.\n\nSo what do we actually mean when we say \"governance?\" I asked Google. Google gave me a lot of aesthetically pleasing, symmetrical, meaningless management consulting infographics, which wasn't very helpful.\n\n![](https://images.ctfassets.net/ohf186sfn6di/0QF4aqkkDrIDQKTo4vMJo/fa5db8b93b528ab2f29a5bb9934e6e07/jade_slide_1.PNG)\n\nAnd then I asked it, \"What is AI governance?\" and then all the humans became bright blue glowing humans, which didn't help. And then the cats started to appear. This is actually what comes up when you search for AI governance. And that's when I just gave up, and I was like, \"I'm done. I need a career change. This is not good.\"\n\n![](https://images.ctfassets.net/ohf186sfn6di/2Ivakk5wkUquB12FarVQEe/5a5ecbe6542781095b2fa6c3de9581e2/jade_slide_2.PNG)\n\nSo it seems like no one really knows what we mean when we say, \"AI governance.\" So I'm going to spend a really quick minute laying some groundwork for what we mean, and then I'll move on into the main substantive argument, which was, \"Who should actually be leading on this thing called governance?\"\n\n![](https://images.ctfassets.net/ohf186sfn6di/6TgqeZBUAbBNm8LCWXTAY6/d19b7c98fa82625e8d33442c72373e23/jade_slide_3.PNG)\n\nSo governance, global governance, is a set of norms, processes, and institutions that channel the behavior of a set of actors towards solving a collective action problem at a global or transnational scale. And you normally want your governance regime to steer you towards a set of outcomes. When we talk about AI governance, our outcome is something like the robust, safe, and beneficial development and deployment of advanced artificial intelligence systems.\n\n![](https://images.ctfassets.net/ohf186sfn6di/4S3XPxYuB52keZpivYY1jF/ce68e1ee2a08d3159ce21a23f672e0fe/jade_slide_4.PNG)\n\nNow that outcome needs a lot of work, because we don't actually really know what that means either. We don't really know how safe is safe. We don't really know what benefits we're talking about, and how they should be distributed. And us answering those questions and adding granularity to that outcome is going to take us a while.\n\nSo you can also put in something like a placeholder governance outcome, which is like the intermediate outcome that you want. So the process of us getting to the point of answering these questions can include things like being able to avoid preemptive locking, so that we don't have a rigid governance regime that can't adapt to new information. It could also include things like ensuring that there are enough stakeholder voices around the table so that you are getting all of your opinions in. 
So those are examples of intermediate governance outcomes that your regime can lead you towards.\n\n![](https://images.ctfassets.net/ohf186sfn6di/6yurMnugW4fN34drvdAeXF/01b8c15c094c42b00b49259b744cf72a/jade_slide_5.PNG)\n\nAnd then in governance you also have a set of functions. So these are the things that you want your regime to do so that you get to the set of outcomes that you want. So common sets of functions that you talk about would be things like setting rules. What do we do? How do we operate in this governance regime? Setting context, creating common information and knowledge, doing common benchmarks and measurements. You also have implementation, which is both issuing and soliciting commitments from actors to do certain things. And it's also about allocating resources so that people can actually do the things. And then finally, you've got enforcement and compliance, which is something like making sure that people are actually doing the thing that they said that they would do.\n\nSo these are examples of functions. And the governance regime is something like these norms, processes, and institutions that get you towards that outcome by doing some of these functions.\n\nSo the critical question today is something like, how do we think about who should be taking the lead on doing this thing called AI governance?\n\nI have three propositions for you.\n\n![](https://images.ctfassets.net/ohf186sfn6di/3aBi5cF4bmBdAMJGBP3Aek/257d867d0d6911ddb2f6e30fc7c6557e/jade_slide_6.PNG)\n\nOne: states are ill-equipped to lead in the formative stages of developing an AI governance regime. Two: private AI labs are better, if not best-placed, to lead in AI governance. And three: private AI labs can and already, to some extent, are incentivized to do this AI governance thing in a prosocial way.\n\nI'll spend some time making a case for each one of these propositions.\n\n### States are Ill-Equipped\n\nWhen we normally think about governance, you consider states as the main actors sitting around the table. You think about something like the UN: everyone sitting under a flag, and there are state heads who are doing this governance thing.\n\n![](https://images.ctfassets.net/ohf186sfn6di/14y9bJyzhoLTk3tTnNjE0j/961bf1667043952d4ddda5072c32c28e/jade_slide_7.PNG)\n\nYou normally think that because of three different reasons. One is the conceptual argument that states are the only legitimate political authorities that we have in this world, so they're the only ones who should be doing this governance thing. Two is you've got this kind of functional argument: states are the only ones who can pass legislation, design regulation, and if you're going to think about governance as regulation and legislation, then states have to be the ones doing that function. And three, you've got something like the incentives argument, which is that states are set up, surely, to deliver on these public goods that no one else is going to care about as a result. So states are the only ones that have the explicit mandate and the explicit incentive structure to deliver on these collective action problems. Otherwise, none of this mess would get cleaned up.\n\nNow all of those things are true. 
But there are trends and certain characteristics about a technology governance problem, which means that states are particularly increasingly undermined in their ability to do governance effectively, despite all of those things being true.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2mCyYhctogvxzVWjItUS6t/e433c20b26e360acdd5514fd57104211/jade_slide_8.PNG)\n\nNow the first is that states are no longer the sole source of governance capacity. And this is a general statement that isn't specifically about technology governance. You've got elements like globalization, for example, creating the situation where these collective action problems are at a scale which states have no mandate or control over. And so states are increasingly unable to do this governance thing effectively within the scope of the jurisdiction that they have.\n\nYou also have non-state actors emerging on the scene, most notably civil society and multi-national corporations are at this scale that supersedes states. And they also are increasingly demonstrating that they have some authority, and some control, and some capacity to exercise governance functions. Now their authority doesn't come from being voted in. The authority of a company, for example, plausibly comes from something like their market power and the influence on public opinion. And you can argue about how legitimate that authority is, but it is exercised and it does actually influence action. So states are no longer the sole source of where this governance stuff can come from.\n\nSpecifically for technology problems, you have this problem that technology moves really fast, and states don't move very fast. States use regulatory and legislative frameworks that hold technology static as a concept. And technology is anything but static: it progresses rapidly and often discontinuously, and that means that your regulatory and legislative frameworks get out of date very quickly. And so if states are going to use that as the main mechanism for governance, then they are using irrelevant mechanisms very often.\n\nNow the third is that you have emerging technologies specifically being a challenge. Emerging technologies have huge bars of uncertainty around the way that they're going to go. And to be able to effectively govern things that are uncertain, you need to understand the nature of that uncertainty. In the case of AI, for example, you need deep in-house expertise to understand the nature of these technology trajectories. And I don't know how to say this kindly, but governments are not the most technology-literate institutions that are around, which means that they don't have the ability to grapple with that uncertainty in a nuanced way, which means you see one of two things: you either see preemptive clampdown out of fear, or you see too little too late.\n\nSo states are no longer the sole source of governance capacity. And for technology problems that move fast and are uncertain, states are particularly ill-equipped.\n\n## Private Labs are Better Placed\n\nWhich leads me to proposition two, which is that instead of states, private AI labs are far better-placed, if not the best-placed, actors to do this governance thing, or at least form the initial stages of a governance regime.\n\n![](https://images.ctfassets.net/ohf186sfn6di/47lFfvgscAaYFssV338Lkm/83ec08c55b572a5b944ce88d317301a0/jade_slide_9.PNG)\n\nNow this proposition is premised on an understanding that private AI labs are the ones at the forefront of developing this technology. 
Major AI breakthroughs have come from private companies, privately funded nonprofits, or even academic AI labs that have very strong industrial links.\n\nWhy does that make them well-equipped to do this governance thing? Very simply, it means that they don't face the same problems that states do. They don't face this pacing problem. They have in-house expertise and access to information in real time, which means that they have the ability to garner unique insights very quickly about the way that this technology is going to go.\n\n![](https://images.ctfassets.net/ohf186sfn6di/MRIgUjKZM1xoDLtukUlVj/a6f8834edec27bb3557bfdc392b16787/jade_slide_10.PNG)\n\nSo of all the actors, they are most likely to be able to slightly preemptively, at least, see the trajectories that are most plausible and be able to design governance mechanisms that are nuanced and adaptive to those trajectories. No other actor in this space has the ability to do that except those at the forefront of leading this technology development.\n\n![](https://images.ctfassets.net/ohf186sfn6di/dliu1XeCIrLxZgDbWP80e/ad791977f72ac0087d93d6458bf8d02b/jade_slide_11.PNG)\n\nNow secondly, they also don't face the scale mismatch problem. This is where you've got a massive global collective action problem, and you have states which are very nationally scaled. What we see is multinational corporations which from the get-go are forced to be designed globally because they have global supply chains, global talent pools, global markets. The technology they are developing is proliferated globally. And so, necessarily, they both have to operate at the scale of global markets, and they also have experience, and they attribute resources to navigating at multiple scales in order to make their operations work. So you see a lot of companies scale at local, national, regional, transnational levels, and they navigate those scales somewhat effortlessly, and certainly effortlessly compared to a lot of other actors in this space. And so, for that reason, they don't face the same scale mismatch problem that a lot of states have.\n\nSo you've got private companies that both have the expertise and also the scale to be able to do this governance thing.\n\nNow you're probably sitting there thinking, \"This chick has drunk some private sector Kool-Aid if she thinks that private sector, just because they have the capacity, means that they're going to do this governance thing. Both in terms of wanting to do it, but also being able to do it well, in a way that we would actually want to see it pan out.\"\n\n### Private Labs are Incentivized to Lead\n\nWhich leads me to proposition three, which is that private labs are already and can be more incentivized to lead on AI governance in a way that is prosocial. And when I say \"prosocial\" I mean good: the way that we want it to go, generally, as an altruistic community.\n\nNow I'm not going to stand up here and make a case for why companies are actually a lot kinder than you think they are. I don't think that. I think companies are what companies are: they're structured to be incentivized by the bottom line, and they're structured to care about profit.\n\n![](https://images.ctfassets.net/ohf186sfn6di/75u4VpesxfLDLY3HSvdDwA/3cbf76dbeb3c4822e9fdfa347da62178/jade_slide_12.PNG)\n\nAll that you need to believe in order for my third proposition to fly is that companies optimize for their bottom line. And what I'm going to claim is that that can be synonymous with them driving towards prosocial outcomes.\n\nWhy do I think that? 
Firstly, it\'s quite evidently in a firm\'s self-interest to lead on shaping the governance regime that is going to govern the way that their products and their services are going to be developed and deployed, because it costs a lot if they don\'t.\n\n![](https://images.ctfassets.net/ohf186sfn6di/1IGpA6dC3YtDm6Uw3S2DX7/523fd902a9d58e6e9bd42487e3954a56/jade_slide_13.PNG)\n\nHow does that cost them? Poor regulation, and when I say \\"poor\\", I mean poor in terms of being costly for firms to engage with, is something where you see a lot of costs incurred to firms when that happens across a lot of technology domains. And the history of technology policy showcases a lot of examples where firms haven\'t been successful in preemptively engaging with regulation and preemptively engaging with the governance, and so they end up facing a lot of costs. In the U.S., and I\'ll point to the U.S. because the U.S. is not the worst example of it, but they have a lot of poor regulation in place, particularly when it comes to things like biotechnology. In biotechnology, you\'ve got blanket bans on certain types of products, and you also have things like export controls, which have caused a lot of loss of profit for these firms. You also have a lot of examples of litigation across a number of different technology domains where firms have had to battle with regulation that has been put in place.\n\nNow it wasn\'t in the firms\' interests to incur those costs. And so the most cost-effective way, in hindsight, would be for these firms to engage with the governance as they were shaping regulation, shaping governance, and doing what that would be.\n\nNow just because it\'s costly doesn\'t mean that it\'s going to go in a good way. What are the reasons why them preemptively engaging is likely to lead to prosocial regulation? Two reasons why. One: the rationale for a firm would be something like, \\"We should be doing the thing that governance will want us to do, so that they don\'t then go in and put in regulation that is not good for us.\\" And if you assume that governance has that incentive structure to deliver on public goods, then firms, at the very least, will converge on the idea that they should be mitigating their externalities and delivering on prosocial outcomes in the same way that the state regulation probably would.\n\nThe more salient one in the case of AI is that public opinion actually plays a fairly large role in dictating what firms think is prosocial. You\'ve seen a lot of examples of this in recent months where you\'ve had Google, Amazon, and Microsoft face backlash from the public and from employees where they\'ve developed and deployed AI technologies that grate against public values. And you\'ve seen reactions from these firms respond to those actions as well. It\'s concrete because it actually affects their bottom line: they lose consumers, users, employees. And that, again, ties back to their incentive structure. And so if we can shore up the power of something like the public opinion that translates into incentive structures, then there are reasons to believe that firms will engage preemptively in shaping things that will go more in line with what public opinion would be on these issues.\n\nSo the second reason is that firms already do a lot of governance stuff. We just don\'t really see it, or we don\'t really think about it as governance. 
And so I'm not making a wacky case here in that business as usual currently is already that firms do some governance activity.\n\nNow I'll give you a couple of examples, because I think when we think about governance, we maybe hone in on the idea that that's regulation. And there are a lot of other forms of governance that are private sector-led, which perform governance functions, but aren't really called \"governance\" by the traditional term.\n\nSo here are some examples. When you think about the function of governance of implementing some of these commitments, you can have two different ways of thinking about private sector leading on governance. One is establishing practices along the technology supply chain that govern for outcomes like safety.\n\nAgain, in biotechnology, you've got an example of this where DNA synthesis companies voluntarily self-initiated schemes for screening customer orders so that they were screening for whether customers were ordering for malicious use purposes. The state eventually caught up. And a couple of years after most DNA synthesis companies had been doing this in the U.S., it became U.S. state policy. But that was a private sector-led initiative.\n\nProduct standards are another really good example where private firms have consistently led at the start for figuring out what a good product looks like when it's on the market.\n\nCryptographic products, the first wave of them, is a really good example of this. You had firms like IBM and a firm called RSI Security Inc., in particular, do a lot of early-stage R&D to ensure that strong encryption protocols made it onto the market and took up a fair amount of global market share. And for the large part, that ended up becoming American standards for cryptographic products, which ended up scaling across the global markets.\n\nSo those are two examples of many examples of ways in which private firms can lead on the implementation of governance mechanisms.\n\n![](https://images.ctfassets.net/ohf186sfn6di/Wx52C5oyqcoMO9371kRPE/12cbb7572f38f2f1b080827ef1eaebef/jade_slide_14.PNG)\n\nThe second really salient function that they play is in compliance. So making sure that companies are doing what they do. There are a lot of examples in this space of climate change, in particular where firms have either sponsored or have directly started initiatives that are about disclosing the things that they're doing to ensure that they are in line with commitments that are made on the international scale. Whether that's things like divestment, or disclosing climate risk, or carbon footprints, or various rating and standards agencies, there is a long list of ways in which the private sector is delivering on this compliance governance function voluntarily, without necessarily needing regulation or legislation.\n\nSo firms already do this governance thing. And all that we have to think of is how can they lead on that and shape it in a more preemptive way.\n\nAnd the third reason to think that firms could do this voluntarily is that, at the end of the day, particularly for transformative artificial intelligence scenarios, firms rely on the world existing. They rely on markets functioning. They rely on stable sociopolitical systems. And if those don't end up being what we get because we didn't put in robust governance mechanisms, then firms have all the more reason to want us to not get to those futures. 
And so, for an aspirationally long-term thinking firm, this would be the kind of incentive that would lead them to want to lead preemptively on some of these things.\n\nSo these are all reasons to be hopeful, or to think at least, that firms can do and can be incentivized to lead on AI governance.\n\n![](https://images.ctfassets.net/ohf186sfn6di/3aBi5cF4bmBdAMJGBP3Aek/257d867d0d6911ddb2f6e30fc7c6557e/jade_slide_6.PNG)\n\nSo here are the three propositions again. You've got states who are ill-equipped to lead on AI governance. You've got private AI labs who have the capacity to lead. And finally, you've got reasons to believe that private AI labs can lead in a way that is prosocial.\n\nNow am I saying that private actors are all that is necessary and sufficient? It wouldn't be an academic talk if I didn't give you a caveat, and the caveat is that I'm not saying that. It's only that they need to lead. There are very many reasons why the private sector is not sufficient, and where their incentive structures can diverge from what prosocial outcomes are.\n\nMore than that, there are some governance functions which you actually need non-private sector actors to play. They can't pass legislation, and then you often need like a third party civil society organization to do things like monitoring compliance very well. And the list goes on of a number of things that private sector can't do on their own.\n\nSo they are insufficient, but they don't need to be sufficient. The clarion call here is for private sector to recognize that they are in a position to lead on demonstrating what governing artificial intelligence can look like if it tracks technological progress in a nuanced, adaptive, flexible way, if it happens at a global scale and scales across jurisdictions easily, and finally avoids costly conflict between states and firms, which tends to precede a lot of costly governance mechanisms that are ineffective being put in place.\n\nSo firms and private AI labs can demonstrate how you can lead on artificial intelligence governance in a way that achieves these kinds of outcomes. The argument is that others will follow. And what we can look forward to is shaping the formative stages of an AI governance regime that is private sector-led, but publicly engaged and publicly accountable.\n\n![](https://images.ctfassets.net/ohf186sfn6di/2CMgEPWgeJ0HFSPewNQYKr/ed9612653ca8344e0bb054b32c29cd3d/jade_slide_15.PNG)\n\nThank you.\n\n## Questions\n\n_Question_: Last time you spoke at EA Global, which was just a few months ago, it was just after the Google engineers' open letter came out saying, \"We don't want to sell AI to the government\". Something along those lines. Since then, Google has said they won't do it. Microsoft has said they will do it. It's a little weird that rank and file engineers are sort of setting so much of this policy, and also that two of the Big Five tech companies have gone so differently so quickly. So how do you think about that?\n\n_Jade_: Yeah. It's so unclear to me how optimistic to be about these very few data points that we have. I think also last time when we discussed it, I was pretty skeptical about how effective research communities can be and technical researchers within companies can be in terms of affecting company strategy.\n\nI think it's not surprising that different companies are making different decisions with respect to how to engage with the government. 
You've historically seen this a lot where you've got some technology companies that are slightly more sensitive to the way that the public thinks about them, and so they make certain decisions. You've got other companies that go entirely under the radar, and they engage with things like defense and security contracts all the time, and it's part of their business model, and they operate in the same sector.\n\nSo I think the idea that you can have the private sector operate in one fashion, with respect to how they engage with some of these more difficult questions around safety and ethics, isn't the way it pans out. And I think the case here is that you have some companies that can plausibly care a lot about this stuff, and some companies that really just don't. And they can get away with it, is the point.\n\nAnd so I think, assuming that there are going to be some leading companies and some that just kind of ride the wave if it becomes necessary is probably the way to think about it, or how I would interpret some of these events.\n\n_Question_: So that relates directly, I think, to a question about the role of small companies. Facebook, obviously, is under a microscope, and has a pretty bright spotlight on it all the time, and they've made plenty of missteps. But they generally have a lot of the incentives that you're talking about. In contrast, Cambridge Analytica just folded when their activity came to light. How do you think about small companies in this framework?\n\n_Jade_: Yeah. That's a really, really good point.\n\nI think small companies are in a difficult but plausibly really influential position. As you said, I think they don't have the same lobbying power, basically. And if you characterize a firm as having power as a result of their size, and their influence on the public, and their influence on the government, then small companies, by definition, just have far less of that power.\n\nThere's this dynamic where you can point to a subset of really promising, for example, startups or up-and-coming small companies that can form some kind of critical mass that will influence larger actors who, for example, in a functional, transactional sense, would be the ones that would be acquiring them. E.g., like DeepMind had a pretty significant influence on the way that safety was perceived within Google as a result of being a very lucrative acquisition opportunity, in a very cynical framing.\n\nAnd so I think there are ways in which you can get really important smaller companies using their bargaining chips with respect to larger firms to exercise their influence. I would be far more skeptical of small companies being influential on government and policy makers. I think historically it's always been large industry alliances or large big companies that get summoned to congressional hearings and get the kind of voice that they want. But I think certainly, like within the remit of private sector, I think small companies, or at least medium-size companies, can be pretty important, particularly in verticals where you don't have such dominant actors.\n\n_Question_: There have been a lot of pretty well-publicized cases of various biases that are creeping into algorithmic systems that sort of can create essentially racist or otherwise discriminatory algorithms based on data sets that nobody really fully understood as they were feeding it into a system. That problem seems to be far from solved, far from corrected. 
Given that, how much confidence should we have that these companies are going to get these even more challenging macro questions right?\n\n_Jade_: Yeah. Whoever you are in the audience, I'm not sure if you meant that these questions are not naturally incentivized to be solved within firms. Hence, why can we hope that they're going to get solved at the macro level? I'm going to assume that's what the question was.\n\nYeah, that's a very good observation that within... unless you have the right configuration of pressure points on a company, there are some problems which maybe haven't had the right configuration and so aren't currently being solved. So put aside the fact that maybe that's a technically challenging problem to solve, and that you may not have the data sets available, etc. And if you assume that they have the capacity to solve that problem internally but they're not solving it, why is that the case? And then why does that mean that they would solve bigger problems?\n\nThe model of private sector-led governance requires, and as I alluded to, pressure points that are public-facing that the company faces. And with the right exertion of those pressure points, and with enough of those pressure points translating into effects on their bottom line, then that would hopefully incentivize things like this problem and things like larger problems to be solved.\n\nIn this particular case, in terms of why algorithmic bias in particular hasn't faced enough pressure points, I'm not certain what the answer is to that. Although, I think you do see a fair amount more like things like civil society action and whatnot popping up around that, and a lot more explicit critique about that.\n\nI think one comment I'll say is that it's pretty hard to define and measure when it's gone wrong. So there's a lot of debate in the academic community, for example, and the ProPublica debate comes to mind too, where you've got debates literally about what it means for this thing to have gone fairly or not. And so that points to the importance of a thing like governance where you've got to have common context, and common knowledge, and common information about your definitions simply, and your benchmarks and your metrics for what it means for a thing to be prosocial in order for then you to converge on making sure that these pressure points are exercised well.\n\nAnd so I think a lot of work ahead of us is going to be something like getting more granularity around what prosocial behavior looks like, for firms to take action on that. And then if you know basically what you're aiming for, then you can start to actually converge more on the kind of pressure points that you want to exercise.\n\n_Question_: I think that connects very directly to another question from somebody who said, basically, they agree with everything that you said, but still have a very deep concern that AI labs are not democratic institutions, they're not representative institutions. And so will their sense of what is right and wrong match the broader public's or society's?\n\n_Jade_: I don't know, people. I don't know. It's a hard one.\n\nThere are different ways of answering this question. One is that it's consistently a trade off game in terms of figuring out how governance is going to pan out or get started in the right way. And so one version of how you can interpret my argument is something like, look, companies aren't democratic and you can't vote for the decisions that they make. But there are many other reasons why they are better. 
And so if you were to trade off the set of characteristics that you would want in an ideal leading governance institution, then you could plausibly choose to trade off, as I have made the case for trading off, that they are just going to move faster and design better mechanisms. And so you could plausibly be able to trade off some of the democratic elements of what you would want in an institution. That's one way of answering that question.\n\nIn terms of ways of making... yeah, in terms of ways of aligning some of these companies or AI labs: so aside from the external pressure point argument... which if I were critiquing myself on that argument, there are many ways in which pressure points don't work sometimes and it kind of relies on them caring enough about it and those pressure points actually concretizing into kind of bottom line effects that actually makes that whole argument work.\n\nBut particularly in the case of AI, there are a handful of AI labs that I think are very, very important. And then there are many, many more companies that I think are not critically important. And so the fact that you can identify a small group of AI labs makes it an easier task to both kind of identify at almost like an individual founder level where some of these common views about what good decisions are can be lobbied to.\n\nAnd I think it's also the case that there are a number of AI labs... we're not entirely sure how founders think or how certain decision makers think. But there are a couple who have been very public and have gone on record about, and have been pretty consistent actually, about articulating the way that they think about some of these issues. And I think there is some hope that at least some of the most important labs are thinking in quite aligned ways.\n\nDoesn't quite answer the question about how do you design some way of recourse if they don't go the way that you want. And that's a problem that I haven't figured out how to solve. And if you've got a solution, please come tell me.\n\nYeah, I think as a starting point, there's a small set of actors that you need to be able to pin down and get them to articulate what the kind of mindset is around that. And also that there are an identifiable set of people that really need to buy in, particularly to get transformative AI scenarios right.", "filename": "Why companies should be leading on AI governance _ Jade Leung _ EA Global - London 2018-by Centre for Effective Altruism-video_id AVDIQvJVhso-date 20190207.md", "id": "638013627f61c9e1f9776d4d7a226aef", "summary": []}